
AWS SysOps Administrator Associate Certification Exam Dumps

Question #46

An organization’s security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3.
Which option should you implement to ensure this requirement is met?

  • A. Use the S3 copy API to replicate data between two S3 buckets in different regions
  • B. You do not need to implement anything since S3 data is automatically replicated between regions
  • C. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
  • D. You do not need to implement anything since S3 data is automatically replicated between multiple facilities

Correct Answer: D
You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region.
https://aws.amazon.com/s3/faqs/
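To make option D concrete, here is a minimal boto3 sketch (the bucket name, region, and object key are placeholders, not from the question): the only decision you make is the bucket's Region, and S3 itself stores each object redundantly across multiple facilities in that Region, so no replication setup is required to meet the "multiple copies across facilities" requirement.

```python
import boto3

# Hypothetical bucket name and region, for illustration only.
s3 = boto3.client("s3", region_name="us-west-2")

# Create the bucket in a single Region; S3 then stores each object
# redundantly on multiple devices across multiple facilities
# (Availability Zones) within that Region automatically.
s3.create_bucket(
    Bucket="example-critical-data-bucket",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Upload an object; cross-facility durability is handled by S3 itself.
s3.put_object(
    Bucket="example-critical-data-bucket",
    Key="reports/critical.csv",
    Body=b"critical data",
)
```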

Question #47

You are tasked with setting up a cluster of EC2 instances for a NoSQL database. The database requires random read I/O disk performance of up to 100,000 IOPS at a 4 KB block size per node.
Which of the following EC2 instances will perform the best for this workload?

  • A. A High-Memory Quadruple Extra Large (m2.4xlarge) with EBS-Optimized set to true and a PIOPS EBS volume
  • B. A Cluster Compute Eight Extra Large (cc2.8xlarge) using instance storage
  • C. High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage
  • D. A Cluster GPU Quadruple Extra Large (cg1.4xlarge) using four separate 4000 PIOPS EBS volumes in a

Correct Answer: C
The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4K blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS.
https://aws.amazon.com/blogs/aws/new-high-io-ec2-instance-type-hi14xlarge/
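As a rough sketch of what "using instance storage" means in practice, the boto3 call below launches the instance with its local SSD (ephemeral) volumes mapped. The AMI ID is a placeholder, and hi1.4xlarge is a legacy High I/O type kept here only because the question names it; it is the local, non-network-attached SSDs that deliver the random-read performance the workload needs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical AMI ID; hi1.4xlarge is the legacy High I/O type from the answer.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="hi1.4xlarge",
    MinCount=1,
    MaxCount=1,
    # Map the local SSD (instance store) volumes; their random-read
    # performance at 4 KB blocks is what satisfies the IOPS requirement,
    # unlike network-attached EBS volumes of that generation.
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        {"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"},
    ],
)
print(response["Instances"][0]["InstanceId"])
```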

Question #48

When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?

  • A. Data will be deleted and will no longer be accessible
  • B. Data is automatically saved in an EBS volume.
  • C. Data is automatically saved as an EBS snapshot
  • D. Data is unavailable until the instance is restarted

Correct Answer: A
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-lifetime
However, data in the instance store is lost under the following circumstances:

  • The underlying disk drive fails
  • The instance stops
  • The instance terminates
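A small boto3 check (the instance ID below is a placeholder) makes the distinction concrete: describe_instances reports the root device type and lists only the attached EBS volumes; any ephemeral instance store volumes on the instance are the ones whose data is lost when it stops or terminates.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical instance ID, for illustration only.
instance_id = "i-0123456789abcdef0"

reservation = ec2.describe_instances(InstanceIds=[instance_id])
instance = reservation["Reservations"][0]["Instances"][0]

# RootDeviceType is "ebs" for an EBS-backed instance (its root volume
# survives a stop). Only EBS volumes appear in BlockDeviceMappings here;
# ephemeral instance store volumes are not listed and lose their data
# when the instance stops or terminates.
print("Root device type:", instance["RootDeviceType"])
for mapping in instance.get("BlockDeviceMappings", []):
    print(mapping["DeviceName"], "-> EBS volume", mapping["Ebs"]["VolumeId"])
```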


Question #49

Your team is excited about the use of AWS because now they have access to "programmable infrastructure."
You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert to previous versions, and identify what versions are running at any particular time (development, test, QA, production).
Which approach addresses this requirement?

  • A. Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure.
  • B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
  • C. Use AWS Beanstalk and a version control system like GIT to deploy and manage your infrastructure.
  • D. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.

Correct Answer: D
https://aws.amazon.com/cloudformation/
AWS CloudFormation describes your infrastructure in template files that can be managed exactly like application code. By storing the templates in a version control system such as Git, you can deploy exact copies of any template version, stage changes through separate environments, roll back to a previous version, and identify which version is running in development, test, QA, and production. OpsWorks and Elastic Beanstalk focus on configuration and application management rather than versioned, reproducible infrastructure templates.
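As a hedged sketch of that workflow (the stack name, tags, and embedded template below are illustrative only), a versioned template from your repository can be deployed per environment with boto3:

```python
import boto3

# Minimal template kept under version control (e.g. in a Git repository);
# the template body and stack name are placeholders for illustration.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Deploying a tagged, versioned template gives you exact copies per
# environment and a record of which template version each stack runs.
cloudformation.create_stack(
    StackName="myapp-qa",
    TemplateBody=TEMPLATE,
    Tags=[
        {"Key": "environment", "Value": "qa"},
        {"Key": "template-version", "Value": "v1.4.2"},  # e.g. a Git tag
    ],
)
```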


Question #50

You have a server with a 500 GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact.
Which of the following backup methods will best fulfill your requirements?

  • A. Take periodic snapshots of the EBS volume
  • B. Use a third party Incremental backup application to back up to Amazon Glacier
  • C. Periodically back up all data to a single compressed archive and archive to Amazon S3 using a parallelized multi-part upload
  • D. Create another EBS volume in the second Availability Zone attach it to the Amazon EC2 instance, and use

Correct Answer: A
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
EBS volumes can only be attached to EC2 instances within the same Availability Zone, but EBS snapshots are stored regionally and can be used to create a new volume in any Availability Zone in the Region. Because snapshots are incremental, taking them at regular intervals is quick, making them the fastest way to re-create the volume in a new Availability Zone.
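A minimal boto3 sketch of that workflow (the volume ID, Availability Zone, and volume type are placeholders): snapshot the volume during the brief application pause, then create a new volume from the snapshot in the target Availability Zone.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical volume ID; in practice, pause writes briefly before snapshotting.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Periodic backup of the 500 GB data volume",
)

# Wait until the snapshot is complete before using it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Snapshots are regional, so the new volume can be created in a different
# Availability Zone from the original volume.
new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",
    VolumeType="gp3",
)
print("New volume:", new_volume["VolumeId"])
```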
