
Q11. Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?

A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.

B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online.

C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.

D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.

Answer: D

Explanation:

Latency Based Records distribute requests across both regions while both are healthy, and the Failover component enables fallback between regions. By adding the ELB and ASG, the system in the surviving region can expand to meet 100% of demand, instead of its original fraction, whenever failover occurs.

Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
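As a rough illustration of answer D, the sketch below (Python with boto3; the hosted zone ID, ELB DNS names, and ELB hosted-zone IDs are placeholders) creates two latency-based alias records for the same name, one per region, with EvaluateTargetHealth enabled so an unhealthy region's endpoint drops out of DNS responses.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "ZEXAMPLE123"  # placeholder hosted zone for example.com

# Placeholder regional ELB endpoints; real values come from the ELB console/API.
ENDPOINTS = {
    "us-east-1": {"dns": "api-use1.us-east-1.elb.amazonaws.com.", "zone": "ZELBUSEAST1"},
    "eu-west-1": {"dns": "api-euw1.eu-west-1.elb.amazonaws.com.", "zone": "ZELBEUWEST1"},
}

changes = []
for region, elb in ENDPOINTS.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com.",
            "Type": "A",
            "SetIdentifier": f"api-{region}",
            "Region": region,                     # latency-based routing
            "AliasTarget": {
                "HostedZoneId": elb["zone"],
                "DNSName": elb["dns"],
                "EvaluateTargetHealth": True,     # drop the region if its ELB is unhealthy
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency routing with failover across two regions",
                 "Changes": changes},
)
```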

You are designing an enterprise data storage system. Your data management software requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS Volumes for your system. The system needs storage that is as low-cost as possible; access is infrequent, not high-throughput, and mostly sequential reads. Which is the most appropriate EBS Volume Type for this scenario?

A. gp1

B. io1

C. standard

D. gp2 

Answer: C

Explanation:

Standard volumes, also called Magnetic volumes, are best for cold workloads where data is accessed infrequently, or for scenarios where the lowest storage cost is important.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
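For reference, a minimal boto3 sketch (region, Availability Zone, and size are placeholder values) that creates a Magnetic volume by passing VolumeType="standard":

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# "standard" selects the Magnetic volume type; gp2/io1 are the SSD types.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=500,                       # GiB, placeholder
    VolumeType="standard",
)
print(volume["VolumeId"])
```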


Q12. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.

B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.

C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.

D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation:

Elasticsearch and Kibana 4 (the search and visualization layers of the ELK stack) are designed specifically for real-time, ad-hoc log analysis and aggregation. All other answers introduce extra delay or require pre-defined queries.

Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

Reference: https://aws.amazon.com/elasticsearch-service/
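A hedged sketch of the first half of answer D: putting a CloudWatch Logs subscription filter on a service's log group so every event is streamed onward, here to a hypothetical Lambda forwarder that indexes into the Elasticsearch domain (the log group name and ARN are placeholders).

```python
import boto3

logs = boto3.client("logs")

# Placeholder ARN of a Lambda function that writes incoming log events
# into the Amazon Elasticsearch Service domain.
FORWARDER_ARN = "arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch"

logs.put_subscription_filter(
    logGroupName="/my-service/application",  # placeholder log group
    filterName="stream-to-elasticsearch",
    filterPattern="",                        # empty pattern = forward every event
    destinationArn=FORWARDER_ARN,
)
```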


Q13. When thinking of AWS Elastic Beanstalk, the 'Swap Environment URLs' feature most directly aids in what?

A. Immutable Rolling Deployments

B. Mutable Rolling Deployments

C. Canary Deployments

D. Blue-Green Deployments 

Answer: D

Explanation:

Simply upload the new version of your application and let your deployment service (AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks) deploy a new version (green). To cut over to the new version, you simply replace the ELB URLs in your DNS records. Elastic Beanstalk has a Swap Environment URLs feature to facilitate a simpler cutover process.

Reference:        https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
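A minimal sketch of the cutover itself, assuming two hypothetical environments named my-app-blue (live) and my-app-green (newly deployed):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Atomically exchanges the CNAMEs of the two environments, so traffic that
# resolved to the blue environment now reaches the green one.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)
```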


Q14. Which of these is not a Pseudo Parameter in AWS CloudFormation?

A. AWS::StackName

B. AWS::AccountId

C. AWS::StackArn

D. AWS::NotificationARNs 

Answer: C

Explanation:

This is the complete list of Pseudo Parameters: AWS::AccountId, AWS::NotificationARNs, AWS::NoValue, AWS::Region, AWS::StackId, AWS::StackName

Reference:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
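As an illustration, a template can reference pseudo parameters with Ref or Fn::Sub without declaring them; the sketch below assembles such a fragment as a Python dictionary (the bucket resource and naming scheme are hypothetical).

```python
import json

# Pseudo parameters are resolved by CloudFormation at deploy time;
# they never appear in the Parameters section.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": {
                    "Fn::Sub": "${AWS::StackName}-logs-${AWS::AccountId}-${AWS::Region}"
                }
            },
        }
    },
    "Outputs": {
        "StackIdentifier": {"Value": {"Ref": "AWS::StackId"}},
        "Notifications": {"Value": {"Fn::Join": [",", {"Ref": "AWS::NotificationARNs"}]}},
    },
}

print(json.dumps(template, indent=2))
```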


Q15. Which of these is not a reason a Multi-AZ RDS instance will fail over?

A. An Availability Zone outage

B. A manual failover of the DB instance was initiated using Reboot with failover

C. To autoscale to a higher instance class

D. The primary DB instance fails 

Answer: C

Explanation:

The primary DB instance switches over automatically to the standby replica if any of the following conditions occur: an Availability Zone outage, the primary DB instance fails, the DB instance's server type is changed, the operating system of the DB instance is undergoing software patching, or a manual failover of the DB instance was initiated using Reboot with failover.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
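The "Reboot with failover" path in the list above corresponds to the following hedged boto3 call (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# ForceFailover=True makes the reboot fail over to the Multi-AZ standby
# instead of simply restarting the primary in place.
rds.reboot_db_instance(
    DBInstanceIdentifier="my-multi-az-db",  # placeholder
    ForceFailover=True,
)
```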


Q16. You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach?

A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.

B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.

C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

Answer: A

Explanation:

There is no such thing as a cross-region ELB, a cross-region Auto Scaling Group, or a DynamoDB Multi-Region table. The only option that makes sense is the cross-region replication configuration with two ELBs and ASGs plus a Route53 Latency DNS Record with Failover.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
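Cross-region replication in answer A is built on DynamoDB Streams. A minimal sketch of the prerequisite step, enabling a stream on the source table (the table name and region are placeholders; the replication process that consumes the stream runs separately):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # source region, placeholder

# The replication tooling reads this stream and applies each change
# to the standby table in the other region.
dynamodb.update_table(
    TableName="api-data",  # placeholder
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```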


Q17. When thinking of AWS Elastic Beanstalk's model, which is true?

A. Applications have many deployments, deployments have many environments.

B. Environments have many applications, applications have many deployments.

C. Applications have many environments, environments have many deployments.

D. Deployments have many environments, environments have many applications. 

Answer: C

Explanation:

Applications group logical services. Environments belong to Applications and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to Environments and are pushes of bundles of code for the environments to run.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
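The hierarchy can be seen directly in the API: applications are created first, environments are created inside an application, and deployments are application versions pushed to an environment. A hedged sketch (all names, the S3 location, and the solution stack are placeholders):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# 1. Application: the top-level grouping.
eb.create_application(ApplicationName="my-app")

# 2. Environment: belongs to the application (e.g. dev, stage, prod).
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",  # placeholder stack
)

# 3. Deployment: an application version pushed into an environment.
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "my-app-v1.zip"},  # placeholders
)
eb.update_environment(EnvironmentName="my-app-prod", VersionLabel="v1")
```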


Q18. For AWS CloudFormation, which is true?

A. Custom resources using SNS have a default timeout of 3 minutes.

B. Custom resources using SNS do not need a <code>ServiceToken</code> property.

C. Custom resources using Lambda and <code>Code.ZipFile</code> allow inline nodejs resource composition.

D. Custom resources using Lambda do not need a <code>ServiceToken</code> property.

Answer: C

Explanation:

Code is a property of the AWS::Lambda::Function resource that enables you to specify the source code of an AWS Lambda (Lambda) function. You can point to a file in an Amazon Simple Storage Service (Amazon S3) bucket or specify your source code as inline text (for nodejs runtime environments only).

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
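A hedged sketch of answer C as a template fragment built in Python: an inline nodejs function declared through Code.ZipFile, plus a custom resource whose ServiceToken points at it (the function body, role ARN, runtime, and resource names are hypothetical).

```python
import json

# Inline nodejs handler; the cfn-response module is available to ZipFile-defined
# functions and is used to signal success back to CloudFormation.
INLINE_JS = """\
const response = require('cfn-response');
exports.handler = (event, context) => {
  response.send(event, context, response.SUCCESS, { Message: 'done' });
};
"""

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CustomResourceFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Runtime": "nodejs18.x",                                   # placeholder runtime
                "Handler": "index.handler",
                "Role": "arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role
                "Code": {"ZipFile": INLINE_JS},                            # inline source, nodejs only
            },
        },
        "MyCustomThing": {
            "Type": "Custom::MyThing",
            "Properties": {
                "ServiceToken": {"Fn::GetAtt": ["CustomResourceFunction", "Arn"]},
            },
        },
    },
}

print(json.dumps(template, indent=2))
```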


Q19. You run a clustered NoSQL database on AWS EC2 using AWS EBS. You need to reduce latency for database response times. Performance is the most important concern, not availability. You did not perform the initial setup; someone without much AWS knowledge did, so you are not sure whether everything was configured optimally. Which of the following is NOT likely to be an issue contributing to increased latency?

A. The EC2 instances are not EBS Optimized.

B. The database and requesting system are both in the wrong Availability Zone.

C. The EBS Volumes are not using PIOPS.

D. The database is not running in a placement group. 

Answer: B

Explanation:

For the highest possible performance, all instances in a clustered database like this one should be in a single Availability Zone in a placement group, using EBS-optimized instances and PIOPS SSD EBS Volumes. Which particular Availability Zone the system runs in should not matter, as long as it is the same one the requesting resources are in.

Reference:       http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
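A hedged sketch showing the three knobs the explanation mentions: a cluster placement group, EBS-optimized instances, and a Provisioned IOPS (io1) data volume (the AMI, instance type, sizes, and names are placeholders).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# A cluster placement group keeps the database nodes on low-latency networking
# inside a single Availability Zone.
ec2.create_placement_group(GroupName="nosql-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="i3.2xlarge",           # placeholder instance type
    MinCount=3,
    MaxCount=3,
    EbsOptimized=True,                   # dedicated throughput to EBS
    Placement={"GroupName": "nosql-cluster"},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeType": "io1", "VolumeSize": 500, "Iops": 10000},  # PIOPS volume
    }],
)
```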


Q20. Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once. You have two EIPs per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened?

A. You didn't choose the Development version of the AMI you are using.

B. You didn't set the Development flag to true when deploying EC2 instances.

C. You hit the soft limit of 5 EIPs per region and requested a 6th.

D. You hit the soft limit of 2 VPCs per region and requested a 3rd. 

Answer: C

Explanation:

There is a soft limit of 5 EIPs per region for VPC on new accounts. The third environment could not allocate the 6th EIP.

Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc
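The failure surfaces as an AddressLimitExceeded error when the stack tries to allocate the sixth address; a small sketch of how that looks through boto3:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    # The sixth allocation in a region at the default soft limit of 5 fails here.
    address = ec2.allocate_address(Domain="vpc")
    print("Allocated", address["AllocationId"])
except ClientError as err:
    if err.response["Error"]["Code"] == "AddressLimitExceeded":
        print("EIP soft limit reached; request a limit increase or release unused EIPs")
    else:
        raise
```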