We provide a complete set of Amazon AWS-Certified-Solutions-Architect-Professional practice questions with answers that closely match the actual Amazon AWS-Certified-Solutions-Architect-Professional exam. The exam dumps are created and verified multiple times, and the full exam syllabus is covered in our AWS-Certified-Solutions-Architect-Professional products. The practice questions use the same multiple-choice format as the real Amazon AWS-Certified-Solutions-Architect-Professional exam, making you feel as if you were in the real test environment.


♥♥ 2021 NEW RECOMMEND ♥♥

Free VCE & PDF File for Amazon AWS-Certified-Solutions-Architect-Professional Real Exam (Full Version!)

★ Pass on Your First TRY ★ 100% Money Back Guarantee ★ Realistic Practice Exam Questions

Free Instant Download NEW AWS-Certified-Solutions-Architect-Professional Exam Dumps (PDF & VCE):
Available on: http://www.surepassexam.com/AWS-Certified-Solutions-Architect-Professional-exam-dumps.html

2021 Apr AWS-Certified-Solutions-Architect-Professional exam question

Q31. Your company produces customer-commissioned, one-of-a-kind skiing helmets, combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low latency networking. What architecture would allow you to automate the existing process using a hybrid approach, and ensure that the architecture can support the evolution of processes over time?

A. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta- data. Use an auto-scaling group of G2 instances in a placement group. 

B. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta-data. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). 

C. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). 

D. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group. 

Answer:
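As context for Q31's GPU option, a minimal sketch of launching G2 instances into a cluster placement group. The AMI id, group name, and counts are invented; only the request parameters are built here, so the sketch runs without AWS credentials (a real call would pass these to boto3's `run_instances`).

```python
def g2_launch_params(placement_group: str, count: int) -> dict:
    """Build EC2 RunInstances-style parameters for a low-latency GPU cluster."""
    return {
        "ImageId": "ami-12345678",          # hypothetical CUDA-enabled AMI
        "InstanceType": "g2.2xlarge",       # GPU instance family from option A
        "MinCount": count,
        "MaxCount": count,
        "Placement": {"GroupName": placement_group},  # cluster placement group
    }

params = g2_launch_params("assessment-pg", 4)
# Real call: boto3.client("ec2").run_instances(**params)
print(params["InstanceType"], params["Placement"]["GroupName"])
```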


Q32. Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency sensitive portions of the web site. The most latency sensitive component of the application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements? 

A. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region. 

B. Serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster. 

C. Serve user content from S3, CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences with SQS workers for propagating updates to each table. 

D. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences with SQS workers for propagating DynamoDB updates. 

Answer:
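The queue-driven propagation pattern described in Q32's options can be sketched as follows. A `deque` stands in for SQS and plain dicts for the per-region DynamoDB tables, so the sketch is runnable anywhere; all names are invented.

```python
from collections import deque

queue = deque()                                        # stands in for an SQS queue
regional_tables = {"us-east-1": {}, "eu-west-1": {}}   # per-region table stand-ins

def publish_change(user_id, prefs):
    """Capture a preference change as a queued message."""
    queue.append({"user_id": user_id, "prefs": prefs})

def worker_drain():
    """Worker: apply each queued change to every regional table."""
    while queue:
        msg = queue.popleft()
        for table in regional_tables.values():
            table[msg["user_id"]] = msg["prefs"]

publish_change("u1", {"theme": "dark"})
worker_drain()
print(regional_tables["eu-west-1"]["u1"])  # → {'theme': 'dark'}
```

Reading preferences from a local table in each region keeps the latency-sensitive read path short, while the workers handle cross-region convergence asynchronously.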


Q33. You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data? 

A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic Map Reduce. 

B. Publish web clicks by session to an Amazon SQS queue; then periodically drain these events to Amazon RDS and analyze with SQL. 

C. Push web clicks by session to Amazon Kinesis, then analyze behavior using Kinesis workers. 

D. Write click events directly to Amazon Redshift, and then analyze with SQL. 

Answer:
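For Q33's Kinesis option, a sketch of what a click-event record might look like. The stream name and payload fields are assumptions; only the record payload is constructed, so no actual AWS call is made. Partitioning by session id keeps each session's clicks ordered on one shard.

```python
import json
import time

def click_record(session_id: str, page: str) -> dict:
    """Build a Kinesis PutRecord-style payload for one click event."""
    return {
        "StreamName": "clickstream",    # hypothetical stream name
        "PartitionKey": session_id,     # keeps a session's events on one shard
        "Data": json.dumps({"page": page, "ts": int(time.time())}),
    }

rec = click_record("sess-42", "/products/helmets")
# Real call: boto3.client("kinesis").put_record(**rec)
print(rec["PartitionKey"])  # → sess-42
```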


Q34. You require the ability to analyze a large amount of data which is stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the following would be the most cost efficient way to reduce the runtime of the job? 

A. Create fewer, larger files in Amazon S3. 

B. Use smaller instances that have higher aggregate I/O performance. 

C. Create more, smaller files on Amazon S3. 

D. Add additional cc2.8xlarge instances by introducing a task group. 

Answer:
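Q34's task-group option can be illustrated with a sketch of the request an EMR `add_instance_groups` call would take. Task nodes contribute compute only, without joining HDFS, which is why they suit transient scale-out. The cluster id and group name are invented; only the request body is built, so the sketch runs offline.

```python
def task_group_request(cluster_id: str, count: int) -> dict:
    """Build an EMR AddInstanceGroups-style request for extra task nodes."""
    return {
        "JobFlowId": cluster_id,
        "InstanceGroups": [{
            "Name": "extra-task-nodes",   # hypothetical group name
            "InstanceRole": "TASK",       # task nodes: compute only, no HDFS
            "InstanceType": "cc2.8xlarge",
            "InstanceCount": count,
        }],
    }

req = task_group_request("j-EXAMPLE", 4)
# Real call: boto3.client("emr").add_instance_groups(**req)
print(req["InstanceGroups"][0]["InstanceRole"])  # → TASK
```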


Q35. You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes timeframe. Each traced call can be either active or terminated. An external application needs to know each minute the list of currently active calls. Usually there are a few calls/second, but once per month there is a periodic peak up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible? 

A. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can be equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table. 

B. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the Index. 

C. Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access. 

D. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective. 

Answer:
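The sparse-index idea behind Q35's option D can be shown in plain Python: DynamoDB only projects an item into a GSI when the item carries the index key attribute, so setting "IsActive" on active calls only keeps the index down to the active subset. The filter below mimics that behavior (attribute names are the ones from the question; item shapes are invented).

```python
calls = [
    {"call_id": "c1", "IsActive": "1"},   # active: carries the GSI attribute
    {"call_id": "c2"},                    # terminated: attribute removed
    {"call_id": "c3", "IsActive": "1"},
]

def sparse_index(items, key):
    """Items DynamoDB would project into a sparse GSI keyed on `key`."""
    return [i for i in items if key in i]

active = sparse_index(calls, "IsActive")
print([c["call_id"] for c in active])  # → ['c1', 'c3']
```

Querying this small index each minute is far cheaper than scanning the full table or filtering on a "State" attribute every item carries.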


Up-to-date AWS-Certified-Solutions-Architect-Professional exam answers:

Q36. Your system recently experienced down time. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to: 

-launch, start, stop, and terminate development resources, 

-launch and start production instances. 

A. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances. 

B. Leverage resource based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources. 

C. Create an IAM user which is not allowed to terminate instances by leveraging production EC2 termination protection. 

D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances. 

Answer:
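A tag-conditioned IAM policy of the kind Q36's option B describes might look like the sketch below. The tag key/value and the exact statement are illustrative assumptions, written as a Python dict rather than a live IAM call so the sketch is self-contained; termination is denied on production-tagged instances while launch/start/stop permissions granted elsewhere are unaffected.

```python
def deny_terminate_policy(env_tag_value: str) -> dict:
    """Build an IAM policy denying termination of instances tagged with `env_tag_value`."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                # hypothetical tag key; an explicit Deny overrides any Allow
                "StringEquals": {"ec2:ResourceTag/environment": env_tag_value}
            },
        }],
    }

policy = deny_terminate_policy("production")
print(policy["Statement"][0]["Effect"])  # → Deny
```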


Q37. Refer to the exhibit: an architecture diagram (not reproduced here) of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost effective and efficient manner? 

A. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness. 

B. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup. 

C. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances. Implement fault tolerance against SQS failure by backing up messages to S3. 

D. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages. 

E. Implement message passing between EC2 instances within a batch by exchanging messages through SQS. 

Answer:
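The scaling logic Q37 describes, sizing the worker fleet from queue depth, reduces to a small piece of arithmetic. The sketch below keeps only that logic as a pure function; the CloudWatch alarm and Auto Scaling calls are omitted, and all thresholds are invented.

```python
def desired_workers(queue_depth: int, msgs_per_worker: int = 10,
                    min_workers: int = 1, max_workers: int = 20) -> int:
    """Scale worker count with the SQS backlog, clamped to the group's bounds."""
    needed = -(-queue_depth // msgs_per_worker)   # ceiling division
    return max(min_workers, min(max_workers, needed))

print(desired_workers(0))    # → 1 (never below the minimum)
print(desired_workers(95))   # → 10
print(desired_workers(500))  # → 20 (clamped to the maximum)
```

In a live setup, a CloudWatch alarm on the `ApproximateNumberOfMessagesVisible` metric would trigger scaling policies that move the group toward this target.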


Q38. You are running a news website in the eu-west-1 region that updates every 15 minutes. The website has a world-wide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3, and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring down the average load time to under 0.5 seconds. How would you improve page load times for your users? Choose 3 answers 

A. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site. 

B. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region. 

C. Lower the scale up trigger of your Auto Scaling group to 30% so it scales more aggressively. 

D. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries. 

E. Switch the Amazon RDS database to the high memory extra large instance type. 

Answer: C, D, E 


Q39. You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet, as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements? 

A. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. 

B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. 

C. Configure two routing tables: one that has a default route via the Internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet. 

D. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC. 

Answer:
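The routing design in Q39's options hinges on longest-prefix matching: a default route can point at the Internet gateway while more specific on-premises prefixes (learned via BGP over Direct Connect) win for internal traffic. The sketch below demonstrates that selection rule with the standard library; the CIDRs and gateway ids are invented.

```python
import ipaddress

routes = {
    "0.0.0.0/0": "igw-1234",       # default route: Internet gateway
    "10.50.0.0/16": "vgw-5678",    # hypothetical on-premises prefix via the VPN gateway
}

def next_hop(dest_ip: str) -> str:
    """Pick the most specific matching route (longest prefix wins)."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [c for c in routes if ip in ipaddress.ip_network(c)]
    return routes[max(matches, key=lambda c: ipaddress.ip_network(c).prefixlen)]

print(next_hop("10.50.3.7"))  # → vgw-5678 (on-premises traffic)
print(next_hop("8.8.8.8"))    # → igw-1234 (Internet traffic)
```

This is why a single table with one default route plus specific on-premises routes can serve every subnet: two default routes in one table would be ambiguous.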


Q40. Your company runs a customer facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs), which architecture provides high availability? 

A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ. 

B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. 

C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. 

D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs. 

Answer:
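The capacity constraint in Q40 comes down to simple arithmetic: with servers spread evenly over N AZs, losing one AZ leaves (N-1)/N of the fleet, and that fraction must stay at or above 65%. The sketch below works it out for the 2-AZ and 3-AZ layouts in the options.

```python
def capacity_after_az_loss(total: int, azs: int) -> float:
    """Fraction of server capacity left after one AZ fails, assuming an even spread."""
    return (total - total / azs) / total

print(round(capacity_after_az_loss(6, 3), 3))  # → 0.667 (>= 0.65: meets the requirement)
print(round(capacity_after_az_loss(6, 2), 3))  # → 0.5   (<  0.65: fails)
```

Only the 3-AZ split (2 instances per AZ per tier) survives an AZ loss with at least 65% capacity, which is what distinguishes the 3-AZ options from the 2-AZ ones.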