Passing the Amazon AWS-Certified-Solutions-Architect-Professional exam on short notice is nearly impossible without help. Come to Testking and find the most up-to-date, accurate, and guaranteed Amazon AWS-Certified-Solutions-Architect-Professional practice questions. Our avant-garde AWS-Certified-Solutions-Architect-Professional practice guides will bring you a surprising result.

2021 Mar AWS-Certified-Solutions-Architect-Professional exam questions

Q1. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? Choose 3 answers 

A. Use Dedicated Instances to ensure that each instance has the maximum performance possible. 

B. Add alerts to Amazon CloudWatch to look for high Network In and CPU utilization. 

C. Create processes and capabilities to quickly add and remove rules to the instance OS firewall. 

D. Use an Elastic Load Balancer with auto scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers. 

E. Use an Amazon CloudFront distribution for both static and dynamic content. 

F. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. 

Answer: B, C, E 
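Option B's CloudWatch alerting amounts to a threshold check on metrics such as NetworkIn and CPUUtilization. The sketch below is a local stand-in for a real CloudWatch alarm: the metric names are genuine CloudWatch metrics, but the threshold, period count, and alarm logic are illustrative assumptions, not a `put_metric_alarm` configuration.

```python
# Stand-in for a CloudWatch alarm on NetworkIn (option B): fire when several
# consecutive datapoints exceed a threshold. Values are illustrative.
def alarm(datapoints, threshold, periods=3):
    """Return True once `periods` consecutive datapoints exceed `threshold`."""
    run = 0
    for value in datapoints:
        run = run + 1 if value > threshold else 0
        if run >= periods:
            return True
    return False

network_in = [0.4e9, 3.2e9, 3.5e9, 3.8e9]  # bytes per 5-minute period
print(alarm(network_in, threshold=1e9))    # True -- three periods over 1 GB
```

In production the same logic would be expressed as a CloudWatch alarm (EvaluationPeriods, Threshold) feeding an SNS topic, paired with option C's runbooks for tightening OS firewall rules.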


Q2. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1000 orders per day after 6 months and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? 

A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers. 

B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers. 

C. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers. 

D. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers. 

Answer:
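The SWF pattern described in options B and D — a single decider routing each order to a fleet of activity workers — can be sketched in-process. Plain `queue.Queue` objects stand in for SWF task lists here, and the step names are illustrative assumptions; a real implementation would use the SWF (or Step Functions) APIs.

```python
import queue

# In-process stand-in for an SWF task list; a real deployment would poll SWF.
activity_tasks = queue.Queue()

ORDER_STEPS = ["check", "produce", "quality_control", "package", "ship", "charge"]

def decider(event):
    """Schedule the next activity for an order based on its last completed step.

    An event with no "completed" key is a new order; otherwise advance one
    step. Employees forcing a repeat would simply re-emit the prior step.
    """
    step_index = ORDER_STEPS.index(event["completed"]) + 1 if event.get("completed") else 0
    if step_index < len(ORDER_STEPS):
        activity_tasks.put({"order": event["order"], "step": ORDER_STEPS[step_index]})

def activity_worker():
    """Pull one task from the task list (workers scale out in an ASG)."""
    return activity_tasks.get()

decider({"order": "A-100"})   # new order: schedule the first step
print(activity_worker())      # {'order': 'A-100', 'step': 'check'}
```

The decider holds all routing logic (including repeats after a failed quality check), which is why it runs in its own min/max=1 Auto Scaling group while the activity workers scale with order volume; customer notification would go through SES rather than an app instance.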


Q3. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? Choose 3 answers 

A. Implement third party volume encryption tools 

B. Implement SSL/TLS for all services running on the server 

C. Encrypt data inside your applications before storing it on EBS 

D. Encrypt data using native data encryption drivers at the file system level 

E. Do nothing as EBS volumes are encrypted by default 

Answer: A, C, D 


Q4. Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and the USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements? 

A. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region. 

B. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region. 

C. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process. 

D. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region. 

E. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region. 

Answer:


Q5. An AWS customer runs a public blogging website. The site's users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible levels 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve its users' load times. Which of the following recommendations would you make to the customer? 

A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to CloudFront identity. 

B. Create a CloudFront distribution with "US/Europe" price class for US/Europe users and a different CloudFront distribution with "All Edge Locations" for the remaining users. 

C. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true and minimum TTL of 0. 

D. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it was uploaded to be used with CloudFront behaviors. 

Answer:
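Option D's partitioning scheme is easy to sketch: key each blog entry in S3 by its upload month, so CloudFront cache behaviors with per-prefix path patterns can apply short TTLs to recently updated entries and long TTLs to stale ones. The key layout below is an illustrative assumption, not a prescribed format.

```python
from datetime import date

# Partition blog-entry S3 keys by upload month (option D). Each monthly
# prefix can then back a separate CloudFront cache behavior.
def entry_key(entry_id: str, uploaded: date) -> str:
    return f"blog/{uploaded.year}/{uploaded.month:02d}/{entry_id}"

print(entry_key("post-42", date(2021, 3, 15)))  # blog/2021/03/post-42
```

With this layout, a behavior matching the current month's prefix (e.g. `blog/2021/03/*`) would use a short TTL to pick up frequent edits, while behaviors for prefixes older than 6 months could use a very long TTL, since those entries are never updated.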


Updated AWS-Certified-Solutions-Architect-Professional questions:

Q6. Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs for your NOC members? 

A. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. 

B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console. 

C. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console. 

D. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. 

Answer:
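For context on option A: in SAML federation, `AssumeRoleWithSAML` returns temporary credentials, which are exchanged for a SigninToken at the AWS federation endpoint, and a console login URL is then built from that token. The sketch below shows only the final URL construction; the STS and federation network calls are omitted, and the token value is a placeholder.

```python
import urllib.parse

# AWS federation sign-in endpoint (the real endpoint used for console SSO).
FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def console_login_url(signin_token: str, issuer: str, destination: str) -> str:
    """Build the console login URL from an already-obtained SigninToken."""
    params = urllib.parse.urlencode({
        "Action": "login",
        "Issuer": issuer,            # your IdP's URL, shown on console sign-out
        "Destination": destination,  # console page to land on
        "SigninToken": signin_token,
    })
    return f"{FEDERATION_ENDPOINT}?{params}"

url = console_login_url("PLACEHOLDER_TOKEN", "https://idp.example.com",
                        "https://console.aws.amazon.com/ec2")
print(url.startswith(FEDERATION_ENDPOINT + "?Action=login"))  # True
```

This is why option A needs no per-user IAM accounts: the NOC members authenticate once against the on-premises IdP, and the resulting federated session carries them into the console.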


Q7. Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache Cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why? 

A. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails. 

B. No, if the cache node fails you can always get the same data from the DB without having any availability impact. 

C. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact. 

D. Yes, you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails. 

Answer:


Q8. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution? 

A. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput. 

B. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput. 

C. The standard EBS Instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume. 

D. Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1TB. 

E. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS. 

Answer:
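Whichever option you favor, it is worth checking the arithmetic behind option A: how many 16 KB random IOPS can a 500 Mbps EBS-Optimized channel actually carry? (Network throughput is taken in decimal megabits, I/O size in binary kilobytes, as is conventional.)

```python
# Ceiling on 16 KB IOPS imposed by a 500 Mbps EBS-Optimized channel.
LINK_MBPS = 500
IO_SIZE_KB = 16

link_bytes_per_sec = LINK_MBPS * 1000 * 1000 / 8   # 62.5 MB/s
max_iops = link_bytes_per_sec / (IO_SIZE_KB * 1024)
print(round(max_iops))  # 3815 -- far below the 24,000 provisioned IOPS
```

The calculation shows the EBS-Optimized channel, not the volumes, is the binding constraint at this I/O size, which is the reasoning behind choosing an instance with larger dedicated EBS throughput.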


Q9. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways, and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? Choose 4 answers 

A. Peer identity authentication between VPN gateway and customer gateway. 

B. End-to-end identity authentication. 

C. Data integrity protection across the Internet. 

D. End-to-end protection of data in transit. 

E. Data encryption across the Internet. 

F. Protection of data in transit over the Internet. 

Answer: A, C, E, F 


Q10. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? Choose 3 answers 

A. A VM Import of the current virtual machine 

B. An Internet Gateway to allow a VPN connection 

C. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses 

D. An IP address space that does not conflict with the one on-premises 

E. An Elastic IP address on the VPC instance 

F. An AWS Direct Connect link between the VPC and the network housing the internal services 

Answer: A, D, F
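The need for a non-conflicting IP address space (option D) can be checked mechanically: if the VPC CIDR overlaps the on-premises range, return traffic to the internal dependencies cannot be routed. Python's stdlib `ipaddress` module makes the check trivial; the ranges below are illustrative.

```python
import ipaddress

# Check a candidate VPC CIDR against the on-premises address space.
on_prem = ipaddress.ip_network("10.0.0.0/16")       # illustrative on-prem range

clashing_vpc = ipaddress.ip_network("10.0.128.0/20")
safe_vpc = ipaddress.ip_network("172.31.0.0/16")

print(on_prem.overlaps(clashing_vpc))  # True  -- routing back on-prem breaks
print(on_prem.overlaps(safe_vpc))      # False -- safe choice for the VPC CIDR
```

Running this check before creating the VPC is exactly the kind of step that lets an undocumented legacy app keep talking to its dependencies unchanged, alongside importing the VM as-is and linking the networks with Direct Connect.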