Proper study guides for the Amazon AWS Certified Solutions Architect - Associate certification begin with AWS Solutions Architect Associate preparation products designed to deliver tested questions and help you pass the exam on your first attempt. Try the free AWS Solutions Architect Associate certification demo right now.


♥♥ 2021 NEW RECOMMEND ♥♥

Free VCE & PDF File for Amazon AWS-Solution-Architect-Associate Real Exam (Full Version!)

★ Pass on Your First TRY ★ 100% Money Back Guarantee ★ Realistic Practice Exam Questions

Free Instant Download NEW AWS-Solution-Architect-Associate Exam Dumps (PDF & VCE):
Available on: http://www.surepassexam.com/AWS-Solution-Architect-Associate-exam-dumps.html

Q221. Your company has multiple IT departments, each with their own VPC. Some VPCs are located within the same AWS account, and others in a different AWS account. You want to peer together all VPCs to enable the IT departments to have full access to each other's resources. There are certain limitations placed on VPC peering. Which of the following statements is incorrect in relation to VPC peering?

A. Private DNS values cannot be resolved between instances in peered VPCs.

B. You can have up to 3 VPC peering connections between the same two VPCs at the same time.

C. You cannot create a VPC peering connection between VPCs in different regions.

D. You have a limit on the number of active and pending VPC peering connections that you can have per VPC.

Answer: B

Explanation:

To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:

You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.

You cannot create a VPC peering connection between VPCs in different regions.

You have a limit on the number of active and pending VPC peering connections that you can have per VPC.

VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.

You cannot have more than one VPC peering connection between the same two VPCs at the same time.

The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.

A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs.

Unicast reverse path forwarding in VPC peering connections is not supported.

You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR blocks of the peer VPC as the source or destination of your security group's ingress or egress rules.

Private DNS values cannot be resolved between instances in peered VPCs.

Reference:

http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html#vpc-peering-limitations
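The overlapping-CIDR restriction above is easy to check before requesting a peering connection. A minimal sketch using Python's standard ipaddress module (the CIDR values are hypothetical examples, not tied to any real VPC):

```python
import ipaddress

def cidrs_overlap(cidr_a, cidr_b):
    """Return True if two CIDR blocks overlap; a VPC peering request
    between VPCs with overlapping CIDRs would be rejected."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# 10.0.128.0/17 sits inside 10.0.0.0/16, so these VPCs could not be peered.
print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True
# Disjoint ranges are fine.
print(cidrs_overlap("10.0.0.0/16", "172.31.0.0/16"))  # False
```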


Q222. You are migrating an internal server in your data center to an EC2 instance with an EBS volume. Your server disk usage is around 500GB, so you just copied all your data to a 2TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?

A. to a 2TB EBS volume

B. to an S3 bucket with 2 objects of 1TB

C. to a 500GB EBS volume

D. to an S3 bucket as a 2TB snapshot 

Answer: B

Explanation:

An import to Amazon EBS will have different results depending on whether the capacity of your storage device is less than or equal to 1 TB or greater than 1 TB. The maximum size of an Amazon EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on Amazon S3. The target location is determined based on the total capacity of the device, not the amount of data on the device.

Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html
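The rule above can be modeled as a simple capacity check. A sketch in Python (the function name and return labels are illustrative, not an AWS API):

```python
def import_target(device_capacity_tb, ebs_snapshot_limit_tb=1):
    """Illustrates the AWS Import/Export rule described above: the target
    is chosen by the device's total capacity, not the data stored on it."""
    if device_capacity_tb <= ebs_snapshot_limit_tb:
        return "EBS snapshot"
    return "S3 objects (device image chunked)"

# A 2TB device holding only 500GB of data still targets S3,
# because capacity, not usage, decides.
print(import_target(2))    # S3 objects (device image chunked)
print(import_target(0.5))  # EBS snapshot
```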


Q223. SQL Server _ store logins and passwords in the master database.

A. can be configured to but by default does not

B. doesn't

C. does 

Answer: C


Q224. Amazon EC2 provides a _. It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.

A. web database

B. .net framework

C. Query API

D. C library 

Answer: C

Explanation:

Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html
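The Query API request format described above can be sketched with Python's standard library. Note that a real request would also need Signature Version 4 authentication, which is omitted here; the parameters shown are illustrative:

```python
from urllib.parse import urlencode

# A Query API call is an HTTP(S) GET or POST with an Action parameter.
params = {"Action": "DescribeInstances", "Version": "2016-11-15"}
url = "https://ec2.amazonaws.com/?" + urlencode(params)

print(url)  # https://ec2.amazonaws.com/?Action=DescribeInstances&Version=2016-11-15
```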


Q225. In Amazon EC2, partial instance-hours are billed _.

A. per second used in the hour

B. per minute used

C. by combining partial segments into full hours

D. as full hours 

Answer: D

Explanation:

Partial instance-hours are billed up to the next full hour.

Reference: http://aws.amazon.com/ec2/faqs/
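The rounding rule can be illustrated in a few lines of Python. This reflects the classic hourly billing model this question refers to; AWS has since introduced per-second billing for many instance types:

```python
import math

def billed_instance_hours(minutes_used):
    """Classic EC2 hourly billing: a partial instance-hour is billed
    as a full hour, i.e. usage rounds up to the next whole hour."""
    return math.ceil(minutes_used / 60)

print(billed_instance_hours(61))  # 2 (one minute into the second hour)
print(billed_instance_hours(60))  # 1
```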


Q226. A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.

Which AWS storage and database architecture meets the requirements of the application?

A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.

B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

Answer: C

Explanation:

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ   DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Benefits

Enhanced Durability

Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.

If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.

Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.

Increased Availability

You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).

The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups.

In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for the automatic failover to complete.

Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.

On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.

No Administrative Intervention

DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.

Failover conditions

Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:

Loss of availability in primary Availability Zone

Loss of network connectivity to primary

Compute unit failure on primary

Storage failure on primary

Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
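Because the DB endpoint is unchanged after a Multi-AZ failover, an application typically only needs simple retry logic to resume without manual intervention. A sketch in Python, where flaky_query simulates a brief failover window (no real database is involved; all names are hypothetical):

```python
import time

def call_with_retry(operation, attempts=5, base_delay=0.01):
    """Retry a database operation with exponential backoff; during an RDS
    Multi-AZ failover the endpoint stays the same, so retries suffice."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# Simulated query that fails twice (as during a short failover), then succeeds.
state = {"calls": 0}
def flaky_query():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("failing over")
    return "rows"

print(call_with_retry(flaky_query))  # rows
```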


Q227. Identify a true statement about the On-Demand instances purchasing option provided by Amazon EC2.

A. Pay for the instances that you use by the hour, with no long-term commitments or up-front payments.

B. Make a low, one-time, up-front payment for an instance, reserve it for a one- or three-year term, and pay a significantly lower hourly rate for these instances.

C. Pay for the instances that you use by the hour, with long-term commitments or up-front payments.

D. Make a high, one-time, up-front payment for an instance, reserve it for a one- or three-year term, and pay a significantly higher hourly rate for these instances.

Answer: A

Explanation:

On-Demand instances allow you to pay for the instances that you use by the hour, with no long-term commitments or up-front payments.

Reference:  http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-offerings.html


Q228. You are configuring a new VPC for one of your clients for a cloud migration project, and only a public VPN will be in place. After you created your VPC, you created a new subnet and a new internet gateway, and attached your internet gateway to your VPC. When you launched your first instance into your VPC, you realized that you aren't able to connect to the instance, even though it is configured with an Elastic IP. What should be done to access the instance?

A. A route should be created with destination 0.0.0.0/0 and your internet gateway as the target.

B. Attach another ENI to the instance and connect via new ENI.

C. A NAT instance should be created and all traffic should be forwarded to NAT instance.

D. A NACL should be created that allows all outbound traffic. 

Answer: A

Explanation:

All internet-bound traffic should be routed via the Internet Gateway. So, a route should be created with 0.0.0.0/0 as the destination, and your Internet Gateway as the target.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
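The effect of the 0.0.0.0/0 route can be illustrated with a longest-prefix-match lookup, which is how a VPC route table selects a target. A sketch in Python (the route entries and the igw-12345 ID are hypothetical):

```python
import ipaddress

def route_target(routes, dest_ip):
    """Pick the route whose CIDR contains dest_ip with the longest prefix,
    mimicking VPC route-table behavior (illustrative only)."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in routes:
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# The local route covers the VPC; 0.0.0.0/0 catches everything else.
routes = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "igw-12345")]
print(route_target(routes, "10.0.1.5"))       # local
print(route_target(routes, "93.184.216.34"))  # igw-12345
```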


Q229. One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However, you are not sure whether you should use gateway-cached volumes or gateway-stored volumes, or even what the differences are. Which statement below best describes those differences?

A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.

B. Gateway-cached is free whilst gateway-stored is not.

C. Gateway-cached is up to 10 times faster than gateway-stored.

D. Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.

Answer: A

Explanation:

Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations:

Gateway-cached volumes — You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.

Gateway-stored volumes — If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site  backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.

Reference:  http://docs.aws.amazon.com/storagegateway/latest/userguide/volume-gateway.html


Q230. You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the web server using client certificate authentication. The solution must be resilient.

Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)

A. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it.

B. Configure your web servers with EIPs. Place the web servers in a Route 53 Record Set and configure health checks against all web servers.

C. Configure ELB with HTTPS listeners, and place the web servers behind it.

D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.

Answer: A, B