Want to know Certleader BDS-C00 Exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Big Data - Specialty certification experience? Study tested Amazon-Web-Services BDS-C00 answers to improve on BDS-C00 questions at Certleader. Get success with an absolute guarantee to pass the Amazon-Web-Services BDS-C00 (AWS Certified Big Data - Specialty) test on your first attempt.

Free demo questions for Amazon-Web-Services BDS-C00 Exam Dumps Below:

NEW QUESTION 1
An organization uses Amazon Elastic MapReduce (EMR) to process a series of extract-transform-load
(ETL) steps that run in sequence. The output of each step must be fully processed in subsequent steps but will not be retained.
Which of the following techniques will meet this requirement most efficiently?

  • A. Use the EMR File System (EMRFS) to store the outputs from each step as objects in Amazon Simple Storage Service (S3).
  • B. Use the s3n URI to store the data to be processed as objects in Amazon S3.
  • C. Define the ETL steps as separate AWS Data Pipeline activities.
  • D. Load the data to be processed into HDFS and then write the final output to Amazon S3.

Answer: A
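As a reference sketch for option A: EMRFS lets each step address S3 directly with s3:// URIs, so one step's output becomes the next step's input without retaining anything on the cluster. The bucket, script names, and spark-submit usage below are illustrative assumptions, not part of the question.

```python
# Illustrative EMR step definitions (bucket and script names are hypothetical).
# Each step reads the previous step's EMRFS output from S3, so no
# intermediate data needs to be retained on the cluster's HDFS.

def etl_step(name, script, src, dest):
    """Build one EMR step that reads `src` and writes `dest` via EMRFS."""
    return {
        "Name": name,
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", script, src, dest],
        },
    }

steps = [
    etl_step("extract", "s3://my-etl/scripts/extract.py",
             "s3://my-etl/raw/", "s3://my-etl/stage1/"),
    etl_step("transform", "s3://my-etl/scripts/transform.py",
             "s3://my-etl/stage1/", "s3://my-etl/stage2/"),
]
# These dicts would be passed to boto3's emr.add_job_flow_steps(...).
```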

NEW QUESTION 2
An administrator needs to manage a large catalog of items from various external sellers. The administrator needs to determine whether items should be classified as minimally dangerous, dangerous, or highly dangerous based on their textual descriptions. The administrator already has some items labeled with the danger attribute, but receives hundreds of new item descriptions every day without such classification.
The administrator has a system that captures dangerous-goods reports from the customer support team or from user feedback. What is a cost-effective architecture to solve this issue?

  • A. Build a set of regular expression rules that are based on the existing example
  • B. And run them on the DynamoDB streams as every new item description is added to the system.
  • C. Build a Kinesis Streams process that captures and marks the relevant items in the dangerous goods reports using a Lambda function once more than two reports have been filed.
  • D. Build a machine learning model to properly classify dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.
  • E. Build a machine learning model with binary classification for dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.

Answer: C

NEW QUESTION 3
You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs
for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could retrieve the logs off your servers to help you troubleshoot the bug.
Which technique should you use to make sure you are able to review your logs after your instances have shut down?

  • A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate
  • B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate
  • C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs
  • D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive
  • E. Install the CloudWatch Logs Agent on your AMI
  • F. Update your scaling policy to enable automated CloudWatch Logs copy

Answer: C
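For context on answer C: the CloudWatch Logs agent is configured through a small INI file baked into the AMI, and because it streams log lines as they are written, the data survives a scale-in. The log group, stream name, and file path below are placeholder assumptions:

```python
import configparser

# Minimal awslogs agent configuration (log group, stream name, and file
# path are placeholders). Because the agent streams lines as they are
# written, log data survives an instance being terminated before anyone
# can copy files off its ephemeral storage.
AWSLOGS_CONF = """\
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/app.log]
file = /var/log/app.log
log_group_name = my-app-logs
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
"""

# Sanity-check that the file parses as INI, the way the agent reads it.
parser = configparser.ConfigParser(interpolation=None)
parser.read_string(AWSLOGS_CONF)
```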

NEW QUESTION 4
Customers have recently been complaining that your web application randomly stops responding. During a deep dive into your logs, the team discovered a major bug in your Java web application. This bug is causing a memory leak that eventually causes the application to crash.
Your web application runs on Amazon EC2 and was built with AWS CloudFormation.
Which techniques should you use to help detect these problems faster, as well as help eliminate the server’s unresponsiveness? Choose 2 answers

  • A. Update your AWS CloudFormation configuration and enable a CustomResource that uses cfn- signal to detect memory leaks
  • B. Update your CloudWatch metric granularity config for all Amazon EC2 memory metrics to support five-second granularity
  • C. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory becomes too large
  • D. Update your AWS CloudFormation configuration to take advantage of Auto Scaling group
  • E. Configure an Auto Scaling group policy to trigger off your custom CloudWatch metrics
  • F. Create a custom CloudWatch metric that you push your JVM memory usage to, and create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory usage becomes too large
  • G. Update your AWS CloudFormation configuration to take advantage of the CloudWatch Metrics Agent
  • H. Configure the CloudWatch Metrics Agent to monitor memory usage and trigger an Amazon SNS alarm

Answer: CD
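The custom-metric-plus-alarm pattern that options C and F describe can be sketched as the two API payloads below; the namespace, threshold, and SNS topic ARN are invented for illustration.

```python
# Hedged sketch: the application pushes its JVM heap usage as a custom
# CloudWatch metric, and an alarm pages the team via SNS when usage
# stays too high. Namespace, values, and the topic ARN are made up.
metric_data = {
    "Namespace": "MyApp/JVM",
    "MetricData": [{
        "MetricName": "HeapUsedPercent",
        "Value": 87.5,
        "Unit": "Percent",
    }],
}  # -> cloudwatch.put_metric_data(**metric_data)

alarm = {
    "AlarmName": "jvm-heap-high",
    "Namespace": "MyApp/JVM",
    "MetricName": "HeapUsedPercent",
    "Statistic": "Average",
    "Period": 60,                 # evaluate one-minute averages
    "EvaluationPeriods": 3,       # three breaching periods in a row
    "Threshold": 90.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:pager-topic"],
}  # -> cloudwatch.put_metric_alarm(**alarm)
```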

NEW QUESTION 5
When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?

  • A. Data is unavailable until the instance is restarted
  • B. Data is automatically deleted
  • C. Data is automatically saved as an EBS snapshot
  • D. Data is automatically saved as an EBS volume

Answer: B

NEW QUESTION 6
A systems engineer for a company proposes digitization and backup of large archives for customers.
The systems engineer needs to provide users with secure storage that ensures data can never be tampered with once it has been uploaded. How should this be accomplished?

  • A. Create an Amazon Glacier Vault.
  • B. Specify a “Deny” Vault lock policy on this vault to block “glacier:DeleteArchive”.
  • C. Create an Amazon S3 bucket.
  • D. Specify a “Deny” bucket policy on this bucket to block “s3:DeleteObject”.
  • E. Create an Amazon Glacier Vault.
  • F. Specify a “Deny” vault access policy on this Vault to block “glacier:DeleteArchive”.
  • G. Create a secondary AWS account containing an Amazon S3 bucket.
  • H. Grant “s3:PutObject” to the primary account.

Answer: A
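A minimal sketch of the Vault Lock policy described in options A/B, with an example account ID and vault name; once the lock is completed, the policy is immutable, which is what makes the archive tamper-proof.

```python
import json

# Vault Lock policy that denies archive deletion for everyone. The
# account ID and vault name are examples, not real resources.
VAULT_LOCK_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyArchiveDeletion",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/archive-vault",
    }],
}
policy_json = json.dumps(VAULT_LOCK_POLICY)
# -> glacier.initiate_vault_lock(vaultName="archive-vault",
#                                policy={"Policy": policy_json})
# followed by complete_vault_lock() to make the policy permanent.
```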

NEW QUESTION 7
When will you incur costs with an Elastic IP address (EIP)?

  • A. When an EIP is allocated.
  • B. When it is allocated and associated with a running instance.
  • C. When it is allocated and associated with a stopped instance.
  • D. Costs are incurred regardless of whether the EIP is associated with a running instance.

Answer: C

NEW QUESTION 8
A customer has a machine learning workflow that consists of multiple quick cycles of reads-writes-
reads on Amazon S3. The customer needs to run the workflow on EMR but is concerned that the reads in subsequent cycles will miss new data, critical to the machine learning, from the prior cycles.
How should the customer accomplish this?

  • A. Turn on EMRFS consistent view when configuring the EMR cluster
  • B. Use AWS Data Pipeline to orchestrate the data processing cycles
  • C. Set Hadoop.data.consistency = true in the core-site.xml file
  • D. Set Hadoop.s3.consistency = true in the core-site.xml file

Answer: A
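For reference, option A's EMRFS consistent view is enabled through the emrfs-site configuration classification at cluster creation; it tracks S3 object metadata in DynamoDB so later read cycles see earlier writes. The retry count shown is an arbitrary example.

```python
# EMRFS consistent view is switched on through the emrfs-site
# classification when the cluster is created. It records S3 object
# metadata in DynamoDB and retries reads that appear inconsistent.
emrfs_configuration = [{
    "Classification": "emrfs-site",
    "Properties": {
        "fs.s3.consistent": "true",
        "fs.s3.consistent.retryCount": "5",  # arbitrary example value
    },
}]
# -> emr.run_job_flow(..., Configurations=emrfs_configuration)
```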

NEW QUESTION 9
A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies
the security group of that DB. How can the user configure that?

  • A. It is not possible to get the notifications on a change in the security group
  • B. Configure SNS to monitor security group changes
  • C. Configure event notification on the DB security group
  • D. Configure the CloudWatch alarm on the DB for a change in the security group

Answer: C
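A hedged sketch of option C: an RDS event subscription scoped to a DB security group that publishes configuration-change events to SNS. The topic ARN and security group name are placeholders.

```python
# RDS event subscription covering configuration changes on a DB
# security group. The SNS topic ARN and group name are invented.
subscription = {
    "SubscriptionName": "db-sg-changes",
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-alerts",
    "SourceType": "db-security-group",
    "EventCategories": ["configuration change"],
    "SourceIds": ["my-db-security-group"],
}
# -> rds.create_event_subscription(**subscription)
# Any modification to the group then produces an SNS notification.
```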

NEW QUESTION 10
You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose 2 answers

  • A. Install an Amazon CloudWatch Logs Agent on every web server during the bootstrap process
  • B. Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs
  • C. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics
  • D. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier
  • E. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated
  • F. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour
  • G. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket
  • H. Ensure that the operating system shutdown process triggers a logs transmission when the Amazon EC2 instance is stopped/terminated
  • I. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour
  • J. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process
  • K. Create a log group object in AWS Data Pipeline, and define metric filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour

Answer: AC

NEW QUESTION 11
A solutions architect for a logistics organization ships packages from thousands of suppliers to end
customers. The architect is building a platform where suppliers can view the status of one or more of their shipments. Each supplier can have multiple roles that will only allow access to specific fields in the resulting information.
Which strategy allows the appropriate level of access control and requires the LEAST amount of
management work?

  • A. Send the tracking data to Amazon Kinesis Streams
  • B. Use AWS Lambda to store the data in an Amazon DynamoDB table
  • C. Generate temporary AWS credentials for the suppliers’ users with AWS STS, specifying fine-grained security policies to limit access only to their application data.
  • D. Send the tracking data to Amazon Kinesis Firehose
  • E. Use Amazon S3 notifications and AWS Lambda to prepare files in Amazon S3 with appropriate data for each supplier’s role
  • F. Generate temporary AWS credentials for the suppliers’ users with AWS STS
  • G. Limit access to the appropriate files through security policies.
  • H. Send the tracking data to Amazon Kinesis Streams
  • I. Use Amazon EMR with Spark Streaming to store the data in HBase
  • J. Create one table per supplier
  • K. Use HBase Kerberos integration with the suppliers’ users
  • L. Use HBase ACL-based security to limit access for the roles to their specific tables and columns.
  • M. Send the tracking data to Amazon Kinesis Firehose
  • N. Store the data in an Amazon Redshift cluster
  • O. Create views for the suppliers’ users and roles
  • P. Allow suppliers access to the Amazon Redshift cluster using a user limited to the application views.

Answer: B

NEW QUESTION 12
A US-based company is expanding its web presence into Europe. The company wants to extend its AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents?

  • A. Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks
  • B. Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions
  • C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions
  • D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions

Answer: C
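Option C's geolocation policy amounts to two record sets for the same name, one per continent, each pointing at that region's load balancer. The domain and ELB DNS names below are made up for illustration.

```python
# Two geolocation records for the same DNS name: North American users
# resolve to the us-east-1 load balancer, European users to eu-west-1.
def geo_record(continent, target):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": f"geo-{continent}",
            "GeoLocation": {"ContinentCode": continent},
            "ResourceRecords": [{"Value": target}],
        },
    }

change_batch = {"Changes": [
    geo_record("NA", "elb-us-east-1.example.com"),
    geo_record("EU", "elb-eu-west-1.example.com"),
]}
# -> route53.change_resource_record_sets(HostedZoneId="Z...",
#                                        ChangeBatch=change_batch)
```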

NEW QUESTION 13
A company has reproducible data that they want to store on Amazon Web Services. The company may want to retrieve the data on a frequent basis. Which Amazon web services storage option allows the customer to optimize storage costs and still achieve high availability for their data?

  • A. Amazon S3 Reduced Redundancy Storage
  • B. Amazon EBS Magnetic Volume
  • C. Amazon Glacier
  • D. Amazon S3 Standard Storage

Answer: A

NEW QUESTION 14
A new algorithm has been written in Python to identify SPAM e-mails. The algorithm analyzes the free text contained within a sample set of 1 million e-mails stored on Amazon S3. The algorithm must be scaled across a production dataset of 5 PB, which also resides in Amazon S3 storage.
Which AWS service strategy is best for this use case?

  • A. Copy the data into Amazon ElastiCache to perform text analysis on the in-memory data and export the results of the model into Amazon Machine Learning
  • B. Use Amazon EMR to parallelize the text analysis tasks across the cluster using a streaming program step
  • C. Use Amazon Elasticsearch Service to store the text and then use the Python Elasticsearch client to run analysis against the text index
  • D. Initiate a Python job from AWS Data Pipeline to run directly against the Amazon S3 text files

Answer: C

Explanation:
Reference: https://aws.amazon.com/blogs/database/indexing-metadata-in-amazon-elasticsearch-service-using-aws-lambda-and-python/

NEW QUESTION 15
A user has launched an EC2 instance and deployed a production application on it. The user wants to prevent mistakes by the production team from causing accidental termination. How can the user achieve this?

  • A. The user can set the DisableApiTermination attribute to avoid accidental termination
  • B. It is not possible to avoid accidental termination
  • C. The user can set the Deletion termination flag to avoid accidental termination
  • D. The user can set the InstanceInitiatedShutdownBehavior flag to avoid accidental termination

Answer: A
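Option A boils down to a single instance attribute; the instance ID below is a placeholder.

```python
# Enable termination protection on a production instance. While the
# attribute is set, a TerminateInstances API call fails until the
# attribute is cleared, guarding against accidental termination.
params = {
    "InstanceId": "i-0123456789abcdef0",  # placeholder instance ID
    "DisableApiTermination": {"Value": True},
}
# -> ec2.modify_instance_attribute(**params)
```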

NEW QUESTION 16
An administrator tries to use the Amazon Machine Learning service to classify social media posts that mention the administrator’s company into posts that require a response and posts that do not. The training dataset of 10,000 posts contains the details of each post, including the timestamp, author, and full text of the post. The administrator is missing the target labels that are required for training. Which Amazon Machine Learning model is the most appropriate for the task?

  • A. Unary classification model, where the target class is the require-response post
  • B. Binary classification model, where the two classes are require-response and does-not-require- response
  • C. Multi-class prediction model, with two classes require-response and does-not-require response
  • D. Regression model where the predicted value is the probability that the post requires a response

Answer: B

NEW QUESTION 17
A telecommunications company needs to predict customer churn (i.e., customers who decide to
switch to a competitor). The company has historic records of each customer, including monthly consumption patterns, calls to customer service, and whether the customer ultimately quit the service. All of this data is stored in Amazon S3. The company needs to know which customers are likely to churn soon so that it can win back their loyalty.
What is the optimal approach to meet these requirements?

  • A. Use the Amazon Machine Learning service to build a binary classification model based on the dataset stored in Amazon S3. The model will be used regularly to predict the churn attribute for existing customers
  • B. Use Amazon QuickSight to connect to the data stored in Amazon S3 to obtain the necessary business insight
  • C. Plot the churn trend graph to extrapolate churn likelihood for existing customers
  • D. Use EMR to run Hive queries to build a profile of a churning customer
  • E. Apply the profile to existing customers to determine the likelihood of churn
  • F. Use a Redshift cluster to COPY the data from Amazon S3. Create a user-defined function in Redshift that computes the likelihood of churn

Answer: A

NEW QUESTION 18
You are managing the AWS account of a big organization. The organization has more than
1,000 employees and wants to provide access to various services for most of the employees. Which of the below-mentioned options is the best possible solution in this case?

  • A. The user should create a separate IAM user for each employee and provide access to them as per the policy
  • B. The user should create an IAM role and attach STS to the role
  • C. The user should attach that role to the EC2 instance and set up AWS authentication on that server
  • D. The user should create IAM groups as per the organization’s departments and add each user to the groups for better access control
  • E. Attach an IAM role with the organization’s authentication service to authorize each user for various AWS services

Answer: D

NEW QUESTION 19
Your company operates a website for promoters to sell tickets for entertainment events. You are
using a load balancer in front of an Auto Scaling group of web servers. Promotion of popular events can cause surges of website visitors. During scaling out at these times, newly launched instances are unable to complete configuration quickly enough, leading to user disappointment.
Which options should you choose to improve scaling yet minimize costs? Choose 2 answers

  • A. Create an AMI with the application pre-configured
  • B. Create a new Auto Scaling launch configuration using this new AMI, and configure the Auto Scaling group to launch with this AMI
  • C. Use Auto Scaling pre-warming to launch instances before they are required
  • D. Configure pre-warming to use the CPU trend CloudWatch metric for the group
  • E. Publish a custom CloudWatch metric from your application on the number of tickets sold, and create an Auto Scaling policy based on this
  • F. Use the history of past scaling events for similar event sales to predict future scaling requirements
  • G. Use the Auto Scaling scheduled scaling feature to vary the size of the fleet
  • H. Configure an Amazon S3 bucket for website hosting
  • I. Upload into the bucket an HTML holding page with its ‘x-amz-website-redirect-location’ metadata property set to the load balancer endpoint
  • J. Configure Elastic Load Balancing to redirect to the holding page when the load on the web servers is above a certain level

Answer: DE

NEW QUESTION 20
Your customers located around the globe require low-latency access to private video files. Which
configuration meets these requirements?

  • A. Use Amazon CloudFront with signed URLs
  • B. Use Amazon EC2 with provisioned IOPS Amazon EBS volumes
  • C. Use Amazon S3 with signed URLs
  • D. Use Amazon S3 with access control lists

Answer: A
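For option A, a signed URL embeds a custom policy like the one below, which is then signed with a CloudFront key pair (the signing step is omitted here); the distribution domain and object path are examples.

```python
import json
import time

# Custom policy for a CloudFront signed URL: the private video is only
# reachable through this URL, and only until the expiry time. The
# distribution domain and object path are invented examples.
expires = int(time.time()) + 3600  # link valid for one hour

policy = {
    "Statement": [{
        "Resource": "https://d111111abcdef8.cloudfront.net/videos/intro.mp4",
        "Condition": {
            "DateLessThan": {"AWS:EpochTime": expires},
        },
    }]
}
policy_json = json.dumps(policy, separators=(",", ":"))
# The policy is then signed with the CloudFront key pair's private key
# and appended to the URL as Policy/Signature/Key-Pair-Id parameters.
```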

NEW QUESTION 21
Which of the following are true regarding AWS CloudTrail? Choose 3 answers

  • A. CloudTrail is enabled globally
  • B. CloudTrail is enabled by default
  • C. CloudTrail is enabled on a per-region basis
  • D. CloudTrail is enabled on a per-service basis
  • E. Logs can be delivered to a single Amazon S3 bucket for aggregation
  • F. Logs can only be processed and delivered in the region in which they are generated

Answer: ACE

NEW QUESTION 22
You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB
video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?

  • A. Enable enhanced networking
  • B. Use Amazon S3 multipart upload
  • C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
  • D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

Answer: B
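To see why option B helps, the arithmetic below splits a 5 GiB object into parts that upload in parallel and retry independently; the 16 MiB part size is an arbitrary choice within S3's limits.

```python
# Multipart upload sizing for a 5 GiB video object. Parts upload in
# parallel and a failed part is retried alone, instead of restarting
# the whole 5 GiB transfer.
PART_SIZE = 16 * 1024 * 1024          # 16 MiB per part (min part size is 5 MiB)
OBJECT_SIZE = 5 * 1024 * 1024 * 1024  # 5 GiB object

num_parts = -(-OBJECT_SIZE // PART_SIZE)  # ceiling division
assert num_parts <= 10000  # S3 caps a multipart upload at 10,000 parts

# boto3 applies the same idea automatically, e.g.:
# s3.upload_file("video.mp4", "my-bucket", "video.mp4",
#                Config=TransferConfig(multipart_chunksize=PART_SIZE))
```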

NEW QUESTION 23
A medical record filing system for a government medical fund is using an Amazon S3 bucket to
archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month. Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.
Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors spend a significant amount of time locating such files.
What is the most cost- and time-efficient collection methodology in this situation?

  • A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • B. Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • C. Use Amazon S3 event notifications to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
  • D. Use Amazon S3 event notifications to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.

Answer: D
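Both metadata options (C and D) hinge on turning each S3 object-created event into a queryable record so auditors can look files up by date, patient, or physician instead of scanning the bucket. A sketch of that extraction, assuming a hypothetical physician/patient/date key layout:

```python
# Lambda-style handler that turns S3 event records into metadata items.
# The key layout ("physician/patient/date.pdf") and field names are
# assumptions for illustration, not part of the question.

def handler(event, context=None):
    """Extract queryable metadata from S3 object-created events."""
    items = []
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        physician, patient, filename = key.split("/")
        items.append({
            "YearMonth": filename[:7],  # partition by year and month
            "S3Key": key,
            "Physician": physician,
            "Patient": patient,
        })
    return items  # each item would be written to the metadata store

sample = {"Records": [{"s3": {"object":
          {"key": "physician-42/patient-7/2016-03-14.pdf"}}}]}
```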

NEW QUESTION 24
A clinical trial will rely on medical sensors to remotely assess patient health. Each physician who
participates in the trial requires visual reports each morning. The reports are built from aggregations of all the sensor data taken each minute.
What is the most cost-effective solution for creating this visualization each day?

  • A. Use Kinesis Aggregators Library to generate reports for reviewing the patient sensor data and generate a QuickSight visualization on the new data each morning for the physician to review
  • B. Use a Transient EMR cluster that shuts down after use to aggregate the patient sensor data each night and generate a QuickSight visualization on the new data each morning for the physician to review
  • C. Use Spark streaming on EMR to aggregate the sensor data coming in every 15 minutes and generate a QuickSight visualization on the new data each morning for the physician to review
  • D. Use an EMR cluster to aggregate the patient sensor data each night and provide Zeppelin notebooks that look at the new data residing on the cluster each morning

Answer: AD
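For comparison, option B's transient cluster pattern is a run_job_flow request whose cluster terminates when its last step finishes, so compute is billed only while the nightly aggregation runs; the release label, instance types, and S3 paths are illustrative.

```python
# Transient EMR cluster: aggregate the night's sensor data, then shut
# down automatically. All names and sizes here are invented examples.
job_flow = {
    "Name": "nightly-sensor-aggregation",
    "ReleaseLabel": "emr-5.30.0",
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
             "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
             "InstanceCount": 4},
        ],
        # The key setting: terminate when the last step finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "Steps": [{
        "Name": "aggregate-sensors",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {"Jar": "command-runner.jar",
                          "Args": ["spark-submit",
                                   "s3://trial-data/jobs/aggregate.py"]},
    }],
}
# -> emr.run_job_flow(**job_flow); QuickSight then visualizes the
#    aggregated output each morning.
```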

NEW QUESTION 25
......

Thanks for reading the newest BDS-C00 exam dumps! We recommend you to try the PREMIUM 2passeasy BDS-C00 dumps in VCE and PDF here: https://www.2passeasy.com/dumps/BDS-C00/ (264 Q&As Dumps)