DBS-C01 Exam Questions - Online Test



It is faster and easier to pass the Amazon Web Services DBS-C01 exam by using precise AWS Certified Database - Specialty questions and answers. Get immediate access to the up-to-date DBS-C01 exam, find the same core-area DBS-C01 questions with professionally verified answers, and pass your exam with a high score.

Check DBS-C01 free dumps before getting the full version:

NEW QUESTION 1
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
  • Update scores in real time whenever a player is playing the game.
  • Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?

  • A. Create a global secondary index with game_id as the partition key
  • B. Create a global secondary index with user_id as the partition key
  • C. Create a composite primary key with game_id as the partition key and user_id as the sort key
  • D. Create a composite primary key with user_id as the partition key and game_id as the sort key

Answer: D
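
The design in answer D can be sketched with boto3. This is a minimal, illustrative sketch; the table name and on-demand billing mode are assumptions, not part of the question:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key: user_id as partition key, game_id as sort key,
# so each item holds one player's score for one game session. A query on
# user_id + game_id retrieves a specific session's score directly.
dynamodb.create_table(
    TableName="GameScores",  # illustrative name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```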

NEW QUESTION 2
A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.
What should a Database Specialist recommend for this user?

  • A. Create an Amazon DynamoDB table with provisioned capacity mode
  • B. Create an Amazon DocumentDB cluster
  • C. Create an Amazon DynamoDB table with on-demand capacity mode
  • D. Create an Amazon Aurora Serverless DB cluster

Answer: C
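
On-demand capacity mode (answer C) is a single table setting. A minimal boto3 sketch, assuming an existing table with an illustrative name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Switch an existing table to on-demand capacity mode: DynamoDB then
# scales with unpredictable traffic and bills per request, with no
# capacity planning or instance management.
dynamodb.update_table(
    TableName="KeyValueStore",  # illustrative name
    BillingMode="PAY_PER_REQUEST",
)
```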

NEW QUESTION 3
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

  • A. Deploy multiple read replicas and have the team members make changes to separate replica instances
  • B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
  • C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
  • D. Enable the Amazon RDS for MySQL Backtrack feature

Answer: C
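
Backtrack (answer C) rewinds the cluster in place instead of restoring, which is what removes the hours of waiting. A hedged boto3 sketch; the cluster identifier and rewind window are illustrative, and Backtrack must have been enabled on the cluster (BacktrackWindow > 0) for the call to succeed:

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")

# Rewind the Aurora MySQL cluster to just before a bad schema change.
# No snapshot restore or new cluster is involved, so this completes in
# minutes rather than hours.
rds.backtrack_db_cluster(
    DBClusterIdentifier="dev-aurora-cluster",  # illustrative identifier
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=10),
)
```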

NEW QUESTION 4
A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

  • A. Check that Amazon S3 has an IAM role granting read access to Neptune
  • B. Check that an Amazon S3 VPC endpoint exists
  • C. Check that a Neptune VPC endpoint exists
  • D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
  • E. Check that Neptune has an IAM role granting read access to Amazon S3

Answer: BE
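
The first check (answer B) can be scripted: the Neptune bulk loader reaches S3 only through a gateway VPC endpoint in the cluster's VPC. A minimal boto3 sketch; the VPC ID is an illustrative placeholder. The other check (answer E) is that an IAM role granting S3 read access is attached to the Neptune cluster itself:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look for a VPC endpoint for the S3 service in the Neptune cluster's VPC.
resp = ec2.describe_vpc_endpoints(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123example"]},  # illustrative VPC ID
        {"Name": "service-name", "Values": ["com.amazonaws.us-east-1.s3"]},
    ]
)
print("S3 VPC endpoint found" if resp["VpcEndpoints"] else "No S3 VPC endpoint")
```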

NEW QUESTION 5
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?

  • A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
  • B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
  • C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
  • D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.

Answer: C
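
The application side of answer C is a single API call at start up. A minimal boto3 sketch; the secret name and JSON field names are illustrative assumptions, as is the rotation Lambda ARN in the comment:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# The instance profile's IAM role is the only principal allowed to read
# the secret; the application fetches credentials at start up.
secret = secrets.get_secret_value(SecretId="prod/ecommerce/db")  # illustrative name
creds = json.loads(secret["SecretString"])
host, user, password = creds["host"], creds["username"], creds["password"]

# The 60-day rotation is configured once, for example:
# secrets.rotate_secret(
#     SecretId="prod/ecommerce/db",
#     RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db",  # hypothetical
#     RotationRules={"AutomaticallyAfterDays": 60},
# )
```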

NEW QUESTION 6
A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?

  • A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
  • B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
  • C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
  • D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Answer: B
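
A quick back-of-envelope check shows why the network-only options are risky and Snowball Edge (answer B) is the safe choice:

```python
# 100 TB over a 500 Mbps link, assuming 100% sustained utilization.
data_bits = 100 * 10**12 * 8   # 100 TB expressed in bits
link_bps = 500 * 10**6         # 500 Mbps
seconds = data_bits / link_bps
print(f"{seconds / 86400:.1f} days")  # ~18.5 days, longer than the 2-week window
```

Even at perfect utilization the transfer overruns the maintenance window, so shipping the bulk of the data on encrypted Snowball Edge devices carries the least risk.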

NEW QUESTION 7
An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts.
How should a Database Specialist address these requirements?

  • A. Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB
  • B. Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into AmazonRedshift
  • C. Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance
  • D. Use DynamoDB Accelerator to offload the reads

Answer: D
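
DAX (answer D) works without significant development effort because its client is API-compatible with DynamoDB. A hedged sketch using the amazon-dax-client Python package; the cluster endpoint and table name are illustrative:

```python
import botocore.session
import amazondax  # pip install amazon-dax-client

# The DAX client exposes the same API shape as the DynamoDB client, so
# the existing GetItem calls work unchanged; repeated reads of similar
# items are served from the cache instead of consuming table reads.
session = botocore.session.get_session()
dax = amazondax.AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],  # illustrative
)
resp = dax.get_item(
    TableName="Orders",  # illustrative table name
    Key={"order_id": {"S": "12345"}},
)
```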

NEW QUESTION 8
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

  • A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
  • B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
  • C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
  • D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

Answer: C
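
Answer C maps to two API calls, one in each Region. A minimal boto3 sketch; the Region names, grant name, key alias, and cluster identifier are illustrative:

```python
import boto3

# In the DESTINATION Region: create a snapshot copy grant that lets
# Amazon Redshift encrypt copied snapshots with a KMS key there.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",   # illustrative name
    KmsKeyId="alias/redshift-dr-key",        # illustrative key in the destination Region
)

# Against the SOURCE cluster: enable cross-Region snapshot copy using that grant.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="prod-redshift",       # illustrative cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```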

NEW QUESTION 9
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?

  • A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
  • B. The client-side application is caching the DNS data and its TTL is set too high
  • C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
  • D. There were no active Aurora Replicas in the Aurora DB cluster

Answer: B
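
Answer B fits the symptoms: the cluster endpoint is a DNS CNAME that is repointed on failover, so a failover that finishes in 15 seconds still leaves clients connecting to the old writer for as long as they cache the stale record. A quick diagnostic sketch using the third-party dnspython package; the endpoint is illustrative:

```python
import dns.resolver  # pip install dnspython

# Check the TTL on the cluster endpoint's CNAME. If the client stack
# (application pool, JVM, or OS resolver) caches the record for longer
# than this, connections keep targeting the demoted writer after failover.
answer = dns.resolver.resolve(
    "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com", "CNAME"  # illustrative
)
print("record TTL (seconds):", answer.rrset.ttl)
```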

NEW QUESTION 10
A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company’s main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second.
Which solution meets these requirements?

  • A. Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.
  • B. Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.
  • C. Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.
  • D. Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.

Answer: D
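
Setting up the global database in answer D is two calls: promote the existing primary cluster, then add the Tokyo secondary. A hedged boto3 sketch; the identifiers, ARN, and engine are illustrative assumptions:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing Aurora cluster in us-east-1 to a global database;
# secondary Regions typically replicate with sub-second lag, meeting the
# dashboards' < 1 second freshness requirement.
rds.create_global_cluster(
    GlobalClusterIdentifier="retail-global",  # illustrative name
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:retail-primary",  # illustrative
)

# Add the read-only secondary cluster in ap-northeast-1 for the dashboards.
rds_apne1 = boto3.client("rds", region_name="ap-northeast-1")
rds_apne1.create_db_cluster(
    DBClusterIdentifier="retail-tokyo",
    GlobalClusterIdentifier="retail-global",
    Engine="aurora-mysql",  # assumption: must match the primary's engine
)
```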

NEW QUESTION 11
A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, leveraging the Advanced Auditing feature in Aurora.
Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

  • A. CONNECT
  • B. QUERY_DCL
  • C. QUERY_DDL
  • D. QUERY_DML
  • E. TABLE
  • F. QUERY

Answer: ABC
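
Advanced Auditing is configured through the cluster parameter group: CONNECT covers logins, logouts, and failed logins; QUERY_DCL covers permission changes; QUERY_DDL covers schema changes. A minimal boto3 sketch, with an illustrative parameter group name:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-mysql-audit",  # illustrative group name
    Parameters=[
        {"ParameterName": "server_audit_logging", "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        # Only the events needed for the minimum requirements are enabled.
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
         "ApplyMethod": "immediate"},
    ],
)
```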

NEW QUESTION 12
The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.
Which approach will meet these requirements?

  • A. Use pg_audit to generate audit logs and send the logs to the Security team.
  • B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
  • C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
  • D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Answer: C
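
Database activity streams (answer C) push KMS-encrypted activity records to an Amazon Kinesis data stream that consumers outside the cluster can read in near real time. A minimal boto3 sketch; the cluster ARN and key alias are illustrative:

```python
import boto3

rds = boto3.client("rds")

rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",  # illustrative
    Mode="async",              # async mode avoids blocking database sessions
    KmsKeyId="alias/das-key",  # illustrative KMS key used to encrypt the stream
    ApplyImmediately=True,
)
```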

NEW QUESTION 13
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture(CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change datacapture (CDC)

Answer: A
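
The minimal-downtime pattern behind answer A is a DMS task of type full-load-and-cdc: the bulk copy runs first, then ongoing changes stream until cutover. A hedged boto3 sketch; the ARNs are placeholders and the table mapping is a minimal include-everything rule. The source SQL Server must also be prepared for CDC (for example, running in the full recovery model):

```python
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-rds-pg",   # illustrative
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",  # placeholder
    MigrationType="full-load-and-cdc",  # bulk load, then continuous change capture
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
```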

NEW QUESTION 14
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?

  • A. Restart the DB cluster to apply the SSL change.
  • B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
  • C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security groupassigned to the DB cluster.
  • D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Answer: D
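
A raw TCP test from the Analysts' machines separates a client-side firewall block (answer D) from SSL or credential problems. A small diagnostic sketch; the endpoint is illustrative and the port is an assumption (3306 for Aurora MySQL, 5432 for Aurora PostgreSQL):

```python
import socket

host = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # illustrative endpoint
try:
    # If the TCP handshake fails here but succeeds from the Database
    # Specialist's machine, the block is local to the client.
    with socket.create_connection((host, 3306), timeout=5):
        print("TCP connection OK; investigate SSL or credentials instead")
except OSError as exc:
    print(f"TCP connection failed ({exc}); check the local client firewall")
```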

NEW QUESTION 15
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?

  • A. Restore a snapshot from the production cluster into test clusters
  • B. Create logical dumps of the production cluster and restore them into new test clusters
  • C. Use database cloning to create clones of the production cluster
  • D. Add an additional read replica to the production cluster and use that node for testing

Answer: C
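
Aurora cloning (answer C) is copy-on-write: the clone initially shares the production storage volume, so it becomes available in minutes regardless of database size. A minimal boto3 sketch with illustrative identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a copy-on-write clone of the production cluster; pages are
# copied only when either cluster modifies them.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-clone-1",             # illustrative clone name
    SourceDBClusterIdentifier="prod-aurora-mysql",  # illustrative source
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```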

NEW QUESTION 16
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?

  • A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
  • B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
  • C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
  • D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Answer: D
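
The Aurora Replica approach in answer D keeps the source online while Aurora stays in sync, so cutover is just a promotion. A hedged boto3 sketch; the identifiers and ARN are illustrative, and a DB instance must also be added to the new cluster:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL read replica cluster of the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-replica",  # illustrative name
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:123456789012:db:prod-rds-pg",  # illustrative
)
# An instance is also required in the new cluster, e.g.:
# rds.create_db_instance(DBInstanceIdentifier="aurora-pg-replica-1",
#                        DBClusterIdentifier="aurora-pg-replica",
#                        Engine="aurora-postgresql",
#                        DBInstanceClass="db.r5.xlarge")

# At cutover, promote the replica to a standalone cluster and repoint the app.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")
```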

NEW QUESTION 17
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

  • A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
  • B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
  • C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
  • D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Answer: D
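
Answer D corresponds to Aurora Auto Scaling, configured through the Application Auto Scaling API. A minimal boto3 sketch; the cluster identifier, capacity bounds, and CPU target are illustrative assumptions:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the cluster's reader count as a scalable target...
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",                 # illustrative cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)
# ...then track reader CPU so replicas are added only while reports run.
aas.put_scaling_policy(
    PolicyName="reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # illustrative CPU target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```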

NEW QUESTION 18
A retail company is about to migrate its online and mobile store to AWS. The company’s CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.
What should the Database Specialist do to meet these requirements?

  • A. Use Amazon DynamoDB global tables to synchronize transactions
  • B. Use Amazon EMR to copy the orders table data across Regions
  • C. Use Amazon Aurora Global Database to synchronize all transactions
  • D. Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them

Answer: A
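
Adding Regions to a DynamoDB global table (answer A) is a single UpdateTable call in the current global tables version. A hedged boto3 sketch; the table and Region names are illustrative, and the table must have streams enabled and use the current (2019.11.21) global tables format:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add replicas; writes in any Region replicate to all the others,
# giving predictable local read/write latency as the brand grows.
dynamodb.update_table(
    TableName="Orders",  # illustrative table
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
        {"Create": {"RegionName": "ap-southeast-1"}},
    ],
)
```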

NEW QUESTION 19
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

  • A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Setacross all nodes in the cluster.
  • B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
  • C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
  • D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

Answer: B
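
With cluster mode disabled, all writes land on the single primary node, so write capacity scales vertically (answer B). A minimal boto3 sketch; the replication group ID and node type are illustrative:

```python
import boto3

elasticache = boto3.client("elasticache")

# Move the replication group to a larger node type ahead of the event.
elasticache.modify_replication_group(
    ReplicationGroupId="leaderboard-redis",  # illustrative ID
    CacheNodeType="cache.r5.2xlarge",        # illustrative larger size
    ApplyImmediately=True,
)
```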

NEW QUESTION 20
A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours.
Which solution will meet these requirements and is the MOST operationally efficient?

  • A. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.Move the snapshot to the company’s Amazon S3 bucket.
  • B. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.
  • C. Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.
  • D. Create an AWS Lambda function to run on the first day of every month to create an automated RDSsnapshot.

Answer: B
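
Answer B works because manual snapshots are not subject to the automated-backup retention window, so the monthly copies persist for the full 5 years. A minimal Lambda sketch; the instance identifier and naming scheme are illustrative:

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Invoked by an EventBridge schedule, e.g. cron(0 0 1 * ? *)."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m")
    # Manual snapshots never expire automatically, unlike automated backups.
    rds.create_db_snapshot(
        DBSnapshotIdentifier=f"compliance-{stamp}",  # illustrative naming scheme
        DBInstanceIdentifier="prod-rds",             # illustrative instance
    )
```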

NEW QUESTION 21
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low. Which solution meets these requirements?

  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer: C
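
The shape of answer C, hot data local and history queried through Spectrum, can be sketched with the Redshift Data API (a newer convenience API; the cluster, schema, and table names below are illustrative assumptions):

```python
import boto3

rsd = boto3.client("redshift-data")

# Query current-year data from local Redshift storage and 15-year history
# from S3 via an external (Spectrum) schema in one statement; Concurrency
# Scaling absorbs spikes in the number of incoming queries.
rsd.execute_statement(
    ClusterIdentifier="analytics",  # illustrative cluster
    Database="dw",
    DbUser="analyst",
    Sql="""
        SELECT year, SUM(amount) FROM sales_current               -- local table
        GROUP BY year
        UNION ALL
        SELECT year, SUM(amount) FROM spectrum_schema.sales_history  -- external table over S3
        GROUP BY year;
    """,
)
```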

NEW QUESTION 22
......
