AWS-Certified-Database-Specialty Exam Questions - Online Test



We provide accurate Amazon AWS-Certified-Database-Specialty exam questions, which are the best preparation for clearing the AWS-Certified-Database-Specialty test and getting certified as an Amazon AWS Certified Database - Specialty. The AWS-Certified-Database-Specialty Questions & Answers cover all the knowledge points of the real AWS-Certified-Database-Specialty exam. Crack your Amazon AWS-Certified-Database-Specialty exam with the latest dumps, guaranteed!

Check AWS-Certified-Database-Specialty free dumps before getting the full version:

NEW QUESTION 1
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?

  • A. Restart the DB cluster to apply the SSL change.
  • B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
  • C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
  • D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Answer: B

Explanation:
• To connect when SSL is required:
• Provide the SSL trust (root CA) certificate, which can be downloaded from AWS.
• Provide the SSL options when connecting to the database.
• Not using SSL against a DB cluster that enforces SSL results in a connection error. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ssl-certificate-rotation-aurora-postgresql.ht
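
As a minimal Python sketch of option B, assuming an Aurora PostgreSQL cluster, the psycopg2 driver, and a hypothetical endpoint and credentials; the root CA bundle is the certificate the Analysts would download from AWS:

import psycopg2

# Connect with TLS enforced and the server certificate verified against
# the downloaded AWS root CA bundle.
conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    port=5432,
    dbname="appdb",
    user="analyst1",
    password="example-password",
    sslmode="verify-full",            # refuse plaintext connections
    sslrootcert="global-bundle.pem",  # root certificate downloaded from AWS
)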

NEW QUESTION 2
An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company’s Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.
What should the database specialist do to achieve this? (Choose two.)

  • A. Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.
  • B. Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.
  • C. Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.
  • D. Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.
  • E. Enable email notifications for AWS Trusted Advisor.

Answer: BD
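
A hedged boto3 sketch of option D (option B, the event subscription, is sketched after Question 3); the instance identifier, topic ARN, and threshold are hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")
# Alarm when free storage drops below roughly 5 GB; notify an existing SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="rds-free-storage-low",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "shop-db"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5 * 1024**3,  # bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:db-alerts"],  # hypothetical
)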

NEW QUESTION 3
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking for a solution that will satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?

  • A. Create an Amazon CloudWatch Events rule for the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
  • B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
  • C. Create an RDS event subscription. Have the tracking systems subscribe to specific RDS event system notifications.
  • D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these records and write the output to the tracking systems.

Answer: C
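
A hedged boto3 sketch of option C; the subscription name and SNS topic (which the tracking systems subscribe to) are hypothetical, and the event categories cover the operations the company tracks:

import boto3

rds = boto3.client("rds")
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-tracking",  # hypothetical
    SourceType="db-instance",
    # shutdown events are reported under the availability category
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)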

NEW QUESTION 4
A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company’s main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second.
Which solution meets these requirements?

  • A. Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.
  • B. Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.
  • C. Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.
  • D. Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.

Answer: D

Explanation:
https://aws.amazon.com/blogs/database/aurora-postgresql-disaster-recovery-solutions-using-amazon-aurora-glob
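
A hedged boto3 sketch of option D, assuming the primary cluster already exists in us-east-1; the global cluster identifier and ARN are hypothetical:

import boto3

# Wrap the existing primary cluster in a global cluster.
rds_use1 = boto3.client("rds", region_name="us-east-1")
rds_use1.create_global_cluster(
    GlobalClusterIdentifier="retail-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:retail",  # hypothetical
)

# Add a read-only secondary cluster in the Tokyo Region for the dashboards.
rds_apne1 = boto3.client("rds", region_name="ap-northeast-1")
rds_apne1.create_db_cluster(
    DBClusterIdentifier="retail-tokyo",
    Engine="aurora-mysql",  # engine/version must match the primary
    GlobalClusterIdentifier="retail-global",
)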

NEW QUESTION 5
A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.
A week before a significant sales event, a fresh database maintenance update is released. The maintenance update has been designated as necessary. The firm wants to minimize the database instance’s downtime and asks a database expert to keep the database instance highly available until the sales event concludes.
Which solution will satisfy these criteria?

  • A. Defer the maintenance update until the sales event is over.
  • B. Create a read replica with the latest update. Initiate a failover before the sales event.
  • C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
  • D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Answer: D

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/
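
A hedged boto3 sketch of option D; the instance identifier is hypothetical. With Multi-AZ, maintenance is applied to the standby first, followed by a brief failover, which keeps downtime minimal:

import boto3

rds = boto3.client("rds")
# Convert the single-node instance to Multi-AZ before applying maintenance.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",  # hypothetical
    MultiAZ=True,
    ApplyImmediately=True,
)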

NEW QUESTION 6
For the first time, a database professional is establishing a test graph database on Amazon Neptune. The database expert must load millions of rows of test observations from a .csv file in Amazon S3. The database professional will load the data into the Neptune DB instance through a series of API calls.
Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

  • A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
  • B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
  • C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
  • D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
  • E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
  • F. Create an S3 VPC endpoint and issue an HTTP POST to the database’s loader endpoint.

Answer: BEF

Explanation:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-optimize.html
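
A hedged Python sketch of option F's POST to the Neptune bulk loader; the cluster endpoint, S3 bucket, and IAM role ARN are hypothetical, and the request assumes the S3 VPC endpoint and the role from option E are in place:

import requests

resp = requests.post(
    # hypothetical Neptune cluster loader endpoint
    "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://my-bucket/observations/",  # hypothetical bucket with vertex/edge .csv files
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # hypothetical role
        "region": "us-east-1",
        "failOnError": "FALSE",
    },
)
print(resp.json())  # returns a loadId that can be polled for load status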

NEW QUESTION 7
A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between the VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.
What is likely causing the timeouts?

  • A. The database is deployed in a VPC that is in a different Region.
  • B. The database is deployed in a VPC that is in a different Availability Zone.
  • C. The database is deployed with misconfigured security groups.
  • D. The database is deployed with the wrong client connect timeout configuration.

Answer: C

Explanation:
"A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region." https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html

NEW QUESTION 8
A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.
The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.
What should a database specialist do to meet these requirements?

  • A. Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.
  • B. Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.
  • C. Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.
  • D. Use on-demand capacity.

Answer: C

Explanation:
On-demand mode is a good option if any of the following are true: You create new tables with unknown workloads. You have unpredictable application traffic. You prefer the ease of paying for only what you use. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.h
Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
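
A hedged boto3 sketch of option C for write capacity (reads would be registered the same way); the table name and capacity bounds are hypothetical:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=10,
    MaxCapacity=1000,
)

# Track 70% utilization: capacity follows the daily traffic curve.
autoscaling.put_scaling_policy(
    PolicyName="orders-wcu-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)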

NEW QUESTION 9
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?

  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer: C

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html
"With the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries. Write operations continue as normal on your main cluster. Users always see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster. You're charged for concurrency scaling clusters only for the time they're in use. For more information about pricing, see Amazon Redshift pricing. You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in line."

NEW QUESTION 10
The website of a manufacturing firm makes use of an Amazon Aurora PostgreSQL database cluster. Which settings will result in the LEAST amount of downtime for the application during failover? (Select three.)

  • A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
  • B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
  • C. Edit and enable Aurora DB cluster cache management in parameter groups.
  • D. Set TCP keepalive parameters to a high value.
  • E. Set JDBC connection string timeout variables to a low value.
  • F. Set Java DNS caching timeouts to a high value.

Answer: ACE

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.cluster-cache-mgmt.htm https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html#Aur
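
A hedged boto3 sketch of option C; the cluster parameter group name is hypothetical, and apg_ccm_enabled is the Aurora PostgreSQL parameter behind cluster cache management, which keeps the designated failover target's buffer cache warm:

import boto3

rds = boto3.client("rds")
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-params",  # hypothetical
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "on",
        "ApplyMethod": "immediate",  # assumed dynamic; use pending-reboot if the engine treats it as static
    }],
)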

NEW QUESTION 11
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)

Answer: A

Explanation:
"requires minimal downtime when the RDS DB instance goes live" in order to do CDC: "you must first ensure that ARCHIVELOG MODE is on to provide information to LogMiner. AWS DMS uses LogMiner to read information from the archive logs so that AWS DMS can capture changes"
https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle2postgresql.steps.configureoracle.html "If you want to capture and apply changes (CDC), then you also need the following privileges."
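
A hedged boto3 sketch of the resulting replication task; the endpoint and replication instance ARNs are hypothetical, and the table mapping simply includes everything:

import boto3

dms = boto3.client("dms")
# "full-load-and-cdc" copies existing data, then streams ongoing changes
# until cutover, which keeps application downtime minimal.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",   # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",   # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",    # hypothetical
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)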

NEW QUESTION 12
A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). At all times, the organization is obligated to encrypt its data at rest. The decryption key must not be widely distributed, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements. If any possible security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the company’s overhead must be kept to a minimum.
What method should the database administrator use to configure the encryption to fulfill these specifications?

  • A. AWS CloudHSM
  • B. AWS Key Management Service (AWS KMS) with an AWS managed key
  • C. AWS Key Management Service (AWS KMS) with server-side encryption
  • D. AWS Key Management Service (AWS KMS) CMK with customer-provided material

Answer: D

Explanation:
A customer-managed CMK created with customer-provided (imported) key material lets the organization rotate the key material on demand, restrict access through key policies, and disable or delete the key material immediately if a compromise is suspected, all without the operational overhead of AWS CloudHSM. https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks
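
A hedged boto3 sketch of option D's key lifecycle; wrapping and supplying the actual key material is elided, and the description is hypothetical:

import boto3

kms = boto3.client("kms")

# Create a CMK with no key material; Origin=EXTERNAL means we import our own.
key = kms.create_key(Origin="EXTERNAL", Description="TDE master key")  # hypothetical description
key_id = key["KeyMetadata"]["KeyId"]

# Fetch the wrapping public key and import token for the upload.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# Encrypt your own key material with params["PublicKey"], then:
# kms.import_key_material(KeyId=key_id, ImportToken=params["ImportToken"],
#     EncryptedKeyMaterial=wrapped, ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE")
# To rotate on demand, import new material; to kill the key immediately:
# kms.disable_key(KeyId=key_id)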

NEW QUESTION 13
A ride-hailing application stores bookings in a persistent Amazon RDS for MySQL DB instance. This program is very popular, and the corporation anticipates a tenfold rise in the application's user base over the next several months. The application receives a higher volume of traffic in the morning and evening.
This application is divided into two sections:
  • An internal booking component that takes online reservations in response to concurrent user queries.
  • A component of a third-party customer relationship management (CRM) system that customer service professionals utilize. Booking data is accessed using queries in the CRM.
To manage this workload effectively, a database professional must create a cost-effective database system. Which solution satisfies these criteria?

  • A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
  • B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
  • C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
  • D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Answer: B

Explanation:
"AWS Lambda function to capture changes" capture changes to what? ElastiCache? The main use of ElastiCache is to cache frequently read data. Also "the company expects a tenfold increase in the user base" and "correspond to simultaneous requests from users"

NEW QUESTION 14
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)

  • A. Set DeletionProtection to True
  • B. Set MultiAZ to True
  • C. Set TerminationProtection to True
  • D. Set DeleteAutomatedBackups to False
  • E. Set DeletionPolicy to Delete
  • F. Set DeletionPolicy to Retain

Answer: ADF

Explanation:
DeletionProtection set to True blocks delete calls against the DB instance, DeleteAutomatedBackups set to False keeps automated backups even if the instance is deleted, and a DeletionPolicy of Retain preserves the instance when the stack is deleted. TerminationProtection is a stack-level setting enabled on the stack itself, not a property that can be added to the template.
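
A hedged sketch of the relevant template fragment, expressed as a Python dict (resource name and omitted properties are hypothetical):

template = {
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # keep the instance if the stack is deleted
            "Properties": {
                "DeletionProtection": True,       # block DeleteDBInstance calls
                "DeleteAutomatedBackups": False,  # keep automated backups after deletion
                # ... engine, storage, and credential properties omitted
            },
        }
    }
}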

NEW QUESTION 15
A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?

  • A. Log in to the host and run the rm $PGDATA/pg_logs/* command
  • B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
  • C. Create a ticket with AWS Support to have the logs deleted
  • D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

Answer: B

Explanation:
To set the retention period for system logs, use the rds.log_retention_period parameter. You can find rds.log_retention_period in the DB parameter group associated with your DB instance. The unit for this parameter is minutes. For example, a setting of 1,440 retains logs for one day. The default value is 4,320 (three days). The maximum value is 10,080 (seven days).
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.ht
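
A hedged boto3 sketch of option B; the parameter group name is hypothetical:

import boto3

rds = boto3.client("rds")
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-oltp-params",  # hypothetical group attached to the instance
    Parameters=[{
        "ParameterName": "rds.log_retention_period",
        "ParameterValue": "1440",  # minutes: keep logs for one day
        "ApplyMethod": "immediate",
    }],
)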

NEW QUESTION 16
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

  • A. Deploy multiple read replicas and have the team members make changes to separate replica instances
  • B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
  • C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
  • D. Enable the Amazon RDS for MySQL Backtrack feature

Answer: C

Explanation:
"Amazon Aurora, a fully-managed relational database service in AWS, is now offering a backtrack feature. With Amazon Aurora with MySQL compatibility, users can backtrack, or "rewind", a database cluster to a specific point in time, without restoring data from a backup. The backtrack process allows a point in time to be specified with one second resolution, and the rewind process typically takes minutes. This new feature facilitates developers in undoing mistakes like deleting data inappropriately or dropping the wrong table."

NEW QUESTION 17
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?

  • A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
  • B. Use reader endpoints for both the read-only workload applications.
  • C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
  • D. Use custom endpoints for the two read-only applications.

Answer: D

Explanation:
https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-c
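
A hedged boto3 sketch of option D for one of the two applications (the second application gets its own endpoint with a different static member); cluster, endpoint, and instance identifiers are hypothetical:

import boto3

rds = boto3.client("rds")
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="reports-cluster",        # hypothetical
    DBClusterEndpointIdentifier="app1-endpoint",  # hypothetical custom endpoint for app 1
    EndpointType="READER",
    StaticMembers=["replica-1"],                  # hypothetical dedicated replica
)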

NEW QUESTION 18
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?

  • A. In the same Region and VPC of the source DB instance
  • B. In the same Region and VPC as the target DB instance
  • C. In the same VPC and Availability Zone as the target DB instance
  • D. In the same VPC and Availability Zone as the source DB instance

Answer: C

Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationIn In fact, all of the configurations listed at the URL above place the replication instance in the target VPC’s Region, subnet, and Availability Zone.
https://docs.aws.amazon.com/dms/latest/sbs/CHAP_SQLServer2Aurora.Steps.CreateReplicationInstance.html
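
A hedged boto3 sketch of option C's placement; the identifiers, instance class, subnet group, and AZ are hypothetical, with the subnet group assumed to cover the target DB's VPC:

import boto3

# Create the replication instance in the target Region (us-west-2),
# in the target VPC's subnet group and the target DB's Availability Zone.
dms = boto3.client("dms", region_name="us-west-2")
dms.create_replication_instance(
    ReplicationInstanceIdentifier="oracle-to-pg-ri",      # hypothetical
    ReplicationInstanceClass="dms.c5.xlarge",             # hypothetical sizing
    ReplicationSubnetGroupIdentifier="target-vpc-subnets",# hypothetical subnet group
    AvailabilityZone="us-west-2a",                        # hypothetical AZ of the target DB
)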

NEW QUESTION 19
......

Thanks for reading the newest AWS-Certified-Database-Specialty exam dumps! We recommend trying the PREMIUM Thedumpscentre.com AWS-Certified-Database-Specialty dumps in VCE and PDF here: https://www.thedumpscentre.com/AWS-Certified-Database-Specialty-dumps/ (270 Q&As Dumps)