AWS-Certified-Solutions-Architect-Professional Exam Questions - Online Test



Amazon AWS-Certified-Solutions-Architect-Professional free dumps questions online. Read and test now.

NEW QUESTION 1
When does an AWS Data Pipeline terminate the AWS Data Pipeline-managed compute resources?

  • A. AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 2 hours.
  • B. When the final activity that uses the resources is running
  • C. AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 12 hours.
  • D. When the final activity that uses the resources has completed successfully or failed

Answer: D

Explanation: Compute resources will be provisioned by AWS Data Pipeline when the first activity for a scheduled time that uses those resources is ready to run, and those instances will be terminated when the final activity that uses the resources has completed successfully or failed.
Reference: https://aws.amazon.com/datapipeline/faqs/

NEW QUESTION 2
You control access to S3 buckets and objects with:

  • A. Identity and Access Management (IAM) Policies.
  • B. Access Control Lists (ACLs).
  • C. Bucket Policies.
  • D. All of the above

Answer: D
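
All three mechanisms can be exercised programmatically. Below is a minimal boto3 sketch, assuming hypothetical bucket, object, and account/role names, showing a bucket policy and an object ACL side by side; IAM policies would be attached to the user or role through the IAM APIs and are evaluated together with these.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-reports-bucket"  # hypothetical bucket name

# 1. Bucket policy: a resource-based policy attached to the bucket itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadToReportRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/ReportReader"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# 2. ACL: legacy per-object grants, e.g. make one object publicly readable.
s3.put_object_acl(Bucket=bucket, Key="public/report.pdf", ACL="public-read")
```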

NEW QUESTION 3
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders per day after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website, with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

  • A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic Beanstalk instances to send emails to customers.
  • B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
  • C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1; use SES to send emails to customers.
  • D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them.
  • E. Use SES to send emails to customers.

Answer: C
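
For the email half of option C, a minimal SES sketch is shown below, assuming a hypothetical, SES-verified sender address and region; the decider (or an activity worker) would call this instead of running its own mail server.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")  # assumed region

def notify_customer(to_address: str, order_id: str, status: str) -> None:
    # Send a transactional status email; Source must be an SES-verified identity.
    ses.send_email(
        Source="orders@example.com",  # hypothetical verified sender
        Destination={"ToAddresses": [to_address]},
        Message={
            "Subject": {"Data": f"Order {order_id}: {status}"},
            "Body": {"Text": {"Data": f"Your order {order_id} is now {status}."}},
        },
    )
```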

NEW QUESTION 4
An organization is planning to create a secure, scalable application with AWS VPC and ELB. The organization has two instances already running, and each instance has an ENI attached to it in addition to its primary network interface. The primary network interface and the additional ENI both have an Elastic IP attached to them.
If those instances are registered with ELB and the organization wants ELB to send data to a particular EIP of the instance, how can they achieve this?

  • A. The organization should ensure that the IP which is required to receive the ELB traffic is attached to a primary network interface.
  • B. It is not possible to attach an instance with two ENIs with ELB as it will give an IP conflict error.
  • C. The organization should ensure that the IP which is required to receive the ELB traffic is attached to an additional ENI.
  • D. It is not possible to send data to a particular IP as ELB will send to any one EIP.

Answer: A

Explanation: Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Within this virtual private cloud, the user can launch AWS resources, such as an ELB and EC2 instances. There are two ELBs available with VPC: internet-facing and internal (private) ELB. For the internet-facing ELB, it is required that the ELB be in a public subnet.
When the user registers a multi-homed instance (an instance that has an Elastic Network Interface (ENI) attached) with a load balancer, the load balancer will route the traffic to the IP address of the primary network interface (eth0).
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/gs-ec2VPC.html

NEW QUESTION 5
In IAM, which of the following is true of temporary security credentials?

  • A. Once you issue temporary security credentials, they cannot be revoked.
  • B. None of these are correct.
  • C. Once you issue temporary security credentials, they can be revoked only when the virtual MFA device is used.
  • D. Once you issue temporary security credentials, they can be revoked.

Answer: A

Explanation: Temporary credentials in IAM are valid throughout their defined duration of time and hence can't be revoked. However, because permissions are evaluated each time an AWS request is made using the credentials, you can achieve the effect of revoking the credentials by changing the permissions for the credentials even after they have been issued.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_disable-perms.html
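
The "change the permissions" technique the explanation describes is documented as attaching a deny policy conditioned on when the session token was issued. A minimal boto3 sketch, assuming a hypothetical role name and cutoff timestamp:

```python
import json
import boto3

iam = boto3.client("iam")

# Deny every action to sessions whose tokens were issued before the cutoff;
# sessions issued after this time are unaffected.
revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"DateLessThan": {"aws:TokenIssueTime": "2015-07-01T00:00:00Z"}},
    }],
}
iam.put_role_policy(
    RoleName="TempAccessRole",          # hypothetical role
    PolicyName="RevokeOldSessions",
    PolicyDocument=json.dumps(revoke_policy),
)
```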

NEW QUESTION 6
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?

  • A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
  • B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
  • C. Amazon ElastiCache to store the writes until the writes are committed to the database.
  • D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

Answer: B
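
A minimal sketch of the SQS pattern in answer B, with hypothetical queue and message shapes: the web tier enqueues each donation write, and a separate worker drains the queue into the database at a rate the database can sustain.

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="donation-writes")["QueueUrl"]

# Producer (web tier): enqueue the write instead of hitting the DB directly.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"donor": "alice", "amount_usd": 25}))

# Consumer (worker fleet): long-poll, write to the DB, then delete.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        record = json.loads(msg["Body"])
        # write_to_database(record)  # hypothetical helper doing the actual insert
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```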

NEW QUESTION 7
An organization is setting up their website on AWS. The organization is working on various security measures to be performed on the AWS EC2 instances. Which of the below mentioned security mechanisms will not help the organization to avoid future data leaks and identify security weaknesses?

  • A. Run penetration testing on AWS with prior approval from Amazon.
  • B. Perform SQL injection for application testing.
  • C. Perform a Code Check for any memory leaks.
  • D. Perform a hardening test on the AWS instance.

Answer: C

Explanation: AWS security follows the shared security model, where the user is as responsible as Amazon. Since Amazon is a public cloud, it is bound to be targeted by hackers. If an organization is planning to host their application on AWS EC2, they should perform the below mentioned security checks as a measure to find any security weaknesses or data leaks:
  • Perform penetration testing as performed by attackers to find any vulnerability. The organization must take approval from AWS before performing penetration testing.
  • Perform hardening testing to find whether there are any unnecessary ports open.
  • Perform SQL injection to find any DB security issues.
Code memory checks, by contrast, are generally useful when the organization wants to improve the application performance.
Reference: http://aws.amazon.com/security/penetration-testing/

NEW QUESTION 8
Does Amazon RDS API provide actions to modify DB instances inside a VPC and associate them with DB Security Groups?

  • A. Yes, Amazon does this but only for MySQL RDS.
  • B. Yes
  • C. No
  • D. Yes, Amazon does this but only for Oracle RDS.

Answer: B

Explanation: You can use the ModifyDBInstance action, available in the Amazon RDS API, to pass values for the DBInstanceIdentifier and DBSecurityGroups parameters, specifying the instance ID and the DB security groups you want your instance to be part of.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html
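
A minimal boto3 sketch of the same call, with hypothetical identifiers. Note that for an instance inside a VPC the parameter is VpcSecurityGroupIds; the DBSecurityGroups parameter applies to instances outside a VPC (EC2-Classic).

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",              # hypothetical instance ID
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # target VPC security groups
    ApplyImmediately=True,  # apply now instead of the next maintenance window
)
```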

NEW QUESTION 9
You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes timeframe. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls. Usually there are a few calls per second, but once per month there is a periodic peak of up to 1,000 calls per second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project.
What database implementation would better fit this scenario, keeping costs as low as possible?

  • A. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
  • B. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
  • C. Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access.
  • D. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "Is Active" attribute that is present for active calls only.
  • E. In this way the Global Secondary Index is sparse and more effective.

Answer: C
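
The dump marks C as the answer, but the sparse-index idea in options D/E is worth seeing concretely. A minimal boto3 sketch with hypothetical table, attribute, and index names: items carry the "IsActive" attribute only while a call is active, so the index contains active calls alone and a Query against it returns the current active list.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Calls",
    AttributeDefinitions=[
        {"AttributeName": "CallId", "AttributeType": "S"},
        {"AttributeName": "IsActive", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "CallId", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[{
        "IndexName": "ActiveCallsIndex",
        # Only items that have the IsActive attribute appear here (sparse index).
        "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
# Terminating a call = removing its IsActive attribute, which drops it
# from the index automatically.
```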

NEW QUESTION 10
In Amazon VPC, what is the default maximum number of BGP advertised routes allowed per route table?

  • A. 15
  • B. 100
  • C. 5
  • D. 10

Answer: B

Explanation: The maximum number of BGP advertised routes allowed per route table is 100.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html

NEW QUESTION 11
To get started using AWS Direct Connect, in which of the following steps do you configure Border Gateway Protocol (BGP)?

  • A. Complete the Cross Connect
  • B. Configure Redundant Connections with AWS Direct Connect
  • C. Create a Virtual Interface
  • D. Download Router Configuration

Answer: C

Explanation: In AWS Direct Connect, your network must support Border Gateway Protocol (BGP) and BGP MD5 authentication, and you need to provide a private Autonomous System Number (ASN) to connect to Amazon Virtual Private Cloud (VPC). To connect to public AWS products such as Amazon EC2 and Amazon S3, you will also need to provide a public ASN that you own (preferred) or a private ASN. You configure BGP in the Create a Virtual Interface step.
Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html#createvirtualinterface
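
A minimal boto3 sketch of that step, with hypothetical connection, VLAN, and gateway IDs; the asn and authKey fields are where the BGP configuration is supplied.

```python
import boto3

dc = boto3.client("directconnect")

dc.create_private_virtual_interface(
    connectionId="dxcon-fg1234ab",            # hypothetical Direct Connect ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vpc-vif-1",
        "vlan": 101,
        "asn": 65000,                         # your private BGP ASN
        "authKey": "example-md5-key",         # BGP MD5 authentication key
        "virtualGatewayId": "vgw-0a1b2c3d",   # VPC virtual private gateway
    },
)
```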

NEW QUESTION 12
When you put objects in Amazon S3, what is the indication that an object was successfully stored?

  • A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
  • B. Amazon S3 is engineered for 99.999999999% durability.
  • C. Therefore there is no need to confirm that data was inserted.
  • D. A success code is inserted into the S3 object metadata.
  • E. Each S3 account has a special bucket named _s3_logs.
  • F. Success codes are written to this bucket with a timestamp and checksum.

Answer: A
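
A minimal sketch of that check with boto3, assuming a hypothetical bucket. The ETag equals the hex MD5 of the body only for single-part uploads without SSE-KMS, which is the simple PUT case the question describes.

```python
import hashlib
import boto3

s3 = boto3.client("s3")
body = b"important payload"

resp = s3.put_object(Bucket="example-bucket", Key="data.bin", Body=body)

# HTTP 200 plus a matching MD5/ETag confirms the object was stored intact.
assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200
assert resp["ETag"].strip('"') == hashlib.md5(body).hexdigest()
```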

NEW QUESTION 13
You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high-bandwidth AWS Direct Connect connectivity to AWS.
You have a regulatory requirement that all data needs to be encrypted before being uploaded to the cloud.
How do you implement this in a highly available and cost-efficient way?

  • A. Manage encryption keys on-premises in an encrypted relational database.
  • B. Set up an on-premises server with sufficient storage to temporarily store files, and then upload them to Amazon S3, providing a client-side master key.
  • C. Manage encryption keys in a Hardware Security Module (HSM) appliance on-premises, with a server with sufficient storage to temporarily store, encrypt, and upload files directly into Amazon Glacier.
  • D. Manage encryption keys in Amazon Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier.
  • E. Manage encryption keys in an AWS CloudHSM appliance.
  • F. Encrypt files prior to uploading on the employee desktop, and then upload directly into Amazon Glacier.

Answer: C
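
For reference, here is a minimal sketch of the KMS-based client-side flow that option D describes, assuming hypothetical bucket/key names and the third-party cryptography package; production code would more likely use the Amazon S3 Encryption Client, which wraps this envelope pattern.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Envelope encryption: KMS issues a data key; only the wrapped copy of that
# key leaves the client, stored alongside the ciphertext.
key = kms.generate_data_key(KeyId="alias/archive-key", KeySpec="AES_256")
nonce = os.urandom(12)
plaintext = open("doc.pdf", "rb").read()
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, plaintext, None)

s3.put_object(
    Bucket="employee-archive",                       # hypothetical bucket
    Key="alice/doc.pdf.enc",
    Body=nonce + ciphertext,
    Metadata={"wrapped-key": key["CiphertextBlob"].hex()},
)

# Lifecycle rule: transition archives to the Glacier storage tier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="employee-archive",
    LifecycleConfiguration={"Rules": [{
        "ID": "to-glacier",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]},
)
```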

NEW QUESTION 14
Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of files, which can consume up to 5 TB of space, to generate simulations that can be used to steer business decisions. You are required to design an AWS solution that can cost-effectively accommodate the long-term storage and in-flight subsets of data.

  • A. Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
  • B. Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
  • C. Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
  • D. Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).
  • E. Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.

Answer: D
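
Server-side encryption at rest, as mentioned in options A/B, is a one-parameter change on upload. A minimal sketch with hypothetical names; TLS on the API call covers encryption in transit, and SSE-S3 covers encryption at rest.

```python
import boto3

s3 = boto3.client("s3")

with open("txn-000001.csv.gz", "rb") as f:
    s3.put_object(
        Bucket="transactions-archive",   # hypothetical bucket
        Key="2014/txn-000001.csv.gz",
        Body=f,
        ServerSideEncryption="AES256",   # SSE-S3: encrypted at rest
    )
```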

NEW QUESTION 15
One of the AWS account owners faced a major challenge in June as his account was hacked and the hacker deleted all the data from his AWS account. This resulted in a major blow to the business.
Which of the below mentioned steps would not have helped in preventing this action?

  • A. Setup an MFA for each user as well as for the root account user.
  • B. Take a backup of the critical data to offsite / on-premises storage.
  • C. Create an AMI and a snapshot of the data at regular intervals as well as keep a copy to separate regions.
  • D. Do not share the AWS access and secret access keys with others, and do not store them inside programs; instead, use IAM roles.

Answer: C

Explanation: AWS security follows the shared security model, where the user is as responsible as Amazon. If the user wants secure access to AWS while hosting applications on EC2, the first security rule to follow is to enable MFA for all users; this adds an extra security layer. Second, the user should never give his access or secret access keys to anyone, nor store them inside programs; the better solution is to use IAM roles. For the organization's critical data, the user should keep an offsite/on-premises backup, which will help recover critical data in case of a security breach.
It is recommended to have AWS AMIs and snapshots and to keep copies in other regions, as they will help in a DR scenario. However, in the case of a data security breach of the account they may not be very helpful, as the hacker can delete them.
Therefore, creating an AMI and a snapshot of the data at regular intervals, as well as keeping a copy in separate regions, would not have helped in preventing this action.
Reference: http://media.amazonwebservices.com/pdf/AWS_Security_Whitepaper.pdf

NEW QUESTION 16
An organization is setting up a backup and restore system in AWS for their on-premises system. The organization needs High Availability (HA) and Disaster Recovery (DR) but is okay with a longer recovery time to save costs. Which of the below mentioned setup options helps achieve the objective of cost saving as well as DR in the most effective way?

  • A. Setup pre-configured servers and create AMIs. Use EIP and Route 53 to quickly switch over to AWS from on-premises.
  • B. Setup the backup data on S3 and transfer data to S3 regularly using the storage gateway.
  • C. Setup a small instance with Auto Scaling; in case of DR, start diverting all the load to AWS from on-premises.
  • D. Replicate the on-premises DB to EC2 at regular intervals and setup a scenario similar to the pilot light.

Answer: B

Explanation: AWS has many solutions for Disaster Recovery (DR) and High Availability (HA). When the organization wants HA and DR but is okay with a longer recovery time, it should select the backup and restore option with S3. The data can be sent to S3 using Direct Connect, Storage Gateway, or the internet.
The EC2 instance will pick up the data from the S3 bucket when started and set up the environment. This process takes longer but is very cost effective due to the low pricing of S3. In all the other options, the EC2 instance might be running or there will be AMI storage costs, so they will be costlier options. In this scenario the organization should plan appropriate tools to take a backup, plan the retention policy for data, and set up security of the data.
Reference: http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf

NEW QUESTION 17
You are the new IT architect in a company that operates a mobile sleep tracking application.
When activated at night, the mobile app is sending collected data points of 1 kilobyte every 5 minutes to your backend.
The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table.
Every morning, you scan the table to extract and aggregate last night's data on a per user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app.
Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? Choose 2 answers

  • A. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
  • B. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.
  • C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  • D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  • E. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

Answer: AD
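
A minimal read-through-cache sketch for option D, assuming the third-party pymemcache package, a hypothetical ElastiCache Memcached endpoint, and a hypothetical table layout: reads hit the cache first and only fall back to DynamoDB on a miss, which is what lets you lower the table's provisioned read throughput.

```python
import json
import boto3
from pymemcache.client.base import Client

dynamodb = boto3.client("dynamodb")
# Hypothetical ElastiCache Memcached cluster endpoint.
cache = Client(("sleep-cache.abc123.cfg.use1.cache.amazonaws.com", 11211))

def get_night_summary(user_id: str) -> dict:
    # Cache hit: serve from ElastiCache without touching DynamoDB.
    cached = cache.get(user_id)
    if cached is not None:
        return json.loads(cached)
    # Cache miss: read from DynamoDB, then populate the cache with a TTL.
    item = dynamodb.get_item(TableName="SleepData",
                             Key={"UserId": {"S": user_id}})["Item"]
    cache.set(user_id, json.dumps(item), expire=300)  # 5-minute TTL
    return item
```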

NEW QUESTION 18
In Amazon ElastiCache, the failure of a single cache node can have an impact on the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and it is repopulated. Which of the following is a solution to reduce this potential availability impact?

  • A. Spread your memory and compute capacity over fewer cache nodes, each with smaller capacity.
  • B. Spread your memory and compute capacity over a larger number of cache nodes, each with smaller capacity.
  • C. Include fewer high-capacity nodes.
  • D. Include a larger number of cache nodes, each with high capacity.

Answer: B

Explanation: In Amazon ElastiCache, the number of cache nodes in the cluster is a key factor in the availability of your cluster running Memcached. The failure of a single cache node can have an impact on the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and it is repopulated. You can reduce this potential availability impact by spreading your memory and compute capacity over a larger number of cache nodes, each with smaller capacity, rather than using fewer high-capacity nodes.
Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Memcached.html
