AWS-Solution-Architect-Associate Exam Questions - Online Test



Exam Code: AWS-Solution-Architect-Associate, Exam Name: AWS Certified Solutions Architect - Associate, Certification Provider: Amazon.

Online AWS-Solution-Architect-Associate free questions and answers, new version:

NEW QUESTION 1
What does Amazon SWF stand for?

  • A. Simple Web Flow
  • B. Simple Work Flow
  • C. Simple Wireless Forms
  • D. Simple Web Form

Answer: B

NEW QUESTION 2
Does Amazon DynamoDB support both increment and decrement atomic operations?

  • A. Only increment, since decrement operations are inherently impossible with DynamoDB's data model.
  • B. No, neither increment nor decrement operations.
  • C. Yes, both increment and decrement operations.
  • D. Only decrement, since increment operations are inherently impossible with DynamoDB's data model.

Answer: C

Explanation: Amazon DynamoDB supports increment and decrement atomic operations.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
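As a sketch of what such an atomic counter update looks like, the snippet below builds boto3-style UpdateItem parameters using an ADD update expression. The table and attribute names ("PageViews", "hits") are hypothetical, and no AWS call is made; the code only constructs the request parameters.

```python
def build_counter_update(table, key, attribute, delta):
    """Build UpdateItem parameters that atomically add `delta`
    (positive to increment, negative to decrement) to a numeric attribute."""
    return {
        "TableName": table,
        "Key": key,
        # ADD on a number attribute is DynamoDB's atomic increment/decrement.
        "UpdateExpression": "ADD #attr :delta",
        "ExpressionAttributeNames": {"#attr": attribute},
        "ExpressionAttributeValues": {":delta": {"N": str(delta)}},
    }

# Hypothetical table/key names for illustration only:
increment = build_counter_update("PageViews", {"PageId": {"S": "home"}}, "hits", 1)
decrement = build_counter_update("PageViews", {"PageId": {"S": "home"}}, "hits", -1)
print(increment["UpdateExpression"])           # ADD #attr :delta
print(decrement["ExpressionAttributeValues"])  # {':delta': {'N': '-1'}}
```

In practice the same dict would be passed to a DynamoDB client's `update_item` call; because ADD is applied server-side, concurrent writers never lose updates.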

NEW QUESTION 3
You have a lot of data stored in the AWS Storage Gateway and your manager has come to you asking about how the billing is calculated, specifically the Virtual Tape Shelf usage. What would be a correct response to this?

  • A. You are billed for the virtual tape data you store in Amazon Glacier and are billed for the size of the virtual tape.
  • B. You are billed for the virtual tape data you store in Amazon Glacier and billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
  • C. You are billed for the virtual tape data you store in Amazon S3 and billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
  • D. You are billed for the virtual tape data you store in Amazon S3 and are billed for the size of the virtual tape.

Answer: B

Explanation: The AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure.
AWS Storage Gateway billing is as follows. Volume storage usage (per GB per month):
You are billed for the Cached volume data you store in Amazon S3. You are only billed for volume capacity you use, not for the size of the volume you create.
Snapshot Storage usage (per GB per month): You are billed for the snapshots your gateway stores in Amazon S3. These snapshots are stored and billed as Amazon EBS snapshots. Snapshots are incremental backups, reducing your storage charges. When taking a new snapshot, only the data that has changed since your last snapshot is stored.
Virtual Tape Library usage (per GB per month):
You are billed for the virtual tape data you store in Amazon S3. You are only billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
Virtual Tape Shelf usage (per GB per month):
You are billed for the virtual tape data you store in Amazon Glacier. You are only billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
Reference: https://aws.amazon.com/storagegateway/faqs/
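The Virtual Tape Shelf rule above (billed for the capacity used, not the provisioned tape size) can be sketched as a small calculation. The per-GB rate here is a made-up illustration, not a real AWS price.

```python
def monthly_vts_charge(used_gb, tape_size_gb, rate_per_gb=0.01):
    """Charge for the used portion of a virtual tape, capped at its size.

    rate_per_gb is a placeholder value for illustration only.
    """
    billable_gb = min(used_gb, tape_size_gb)
    return round(billable_gb * rate_per_gb, 2)

# A 2.5 TB tape holding only 100 GB of data is billed for 100 GB:
print(monthly_vts_charge(used_gb=100, tape_size_gb=2500))  # 1.0
```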

NEW QUESTION 4
IAM's Policy Evaluation Logic always starts with a default _ for every request, except for those that use the AWS account's root security credentials.

  • A. Permit
  • B. Deny
  • C. Cancel

Answer: B

NEW QUESTION 5
Does Route 53 support MX Records?

  • A. Yes.
  • B. It supports CNAME records, but not MX records.
  • C. No
  • D. Only Primary MX records; secondary MX records are not supported.

Answer: A

NEW QUESTION 6
Can I test my DB Instance against a new version before upgrading?

  • A. No
  • B. Yes
  • C. Only in VPC

Answer: B

NEW QUESTION 7
True or False: When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint.

  • A. FALSE
  • B. TRUE

Answer: B

NEW QUESTION 8
Much of your company's data does not need to be accessed often and can take several hours for retrieval, so it is stored on Amazon Glacier. However, someone within your organization has expressed concern that his data is more sensitive than the rest, and wonders whether the high level of encryption that he knows is used on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable in regards to this concern?

  • A. There is no encryption on Amazon Glacier, that's why it is cheaper.
  • B. Amazon Glacier automatically encrypts the data using AES-128, a weaker encryption method than Amazon S3's, but you can change it to AES-256 if you are willing to pay more.
  • C. Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3.
  • D. Amazon Glacier automatically encrypts the data using AES-128, a weaker encryption method than Amazon S3's.

Answer: C

Explanation: Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable.
Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks, and is built to be automatically self-healing.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

NEW QUESTION 9
While creating an Amazon RDS DB, your first task is to set up a DB _ that controls what IP addresses or EC2 instances have access to your DB Instance.

  • A. Security Pool
  • B. Secure Zone
  • C. Security Token Pool
  • D. Security Group

Answer: D

NEW QUESTION 10
In Amazon EC2, if your EBS volume stays in the detaching state, you can force the detachment by clicking _.

  • A. Force Detach
  • B. Detach Instance
  • C. Attach Volume
  • D. Attach Instance

Answer: A

Explanation: If your volume stays in the detaching state, you can force the detachment by clicking Force Detach.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html

NEW QUESTION 11
An application hosted on an EC2 instance receives an HTTP request from ELB. The request has an X-Forwarded-For header containing three IP addresses. Which system's IP will be a part of this header?

  • A. Previous Request IP address.
  • B. Client IP address.
  • C. All of the answers listed here.
  • D. Load Balancer IP address.

Answer: C

Explanation: When a user sends a request to ELB over HTTP/HTTPS, the request header log at the instance will only show the IP of the ELB, because ELB is the interceptor between the client and the EC2 instance. To get the client IP, use the X-Forwarded-For request header. The client IP address in the X-Forwarded-For header is followed by the IP addresses of each successive proxy that passes the request along. The last IP address is the one that connects to the back-end application instance. For example, if the HTTP request already has the header when it reaches the load balancer, the IP address from which the request came is appended at the end, followed by the IP address of the load balancer. In such cases, the X-Forwarded-For request header takes the following form:
X-Forwarded-For: clientIPAddress, previousRequestIPAddress, loadBalancerIPAddress.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
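The header layout described above can be sketched as a small parser: the header grows left to right as each proxy appends the address it received the request from, so the original client IP is the first entry. The sample addresses below are documentation-range placeholders.

```python
def parse_x_forwarded_for(header_value):
    """Split an X-Forwarded-For value into (client_ip, proxy_ips).

    The first entry is the original client; later entries are the
    proxies the request passed through, in order.
    """
    addresses = [part.strip() for part in header_value.split(",")]
    return addresses[0], addresses[1:]

client, proxies = parse_x_forwarded_for("203.0.113.7, 198.51.100.2, 10.0.0.5")
print(client)   # 203.0.113.7  (the original client)
print(proxies)  # ['198.51.100.2', '10.0.0.5']  (previous proxy, then the ELB)
```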

NEW QUESTION 12
The Amazon EC2 web service can be accessed using the _ web services messaging protocol. This interface is described by a Web Services Description Language (WSDL) document.

  • A. SOAP
  • B. DCOM
  • C. CORBA
  • D. XML-RPC

Answer: A

NEW QUESTION 13
In Amazon AWS, which of the following statements is true of key pairs?

  • A. Key pairs are used only for Amazon SDKs.
  • B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
  • C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
  • D. Key pairs are used for all Amazon services.

Answer: B

Explanation: Key pairs consist of a public and a private key; you use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

NEW QUESTION 14
A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?

  • A. Amazon EBS volume
  • B. Amazon S3
  • C. Amazon EC2 instance store
  • D. Amazon RDS instance

Answer: B

NEW QUESTION 15
You are architecting an auto-scalable batch processing system using video processing pipelines and Amazon Simple Queue Service (Amazon SQS) for a customer. You are unsure of the limitations of SQS and need to find out. What do you think is a correct statement about the limitations of Amazon SQS?

  • A. It supports an unlimited number of queues but a limited number of messages per queue for each user, and automatically deletes messages that have been in the queue for more than 4 weeks.
  • B. It supports an unlimited number of queues and an unlimited number of messages per queue for each user, but automatically deletes messages that have been in the queue for more than 4 days.
  • C. It supports an unlimited number of queues but a limited number of messages per queue for each user, and automatically deletes messages that have been in the queue for more than 4 days.
  • D. It supports an unlimited number of queues and an unlimited number of messages per queue for each user, but automatically deletes messages that have been in the queue for more than 4 weeks.

Answer: B

Explanation: Amazon Simple Queue Service (Amazon SQS) is a message queue service that handles messages or workflows between components in a system.
Amazon SQS supports an unlimited number of queues and an unlimited number of messages per queue for each user. Be aware that Amazon SQS automatically deletes messages that have been in the queue for more than 4 days (the default retention period).
Reference: http://aws.amazon.com/documentation/sqs/
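The 4-day retention rule above can be sketched as a local simulation. This is not an SQS call; the real service enforces retention server-side, and the dates below are arbitrary examples.

```python
from datetime import datetime, timedelta

DEFAULT_RETENTION = timedelta(days=4)  # SQS default message retention

def is_expired(sent_at, now, retention=DEFAULT_RETENTION):
    """Return True if a message sent at `sent_at` would have been deleted."""
    return now - sent_at > retention

now = datetime(2024, 1, 10, 12, 0)
print(is_expired(datetime(2024, 1, 9, 12, 0), now))  # False (1 day old)
print(is_expired(datetime(2024, 1, 5, 12, 0), now))  # True  (5 days old)
```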

NEW QUESTION 16
Can I initiate a "forced failover" for my MySQL Multi-AZ DB Instance deployment?

  • A. Only in certain regions
  • B. Only in VPC
  • C. Yes
  • D. No

Answer: C

NEW QUESTION 17
How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?

  • A. Detach the volume and attach it to another EC2 instance in the other AZ.
  • B. Simply create a new volume in the other AZ and specify the original volume as the source.
  • C. Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
  • D. Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.

Answer: C

NEW QUESTION 18
An enterprise wants to use a third-party SaaS application. The SaaS application needs access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies requiring that any outside access to its environment conform to the principle of least privilege, and that there be controls in place to ensure the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?

  • A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
  • B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
  • C. Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
  • D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.

Answer: C

Explanation: Granting Cross-Account Permission to Objects It Does Not Own
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.
Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner, the AWS account that created the objects, must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.
Bucket owner account can delegate permissions to users in its own account but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access is
one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:
A trust policy identifying another AWS account that can assume the role.
An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.
The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:
Assume the role and, in response, get temporary security credentials. Using the temporary security credentials, access the objects in the bucket.
For more information about IAM roles, go to Roles (Delegation and Federation) in IAM User Guide. The following is a summary of the walkthrough steps:
Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.
Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when accessing Account A.
Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.
Account C administrator creates a user and attaches a user policy that allows the user to assume the role. User in Account C first assumes the role, which returns the user temporary security credentials.
Using those temporary credentials, the user then accesses objects in the bucket.
For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions), we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials to create resources and grant them permissions.
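The cross-account role in answer C carries two policies, as described above: a trust policy naming who may assume the role, and an access policy limiting what they can do. The sketch below builds both as plain dicts. The account ID, external ID, and the exact EC2 discovery actions are placeholder assumptions for illustration.

```python
import json

SAAS_ACCOUNT_ID = "111122223333"  # hypothetical SaaS provider account

# Trust policy: only the SaaS provider's account may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # An ExternalId condition helps keep the credentials from being
        # reused by any other third party (placeholder value shown).
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

# Access policy: least privilege, only the discovery calls the app needs
# (this action list is an assumption, not taken from the question).
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances", "ec2:DescribeVolumes"],
        "Resource": "*",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The SaaS provider then calls sts:AssumeRole to receive temporary credentials scoped to the access policy, so no long-lived keys ever leave the enterprise account.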

NEW QUESTION 19
You need to import several hundred megabytes of data from a local Oracle database to an Amazon RDS DB instance. What does AWS recommend you use to accomplish this?

  • A. Oracle export/import utilities
  • B. Oracle SQL Developer
  • C. Oracle Data Pump
  • D. DBMS_FILE_TRANSFER

Answer: C

Explanation: How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in your database.
For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases, or databases that are several hundred megabytes or several terabytes in size.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
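The rule of thumb above can be sketched as a tiny helper. The 100 MB threshold is a hypothetical cutoff chosen for illustration; AWS's guidance is qualitative ("simple and small" vs. "complex or large"), not a fixed number.

```python
def recommended_import_tool(size_mb):
    """Hypothetical helper mirroring the guidance above: small, simple
    databases via Oracle SQL Developer; hundreds of MB or more via
    Oracle Data Pump. The 100 MB cutoff is an assumption."""
    return "Oracle SQL Developer" if size_mb <= 100 else "Oracle Data Pump"

print(recommended_import_tool(20))   # Oracle SQL Developer
print(recommended_import_tool(500))  # Oracle Data Pump
```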
