Real Exam Question Amazonaws
https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/
NEW QUESTION 1
- (Exam Topic 1)
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the
information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company had to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to
load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer: D
Explanation:
Placing an Amazon SQS queue between the receiving Lambda function and the loading Lambda function buffers spikes in incoming data, so the database is no longer a bottleneck and the Lambda quotas do not need to be raised.
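As a rough sketch of the pattern in answer D, the receiving function enqueues each payload and a second function drains the queue at a pace the database tolerates. The queue URL and the write_to_aurora helper are hypothetical placeholders, not part of the original question.

import json
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue that buffers traffic between the two functions.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-buffer"

def receive_handler(event, context):
    # Invoked by API Gateway: buffer the payload instead of writing to Aurora.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event["body"]))
    return {"statusCode": 202, "body": "queued"}

def load_handler(event, context):
    # Invoked by SQS: write each buffered record to the database.
    for record in event["Records"]:
        write_to_aurora(json.loads(record["body"]))

def write_to_aurora(row):
    # Placeholder for the actual INSERT into Aurora PostgreSQL.
    pass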
NEW QUESTION 2
- (Exam Topic 1)
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
NEW QUESTION 3
- (Exam Topic 1)
An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The
operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions architect needs to optimize the
application's performance quickly.
What should the solutions architect recommend?
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
NEW QUESTION 4
- (Exam Topic 1)
A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its
AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls
Answer: B
NEW QUESTION 5
- (Exam Topic 1)
A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices.
The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The
company wants to decouple the solution and increase scalability. Which solution meets these requirements?
Answer: D
Explanation:
https://aws.amazon.com/sqs/features/
By routing incoming requests to Amazon SQS, the company can decouple the job requests from the processing instances. This allows them to scale the number of
instances based on the size of the queue, providing more resources when needed. Additionally, using an Auto Scaling group based on the queue size will
automatically scale the number of instances up or down depending on the workload. Updating the software to read from the queue will allow it to process the job
requests in a more efficient manner, improving the performance of the system.
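To make the queue-driven scaling concrete, a target tracking policy on the Auto Scaling group can key off the queue depth. This is a minimal sketch under assumed names (worker-asg, orders-queue) and an assumed target of about 100 visible messages per instance:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="worker-asg",  # hypothetical group name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # assumed backlog target per instance
    },
)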
NEW QUESTION 6
- (Exam Topic 1)
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an
inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The
company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC
B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Answer: C
Explanation:
AWS Network Firewall supports both inspection and filtering as required
NEW QUESTION 7
- (Exam Topic 1)
A solutions architect is developing a multiple-subnet VPC architecture. The solution will consist of six subnets in two Availability Zones. The subnets are defined as
public, private and dedicated for databases. Only the Amazon EC2 instances running in the private subnets should be able to access a database.
Which solution meets these requirements?
A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
B. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance.
C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.
D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
Answer: C
Explanation:
Security groups are stateful. All inbound traffic is blocked by default. If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again. You cannot block specific IP addresses by using security groups; use network ACLs instead.
"You can specify allow rules, but not deny rules." "When you first create a security group, it has no inbound rules. Therefore, no inbound traffic originating from
another host to your instance is allowed until you add inbound rules to the security group." Source:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#VPCSecurityGroups
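A minimal sketch of the allow rule in answer C, assuming hypothetical security group IDs and the MySQL port (use 5432 for PostgreSQL): the database security group admits traffic only from the security group used by the private-subnet instances.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0a1b2c3d4e5f60001",  # hypothetical SG attached to the RDS DB instance
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                # Hypothetical SG used by the instances in the private subnets.
                {"GroupId": "sg-0a1b2c3d4e5f60002"}
            ],
        }
    ],
)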
NEW QUESTION 8
- (Exam Topic 1)
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254
Answer: C
Explanation:
The policy denies all EC2 actions in every Region except us-east-1, and it allows ec2:TerminateInstances only when the request's source IP falls within 10.100.100.0/24. A user whose source IP is 10.100.100.254 can therefore terminate instances in the us-east-1 Region.
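The policy document that this question displays did not survive extraction. A reconstruction consistent with the explanation above, using invented placeholders and expressed as a Python dict, might look like this:

# Hypothetical reconstruction of the missing policy, not the original exhibit.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny every EC2 action outside us-east-1.
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
        {
            # Allow termination only from the 10.100.100.0/24 source range.
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.0/24"}},
        },
    ],
}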
NEW QUESTION 9
- (Exam Topic 1)
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is
streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.
Answer: C
NEW QUESTION 10
- (Exam Topic 1)
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company's data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.
Answer: C
NEW QUESTION 10
- (Exam Topic 1)
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The
company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS
queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the
Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
Answer: C
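A minimal sketch of answer C, with a hypothetical queue URL: if the function timeout is 30 seconds and the batch window is 5 seconds, any visibility timeout comfortably above 35 seconds stops a message from being redelivered while it is still being processed.

import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-events",  # hypothetical
    Attributes={"VisibilityTimeout": "60"},  # seconds; exceeds function timeout + batch window
)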
NEW QUESTION 14
- (Exam Topic 1)
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Answer: AB
NEW QUESTION 18
- (Exam Topic 1)
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution
that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the
visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
Answer: A
NEW QUESTION 19
- (Exam Topic 1)
A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on Amazon EC2
instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from thousands of IP addresses.
Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Select TWO.)
Answer: AC
Explanation:
https://aws.amazon.com/cloudfront/
NEW QUESTION 21
- (Exam Topic 1)
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
Answer: D
NEW QUESTION 24
- (Exam Topic 1)
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
Answer: D
Explanation:
https://aws.amazon.com/solutions/implementations/aws-streaming-data-solution-for-amazon-kinesis/
NEW QUESTION 29
- (Exam Topic 1)
A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials
according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure
solution.
What should a solutions architect do to secure the audit documents?
Answer: A
NEW QUESTION 31
- (Exam Topic 1)
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that
contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
The aws:PrincipalOrgID global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. For example, the following Amazon S3 bucket policy allows members of any account in the XXX organization to add an object into the examtopics bucket.
{"Version": "2020-09-10",
"Statement": {
"Sid": "AllowPutObject", "Effect": "Allow",
"Principal": "*", "Action": "s3:PutObject",
NEW QUESTION 33
- (Exam Topic 1)
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and
database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an
inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.
Answer: D
Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/
NEW QUESTION 38
- (Exam Topic 1)
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be
encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the
certificate expires.
What should a solutions architect do to meet these requirements?
Answer: D
NEW QUESTION 40
- (Exam Topic 1)
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.
Answer: C
Explanation:
Amazon EFS provides a standard file system structure, scales automatically, and is highly available.
NEW QUESTION 41
- (Exam Topic 1)
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
Answer: D
Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
NEW QUESTION 42
- (Exam Topic 1)
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling
group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data
in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company
wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Answer: C
Explanation:
Amazon Aurora delivers up to five times the throughput of standard MySQL on RDS. Aurora Auto Scaling with Aurora Replicas absorbs the unpredictable read-heavy workload, and a Multi-AZ deployment maintains high availability.
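A minimal sketch of the scaling half of answer C via Application Auto Scaling, assuming a hypothetical cluster identifier and a 60% reader-CPU target:

import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",  # hypothetical Aurora cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

aas.put_scaling_policy(
    PolicyName="scale-aurora-readers",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # assumed target; tune to the workload
    },
)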
NEW QUESTION 45
- (Exam Topic 2)
A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs in Amazon
OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
B. Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
C. Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
D. Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Answer: B
Explanation:
https://computingforgeeks.com/stream-logs-in-aws-from-cloudwatch-to-elasticsearch/
NEW QUESTION 48
- (Exam Topic 2)
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that are stored in
Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files to the users
while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
Answer: B
NEW QUESTION 52
- (Exam Topic 2)
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed
through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.
Which solution will meet these requirements?
A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
D. Use the AWS-provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
Answer: A
Explanation:
(https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)
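A rough sketch of answer A, with hypothetical VPC, subnet, security group, role, and bucket names: create the interface endpoint in the instance's subnet, then deny any principal other than the instance's IAM role in the bucket policy.

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0a1b2c3d4e5f60001",              # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0a1b2c3d4e5f60001"],     # subnet where the EC2 instance runs
    SecurityGroupIds=["sg-0a1b2c3d4e5f60001"],
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "OnlyInstanceRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-upload-bucket/*",
        "Condition": {"StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/uploader-role"
        }},
    }],
}
s3.put_bucket_policy(Bucket="example-upload-bucket", Policy=json.dumps(policy))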
NEW QUESTION 54
- (Exam Topic 2)
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for
customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics.
The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?
Answer: D
NEW QUESTION 55
- (Exam Topic 2)
A company wants to direct its users to a backup static error page if the company's primary website is unavailable. The primary website's DNS records are hosted in
Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure
overhead.
Which solution will meet these requirements?
Answer: B
NEW QUESTION 60
- (Exam Topic 2)
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2 instance in a
single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This application stores the details in a
PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?
Answer: B
NEW QUESTION 63
- (Exam Topic 2)
A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for
processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user
requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.
What should a solutions architect do to address this issue without impacting existing users?
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
Answer: D
Explanation:
By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and improve the overall scalability
and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue accepting user requests even if the processing microservices
are experiencing high workloads or are temporarily unavailable. The Lambda function can then retrieve requests from the SQS queue and write them to
DynamoDB, ensuring that all user requests are stored and processed. This approach allows the company to scale the processing microservices independently
from the API front end, ensuring that the API remains available to users even during periods of high demand.
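A minimal sketch of the buffered-write function in answer D, assuming a hypothetical DynamoDB table named user-requests and JSON message bodies:

import json
import boto3

table = boto3.resource("dynamodb").Table("user-requests")  # hypothetical table

def handler(event, context):
    # Invoked by SQS: persist each buffered user request to DynamoDB.
    for record in event["Records"]:
        table.put_item(Item=json.loads(record["body"]))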
NEW QUESTION 68
- (Exam Topic 2)
A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files and must restrict
all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company must keep every file in the repository
for a minimum of 1 year after its creation date.
Which solution will meet these requirements?
Answer: C
NEW QUESTION 72
- (Exam Topic 2)
A solutions architect needs to help a company optimize the cost of running an application on AWS. The application will use Amazon EC2 instances, AWS Fargate,
and AWS Lambda for compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2 instances can be
interrupted at any time. The application front end will run on Fargate, and Lambda will serve the API layer. The front-end utilization and API layer utilization will be
predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this application? (Choose two.)
Answer: AC
NEW QUESTION 77
- (Exam Topic 2)
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter.
The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the
instances.
The company's developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS).
What should a solutions architect recommend for communication between the microservices?
Answer: A
Explanation:
A FIFO queue has limited throughput (300 messages per second without batching, or 3,000 messages per second with batching of up to 10 messages per operation), does not allow duplicate messages in the queue (exactly-once delivery), preserves message order, and must have a name that ends in .fifo.
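For reference, the FIFO properties listed above map to queue attributes that are set when the queue is created; the queue name below is a hypothetical example:

import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        # Deduplicate identical messages sent within the 5-minute window.
        "ContentBasedDeduplication": "true",
    },
)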
NEW QUESTION 79
- (Exam Topic 2)
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy the containerized application in a highly available architecture that minimizes operational overhead.
Which solution will meet these requirements?
A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service (Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.
D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold is breached.
Answer: A
NEW QUESTION 80
- (Exam Topic 2)
A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files through an Amazon
CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL.
What should a solutions architect do to meet these requirements?
A. Write individual policies for each S3 bucket to grant read permission for only CloudFront access.
B. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.
C. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon Resource Name (ARN).
D. Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
Answer: D
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
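A minimal sketch of the bucket-permission half of answer D, with a hypothetical OAI ID and bucket name; once this policy is in place and public access is blocked, direct S3 URLs fail while CloudFront requests succeed.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            # Hypothetical OAI ID.
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE51Z"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::file-share-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="file-share-bucket", Policy=json.dumps(policy))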
NEW QUESTION 82
- (Exam Topic 2)
A company is planning to build a high performance computing (HPC) workload as a service solution that is hosted on AWS. A group of 16 Amazon EC2 Linux instances requires the lowest possible latency for node-to-node communication. The instances also need a shared block device volume for high-performing storage.
Which solution will meet these requirements?
Answer: A
NEW QUESTION 84
- (Exam Topic 2)
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the ap-
northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3.
Answer: AC
NEW QUESTION 88
- (Exam Topic 2)
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses a MySQL
database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The application is showing signs
of performance degradation during busy times and is presenting 5xx errors. The company needs to make the application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
Answer: D
NEW QUESTION 90
- (Exam Topic 2)
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group
B. Use a target tracking policy to dynamically scale the Auto Scaling group
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group
Answer: B
Explanation:
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html
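A minimal sketch of answer B, assuming a hypothetical Auto Scaling group name; the target value matches the 40% CPU sweet spot stated in the question:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical
    PolicyName="keep-cpu-near-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)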
NEW QUESTION 95
- (Exam Topic 2)
A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on Amazon S3. The
EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?
Answer: C
NEW QUESTION 98
- (Exam Topic 2)
A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company’s
website demands globally. The solution should be
cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time.
Which combination should a solutions architect recommend to meet these requirements?
Answer: A
Explanation:
Amazon CloudFront delivers the fastest possible response times globally, and serving the reports from Amazon S3 keeps provisioned infrastructure to a minimum.
A. Configure the company's email server to forward notification email messages that are sent to the AWS account root user email address to all users in the organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
Answer: D
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
Answer: B
Explanation:
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and
policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application
Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions.
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
Answer: A
Explanation:
ElastiCache can help speed up the read performance of the database by caching frequently accessed data, reducing latency and allowing the application to
access the data more quickly. This solution requires minimal modifications to the current architecture, as ElastiCache can be used in conjunction with the existing
Amazon RDS for MySQL database.
Answer: D
Explanation:
We recommend that you use On-Demand Instances for applications with short-term, irregular workloads that cannot be interrupted.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html and https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
Answer: CE
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Answer: B
A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
Answer: B
Explanation:
High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple availability zones. The ASG will
automatically balance the load so you don't actually need to specify the instances per AZ.
Answer: B
A. Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon EventBridge scheduled event that calls the API and invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
Answer: C
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
Answer: AE
Answer: BE
Answer: D
A. Create an Amazon CloudWatch alarm that enters the ALARM state when the CPUUtilization metric is less than 20%. Create an AWS Lambda function that the CloudWatch alarm invokes to terminate one of the EC2 instances in the ALB target group.
B. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set a target tracking scaling policy that is based on the ASGAverageCPUUtilization metric. Set the minimum instances to 2, the desired capacity to 3, the maximum instances to 6, and the target value to 50%. Add the EC2 instances to the Auto Scaling group.
C. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set the minimum instances to 2, the desired capacity to 3, and the maximum instances to 6. Add the EC2 instances to the Auto Scaling group.
D. Create two Amazon CloudWatch alarms. Configure the first CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is below 20%. Configure the second CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is above 50%. Configure the alarms to publish to an Amazon Simple Notification Service (Amazon SNS) topic to send an email message. After receiving the message, log in to decrease or increase the number of EC2 instances that are running.
Answer: B
A. Increase the size of the DB instance to an instance type that has more available memory.
B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database.
D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that Amazon SNS invokes to write data from the topic to the database.
Answer: C
Explanation:
Using Amazon SQS will help minimize the number of connections to the database, as the API will write data to a queue instead of directly to the database.
Additionally, using an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database will help ensure that data is not lost during
periods of heavy traffic, as the queue will serve as a buffer between the API and the database.
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
Answer: A
A. Create an Amazon S3 bucket. Enable static web hosting on the S3 bucket. Upload the static content to the S3 bucket. Use AWS Lambda to process all dynamic content.
B. Deploy the web application to an AWS Elastic Beanstalk environment. Use URL swapping to switch between multiple Elastic Beanstalk environments for feature testing.
C. Deploy the web application to Amazon EC2 instances that are configured with Java and PHP. Use Auto Scaling groups and an Application Load Balancer to manage the website's availability.
D. Containerize the web application. Deploy the web application to Amazon EC2 instances. Use the AWS Load Balancer Controller to dynamically route traffic between containers that contain the new site features for testing.
Answer: B
A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data
storage.
Answer: D
Explanation:
Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up,
operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers
and tools that you use with MongoDB.
https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html
A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
Answer: B
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined rate.
Answer: B
Answer: C
A. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
B. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-1 Region.
D. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.
Answer: B
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
Answer: C
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
B. Create an ALB listener rule to reply to SQL injection with a fixed response.
C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.
D. Set up Amazon Inspector to block all SQL injection attempts automatically.
Answer: A
A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.
D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon CloudFront to deliver HTTPS content using the EC2 instances as the origin.
Answer: C
Explanation:
This solution meets the requirements for a highly available application with web, application, and database tiers, as well as providing edge-based content delivery. It also maximizes security: the EC2 instances sit in private subnets, which limits direct access to the web servers, while traffic is still served over the internet via the public ALB. Keeping the web servers off the public internet reduces the attack surface and provides a secure way to access the application.
Answer: AD
Explanation:
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With
Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
Answer: C
Answer: C
Answer: C
A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.
B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.
C. Use Amazon S3 Intelligent-Tiering access tiers.
D. Use two large EC2 instances to host the database in active-passive mode.
Answer: A
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Answer: D
Answer: A
Answer: B
Explanation:
https://aws.amazon.com/pinpoint/product-details/sms/ Two-Way Messaging: Receive SMS messages from your customers and reply back to them in a chat-like
interactive experience. With Amazon Pinpoint, you can create automatic responses when customers send you messages that contain certain keywords. You can
even use Amazon Lex to create conversational bots. A majority of mobile phone users read incoming SMS messages almost immediately after receiving them. If
you need to be able to provide your customers with urgent or important information, SMS messaging may be the right solution for you. You can use Amazon
Pinpoint to create targeted groups of customers, and then send them campaign-based messages. You can also use Amazon Pinpoint to send direct messages,
such as appointment confirmations, order updates, and one-time passwords.
A. Host the visualization tool on premises and query the data warehouse directly over the internet.
B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.
C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region.
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region.
Answer: D
Explanation:
https://aws.amazon.com/directconnect/pricing/ https://aws.amazon.com/blogs/aws/aws-data-transfer-prices-reduced/
Answer: B
Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
B. Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.
C. Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.
D. Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-premises servers.
Answer: A
Explanation:
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration
between an organization's on-premises IT environment and AWS's storage infrastructure. By deploying a file gateway as a virtual machine on each clinic's
premises, the medical research lab can provide low-latency access to the data stored in the S3 bucket while maintaining read-only permissions for each clinic. This
solution allows the clinics to access the data files directly from their
on-premises file-based applications without the need for data transfer or migration.
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
B. Update the S3 ACL to allow the application to access the protected content.
C. Redeploy the application to Amazon S3 to prevent eventually consistent reads in the S3 bucket from affecting the ability of users to access the protected content.
D. Update the Amazon Cognito pool to use custom attribute mappings within the identity pool and grant users the proper permissions to access the protected content.
Answer: B
Answer: AB
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments
Answer: C
Answer: C
Explanation:
This approach provides both high availability and scalability for the website platform. Moving the database to Amazon Aurora with a read replica in another
Availability Zone provides a failover option for the database. An Application Load Balancer and an Auto Scaling group spanning two Availability Zones allow
the website to scale automatically to meet increased user demand. Additionally, creating an AMI from the original EC2 instance allows easy replication
of the instance in case of failure.
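As a minimal sketch of the database piece (the identifiers below are hypothetical), adding a reader instance in a second Availability Zone to an existing Aurora MySQL cluster might look like this; Aurora can then promote the reader if the writer fails:

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to an existing Aurora MySQL cluster (hypothetical
# identifiers). Placing it in a different AZ from the writer gives the
# cluster a failover target.
rds.create_db_instance(
    DBInstanceIdentifier="website-db-reader-1",    # hypothetical instance name
    DBClusterIdentifier="website-aurora-cluster",  # hypothetical cluster name
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    AvailabilityZone="us-east-1b",  # different AZ from the writer
)
```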
A. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Region.
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EC2 instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application's EBS volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different Availability Zone.
Answer: B
Explanation:
The most operationally efficient solution is to create a backup plan in AWS Backup that performs nightly backups and copies them to another Region. Adding
the application's EC2 instances as resources ensures that the instance configuration and the attached EBS volumes are backed up together, and copying the
backups to another Region ensures that the application is recoverable in a different AWS Region.
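As a minimal sketch of such a plan (the vault names, account ID, and schedule are hypothetical placeholders), the AWS Backup API call might look like the following; assigning the EC2 instances to the plan would be a separate create_backup_selection call:

```python
import boto3

backup = boto3.client("backup")

# Nightly backup plan that copies each recovery point to a vault in
# another Region. Vault names and the account ID are hypothetical.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-cross-region",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # nightly, 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        # Cross-Region copy to a hypothetical DR vault
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:"
                            "backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)
```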
Answer: C
Answer: A
Answer: C
- (Exam Topic 3)
A company is building a solution that will report Amazon EC2 Auto Scaling events across all the applications in an AWS account. The company needs to use a
serverless solution to store the EC2 Auto Scaling status data in Amazon S3. The company then will use the data in Amazon S3 to provide near-real-time updates
in a dashboard. The solution must not affect the speed of EC2 instance launches.
How should the company move the data to Amazon S3 to meet these requirements?
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
Answer: A
Explanation:
You can use metric streams to continually stream CloudWatch metrics to a destination of your choice, with near-real-time delivery and low latency. One of the use
cases is Data Lake: create a metric stream and direct it to an Amazon Kinesis Data Firehose delivery stream that delivers your CloudWatch metrics to a data lake
such as Amazon S3.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
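A hedged sketch of the metric-stream setup (the Firehose and IAM role ARNs are hypothetical, and the delivery stream is assumed to already write to S3):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Stream only Auto Scaling metrics to an existing Kinesis Data Firehose
# delivery stream that lands the data in S3. ARNs are hypothetical.
cloudwatch.put_metric_stream(
    Name="autoscaling-to-s3",
    IncludeFilters=[{"Namespace": "AWS/AutoScaling"}],  # only ASG metrics
    FirehoseArn=(
        "arn:aws:firehose:us-east-1:111122223333:deliverystream/asg-status"
    ),
    RoleArn="arn:aws:iam::111122223333:role/metric-stream-to-firehose",
    OutputFormat="json",
)
```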
A. Create a proxy in RDS Proxy. Configure the users' applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users' applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users' applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users' applications to switch between the DB instances.
Answer: A
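As a rough sketch of option A (all names, ARNs, and subnet IDs below are hypothetical), creating the proxy and registering the existing DB instance with boto3 might look like:

```python
import boto3

rds = boto3.client("rds")

# Create an RDS Proxy in front of the MySQL instance. The proxy pools and
# reuses connections, smoothing out connection spikes from the applications.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": (
            "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds"
        ),
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    RequireTLS=True,
)

# Point the proxy at the existing DB instance; applications then connect
# to the proxy endpoint instead of the instance endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-mysql-instance"],
)
```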
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: C
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Answer: A
Answer: A
Explanation:
AWS Snowball is a secure data transport solution that accelerates moving large amounts of data into and out of the AWS Cloud. A single device can hold up to
80 TB, and because the data is shipped offline rather than sent over the network, the migration is not constrained by a limited connection such as a 50-Mbps
link. It is also secure and easy to use, which makes it well suited for this migration.
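A hedged sketch of ordering the import job (the bucket ARN, address ID, and role ARN are hypothetical placeholders):

```python
import boto3

snowball = boto3.client("snowball")

# Order a Snowball import job that loads on-premises data into an S3
# bucket. Bucket ARN, address ID, and role ARN are hypothetical.
snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::migration-target-bucket"}
        ]
    },
    AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",  # shipping address
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    SnowballType="EDGE",
    SnowballCapacityPreference="T80",  # 80 TB device
    ShippingOption="SECOND_DAY",
)
```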
Answer: A
Explanation:
https://aws.amazon.com/fsx/lustre/
Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such
as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data
through high-performance shared storage.
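As a minimal sketch (the subnet ID and S3 bucket are hypothetical), creating a scratch FSx for Lustre file system linked to an S3 bucket so compute nodes can share the dataset might look like:

```python
import boto3

fsx = boto3.client("fsx")

# Create a scratch FSx for Lustre file system that lazily imports objects
# from an S3 bucket. Subnet ID and bucket name are hypothetical.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; minimum size for SCRATCH_2
    SubnetIds=["subnet-0abc1234"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://shared-training-data",  # hypothetical bucket
    },
)
```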
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
Answer: A
Answer: C
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
Answer: BD
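As a hedged sketch of the SQS half of the answer (the queue name and key alias are hypothetical), creating a queue encrypted with a customer managed KMS key and denying non-TLS access might look like:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Create a queue encrypted with a customer managed KMS key.
# Queue name and key alias are hypothetical.
queue_url = sqs.create_queue(
    QueueName="orders-queue",
    Attributes={"KmsMasterKeyId": "alias/orders-cmk"},
)["QueueUrl"]

# Deny any access to the queue that does not use TLS
# (aws:SecureTransport is false for unencrypted connections).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonTLS",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": "*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},
)
```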
Answer: B
Answer: C
Answer: AE
Explanation:
"RESTful web services" => API Gateway.
"EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket"
=> GLUE with (Extract - Transform - Load)
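As a hedged sketch of the Glue piece (the bucket paths and the dropped field are hypothetical), a minimal Glue PySpark job that reads raw JSON from S3, applies a simple transform, and writes Parquet back to S3 could look like this; the script assumes the AWS Glue job environment, which provides the awsglue library:

```python
# Minimal AWS Glue PySpark job: read raw JSON from S3, drop a field, and
# write Parquet back to S3. Bucket paths and the field name are hypothetical.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the raw JSON objects from the ingest bucket.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://raw-ingest-bucket/"]},
    format="json",
)

# Example transform: remove a field that is not needed downstream.
transformed = raw.drop_fields(["debug_payload"])  # hypothetical field

# Write the transformed data to the processed bucket as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=transformed,
    connection_type="s3",
    connection_options={"path": "s3://processed-data-bucket/"},
    format="parquet",
)
```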
A. Configure a TLS listener and add the server certificate on the NLB
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS)
Answer: A
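As a rough sketch of option A (all ARNs are hypothetical placeholders), adding a TLS listener with an ACM certificate to an existing Network Load Balancer might look like:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Add a TLS listener with an ACM certificate to an existing NLB. The NLB
# terminates TLS and forwards traffic to the target group. ARNs are
# hypothetical.
elbv2.create_listener(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/app-nlb/1234567890abcdef"
    ),
    Protocol="TLS",
    Port=443,
    Certificates=[{
        "CertificateArn": (
            "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234"
        )
    }],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/app-targets/abcdef1234567890"
        ),
    }],
)
```

Option A works because Network Load Balancers support TLS listeners natively; AWS WAF, by contrast, attaches to Application Load Balancers and CloudFront, not to NLBs.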