AWS Associate architect part 2
What should a solutions architect do to improve the security of the data in transit?
Answer: A
Explanation:
Network Load Balancers now support the TLS protocol. With this launch, you can offload resource-intensive
decryption/encryption from your application servers to a high-throughput, low-latency Network Load
Balancer. The Network Load Balancer terminates TLS traffic and sets up connections with your targets over
either TCP or TLS.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
https://exampleloadbalancer.com/nlbtls_demo.html
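As a rough illustration of this TLS offload, the boto3 sketch below creates a TLS listener on a Network Load
Balancer that terminates client connections and forwards them to a TCP target group. The load balancer ARN,
target group ARN, and ACM certificate ARN are placeholders, not values from the question.

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs -- replace with real resources in your account.
NLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/example-nlb/abc123"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/def456"
CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"

# Terminate TLS on the NLB and forward the decrypted traffic to the targets over TCP.
elbv2.create_listener(
    LoadBalancerArn=NLB_ARN,
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)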
Answer: A
Explanation:
Dedicated Host Reservations provide a billing discount compared to running On-Demand Dedicated Hosts.
Reservations are available in three payment options.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html
A.Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed
data to S3 Glacier.
B.Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed
data to S3 Standard-Infrequent Access (S3 Standard-IA).
C.Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management
policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS Standard-IA).
D.Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a lifecycle management
policy to move infrequently accessed data to EFS One Zone-Infrequent Access (EFS One Zone-IA).
Answer: C
Explanation:
POSIX-compliant storage that is shareable across EC2 instances points to EFS, which rules out A and B. Because
the instances run across multiple Availability Zones, the EFS Standard storage class (C) is needed rather than
EFS One Zone. A Linux-based, POSIX-compliant workload is a classic Amazon EFS use case.
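To illustrate the lifecycle part of option C, the sketch below (boto3, with a placeholder file system ID) enables
an EFS lifecycle policy that moves files to EFS Standard-Infrequent Access after 30 days without access.

import boto3

efs = boto3.client("efs")

# fs-12345678 is a placeholder EFS Standard file system ID.
efs.put_lifecycle_configuration(
    FileSystemId="fs-12345678",
    LifecyclePolicies=[
        # Move files that have not been accessed for 30 days to EFS Standard-IA.
        {"TransitionToIA": "AFTER_30_DAYS"},
        # Optionally move them back to Standard on their next access.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)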
Which additional configuration strategy should the solutions architect use to meet these requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for
the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the
MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security
group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL
for the MySQL servers and allow port 3306 from the web servers security group.
Answer: C
Explanation:
The load balancer is public facing and accepts all traffic coming toward the VPC (0.0.0.0/0). The web servers
need to trust only traffic originating from the ALB. The database trusts only traffic originating from the web
servers' security group on port 3306 for MySQL.
Answer: B
Explanation:
The best solution is to implement Amazon ElastiCache to cache the large datasets, which will store the
frequently accessed data in memory, allowing for faster retrieval times. This can help to alleviate the frequent
calls to the database, reduce latency, and improve the overall performance of the backend tier.
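A minimal cache-aside sketch of this idea, using the redis-py client against an ElastiCache for Redis endpoint;
the endpoint, key names, and the load_from_database helper are assumptions for illustration only.

import json
import redis

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_dataset(dataset_id, load_from_database):
    """Cache-aside: serve from Redis when possible, fall back to the database on a miss."""
    key = f"dataset:{dataset_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit -- no database call
    data = load_from_database(dataset_id)    # cache miss -- expensive database query
    cache.setex(key, 300, json.dumps(data))  # keep the result warm for 5 minutes
    return data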
Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)
A.Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation
stack operations.
B.Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers
IAM policy attached.
C.Create a new IAM user for the deployment engineer and add the IAM user to a group that has the
AdministratorAccess IAM policy attached.
D.Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy
that allows AWS CloudFormation actions only.
E.Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS
CloudFormation stack and launch stacks using that IAM role.
Answer: DE
Explanation:
https://www.certyiq.com/discussions/amazon/view/46428-exam-aws-certified-solutions-architect-associate-saa-c02/
The web application is not working as intended. The web application reports that it cannot connect to the
database. The database is confirmed to be up and running. All configurations for the network ACLs, security
groups, and route tables are still in their default states.
What should a solutions architect recommend to fix the application?
A.Add an explicit rule to the private subnet’s network ACL to allow traffic from the web tier’s EC2 instances.
B.Add a route in the VPC route table to allow traffic between the web tier’s EC2 instances and the database
tier.
C.Deploy the web tier's EC2 instances and the database tier’s RDS instance into two separate VPCs, and
configure VPC peering.
D.Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from the web
tiers security group.
Answer: D
Explanation:
By default, all inbound traffic to an RDS instance is blocked. Therefore, an inbound rule needs to be added to
the security group of the RDS instance to allow traffic from the security group of the web tier's EC2 instances.
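For reference, a boto3 sketch of the fix described above, using placeholder security group IDs: it adds an
inbound rule to the database security group that allows MySQL traffic only from the web tier's security group.

import boto3

ec2 = boto3.client("ec2")

DB_SG_ID = "sg-0db0000000000000"   # security group of the RDS instance (placeholder)
WEB_SG_ID = "sg-0web000000000000"  # security group of the web tier EC2 instances (placeholder)

ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Reference the web tier security group instead of an IP range.
        "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
    }],
)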
Answer: A
Explanation:
1. Option "A" is the right answer . Read replica use cases - You have a production database that is taking on
normal load & You want to run a reporting application to run some analytics • You create a Read Replica to run
the new workload there • The production application is unaffected • Read replicas are used for SELECT (=read)
only kind of statements (not INSERT, UPDATE, DELETE)
The company wants to optimize customer session management during transactions. The application must store
session data durably.
Answer: AD
Explanation:
The goal is to optimize customer session management during transactions. The session store is used only for
the duration of the transaction, while the existing MariaDB database continues to handle pre- and
post-transaction storage.
The backup strategy must maximize scalability and optimize resource utilization for this environment.
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database
every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable
automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated
backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours.
Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Answer: C
Explanation:
If there is no temporary local storage on the EC2 instances, snapshots of the EBS volumes are not necessary.
In that case, retaining AMIs of the web and application tiers is sufficient to restore the system after a failure.
EBS snapshots would be needed only if you wanted to back up the entire EC2 instance, including applications
and temporary data stored on the attached EBS volumes: a snapshot captures the full contents of a volume, so
the instance can be restored to a specific point in time more quickly. Because no temporary data is stored on
the EBS volumes here, EBS snapshots are unnecessary.
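As a sketch of how the database portion of option C would be recovered, the boto3 call below performs an
RDS point-in-time restore into a new DB instance; the identifiers and timestamp are placeholders.

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Restore the automated backups of "prod-db" (placeholder) to a specific point in time.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",
    TargetDBInstanceIdentifier="prod-db-restored",
    RestoreTime=datetime(2024, 1, 15, 10, 0, tzinfo=timezone.utc),
    # Or pass UseLatestRestorableTime=True instead to restore to the most recent restorable time.
)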
The application must be secure and accessible for global customers that have dynamic IP addresses.
How should a solutions architect configure the security groups to meet these requirements?
A.Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0.
Configure the security group for the DB instance to allow inbound traffic on port 3306 from the security group
of the web servers.
B.Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses
of the customers. Configure the security group for the DB instance to allow inbound traffic on port 3306 from
the security group of the web servers.
C.Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses
of the customers. Configure the security group for the DB instance to allow inbound traffic on port 3306 from
the IP addresses of the customers.
D.Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0.
Configure the security group for the DB instance to allow inbound traffic on port 3306 from 0.0.0.0/0.
Answer: A
Explanation:
If the customers have dynamic IP addresses, option A would be the most appropriate solution for allowing
global access while maintaining security.
A. Process the audio files by using Amazon Kinesis Video Streams. Use an AWS Lambda function to scan for
known PII patterns.
B. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start an Amazon
Textract task to analyze the call recordings.
C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file is
uploaded to the S3 bucket, invoke an AWS Lambda function to start the transcription job. Store the output in a
separate S3 bucket.
D. Create an Amazon Connect contact flow that ingests the audio files with transcription turned on. Embed an
AWS Lambda function to scan for known PII patterns. Use Amazon EventBridge to start the contact flow when
an audio file is uploaded to the S3 bucket.
Answer: C
Explanation:
Option C is the most suitable solution as it suggests using Amazon Transcribe with PII redaction turned on.
When an audio file is uploaded to the S3 bucket, an AWS Lambda function can be used to start the
transcription job. The output can be stored in a separate S3 bucket to ensure that the PII redaction is applied
to the transcript. Amazon Transcribe can redact PII such as credit card numbers, social security numbers, and
phone numbers.
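A sketch of the Lambda-invoked call in option C, assuming placeholder bucket names and job name: it starts an
Amazon Transcribe job with PII redaction enabled and writes the redacted transcript to a separate bucket.

import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="call-recording-0001",                   # placeholder job name
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://call-recordings/rec-0001.wav"},  # uploaded audio file (placeholder)
    OutputBucketName="redacted-transcripts",                      # separate output bucket (placeholder)
    ContentRedaction={
        "RedactionType": "PII",
        "RedactionOutput": "redacted",   # store only the redacted transcript
    },
)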
Answer: D
Explanation:
A - Magnetic: maximum 200 IOPS - wrong. B - a single gp3 volume: maximum 16,000 IOPS - wrong. C - io2 is not
supported for RDS - wrong. D - correct: two gp3 volumes at 16,000 IOPS each give 2 x 16,000 = 32,000 IOPS.
To improve the application performance, you can replace the 2,000 GB gp3 volume with two 1,000 GB gp3
volumes. This will increase the number of IOPS available to the database and improve performance.
Which service should the solutions architect use to find the desired information?
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config
Answer: C
Explanation:
C. AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and
risk auditing of AWS account activity. CloudTrail logs changes made to resources in an AWS account, including
changes made by IAM users, EC2 instances, the AWS Management Console, and other AWS services. By using
CloudTrail, the solutions architect can identify the IAM user who made the configuration changes to the
security group rules.
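To show how the architect might actually query CloudTrail for the change, here is a short boto3 sketch that
looks up AuthorizeSecurityGroupIngress events and prints who made each call; the event name is one example
of a security-group API call, not a value from the question.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Find recent API calls that added inbound security group rules.
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",
    }],
    MaxResults=50,
)

for event in events["Events"]:
    # Username identifies the IAM user (or role session) that made the change.
    print(event["EventTime"], event.get("Username"), event["EventName"])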
Answer: A
Explanation:
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS)
attacks for applications running on AWS. AWS Shield Standard is automatically enabled to all AWS
customers at no additional cost. AWS Shield Advanced is an optional paid service. AWS Shield Advanced
provides additional protections against more sophisticated and larger attacks for your applications running on
Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global
Accelerator, and Route 53.
A solutions architect needs to minimize the amount of operational effort that is needed for the job to run.
A.Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge
event to run once a day.
B.Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the
function. Create an Amazon EventBridge scheduled event that calls the API and invokes the function.
C.Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create
an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
D.Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an
Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge scheduled event that
launches an ECS task on the cluster to run the job.
Answer: C
Explanation:
The requirement is to run a daily scheduled job to aggregate and filter sales records for analytics in the most
efficient way possible. Based on the requirement, we can eliminate option A and B since they use AWS
Lambda which has a limit of 15 minutes of execution time, which may not be sufficient for a job that can take
up to an hour to complete.Between options C and D, option C is the better choice since it uses AWS Fargate
which is a serverless compute engine for containers that eliminates the need to manage the underlying EC2
instances, making it a low operational effort solution. Additionally, Fargate also provides instant scale-up and
scale-down capabilities to run the scheduled job as per the requirement.Therefore, the correct answer is:C.
Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create
an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
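A rough boto3 sketch of option C, using placeholder ARNs, subnets, and security group: it creates a daily
EventBridge rule whose target runs a Fargate task on the ECS cluster.

import boto3

events = boto3.client("events")

# Run once per day.
events.put_rule(Name="daily-sales-aggregation", ScheduleExpression="rate(1 day)")

events.put_targets(
    Rule="daily-sales-aggregation",
    Targets=[{
        "Id": "run-aggregation-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/analytics",   # placeholder cluster ARN
        "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",       # role EventBridge assumes to run the task
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/aggregate-sales:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0abc1234"],        # placeholder subnet
                    "SecurityGroups": ["sg-0abc1234"],     # placeholder security group
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    }],
)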
Question: 398
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the
AWS Cloud. The data transfer must be complete within 2 weeks. The data is sensitive and must be encrypted in
transit. The company’s internet connection can support an upload speed of 100 Mbps.
A.Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
B.Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the
data over the VPN connection.
C.Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the
devices to transfer the data to Amazon S3.
D.Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS
Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
Answer: C
Explanation:
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices and use
the devices to transfer the data to Amazon S3. Snowball Edge is a petabyte-scale data transfer device that can
move large amounts of data securely and quickly, and it is the most cost-effective option for transferring large
amounts of data over long distances. It is also the only option that can complete the 600 TB transfer within
two weeks, because uploading 600 TB over a 100 Mbps connection would take well over a year.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a
maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway
stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the
predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API
endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined
rate.
Answer: B
Explanation:
A rate-based rule in AWS WAF allows the security team to configure thresholds that trigger rate-based rules,
which enable AWS WAF to track the rate of requests for a specified time period and then block them
automatically when the threshold is exceeded. This provides the ability to prevent HTTP flood attacks with
minimal operational overhead.
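A trimmed boto3 sketch of option B, assuming a Regional scope and a placeholder API Gateway stage ARN: it
creates a web ACL with a single rate-based rule and associates it with the stage. The 2,000-request limit is an
example threshold, not a value from the question.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="api-rate-limit",
    Scope="REGIONAL",                      # Regional web ACL for API Gateway
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {
            # Block any single IP that exceeds 2,000 requests in a 5-minute window.
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiRateLimitAcl",
    },
)

# Placeholder stage ARN format: arn:aws:apigateway:{region}::/restapis/{api-id}/stages/{stage}
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)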
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify
internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS)
topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple
Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute
for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams
can subscribe.
Answer: C
Explanation:
The best solution to meet these requirements with the least amount of operational overhead is to enable
Amazon DynamoDB Streams on the table and use triggers to write to a single Amazon Simple Notification
Service (Amazon SNS) topic to which the teams can subscribe. This solution requires minimal configuration
and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to
the DynamoDB table. The triggers automatically capture the changes and publish them to the SNS topic,
which notifies the internal teams.
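A minimal sketch of the trigger in option C: a Lambda function (with a placeholder topic ARN supplied through
an environment variable) that receives DynamoDB Streams batches and publishes each new item to the single
SNS topic that the teams subscribe to.

import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]   # e.g. arn:aws:sns:us-east-1:111122223333:new-events (placeholder)

def handler(event, context):
    """Invoked by the DynamoDB Streams trigger with a batch of table changes."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue                      # only notify teams about new items
        new_item = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New event record",
            Message=json.dumps(new_item),
        )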
The company needs a solution that avoids any single points of failure. The solution must give the application the
ability to scale to meet user demand.
A.Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
B.Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single
Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery.
C.Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote
the read replica to replace the primary DB instance if the primary DB instance fails.
D.Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple
Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage
between the instances.
Answer: A
Explanation:
The correct answer is A: deploy the application servers by using Amazon EC2 instances in an Auto Scaling group
across multiple Availability Zones, and use an Amazon RDS DB instance in a Multi-AZ configuration. The Auto
Scaling group spread across multiple Availability Zones removes the single point of failure at the application
tier and gives the application the ability to scale to meet user demand. With an RDS Multi-AZ configuration, the
database is synchronously replicated to a standby in another Availability Zone, so it remains highly available
and can withstand the failure of a single Availability Zone. Together these provide fault tolerance and avoid any
single point of failure.
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data
Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the
S3 bucket.
Answer: A
Explanation:
A Kinesis data stream stores records for 24 hours by default, and retention can be extended up to 8,760 hours
(365 days).
https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
The question mentions the Kinesis data stream default settings and processing "every other day." With the
default settings, the data is no longer in the data stream after 24 hours, so the retention period must be
modified to store data for more than 24 hours.
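The retention change itself is a one-line API call; the sketch below (with a placeholder stream name) extends
retention to 48 hours so that data written "every other day" is still in the stream when it is read.

import boto3

kinesis = boto3.client("kinesis")

# Default retention is 24 hours; extend it so records survive until the next day's processing run.
kinesis.increase_stream_retention_period(
    StreamName="ingest-stream",      # placeholder stream name
    RetentionPeriodHours=48,
)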
A.Add required IAM permissions in the resource policy of the Lambda function.
B.Create a signed request using the existing IAM credentials in the Lambda function.
C.Create a new IAM user and use the existing IAM credentials in the Lambda function.
D.Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
Answer: D
Explanation:
Create an IAM execution role with the required permissions and attach it to the Lambda function as its
execution role.
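To make option D concrete, here is a hedged boto3 sketch that creates an execution role trusted by the Lambda
service, attaches the AmazonS3ReadOnlyAccess managed policy as an example permission set, and assigns the
role to a placeholder function; the role and function names are assumptions.

import json
import boto3

iam = boto3.client("iam")
lambda_client = boto3.client("lambda")

# Trust policy: only the Lambda service can assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="example-lambda-s3-role",                   # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant the permissions the function needs (example: read-only S3 access).
iam.attach_role_policy(
    RoleName="example-lambda-s3-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Use the new role as the function's execution role.
lambda_client.update_function_configuration(
    FunctionName="example-function",                     # placeholder function name
    Role=role["Role"]["Arn"],
)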
Answer: D
Explanation:
To improve the architecture of this application, the best solution is to use Amazon Simple Queue Service
(Amazon SQS) to buffer the requests and decouple the S3 bucket from the Lambda function. This ensures that
documents are not lost and can be processed later if the Lambda function is unavailable, and it lets the Lambda
function process the documents in a scalable, fault-tolerant manner.
Which combination of actions should the solutions architect take to ensure that the system can scale to meet
demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero
for weekends. Revert to the default values at the start of the week.
Answer: DE
Explanation:
Scaling is performed on the Auto Scaling group, not on the ALB, so option A ("Use AWS Auto Scaling to adjust
the ALB capacity based on request rate") is not appropriate; the load balancer scales itself automatically.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A.Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B.Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on
port 3306.
C.Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on
port 443.
D.Create a security group for the DB instance. Add a rule to allow traffic from the web servers’ security group
on port 3306.
E.Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers’
security group on port 3306.
Answer: CD
Explanation:
To allow access to the web servers in the public subnet on port 443 and to the Amazon RDS for MySQL DB
instance in the database subnet on port 3306, create a security group for the web servers and another security
group for the DB instance, and define the appropriate inbound rules for each:
1. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on
port 443.
2. Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group
on port 3306.
This allows the web servers in the public subnet to receive traffic from the internet on port 443, and the
Amazon RDS for MySQL DB instance in the database subnet to receive traffic only from the web servers on
port 3306.
A.Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the
application server.
B.Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol.
Connect the application server to the file share.
C.Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach
the file system to the origin server. Connect the application server to the file system.
D.Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the
application server to the file system.
Answer: D
Explanation:
To provide fully managed shared storage that can be accessed with Lustre clients, the best choice is Amazon
FSx for Lustre. FSx for Lustre is a fully managed, POSIX-compliant file system optimized for compute-intensive
workloads such as high-performance computing, machine learning, and gaming. It offers high performance,
scalability, and data durability, and because it is fully managed it requires no file server administration while
still being accessible from standard Lustre clients.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also
must provide rapid failover to another AWS Region.
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the
two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an
endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type.
Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon
ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an
endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type.
Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon
ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of
the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch
type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in
Amazon ECS.
Answer: B
Explanation:
To meet the requirements of minimizing latency for data transmission from the devices and providing rapid
failover to another AWS Region, the best solution would be to use AWS Global Accelerator in combination
with a Network Load Balancer (NLB) and Amazon Elastic Container Service (Amazon ECS).
AWS Global Accelerator is a service that improves the availability and performance of applications by using
static IP addresses (Anycast) to route traffic to optimal AWS endpoints. With Global Accelerator, you can
direct traffic to multiple Regions and endpoints, and provide automatic failover to another AWS Region.
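A compressed boto3 sketch of option B's traffic path (accelerator, TCP listener, one endpoint group per Region
pointing at that Region's NLB); all names and ARNs are placeholders.

import boto3

# The Global Accelerator API is served from the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="device-ingest", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ingest/1"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/ingest/2"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn}],
    )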
Which replacement to the on-premises file share is MOST resilient and durable?
Answer: C
Explanation:
A) Amazon RDS is a database service, not a file share. B) Storage Gateway is a hybrid cloud storage service that
connects on-premises applications to AWS storage services. D) Amazon EFS provides shared file storage for
Linux-based workloads, but it does not natively support Windows-based workloads.
The most resilient and durable replacement for the on-premises file share in this scenario is Amazon FSx for
Windows File Server. Amazon FSx is a fully managed Windows file system service built on Windows Server that
provides native support for the SMB protocol. It is designed to be highly available and durable, with built-in
backup and restore capabilities, it is integrated with AWS security services, it provides encryption at rest and
in transit, and it can be configured to meet compliance standards.
A.Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B.Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C.Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require
encryption at the EBS level.
D.Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account.
Ensure that the key policy is active.
Answer: B
Explanation:
Create the EBS volumes as encrypted volumes and attach them to the EC2 instances.
A.Amazon DynamoDB
B.Amazon RDS for MySQL
C.MySQL-compatible Amazon Aurora Serverless
D.MySQL deployed on Amazon EC2 in an Auto Scaling group
Answer: C
Explanation:
C: Aurora Serverless is a MySQL-compatible relational database engine that automatically scales compute and
memory resources based on application usage, with no upfront costs or commitments required.
A: DynamoDB is a NoSQL database. B: RDS for MySQL has a fixed instance cost. D: MySQL on Amazon EC2 in an
Auto Scaling group requires more operational effort.
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses
an AWS Lambda function to remediate any change that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted
Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification
Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda
function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service
control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
Answer: D
Explanation:
Answer D is the correct solution that meets the requirements. The S3 Block Public Access feature allows you
to restrict public access to S3 buckets and objects within the account. You can enable this feature at the
account level to prevent any S3 bucket from being made public, regardless of the bucket policy settings.
AWS Organizations can be used to apply a Service Control Policy (SCP) to the account to prevent IAM users
from changing this setting, ensuring that all S3 objects remain private. This is a straightforward and effective
solution that requires minimal operational overhead.
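The account-level setting from option D can be applied with a single s3control call; the account ID below is a
placeholder. The SCP that locks the setting in place would be created and attached separately through AWS
Organizations.

import boto3

s3control = boto3.client("s3control")

# Turn on all four Block Public Access settings for the whole account.
s3control.put_public_access_block(
    AccountId="111122223333",          # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)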
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in
an Auto Scaling group.
Answer: B
Explanation:
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email
using their own email addresses and domains. Configuring the web instance to send email through Amazon
SES is a simple and effective solution that can reduce the time spent resolving complex email delivery issues
and minimize operational overhead.
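For illustration, the web instance would send mail with a call like the one below; the sender identity and
recipient address are placeholders, and the sender must already be verified in SES.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="orders@example.com",                    # must be a verified SES identity (placeholder)
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order confirmation"},
        "Body": {"Text": {"Data": "Thank you for your order."}},
    },
)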
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each
day.
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File
Gateway.
C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in
the automation workflow.
D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share
and uploads the new files by using SFTP.
Answer: B
Explanation:
Key words: near-real-time (A is out) and LEAST administrative overhead (C and D are out).
A - A scheduled task that runs at the end of each day is not near-real-time.
B - The S3 File Gateway caches frequently accessed data locally and automatically uploads it to Amazon S3,
providing near-real-time access to the data.
C - An application that uses the DataSync API in the automation workflow may provide near-real-time access,
but it requires additional development effort.
D - A custom SFTP upload script also requires additional development effort.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-
Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3 bucket. Move
each object to the identified storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Glacier
Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 One Zone-
Infrequent Access (S3 One Zone-IA).
Answer: A
Explanation:
Creating an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-
Tiering would be the most efficient solution to optimize the cost of S3 usage. S3 Intelligent-Tiering is a
storage class that automatically moves objects between two access tiers (frequent and infrequent) based on
changing access patterns. It is a cost-effective solution that does not require any manual intervention to move
data to different storage classes, unlike the other options.
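Option A amounts to a single lifecycle rule. Here is a hedged boto3 sketch against a placeholder bucket that
transitions all objects to S3 Intelligent-Tiering immediately after upload.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-data-bucket",     # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)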
Which combination of actions should a solutions architect take to resolve this issue? (Choose two.)
Answer: BD
Explanation:
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on AWS, a solutions
architect can take the following two actions:
1. Set up an Amazon CloudFront distribution.
2. Create a read replica for the RDS DB instance.
Configuring an Amazon Redshift cluster is not relevant to this issue, since Redshift is a data warehousing
service typically used for analytical processing of large amounts of data. Hosting the dynamic web content in
Amazon S3 would not necessarily improve performance: S3 is an object storage service, not a web application
server, and while it can host static content it does not support server-side scripting or processing. Configuring
a Multi-AZ deployment for the RDS DB instance improves high availability but not necessarily performance.
The application will run for at least 1 year. The company expects the number of Lambda functions that the
application uses to increase during that time. The company wants to maximize its savings on all application
resources and to keep network latency between the services low.
Which solution will meet these requirements?
A.Purchase an EC2 Instance Savings Plan Optimize the Lambda functions’ duration and memory usage and the
number of invocations. Connect the Lambda functions to the private subnet that contains the EC2 instances.
B.Purchase an EC2 Instance Savings Plan Optimize the Lambda functions' duration and memory usage, the
number of invocations, and the amount of data that is transferred. Connect the Lambda functions to a public
subnet in the same VPC where the EC2 instances run.
C.Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number
of invocations, and the amount of data that is transferred. Connect the Lambda functions to the private subnet
that contains the EC2 instances.
D.Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number
of invocations, and the amount of data that is transferred. Keep the Lambda functions in the Lambda service
VPC.
Answer: C
Explanation:
Answer C is the best solution that meets the company's requirements. By purchasing a Compute Savings Plan,
the company saves on the costs of running both the EC2 instances and the Lambda functions. Configuring the
Lambda functions for VPC access so that they attach to the private subnet that contains the EC2 instances
keeps the traffic between the functions and the instances inside the private network, which minimizes network
latency. Optimizing the Lambda functions' duration, memory usage, number of invocations, and amount of data
transferred further reduces cost and improves performance. Keeping the EC2 instances in a private subnet, not
directly accessible from the public internet, is also a security best practice.
The solutions architect has created an IAM role in the production account. The role has a policy that grants access
to an S3 bucket in the production account.
Which solution will meet these requirements while complying with the principle of least privilege?
Answer: B
Explanation:
The solution that will meet these requirements while complying with the principle of least privilege is to add
the development account as a principal in the trust policy of the role in the production account. This will allow
team members to access Amazon S3 buckets in two different AWS accounts while complying with the
principle of least privilege.
Option A is not recommended because it grants too much access to development account users. Option C is
not relevant to this scenario. Option D is not recommended because it does not comply with the principle of
least privilege.
An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS) volumes for EC2
instances without encrypting the volumes. The company wants any new EC2 instances that any IAM user or root
user launches in ap-southeast-2 to use encrypted EBS volumes. The company wants a solution that will have
minimal effect on employees who create EBS volumes.
A.In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
B.Create an IAM permission boundary. Attach the permission boundary to the root organizational unit (OU).
Define the boundary to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
C.Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the
ec2:CreateVolume action when the ec2:Encrypted condition equals false.
D.Update the IAM policies for each account to deny the ec2:CreateVolume action when the ec2:Encrypted
condition equals false.
E.In the Organizations management account, specify the Default EBS volume encryption setting.
Answer: CE
Explanation:
An SCP that denies the ec2:CreateVolume action when the ec2:Encrypted condition equals false prevents users
and service accounts in member accounts from creating unencrypted EBS volumes in the ap-southeast-2
Region.
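A sketch of the SCP in option C, created and attached with boto3; the policy name and the root OU ID are
placeholders.

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedVolumes",
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}

policy = org.create_policy(
    Name="deny-unencrypted-ebs",
    Description="Deny creation of unencrypted EBS volumes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# "r-exam" is a placeholder for the organization's root ID.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-exam")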
A.Use an Amazon RDS Multi-AZ DB instance deployment. Create one read replica and point the read workload
to the read replica.
B.Use an Amazon RDS Multi-AZ DB cluster deployment. Create two read replicas and point the read workload to
the read replicas.
C.Use an Amazon RDS Multi-AZ DB instance deployment. Point the read workload to the secondary instances in
the Multi-AZ pair.
D.Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
Answer: D
Explanation:
The company wants high availability, automatic failover support in less than 40 seconds, read offloading from
the primary instance, and cost-effectiveness.
1. Amazon RDS Multi-AZ deployments provide high availability and automatic failover support.
2. In a Multi-AZ DB cluster, Amazon RDS automatically provisions and maintains a standby in a different
Availability Zone. If a failure occurs, Amazon RDS performs an automatic failover to the standby, minimizing
downtime.
3. The "Reader endpoint" for an Amazon RDS DB cluster provides load-balancing support for read-only
connections to the DB cluster. Directing read traffic to the reader endpoint helps in offloading read operations
from the primary instance.
The company wants a serverless option that provides high IOPS performance and highly configurable security. The
company also wants to maintain control over user permissions.
A.Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP
service with a public endpoint that allows only trusted IP addresses. Attach the EBS volume to the SFTP service
endpoint. Grant users access to the SFTP service.
B.Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP
service with elastic IP addresses and a VPC endpoint that has internet-facing access. Attach a security group to
the endpoint that allows only trusted IP addresses. Attach the EFS volume to the SFTP service endpoint. Grant
users access to the SFTP service.
C.Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service
with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service
endpoint. Grant users access to the SFTP service.
D.Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service
with a VPC endpoint that has internal access in a private subnet. Attach a security group that allows only
trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP
service.
Answer: B
Explanation:
The question requires highly configurable security, which rules out relying on default S3 encryption: the default
is SSE-S3, which is not configurable.
The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models
could be unused for days or weeks. Other models could receive batches of thousands of requests at a time.
A.Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda
functions that are invoked by the NLB.
B.Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic
Container Service (Amazon ECS) services that read from an Amazon Simple Queue Service (Amazon SQS)
queue. Use AWS App Mesh to scale the instances of the ECS cluster based on the SQS queue size.
C.Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the
models as AWS Lambda functions that are invoked by SQS events. Use AWS Auto Scaling to increase the
number of vCPUs for the Lambda functions based on the SQS queue size.
D.Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the
models as Amazon Elastic Container Service (Amazon ECS) services that read from the queue. Enable AWS
Auto Scaling on Amazon ECS for both the cluster and copies of the service based on the queue size.
Answer: D
Explanation:
D is correct because it is scalable, reliable, and efficient. Option C does not work: Lambda does not let you
scale the number of vCPUs independently, so the models cannot be scaled automatically that way.
Which IAM principals can the solutions architect attach this policy to? (Choose two.)
A.Role
B.Group
C.Organization
D.Amazon Elastic Container Service (Amazon ECS) resource
E.Amazon EC2 resource
Answer: AB
Explanation:
An identity-based policy can be attached to IAM users, groups, and roles; it cannot be attached to an
organization or to resources such as Amazon ECS or Amazon EC2.
The company needs to scale out and scale in more instances based on workload.
A.Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B.Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
C.Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
D.Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
Answer: B
Explanation:
Reserved Instances suit the steady, always-on frontend nodes, while Spot Instances provide low-cost capacity
for the backend nodes that scale out and in with the workload and can tolerate interruptions.
Which Amazon Elastic Block Store (Amazon EBS) volume type will meet these requirements MOST cost-
effectively?
Answer: C
Explanation:
Both gp2 and gp3 have a maximum of 16,000 IOPS per volume, but gp3 is more cost-effective.
https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
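The migration the blog describes is a single, non-disruptive ModifyVolume call; the volume ID and the IOPS and
throughput figures below are placeholders for illustration.

import boto3

ec2 = boto3.client("ec2")

# Convert an existing gp2 volume to gp3 in place and set its performance explicitly.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume ID
    VolumeType="gp3",
    Iops=6000,                          # gp3 supports up to 16,000 IOPS per volume
    Throughput=500,                     # MiB/s, up to 1,000 for gp3
)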
The company hosts the application on an on-premises infrastructure that is running out of storage capacity. A
solutions architect must securely migrate the existing data to AWS while satisfying the new regulation.
Which solution will meet these requirements?
A.Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
B.Use AWS Snowcone to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
C.Use Amazon S3 Transfer Acceleration to move the existing data to Amazon S3. Use AWS CloudTrail to log
data events.
D.Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management
events.
Answer: A
Explanation:
1. AWS DataSync encrypts data in transit: https://docs.aws.amazon.com/ja_jp/datasync/latest/userguide/encryption-in-transit.html
2. For a one-time migration of existing data, AWS DataSync is the right tool, and CloudTrail data events provide
the object-level logging that the regulation requires.
A.Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda
functions.
B.Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling
deployment policy.
C.Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from
the application.
D.Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on
the server. Create an AMI. Use the AMI to create a launch template with an Auto Scaling group.
Answer: B
Explanation:
Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling
deployment policy.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
A.Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows
read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters as
part of the Lambda environment variables. Ensure that other AWS users do not have read and write access to
the Lambda function configuration.
B.Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and
write access to the DynamoDB table. Update the configuration of the Lambda function to use the new role as
the execution role.
C.Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows
read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters in
AWS Systems Manager Parameter Store as secure string parameters. Update the Lambda function code to
retrieve the secure string parameters before connecting to the DynamoDB table.
D.Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read
and write access from the Lambda function. Update the code of the Lambda function to attach to the new role
as an execution role.
Answer: B
Explanation:
An IAM role that trusts the Lambda service and grants read and write access to the DynamoDB table, set as the
function's execution role, is the most secure option; it avoids the long-term access keys used in options A and C.
What are the effective IAM permissions of this policy for group members?
A.Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the
Allow permission are not applied.
B.Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in
with multi-factor authentication (MFA).
C.Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions
when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2
action.
D.Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1
Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other
Amazon EC2 action within the us-east-1 Region.
Answer: D
Explanation:
Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1
Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other
Amazon EC2 action within the us-east-1 Region.
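A policy with the shape below would produce the effective permissions described in answer D: an Allow for all
EC2 actions in us-east-1, plus a Deny on stop and terminate when MFA is not present. This is a reconstruction
for illustration, expressed as a Python dict, not the exact policy document from the question.

# Reconstruction of a policy consistent with answer D (not the original exam policy).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEc2InUsEast1",
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
        },
        {
            "Sid": "DenyStopTerminateWithoutMfa",
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}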
The images become irrelevant after 1 month, but the .csv files must be kept to train machine learning (ML) models
twice a year. The ML trainings and audits are planned weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A.Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the image files, and
uploads the images to the S3 bucket.
B.Design an AWS Lambda function that converts the .csv files into images and stores the images in the S3
bucket. Invoke the Lambda function when a .csv file is uploaded.
C.Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3
Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30 days.
D.Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3
Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image
files after 30 days.
E.Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3
Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image
files in Reduced Redundancy Storage (RRS).
Answer: BC
Explanation:
Answer: B
Explanation:
https://aws.amazon.com/jp/blogs/news/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
Which solution will meet these requirements with the LEAST operational overhead?
A.Use AWS Glue to create an ML transform to build and train models. Use Amazon OpenSearch Service to
visualize the data.
B.Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
C.Use a pre-built ML Amazon Machine Image (AMI) from the AWS Marketplace to build and train models. Use
Amazon OpenSearch Service to visualize the data.
D.Use Amazon QuickSight to build and train models by using calculated fields. Use Amazon QuickSight to
visualize the data.
Answer: B
Explanation:
A.Create a custom AWS Config rule to prevent tag modification except by authorized principals.
B.Create a custom trail in AWS CloudTrail to prevent tag modification.
C.Create a service control policy (SCP) to prevent tag modification except by authorized principals.
D.Create custom Amazon CloudWatch logs to prevent tag modification.
Answer: C
Explanation:
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in
your organization.
What should a solutions architect do to meet these requirements with the LEAST amount of downtime?
A.Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB
table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
B.Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to
be launched when needed Configure DNS failover to point to the new disaster recovery Region's load balancer.
C.Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when
needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster
recovery Region's load balancer.
D.Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB
table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS Lambda function that updates
Amazon Route 53 pointing to the disaster recovery load balancer.
Answer: A
Explanation:
Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB
table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
A.Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS)
with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.
Send the Snowball Edge device to AWS to finish the migration and continue the ongoing replication.
B.Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema
Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowmobile vehicle back
to AWS to finish the migration and continue the ongoing replication.
C.Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service
(AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes.
Send the Snowball device to AWS to finish the migration and continue the ongoing replication
D.Order a 1 GB dedicated AWS Direct Connect connection to establish a connection with the data center. Use
AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the
database with replication of ongoing changes.
Answer: A
Explanation:
D - Direct Connect takes at least a month to set up, but the requirement is to finish within 2 weeks.
https://docs.aws.amazon.com/ja_jp/snowball/latest/developer-guide/device-differences.html#device-options
The answer is A.
A.Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
B.Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance.
C.Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
D.Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.
Answer: A
Explanation:
A is correct: "without adding infrastructure" means scaling vertically by choosing a larger DB instance, and
buying reserved DB instances for the total workload reduces the cost.
Answer: B
Explanation:
Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
What is the MOST secure way for the company to share the database with the auditor?
A.Create a read replica of the database. Configure IAM standard database authentication to grant the auditor
access.
B.Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for
the auditor. Grant the user access to the S3 bucket.
C.Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the
auditor to grant access to the object in the S3 bucket.
D.Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS
Key Management Service (AWS KMS) encryption key.
Answer: D
Explanation:
Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS
Key Management Service (AWS KMS) encryption key.
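A hedged boto3 sketch of option D, with placeholder identifiers: share the encrypted manual snapshot with the
auditor's account and grant that account use of the customer managed KMS key that encrypts it.

import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

AUDITOR_ACCOUNT = "444455556666"    # placeholder auditor AWS account ID

# Share the encrypted manual snapshot with the auditor's account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="prod-db-audit-snapshot",   # placeholder snapshot name
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# Allow the auditor's account to use the customer managed KMS key that encrypts the snapshot.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",  # placeholder key ARN
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)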
Which solution resolves this issue with the LEAST operational overhead?
A.Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional subnets in
the VPC. Create new resources in the new subnets by using the new CIDR.
B.Create a second VPC with additional subnets. Use a peering connection to connect the second VPC with the
first VPC Update the routes and create new resources in the subnets of the second VPC.
C.Use AWS Transit Gateway to add a transit gateway and connect a second VPC with the first VPC. Update the
routes of the transit gateway and VPCs. Create new resources in the subnets of the second VPC.
D.Create a second VPC. Create a Site-to-Site VPN connection between the first VPC and the second VPC by
using a VPN-hosted solution on Amazon EC2 and a virtual private gateway. Update the route between VPCs to
the traffic through the VPN. Create new resources in the subnets of the second VPC.
Answer: A
Explanation:
Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional subnets in the
VPC. Create new resources in the new subnets by using the new CIDR.
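A short boto3 sketch of option A, with placeholder IDs and ranges: associate a secondary CIDR block with the
VPC, then carve new subnets out of it for the additional resources.

import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"   # placeholder VPC ID

# Add a secondary IPv4 CIDR block to the existing VPC.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, CidrBlock="10.1.0.0/16")

# Create new subnets from the secondary range.
for az, cidr in [("us-east-1a", "10.1.0.0/20"), ("us-east-1b", "10.1.16.0/20")]:
    ec2.create_subnet(VpcId=VPC_ID, AvailabilityZone=az, CidrBlock=cidr)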
The company is now planning for a new test cycle and wants to create a new DB instance from the most recent
backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance.
Which solutions will create the new DB instance? (Choose two.)
Answer: AC
Explanation:
A and C. A, because the snapshot is already stored in AWS. C, because you don't need a migration tool when
going from MySQL to MySQL; you would use the native MySQL utility.
A.Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B.Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand Instances.
C.Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
D.Create an AWS Lambda function behind an Amazon API Gateway API to host the static website contents.
Answer: C
Explanation:
Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
A.Copy the required data to a common account. Create an IAM access role in that account. Grant access by
specifying a permission policy that includes users from the engineering team accounts as trusted entities.
B.Use the Lake Formation permissions Grant command in each account where the data is stored to allow the
required engineering team users to access the data.
C.Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.
D.Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the
required data to the engineering team accounts.
Answer: D
Explanation:
By utilizing Lake Formation's tag-based access control, you can define tags and tag-based policies to grant
selective access to the required data for the engineering team accounts. This approach allows you to control
access at a granular level without the need to copy or move the data to a common account or manage
permissions individually in each account. It provides a centralized and scalable solution for securely sharing
data across accounts with minimal operational overhead.
Answer: A
Explanation:
https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/userguide/transfer-acceleration.html
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The
company is concerned with the overall reliability of its environment.
What should the solutions architect do to maximize reliability of the application's infrastructure?
A.Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB
instance to be Multi-AZ, and enable deletion protection.
B.Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an
Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
C.Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function.
Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write
the data to the two DB instances.
D.Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple
Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to
monitor the health of the instances Update the DB instance to be Multi-AZ, and enable deletion protection.
Answer: B
Explanation:
B is correct. High availability is ensured by running the DB instance in Multi-AZ with deletion protection and by running the EC2 instances in an Auto Scaling group across multiple Availability Zones behind an Application Load Balancer.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to
move the data efficiently and without disruption. The company still needs to be able to access and update the data
during the transfer window.
A.Create an AWS DataSync agent in the corporate data center. Create a data transfer task Start the transfer to
an Amazon S3 bucket.
B.Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center.
Mount a target Amazon S3 bucket on the on-premises file system.
C.Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct
Connect connection.
D.Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the
on-premises file system.
Answer: A
Explanation:
By leveraging AWS DataSync in combination with AWS Direct Connect, the company can efficiently and
securely transfer its 700 terabytes of data to an Amazon S3 bucket without disruption. The solution allows
continued access and updates to the data during the transfer window, ensuring business continuity
throughout the migration process.
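As a rough illustration with boto3 (a sketch only; it assumes the agent-based source location and the S3 destination location have already been created, and the location ARNs are placeholders):
import boto3

datasync = boto3.client("datasync")

# Assumes an on-premises (agent-based) source location and an S3
# destination location already exist
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-s3",
    Name="migrate-700tb-to-s3",
)

# Each execution transfers only new and changed files, so the data can
# keep being accessed and updated on premises during the 90-day window
datasync.start_task_execution(TaskArn=task["TaskArn"])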
Which solution will meet these requirements with the LEAST operational overhead?
A.Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years.
Configure multi-factor authentication (MFA) delete for all S3 objects.
B.Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire
after 7 years. Recopy all existing objects to bring the existing data into compliance.
C.Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire
after 7 years. Recopy all existing objects to bring the existing data into compliance.
D.Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire
after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
Answer: D
Explanation:
You need S3 Batch Operations to apply the Object Lock retention settings to objects that already exist in the bucket. Compliance mode prevents anyone, including the root user, from deleting or overwriting objects during the retention period.
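A minimal boto3 sketch of the default-retention part (the bucket name is a placeholder and the bucket is assumed to have Object Lock enabled; existing objects would still be handled by an S3 Batch Operations job):
import boto3

s3 = boto3.client("s3")

# Default retention applies only to newly written objects; objects that
# already exist need an S3 Batch Operations job to set their retention
s3.put_object_lock_configuration(
    Bucket="regulated-data-bucket",  # assumes Object Lock is enabled on the bucket
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)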
A.Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.
B.Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to
route traffic.
C.Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure
the transit gateway to route requests.
D.Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway
endpoint hostnames in each Region.
Answer: B
Explanation:
This approach leverages the capabilities of CloudFront's intelligent routing and health checks to
automatically distribute traffic across multiple AWS Regions and provide failover capabilities in case of
Regional disruptions or unavailability.
What should a solutions architect do to mitigate any single point of failure in this architecture?
Answer: C
Explanation:
Redundant VPN connections: Instead of relying on a single device in the data center, the Management VPC
should have redundant VPN connections established through multiple customer gateways. This will ensure
high availability and fault tolerance in case one of the VPN connections or customer gateways fails.
Which solution will help the company migrate the database to AWS MOST cost-effectively?
A.Migrate the database to Amazon RDS for Oracle. Replace third-party features with cloud services.
B.Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to support third-
party features.
C.Migrate the database to an Amazon EC2 Amazon Machine Image (AMI) for Oracle. Customize the database
settings to support third-party features.
D.Migrate the database to Amazon RDS for PostgreSQL by rewriting the application code to remove
dependency on Oracle APEX.
Answer: B
Explanation:
https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-custom-oracle/
https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/UserGuide/Oracle.Resources.html
A.Create a VPC across two Availability Zones with the application's existing architecture. Host the application
with existing architecture on an Amazon EC2 instance in a private subnet in each Availability Zone with EC2
Auto Scaling groups. Secure the EC2 instance with security groups and network access control lists (network
ACLs).
B.Set up security groups and network access control lists (network ACLs) to control access to the database
layer. Set up a single Amazon RDS database in a private subnet.
C.Create a VPC across two Availability Zones. Refactor the application to host the web tier, application tier, and
database tier. Host each tier on its own private subnet with Auto Scaling groups for the web tier and application
tier.
D.Use a single Amazon RDS database. Allow database access only from the application tier security group.
E.Use Elastic Load Balancers in front of the web tier. Control access by using security groups containing
references to each layer's security groups.
F.Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database access only
from application tier security groups.
Answer: CEF
Explanation:
C provides the highly available three-tier design across two Availability Zones with Auto Scaling, E places Elastic Load Balancers in front of the web tier and controls access with referenced security groups, and F makes the database tier highly available with a Multi-AZ deployment in private subnets.
Which activities will be managed by the company's operational team? (Choose three.)
A.Management of the Amazon RDS infrastructure layer, operating system, and platforms
B.Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C.Configuration of additional software components on Amazon ECS for monitoring, patch management, log
management, and host intrusion detection
D.Installation of patches for all minor and major database versions for Amazon RDS
E.Ensure the physical security of the Amazon RDS infrastructure in the data center
F.Encryption of the data that moves in transit through Direct Connect
Answer: BCF
Explanation:
Under the shared responsibility model, the customer's operational team creates and configures the RDS DB instance and its maintenance window (B), installs and manages additional software on its Amazon ECS container instances (C), and encrypts its own data in transit over Direct Connect (F). AWS manages the RDS infrastructure layer, database patching, and physical security.
A.Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service
(Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B.Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge
scheduled rule to run the code each hour.
C.Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine
Image (AMI). Ensure that the schedule stops the container when the task finishes.
D.Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2
instance when the next job starts.
Answer: B
Explanation:
A job that needs only about 1 GB of memory and runs briefly once per hour fits within AWS Lambda's limits. Moving the code into a Lambda function invoked by an Amazon EventBridge scheduled rule removes the need to keep an EC2 instance or container infrastructure running between runs, making it the most cost-effective option.
A.Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required
backup plan.
B.Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C.Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
D.Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required
backup plan.
Answer: D
Explanation:
Compliance mode: once an AWS Backup vault lock in compliance mode is locked, the retention settings cannot be changed or removed by any user, including the root user, so backups cannot be deleted before the retention period expires. Governance mode can still be overridden by privileged users.
Which solution will meet these requirements in the MOST operationally efficient way?
A.Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B.Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
C.Use Workload Discovery on AWS to generate architecture diagrams of the workloads.
D.Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.
Answer: C
Explanation:
Option A: AWS Systems Manager Inventory collects a software catalog and configuration details for your instances, but it does not produce architecture diagrams. Option C: Workload Discovery on AWS maintains an inventory of the AWS resources across your accounts and Regions, maps the relationships between them, and displays them in a web UI, so it can generate the required architecture diagrams with the least effort.
A.Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of
the required AWS accounts.
B.Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required
AWS accounts.
C.Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D.Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E.Add an alert to notify the company when each account meets its budget threshold. Add a budget action that
selects the IAM identity created with the appropriate config rule to prevent provisioning of additional
resources.
F.Add an alert to notify the company when each account meets its budget threshold. Add a budget action that
selects the IAM identity created with the appropriate service control policy (SCP) to prevent provisioning of
additional resources.
Answer: BDF
Explanation:
https://docs.aws.amazon.com/ja_jp/awsaccountbilling/latest/aboutv2/view-billing-dashboard.html
A.Create a disaster recovery (DR) plan that has a similar number of EC2 instances in the second Region.
Configure data replication.
B.Create point-in-time Amazon Elastic Block Store (Amazon EBS) snapshots of the EC2 instances. Copy the
snapshots to the second Region periodically.
C.Create a backup plan by using AWS Backup. Configure cross-Region backup to the second Region for the
EC2 instances.
D.Deploy a similar number of EC2 instances in the second Region. Use AWS DataSync to transfer the data from
the source Region to the second Region.
Answer: C
Explanation:
Using AWS Backup, you can create backup plans that automate the backup process for your EC2 instances.
By configuring cross-Region backup, you can ensure that backups are replicated to the second Region,
providing a disaster recovery capability. This solution is cost-effective as it leverages AWS Backup's built-in
features and eliminates the need for manual snapshot management or deploying and managing additional
EC2 instances in the second Region.
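A rough boto3 sketch of a backup plan with a cross-Region copy action (vault names, the schedule, and the destination vault ARN are placeholder assumptions; a backup selection would still need to be created to assign the EC2 instances to the plan):
import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-dr-plan",
        "Rules": [
            {
                "RuleName": "daily-with-cross-region-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
                "CopyActions": [
                    {
                        # Backup vault in the second (DR) Region
                        "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
                    }
                ],
            }
        ],
    }
)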
A.Use AWS DataSync to transfer the data. Create an AWS Lambda function for IdP authentication.
B.Use Amazon AppFlow flows to transfer the data. Create an Amazon Elastic Container Service (Amazon ECS)
task for IdP authentication.
C.Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP authentication.
D.Use AWS Storage Gateway to transfer the data. Create an Amazon Cognito identity pool for IdP
authentication.
Answer: C
Explanation:
Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP authentication.
Question: 458 CertyIQ
A solutions architect is designing a REST API in Amazon API Gateway for a cash payback service. The application
requires 1 GB of memory and 2 GB of storage for its computation resources. The application will require that the
data is in a relational format.
Which additional combination of AWS services will meet these requirements with the LEAST administrative effort?
(Choose two.)
A.Amazon EC2
B.AWS Lambda
C.Amazon RDS
D.Amazon DynamoDB
E.Amazon Elastic Kubernetes Services (Amazon EKS)
Answer: BC
Explanation:
"The application will require that the data is in a relational format," so DynamoDB is out and RDS is the choice.
Lambda is serverless and meets the modest compute requirements.
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must
determine which departments are responsible for the costs regardless of AWS account. The accounting team has
access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from
Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A.From the Organizations management account billing console, activate a user-defined cost allocation tag
named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B.From the Organizations management account billing console, activate an AWS-defined cost allocation tag
named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C.From the Organizations member account billing console, activate a user-defined cost allocation tag named
department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D.From the Organizations member account billing console, activate an AWS-defined cost allocation tag named
department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Answer: A
Explanation:
From the Organizations management account billing console, activate a user-defined cost allocation tag
named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
By activating a user-defined cost allocation tag named "department" and creating a cost report in Cost
Explorer that groups by the tag name and filters by EC2, the accounting team will be able to track and
attribute costs to specific departments across all AWS accounts within the organization. This approach allows
for consistent cost allocation and reporting regardless of the AWS account structure.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
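A minimal sketch of the resulting report with the Cost Explorer API (boto3; the time period is a placeholder, and the tag key "department" is assumed to be an activated user-defined cost allocation tag):
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],  # user-defined cost allocation tag
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
)

# Print EC2 cost per department tag value
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])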
A.Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B.Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to
Amazon S3.
C.Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D.Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
Answer: C
Explanation:
Amazon AppFlow is a fully managed integration service that allows you to securely transfer data between
different SaaS applications and AWS services. It provides built-in encryption options and supports encryption
in transit using SSL/TLS protocols. With AppFlow, you can configure the data transfer flow from Salesforce to
Amazon S3, ensuring data encryption at rest by utilizing AWS KMS CMKs.
A.Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an
accelerator endpoint that uses Global Accelerator integration and listening on the TCP and UDP ports. Update
the Auto Scaling group to register instances on the ALB.
B.Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an
accelerator endpoint that uses Global Accelerator integration and listening on the TCP and UDP ports. Update
the Auto Scaling group to register instances on the NLB.
C.Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer
(NLB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto Scaling group to register
instances on the NLB. Update CloudFront to use the NLB as the origin.
D.Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer
(ALB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto Scaling group to register
instances on the ALB. Update CloudFront to use the ALB as the origin.
Answer: B
Explanation:
NLB + Global Accelerator: a Network Load Balancer supports both TCP and UDP listeners, and AWS Global Accelerator provides static anycast IP addresses and fast regional failover. This suits a mobile gaming app better than CloudFront, which (like an ALB) does not handle arbitrary UDP traffic.
Question: 462 CertyIQ
A company has an application that processes customer orders. The company hosts the application on an Amazon
EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high the workload
does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?
A.Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple
Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
B.Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling
group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
C.Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the
SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the
SNS topic.
D.Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU
threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load
Balancer to read from the SQS queue and process orders into the database.
Answer: B
Explanation:
By decoupling the write operation from the processing operation using SQS, you ensure that the orders are
reliably stored in the queue, regardless of the processing capacity of the EC2 instances. This allows the
processing to be performed at a scalable rate based on the available EC2 instances, improving the overall
reliability and speed of order processing.
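A minimal sketch of the decoupled pattern with boto3 (the queue URL is a placeholder and write_to_db stands in for a hypothetical Aurora insert):
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Web tier: enqueue the order instead of writing to Aurora directly
def submit_order(order: dict) -> None:
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

# Worker tier (Auto Scaling group): drain the queue and write to the database
def process_orders(write_to_db) -> None:
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for msg in messages:
        write_to_db(json.loads(msg["Body"]))  # hypothetical database insert
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])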
Answer: C
Explanation:
AWS Lambda charges you based on the number of invocations and the execution time of your function. Since
the data processing job is relatively small (2 MB of data), Lambda is a cost-effective choice. You only pay for
the actual usage without the need to provision and maintain infrastructure.
Question: 464 CertyIQ
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-
AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to
recommend an approach to minimize database downtime without requiring any changes to the application code.
A.Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and
specifying the Multi-AZ option.
B.Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new
Multi-AZ deployment with the snapshot.
C.Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53
weighted record sets to distribute requests across the databases.
D.Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of
two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
Answer: A
Explanation:
Compared to other solutions that involve creating new instances, restoring snapshots, or setting up replication manually, converting to a Multi-AZ deployment is a simpler and more streamlined approach with lower overhead. Overall, option A offers a cost-effective and efficient way to minimize database downtime without requiring significant changes or additional complexities.
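A one-call boto3 sketch of the in-place conversion (the DB instance identifier is a placeholder; applying the change in the next maintenance window limits disruption):
import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ in place
rds.modify_db_instance(
    DBInstanceIdentifier="orders-postgres",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=False,  # apply during the next maintenance window
)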
A.Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
B.Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-
Attach
C.Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
D.Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
Answer: C
Explanation:
C is right. Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone; it is not supported for General Purpose (gp2/gp3) volumes.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
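A short boto3 sketch of creating and attaching a Multi-Attach volume (Availability Zone, size, IOPS, and instance IDs are placeholder assumptions):
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=200,                 # GiB
    Iops=10000,
    MultiAttachEnabled=True,  # only supported for io1/io2 volumes
)

# Attach the same volume to each instance in the same Availability Zone
for instance_id in ["i-0aaa1111", "i-0bbb2222"]:  # placeholder instance IDs
    ec2.attach_volume(
        VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="/dev/sdf"
    )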
A.Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
B.Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region
C.Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application
D.Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load
Balancer
Answer: A
Explanation:
By combining Multi-AZ EC2 Auto Scaling and an Application Load Balancer, you achieve high availability for
the EC2 instances hosting your stateless two-tier application.
A.Turn on discount sharing from the Billing Preferences section of the account console in the member account
that purchased the Compute Savings Plan.
B.Turn on discount sharing from the Billing Preferences section of the account console in the company's
Organizations management account.
C.Migrate additional compute workloads from another AWS account to the account that has the Compute
Savings Plan.
D.Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.
Answer: B
Explanation:
Savings Plans discounts are shared across accounts in an organization only when discount sharing is enabled, and discount sharing is managed from the Billing preferences of the Organizations management (payer) account, not from a member account. Turning it on there lets the unused Compute Savings Plan commitment apply to eligible compute usage in the company's other accounts without migrating workloads or trying to sell the commitment.
A.Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container
Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
B.Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service
(Amazon ECS) in a private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
C.Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container
Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to access Amazon ECS.
D.Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service
(Amazon ECS) in a private subnet. Create a security group for API Gateway to access Amazon ECS.
Answer: B
Explanation:
REST API with Amazon API Gateway: REST APIs are the appropriate choice for providing the frontend of the microservices application, and Amazon API Gateway allows you to design, deploy, and manage REST APIs at scale. Amazon ECS in a private subnet: hosting the application in Amazon ECS in a private subnet ensures that the containers are securely deployed within the VPC and not directly exposed to the public internet. Private VPC link: to enable the REST API in API Gateway to reach the backend services hosted in Amazon ECS, you can create a private VPC link. This establishes a private network connection between API Gateway and the ECS containers, allowing secure communication without traversing the public internet.
The company cannot predict or control the access pattern. The company wants to reduce its S3 costs.
Answer: C
Explanation:
A.Create a NAT gateway and make it the destination of the subnet's route table
B.Create an internet gateway and make it the destination of the subnet's route table
C.Create a virtual private gateway and make it the destination of the subnet's route table
D.Create an egress-only internet gateway and make it the destination of the subnet's route table
Answer: D
Explanation:
An egress-only internet gateway (EIGW) is designed for outbound-only IPv6 traffic: it provides outbound IPv6 internet access while blocking connections initiated from the internet. It satisfies the requirement of preventing external services from initiating connections to the EC2 instances while allowing the instances to initiate outbound communications.
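A minimal boto3 sketch (the VPC and route table IDs are placeholders):
import boto3

ec2 = boto3.client("ec2")

eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0abc1234")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Route all outbound IPv6 traffic through the egress-only internet gateway;
# connections initiated from the internet are not allowed back through it
ec2.create_route(
    RouteTableId="rtb-0def5678",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)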
Answer: C
Explanation:
Gateway VPC endpoint: a gateway VPC endpoint enables private connectivity between a VPC and Amazon S3. It allows direct access to Amazon S3 without the need for internet gateways, NAT devices, VPN connections, or AWS Direct Connect. Minimize internet traffic: by creating a gateway VPC endpoint for Amazon S3 and associating it with all route tables in the VPC, the traffic between the VPC and Amazon S3 is kept within the AWS network, which minimizes data transfer costs and prevents traffic from traversing the internet. Cost-effective: with a gateway VPC endpoint, data transfer between the application running in the VPC and the S3 bucket stays within the AWS network, which can result in cost savings, especially when dealing with large amounts of data.
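A one-call boto3 sketch (VPC ID, Region in the service name, and route table IDs are placeholder assumptions; gateway endpoints for S3 carry no hourly or data processing charge):
import boto3

ec2 = boto3.client("ec2")

# The listed route tables automatically receive a route to the S3 prefix list
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],
)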
A.Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the
DAX endpoint.
B.Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read
endpoint for the read replicas.
C.Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the
existing DynamoDB endpoint.
D.Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the
Redis cache endpoint instead of DynamoDB.
Answer: A
Explanation:
Amazon DynamoDB Accelerator (DAX): DAX is an in-memory cache for DynamoDB that provides low-latency access to frequently accessed data. By configuring DAX for the new messages table, read requests for the table are served from the DAX cache, significantly reducing latency. Minimal application changes: with DAX, the application code only needs to be updated to use the DAX endpoint instead of the standard DynamoDB endpoint, which does not require extensive modifications to the application's data access logic. Low latency: DAX caches frequently accessed data in memory, allowing subsequent read requests for the same data to be served with minimal delay, so new messages can be read by users almost immediately.
Answer: A
Explanation:
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that caches content at edge
locations worldwide. By creating a CloudFront distribution, static content from the website can be cached at
edge locations, reducing the load on the EC2 instances and improving the overall performance. Caching Static
Files: Since the website serves static content, caching these files at CloudFront edge locations can
significantly reduce the number of requests forwarded to the EC2 instances. This helps to lower the overall
cost by offloading traffic from the instances and reducing the data transfer costs.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A.Use VPC peering to manage VPC communication in a single Region. Use VPC peering across Regions to
manage VPC communications.
B.Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and manage VPC
communications.
C.Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering
across Regions to manage VPC communications.
D.Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC communications
Answer: C
Explanation:
AWS Transit Gateway: Transit Gateway is a highly scalable service that simplifies network connectivity between VPCs and on-premises networks. By using a Transit Gateway in a single Region, you can centralize VPC communication management and reduce administrative effort. Transit Gateway peering: Transit Gateway supports peering connections across AWS Regions, allowing you to establish connectivity between VPCs in different Regions without a complex mesh of VPC peering connections, which simplifies the management of VPC communications across Regions.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
Answer: C
Explanation:
EFS Replication can replicate your file system data to another Region or within the same Region without
requiring additional infrastructure or a custom process. Amazon EFS Replication automatically and
transparently replicates your data to a second file system in a Region or AZ of your choice. You can use the
Amazon EFS console, AWS CLI, and APIs to activate replication on an existing file system. EFS Replication is
continual and provides a recovery point objective (RPO) and a recovery time objective (RTO) of minutes,
helping you meet your compliance and business continuity goals.
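A minimal boto3 sketch of turning on replication for an existing file system (the file system ID and destination Region are placeholders; EFS creates and manages the read-only destination file system):
import boto3

efs = boto3.client("efs")

# Replicate the existing file system to another Region
efs.create_replication_configuration(
    SourceFileSystemId="fs-0abc1234",
    Destinations=[{"Region": "us-west-2"}],
)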
Which additional action is the MOST secure way to grant permissions to the new users?
Answer: C
Explanation:
Option B is incorrect because IAM roles are not directly attached to IAM groups.
Which statement should a solutions architect add to the policy to correct bucket access?
A.
B.
C.
D.
Answer: D
Explanation:
D for sure
Which solution will meet these requirements in the MOST secure way?
A.Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM
permissions to any AWS principals that access the S3 bucket until the designated date.
B.Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in
accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket
policy to allow read-only access to the objects.
C.Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS
Lambda function in case of object modification or deletion. Configure the Lambda function to replace the
objects with the original versions from a private S3 bucket.
D.Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that
contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant
read-only IAM permissions to any AWS principals that access the S3 bucket.
Answer: B
Explanation:
Option A relies on IAM permissions alone, which can be changed at any time and do not stop privileged principals from modifying or deleting the files. Option C only reacts after an object has already been modified or deleted. Option D applies Object Lock without enabling S3 Versioning, which Object Lock requires. Option B combines S3 Versioning with S3 Object Lock and a retention period, so object versions cannot be overwritten or deleted until the designated date, while the bucket policy grants read-only access.
A.Use AWS Systems Manager to replicate and provision the prototype infrastructure in two Availability Zones
B.Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the
infrastructure with AWS CloudFormation.
C.Use AWS Config to record the inventory of resources that are used in the prototype infrastructure. Use AWS
Config to deploy the prototype infrastructure into two Availability Zones.
D.Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype infrastructure to
automatically deploy new environments in two Availability Zones.
Answer: B
Explanation:
Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the
infrastructure with AWS CloudFormation.
Which capability should the solutions architect use to meet the compliance requirements?
Answer: B
Explanation:
A VPC endpoint enables you to privately access AWS services without requiring internet gateways, NAT
gateways, VPN connections, or AWS Direct Connect connections. It allows you to connect your VPC directly to
supported AWS services, such as Amazon S3, over a private connection within the AWS network.By creating a
VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will stay within the AWS
network and won't traverse the public internet. This provides a more secure and compliant solution, as the
data transfer remains within the private network boundaries.
Answer: B
Explanation:
In the write-through caching strategy, when a customer adds or updates an item in the database, the
application first writes the data to the database and then updates the cache with the same data. This ensures
that the cache is always synchronized with the database, as every write operation triggers an update to the
cache.
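A minimal sketch of the write-through pattern against ElastiCache for Redis (using the redis-py client; the cache endpoint, key scheme, and save_to_database helper are placeholder assumptions, not part of the question):
import json
import redis

cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

def save_item(item_id: str, item: dict, save_to_database) -> None:
    # Write-through: persist to the database first...
    save_to_database(item_id, item)  # hypothetical database write
    # ...then immediately update the cache so reads never see stale data
    cache.set(f"item:{item_id}", json.dumps(item))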
Which solution will meet these requirements with the LEAST operational overhead?
A.Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B.Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
C.Use AWS Snowball to move the data to an S3 bucket
D.Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move
the data directly to an S3 bucket
Answer: B
Explanation:
AWS DataSync is a fully managed data transfer service that simplifies and automates the process of moving data between on-premises storage and Amazon S3. It provides secure and efficient data transfer with built-in encryption, ensuring that the data is encrypted in transit. By using AWS DataSync, the company can migrate the 100 GB of historical data from its on-premises location to an S3 bucket with the least operational overhead, while DataSync handles encryption of the data in transit.
A.Create an AWS Lambda function based on the container image of the job. Configure Amazon EventBridge to
invoke the function every 10 minutes.
B.Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling to run every
10 minutes.
C.Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a scheduled task
based on the container image of the job to run every 10 minutes.
D.Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a standalone task
based on the container image of the job. Use Windows task scheduler to run the job every
10 minutes.
Answer: C
Explanation:
By using Amazon ECS on AWS Fargate, you can run the job in a containerized environment while benefiting
from the serverless nature of Fargate, where you only pay for the resources used during the job's execution.
Creating a scheduled task based on the container image of the job ensures that it runs every 10 minutes,
meeting the required schedule. This solution provides flexibility, scalability, and cost-effectiveness.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
A.Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in
the organization.
B.Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept
Amazon Cognito authentication.
C.Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS
Single Sign-On) to AWS Directory Service.
D.Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to
use AWS Directory Service directly.
E.Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and
integrate it with the company's corporate directory service.
Answer: AE
Explanation:
A. By creating a new organization in AWS Organizations with all features turned on, you establish a consolidated multi-account architecture and can create and manage the AWS accounts for the different business units under a single organization. E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization and integrating it with the company's corporate directory service provides centralized authentication, so users can sign in with their corporate credentials and access the AWS accounts within the organization. Together, these actions create a centralized, multi-account architecture that uses AWS Organizations for account management and IAM Identity Center for authentication and access control.
A.Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B.Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C.Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D.Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Answer: A
Explanation:
By choosing Expedited retrievals in Amazon S3 Glacier, you can reduce the retrieval time to minutes, making
it suitable for scenarios where quick access is required. Expedited retrievals come with a higher cost per
retrieval compared to standard retrievals but provide faster access to your archived data.
Expedited retrieval typically takes 1-5 minutes to retrieve data, making it suitable for the company's
requirement of having the files available in a maximum of five minutes.
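A one-call boto3 sketch of requesting an Expedited restore (bucket and key are placeholders):
import boto3

s3 = boto3.client("s3")

# Request an Expedited restore of an archived object; the temporary copy is
# typically available within 1-5 minutes and kept for the number of days given
s3.restore_object(
    Bucket="video-archive-bucket",
    Key="incidents/2024-01-15/camera-01.mp4",  # placeholder object key
    RestoreRequest={"Days": 1, "GlacierJobParameters": {"Tier": "Expedited"}},
)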
A.Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS
Fargate for compute power. Use a managed Amazon RDS cluster for the database.
B.Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with
Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
C.Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS
Fargate for compute power. Use a managed Amazon RDS cluster for the database.
D.Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon
EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
Answer: A
Explanation:
Amazon S3 is a highly scalable and cost-effective storage service for hosting static website content, providing durability, high availability, and low-latency access to the static files. Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure: you can run containerized applications without provisioning or managing EC2 instances, which reduces operational overhead and provides scalability. Using a managed Amazon RDS cluster for the database offloads management tasks such as backups, patching, and monitoring to AWS, reducing the operational burden while ensuring high availability and durability of the database.
Answer: C
Explanation:
Amazon EFS is a fully managed file system service that provides scalable, shared storage for Amazon EC2
instances. It supports the Network File System version 4 (NFSv4) protocol, which is a native protocol for
Linux-based systems. EFS is designed to be highly available, durable, and scalable.
A.Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
B.Attach an identity-based policy to deny access to the billing information to all users, including the root user.
C.Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root
organizational unit (OU).
D.Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Answer: C
Explanation:
Service control policies (SCPs): SCPs are an integral part of AWS Organizations and allow you to set fine-grained permissions on the organizational units (OUs) within your AWS Organization. SCPs provide central control over the maximum permissions that can be granted to member accounts, including their root users. Denying access to billing information: by creating an SCP that denies access to billing information and attaching it to the root OU, you explicitly deny that access for all accounts within the organization. Granular control: SCPs let you define specific permissions and restrictions at the organizational-unit level, so no member account users, including the member account root users, can access the billing information.
A solutions architect needs to retain messages that are not delivered and analyze the messages for up to 14 days.
Which solution will meet these requirements with the LEAST development effort?
A.Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target with a retention
period of 14 days.
B.Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days between the
application and Amazon SNS.
C.Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service (Amazon SQS) target
with a retention period of 14 days.
D.Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL attribute set
for a retention period of 14 days.
Answer: C
Explanation:
The message retention period in Amazon SQS can be set between 1 minute and 14 days (the default is 4 days).
Therefore, you can configure your SQS DLQ to retain undelivered SNS messages for 14 days. This will enable
you to analyze undelivered messages with the least development effort.
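A rough boto3 sketch of wiring an SQS queue as the dead-letter queue for an SNS subscription (the subscription ARN is a placeholder; the SQS queue policy must also allow sns.amazonaws.com to send messages to the queue):
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# DLQ that retains undelivered messages for the SQS maximum of 14 days
queue_url = sqs.create_queue(
    QueueName="sns-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},  # 14 days in seconds
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach the DLQ to the existing SNS subscription via its redrive policy
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:alerts:sub-id",  # placeholder
    AttributeName="RedrivePolicy",
    AttributeValue=json.dumps({"deadLetterTargetArn": queue_arn}),
)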
A.Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B.Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time
recovery for the table.
C.Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export
the data to an Amazon S3 bucket.
D.Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular
basis. Turn on point-in-time recovery for the table.
Answer: B
Explanation:
Continuous backups: DynamoDB provides continuous backups (point-in-time recovery), which automatically back up your table data without additional coding or manual intervention. Export to Amazon S3: with point-in-time recovery enabled, DynamoDB can export the table data directly to an Amazon S3 bucket, eliminating the need for custom export code. Minimal coding: option B requires the least coding effort because continuous backups and export to Amazon S3 are built-in DynamoDB features. No impact on availability or RCUs: enabling continuous backups and exporting data to Amazon S3 does not affect the availability of the application or consume the read capacity units (RCUs) defined for the table; these operations happen in the background.
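A minimal boto3 sketch of both steps (table name, table ARN, and bucket name are placeholders):
import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery (continuous backups) for the table
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export the table to S3 without consuming read capacity on the table
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    S3Bucket="orders-export-bucket",
    ExportFormat="DYNAMODB_JSON",
)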
A.Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues
as the event source. Use AWS Key Management Service (SSE-KMS) for encryption. Add the kms:Decrypt
permission for the Lambda execution role.
B.Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as
the event source. Use SQS managed encryption keys (SSE-SQS) for encryption. Add the encryption key
invocation permission for the Lambda function.
C.Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) FIFO queues
as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt permission for the Lambda
execution role.
D.Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard
queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the encryption key invocation
permission for the Lambda function.
Answer: A
Explanation:
SQS FIFO queues are slightly more expensive than standard queues (https://calculator.aws/#/addService/SQS). The keyword "at least once" points to standard queues, because FIFO queues provide exactly-once processing. That leaves A and D; the Lambda execution role only needs the kms:Decrypt permission to read SSE-KMS encrypted messages, so A is correct.
Which solution will meet these requirements with the LEAST development effort?
A.Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the approved
Systems Manager templates to provision EC2 instances.
B.Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service
control policy (SCP) to control the usage of EC2 instance types.
C.Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2 instance is
created. Stop disallowed EC2 instance types.
D.Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types. Ensure that
staff can deploy EC2 instances only by using the Service Catalog products.
Answer: B
Explanation:
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple AWS accounts. It enables you to group accounts into organizational units (OUs) and apply policies across those accounts. Service control policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained permissions and restrictions at the account or OU level; by attaching an SCP to the development accounts, you can control the creation and usage of EC2 instance types. Least development effort: option B requires minimal development effort because it leverages the built-in features of AWS Organizations and SCPs. You define an SCP that restricts the use of oversized EC2 instance types and apply it to the appropriate OUs or accounts.
The company needs to create written sentiment analysis reports from the customer service call recordings. The
customer service call recording text must be translated into English.
Answer: DEF
Explanation:
Amazon Transcribe will convert the audio recordings into text, Amazon Translate will translate the text into
English, and Amazon Comprehend will perform sentiment analysis on the translated text to generate
sentiment analysis reports.
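A rough sketch of that three-step pipeline with boto3 (bucket names, the job name, and the transcript text passed to Translate are placeholder assumptions; in practice the transcript would be read from the Transcribe output file once the job completes):
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Convert the call recording to text (source language auto-detected)
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},  # placeholder
    IdentifyLanguage=True,
    OutputBucketName="call-transcripts",
)

# 2. Once the transcript text has been retrieved, translate it into English
english = translate.translate_text(
    Text="texto de la transcripcion...",  # transcript text would go here
    SourceLanguageCode="auto",
    TargetLanguageCode="en",
)["TranslatedText"]

# 3. Run sentiment analysis on the English text for the report
sentiment = comprehend.detect_sentiment(Text=english, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])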
The administrator is using an IAM role that has the following IAM policy attached:
What is the cause of the unsuccessful request?
Answer: D
Explanation:
The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or
203.0.113.0/24.
A.Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security Standards
(PCI DSS) for auditing.
B.Configure Amazon S3 Inventory on the S3 bucket Configure Amazon Athena to query the inventory.
C.Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data
types.
D.Use Amazon S3 Select to run a report across the S3 bucket.
Answer: C
Explanation:
Amazon Macie is a service that helps discover, classify, and protect sensitive data stored in AWS. It uses
machine learning algorithms and managed identifiers to detect various types of sensitive information,
including personally identifiable information (PII) and financial information. By configuring Amazon Macie to
run a data discovery job with the appropriate managed identifiers for the required data types (such as
passport numbers and credit card numbers), the company can identify and classify any sensitive data present
in the S3 bucket.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
Answer: BD
Explanation:
By combining the deployment of an AWS Storage Gateway file gateway and an AWS Storage Gateway
volume gateway, the company can address both its block storage and NFS storage needs, while leveraging
local caching capabilities for improved performance.
A.Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet
to use the elastic network interface of this instance as the destination for all S3 traffic.
B.Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet
to use the elastic network interface of this instance as the destination for all S3 traffic.
C.Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway
endpoint as the route for all S3 traffic.
D.Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as
the destination for all S3 traffic.
Answer: C
Explanation:
A VPC gateway endpoint allows you to privately access Amazon S3 from within your VPC without using a NAT
gateway or NAT instance. By provisioning a VPC gateway endpoint for S3, the service in the private subnet
can directly communicate with S3 without incurring data transfer costs for traffic going through a NAT
gateway.
The company wants to reduce costs. The company has identified the S3 bucket as a large expense.
Which solution will reduce the S3 costs with the LEAST operational overhead?
A.Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B.Use an AWS Lambda function to check for older versions and delete all but the two most recent versions.
C.Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent versions.
D.Deactivate versioning on the S3 bucket and retain the two most recent versions.
Answer: A
Explanation:
S3 Lifecycle policies allow you to define rules that automatically transition or expire objects based on their
age or other criteria. By configuring an S3 Lifecycle policy to delete expired object versions and retain only
the two most recent versions, you can effectively manage the storage costs while maintaining the desired
retention policy. This solution is highly automated and requires minimal operational overhead as the lifecycle
management is handled by S3 itself.
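A minimal boto3 sketch of such a lifecycle rule (bucket name, rule ID, and the 30-day window are placeholder assumptions):
import boto3

s3 = boto3.client("s3")

# Keep the current version plus the two newest noncurrent versions; older
# noncurrent versions expire 30 days after they become noncurrent
s3.put_bucket_lifecycle_configuration(
    Bucket="versioned-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-two-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {},
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 30,
                    "NewerNoncurrentVersions": 2,
                },
            }
        ]
    },
)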
Answer: D
Explanation:
1. D. For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted Connections, connection speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps may be ordered from approved AWS Direct Connect Partners. See AWS Direct Connect Partners for more information.
2. A hosted connection is a lower-cost option that is offered by AWS Direct Connect Partners.
A.Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for
Windows File Server file system.
B.Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync
tasks to transfer the data to the FSx for Windows File Server file system.
C.Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS
DataSync tasks to transfer the data to the FSx for Windows File Server file system.
D.Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync
agents on the device. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file
system.
E.Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network.
Copy data to the device by using the AWS CLI. Ship the device back to AWS for import into Amazon S3.
Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
Answer: AD
Explanation:
A: This option deploys DataSync agents on the on-premises file servers and uses DataSync to transfer the data directly to the FSx for Windows File Server file system; DataSync preserves file permissions during the migration. D: This option uses an AWS Snowcone device, a portable data transfer device. You connect the Snowcone device to the on-premises network, launch DataSync agents on the device, and schedule DataSync tasks to transfer the data to FSx for Windows File Server, again preserving file permissions.
Which solution will meet these requirements with the MOST operational efficiency?
A.Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
B.Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
C.Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in
real time.
D.Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
Answer: C
Explanation:
By leveraging the combination of Amazon Kinesis Data Firehose and Amazon Kinesis Data Analytics, you can
efficiently ingest and analyze the payment data in real time without the need for manual processing or
additional infrastructure management. This solution provides a streamlined and scalable approach to handle
continuous data ingestion and analysis requirements.
Which combination of actions should a solutions architect take to improve the performance and resilience of the
website? (Choose two.)
A.Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
B.Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the
other EC2 instances.
C.Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on
every EC2 instance.
D.Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new
instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the
website
E.Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new
instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Answer: CE
Explanation:
By combining the use of Amazon EFS for shared file storage and Amazon CloudFront for content delivery, you
can achieve improved performance and resilience for the website.
What should the company do to obtain access to customer accounts in the MOST secure way?
A.Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch
permissions and a trust policy to the company’s account.
B.Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a
role with read-only EC2 and CloudWatch permissions.
C.Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch
permissions. Encrypt and store customer access and secret keys in a secrets management system.
D.Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only
EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a secrets
management system.
Answer: A
Explanation:
By having customers create an IAM role with the necessary permissions in their own accounts, the company
can use AWS Identity and Access Management (IAM) to establish cross-account access. The trust policy
allows the company's AWS account to assume the customer's IAM role temporarily, granting access to the
specified resources (EC2 instances and CloudWatch metrics) within the customer's account. This approach
follows the principle of least privilege, as the company only requests the necessary permissions and does not
require long-term access keys or user credentials from the customers.
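For illustration, a minimal sketch of the role a customer might create, assuming the monitoring company's account ID and an external ID are known; the names, IDs, and external ID below are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the monitoring company's account (example ID) may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # company's account (example)
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam.create_role(
    RoleName="MonitoringReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Read-only permissions for EC2 and CloudWatch via AWS managed policies.
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
):
    iam.attach_role_policy(RoleName="MonitoringReadOnlyRole", PolicyArn=policy_arn)
```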
A.Set up VPC peering connections between each VPC. Update each associated subnet’s route table
B.Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet
C.Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes from each
VPC.
D.Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account to connect to
each VPC.
Answer: C
Explanation:
AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-premises
networks, and remote networks. It simplifies network connectivity by providing a single entry point and
reducing the number of connections required. In this scenario, deploying an AWS Transit Gateway in the
networking team's AWS account allows for efficient management and control over the network connectivity
across multiple VPCs.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A.Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group
that the batch job uses.
B.Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in
the Auto Scaling group that the batch job uses.
C.Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to
scale out based on CPU usage.
D.Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out
based on CPU usage.
Answer: C
Explanation:
Purchasing a 1-year Savings Plan (option A) or a 1-year Reserved Instance (option B) may provide cost savings,
but they are more suitable for long-running, steady-state workloads. Since your batch jobs run for a specific
period each day, using Spot Instances with the ability to scale out based on CPU usage is a more cost-
effective choice.
A.Upload files from the user's browser to the application servers. Transfer the files to an Amazon S3 bucket.
B.Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the file
gateway.
C.Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser into an
S3 bucket.
D.Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the user's
browser to the file system.
Answer: C
Explanation:
This approach allows users to upload files directly to S3 without passing through the application servers,
reducing the load on the application and improving scalability. It leverages the client-side capabilities to
handle the file uploads and offloads the processing to S3.
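A minimal sketch of the server-side piece, assuming a hypothetical bucket and key: the backend generates a short-lived presigned PUT URL and returns it to the browser, which then uploads the file directly to S3.

```python
import boto3

s3 = boto3.client("s3")

def create_upload_url(key: str, content_type: str = "application/octet-stream") -> str:
    # URL is valid for 15 minutes; the browser PUTs the file body to it directly.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "user-uploads-example", "Key": key, "ContentType": content_type},
        ExpiresIn=900,
    )

# The browser then issues: PUT <url> with the file body and the same Content-Type header.
print(create_upload_url("uploads/report.pdf", "application/pdf"))
```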
The company wants to have separate deployments of its web platform across multiple Regions. However, the
company must maintain a single primary reservation database that is globally consistent.
Answer: A
Explanation:
Using DynamoDB's global tables feature, you can achieve a globally consistent reservation database with low
latency on updates, making it suitable for serving a global user base. The automatic replication provided by
DynamoDB eliminates the need for manual synchronization between Regions.
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use
the correct Regional endpoint in each Regional deployment.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the
us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company also
wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
A.Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on
tags. Schedule the backup to run twice daily. Copy the image on demand.
B.Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on
tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
C.Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2
instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup
data to us-west-2.
D.Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances
based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice
daily.
E.Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances
based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
Answer: BD
Explanation:
Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the
backup process. By configuring the policy to run twice daily and specifying the copy to the us-west-2 Region,
the company can ensure regular backups are created and copied to the alternate Region. Option D proposes
using AWS Backup, which provides a centralized backup management solution. By creating a backup vault
and backup plan based on tag values, the company can automate the backup process for the EC2 instances.
The backup schedule can be set to run twice daily, and the destination for the copy can be defined as the us-
west-2 Region.
Both solutions are automated and require no manual intervention to create or copy backups.
Users report that the application is running more slowly than expected. A security audit of the web server log files
shows that the application is receiving millions of illegitimate requests from a small number of IP addresses. A
solutions architect needs to resolve the immediate performance problem while the company investigates a more
permanent solution.
A.Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming
resources.
B.Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are
consuming resources.
C.Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are
consuming resources.
D.Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that
are consuming resources.
Answer: B
Explanation:
In this scenario, the security audit reveals that the application is receiving millions of illegitimate requests
from a small number of IP addresses. Security groups do not support deny rules, so the traffic must be
blocked with a network ACL. By adding an inbound deny rule that targets the offending IP addresses, the
network ACL for the web tier subnets blocks the illegitimate traffic at the subnet level before it reaches the
web servers, which relieves the excessive load on the web tier and restores the application's performance.
A.Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound
rule in the eu-west-1 application security group that allows traffic from the database server IP addresses in the
ap-southeast-2 security group.
B.Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the
subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that references the
security group ID of the application servers in eu-west-1.
C.Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the
subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic
from the eu-west-1 application server IP addresses.
D.Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC.
After the transit gateways are properly peered and routing is configured, create an inbound rule in the
database security group that references the security group ID of the application servers in eu-west-1.
Answer: C
Explanation:
The AWS documentation states: "You cannot reference the security group of a peer VPC that's in a different
Region. Instead, use the CIDR block of the peer VPC."
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
A.Configure each development environment with its own Amazon Aurora PostgreSQL database
B.Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB instances
C.Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-Compatible
database
D.Configure each development environment with its own Amazon S3 bucket by using Amazon S3 Object Select
Answer: C
Explanation:
Option C suggests using Amazon Aurora On-Demand PostgreSQL-Compatible databases for each
development environment. This option provides the benefits of Amazon Aurora, which is a high-performance
and scalable database engine, while allowing you to pay for usage on an on-demand basis. Amazon Aurora
On-Demand instances are typically more cost-effective for individual development environments compared to
the provisioned capacity options.
Option C is therefore the most cost-effective choice.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use AWS Config to identify all untagged resources. Tag the identified resources programmatically. Use tags
in the backup plan.
B.Use AWS Config to identify all resources that are not running. Add those resources to the backup vault.
C.Require all AWS account owners to review their resources to identify the resources that need to be backed
up.
D.Use Amazon Inspector to identify all noncompliant resources.
Answer: A
Explanation:
This solution allows you to leverage AWS Config to identify any untagged resources within your AWS
Organizations accounts. Once identified, you can programmatically apply the necessary tags to indicate the
backup requirements for each resource. By using tags in the backup plan configuration, you can ensure that
only the tagged resources are included in the backup process, reducing operational overhead and ensuring all
necessary resources are backed up.
A.Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and
store the images in an Amazon S3 bucket.
B.Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images
and store the images in an Amazon RDS database.
C.Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process
that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
D.Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon
ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing
program that runs on an Amazon EC2 instance to process the resize jobs.
Answer: A
Explanation:
By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that provides highly
scalable and available image resizing. The solution works as follows: set up an Amazon S3 bucket to store the
original images uploaded by users; configure an event notification on the bucket to invoke an AWS Lambda
function whenever a new image is uploaded; have the Lambda function retrieve the uploaded image, perform
the resizing required for each device, and store the resized images back in the same bucket or a bucket
designated for resized images; and make the resized images publicly accessible for serving to users.
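A minimal sketch of such a handler is shown below, assuming the Pillow imaging library is packaged with the function and that resized objects go to a separate, hypothetical destination bucket.

```python
import io
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # assumes Pillow is bundled with the deployment package

s3 = boto3.client("s3")
DEST_BUCKET = "resized-images-example"  # hypothetical destination bucket

def handler(event, context):
    for record in event["Records"]:  # standard S3 event notification structure
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))
        image.thumbnail((800, 800))  # resize in place, preserving aspect ratio

        buffer = io.BytesIO()
        image.save(buffer, format=image.format or "JPEG")
        buffer.seek(0)

        s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=buffer.getvalue())
```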
A.Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole
IAM role.
B.Create interface VPC endpoints to allow nodes to access the control plane.
C.Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
D.Allow outbound traffic in the security group of the nodes.
Answer: B
Explanation:
By creating interface VPC endpoints, you can enable the necessary communication between the Amazon EKS
control plane and the nodes in private subnets. This solution ensures that the control plane maintains
endpoint private access (set to true) and endpoint public access (set to false) for security compliance.
Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
A.Supporting data APIs to access data with traditional, containerized, and event-driven applications
B.Supporting client-side and server-side encryption
C.Building analytics workloads during specified hours and when the application is not active
D.Caching data to reduce the pressure on the backend database
E.Scaling globally to support petabytes of data and tens of millions of requests per minute
F.Creating a secondary replica of the cluster by using the AWS Management Console
Answer: BCE
Explanation:
B. Supporting client-side and server-side encryption: Amazon Redshift supports both client-side and
server-side encryption for improved data security. C. Building analytics workloads during specified hours and
when the application is not active: Amazon Redshift is optimized for running complex analytic queries against
very large datasets, making it a good choice for this use case. E. Scaling globally to support petabytes of data
and tens of millions of requests per minute: Amazon Redshift is designed to handle petabytes of data and to
deliver fast query and I/O performance for virtually any size dataset.
The company requires the API to respond consistently with low latency to ensure customer satisfaction. The
company needs to provide a compute host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B.Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C.Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
D.Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
Answer: B
Explanation:
In the context of the given scenario, where the company wants low latency and consistent performance for
their API during peak usage times, it would be more suitable to use provisioned concurrency. By allocating a
specific number of concurrent executions, the company can ensure that there are enough function instances
available to handle the expected load and minimize the impact of cold starts. This will result in lower latency
and improved performance for the API.
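As a sketch of how provisioned concurrency can be configured programmatically (the function name, alias, and capacity below are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 50 execution environments initialized for the "live" alias so requests
# during peak hours do not hit cold starts.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="payments-api",   # hypothetical function name
    Qualifier="live",              # provisioned concurrency applies to a version or alias
    ProvisionedConcurrentExecutions=50,
)

# Check the rollout status (it takes a short time to become READY).
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="payments-api", Qualifier="live"
)
print(status["Status"])
```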
Which solution will meet this requirement with the MOST operational efficiency?
A.Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
B.Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an S3
bucket from the group for archival purposes.
C.Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon
EventBridge to run the Systems Manager document against all servers that are in the account daily.
D.Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch logs
subscription that pushes any incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set
Amazon S3 as the destination.
Answer: A
Explanation:
1. Session Manager has a built-in option to enable S3 logging: https://docs.aws.amazon.com/systems-
manager/latest/userguide/session-manager-logging.html#session-manager-logging-s3
2. Option A does not involve CloudWatch, while option D does, so option A has less complexity and
operational overhead. Option A simply enables S3 logging in the Systems Manager console, which sends
session logs directly to an S3 bucket with minimal configuration. Option D, by contrast, requires installing and
configuring the Amazon CloudWatch agent, creating a CloudWatch log group, setting up a CloudWatch Logs
subscription, and configuring an Amazon Kinesis Data Firehose delivery stream to store the logs in an S3
bucket. If minimizing operational overhead is the priority, option A is the simpler and more straightforward
choice.
Which solution meets these requirements with the LEAST amount of effort?
Answer: A
Explanation:
Enabling storage autoscaling allows RDS to automatically adjust the storage capacity based on the
application's needs. When the storage usage exceeds a predefined threshold, RDS will automatically increase
the allocated storage without requiring manual intervention or causing downtime. This ensures that the RDS
database has sufficient disk space to handle the increasing storage requirements.
Answer: B
Explanation:
AWS Service Catalog allows you to create and manage catalogs of IT services that can be deployed within
your organization. With Service Catalog, you can define a standardized set of products (solutions and tools in
this case) that customers can self-service provision. By creating Service Catalog products, you can control
and enforce the deployment of approved and validated solutions and tools.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A.Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set
DynamoDB auto scaling to a maximum defined capacity.
B.Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C.Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access
(DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
D.Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB
Standard-IA) table class.
Answer: B
Explanation:
DynamoDB on-demand mode charges per request and requires no capacity planning, which makes it the most
cost-effective choice for unpredictable or spiky traffic. The Standard table class is appropriate for data that
is read and written regularly; Standard-IA only becomes cheaper when storage costs dominate and access is
infrequent.
Question: 521 CertyIQ
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team
account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an
Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application
must be able to read items from all the teams' DynamoDB tables.
A.Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the
application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table.
Schedule secret rotation for every 30 days.
B.In every business account, create an IAM user that has programmatic access. Configure the application to use
the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table.
Manually rotate IAM access keys every 30 days.
C.In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the
DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory
account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure
the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D.Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate
DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB
table.
Answer: C
Explanation:
IAM Roles: IAM roles provide a secure way to grant permissions to entities within AWS. By creating an IAM
role named BU_ROLE in each business account with the necessary permissions to access the DynamoDB
table, access can be controlled at the IAM role level. Cross-Account Access: By configuring a trust policy in
BU_ROLE that trusts a specific role in the inventory application account (APP_ROLE), you establish a trusted
relationship between the two accounts. Least Privilege: Because each BU_ROLE grants access only to the
required DynamoDB table, each team's table is accessed under the principle of least privilege. Security Token
Service (STS): The STS AssumeRole API operation allows the application in the inventory account to assume
the cross-account role (BU_ROLE) in each business account.
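A minimal sketch of the assume-and-read flow from the application account (role names, account IDs, and the table name are hypothetical):

```python
import boto3

sts = boto3.client("sts")

def read_team_inventory(business_account_id: str, table_name: str):
    # Assume BU_ROLE in the business account; the call succeeds only because
    # BU_ROLE's trust policy names the application account's role.
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{business_account_id}:role/BU_ROLE",
        RoleSessionName="inventory-report",
    )["Credentials"]

    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return dynamodb.scan(TableName=table_name)["Items"]

items = read_team_inventory("222222222222", "ProductInventory")  # example ID and table name
```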
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
Answer: BC
Explanation:
By combining the Kubernetes Cluster Autoscaler (option C) to manage the number of nodes in the cluster and
enabling horizontal pod autoscaling (option B) with the Kubernetes Metrics Server, you can achieve automatic
scaling of your EKS cluster and container applications based on workload demand. This approach minimizes
operational overhead as it leverages built-in Kubernetes functionality and automation mechanisms.
Which solution will meet these requirements in the MOST operationally efficient way?
Answer: B
Explanation:
By using CloudFront with Lambda@Edge, you can benefit from the distributed CDN infrastructure, reduce
the load on DynamoDB, and retrieve data with low latency. Caching also minimizes the impact on baseline
performance and improves the overall efficiency of data retrieval in the application.
Which solution will meet these requirements with the LEAST effort?
A.Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B.Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C.Search CloudTrail logs with Amazon Athena queries to identify the errors.
D.Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
Explanation:
"Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service
activity."https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
Which solution will meet these requirements with the LEAST operational overhead?
A.Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B.Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C.Configure AWS Budgets actions to send usage cost data to the company through FTP.
D.Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Answer: A
Explanation:
1. The answer is A: "dashboard" points to Cost Explorer, which eliminates C and D, and "programmatically"
(no manual intervention) points to the API rather than downloadable .csv reports.
2. Least operational overhead = API access.
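A minimal sketch of calling the Cost Explorer API with pagination (the date range is an example):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

def daily_costs(start: str, end: str):
    """Return daily unblended cost records between two ISO dates, following
    NextPageToken pagination until all results are retrieved."""
    results, token = [], None
    while True:
        kwargs = {
            "TimePeriod": {"Start": start, "End": end},
            "Granularity": "DAILY",
            "Metrics": ["UnblendedCost"],
        }
        if token:
            kwargs["NextPageToken"] = token
        response = ce.get_cost_and_usage(**kwargs)
        results.extend(response["ResultsByTime"])
        token = response.get("NextPageToken")
        if not token:
            return results

print(daily_costs("2024-01-01", "2024-01-31"))  # example date range
```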
Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
A.Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
B.Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the
application to use the secondary cluster's writer endpoint.
C.Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
D.Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Answer: D
Explanation:
D is the correct answer. The question is about the writer, not the reader. Amazon RDS Proxy maintains a pool
of connections and automatically routes write requests to the healthy writer during failover or scaling,
minimizing downtime without application changes.
The company wants to expand globally and to ensure that its application has minimal downtime.
A.Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability
Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the
second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B.Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region
Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the
second Region. Promote the secondary to primary as needed.
C.Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the
second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the
second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D.Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to
deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a
failover routing policy to the second Region. Promote the secondary to primary as needed.
Answer: D
Explanation:
B and C are ruled out. The choice is between A and D. D is preferred because it explicitly deploys the web and
application tiers in the second Region, whereas A merely extends the Auto Scaling groups into a second
Region rather than always having resources there (and Auto Scaling groups cannot span Regions).
The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to
the FTP clients that send the files. The solution must delete the incoming data files after the files have been
processed successfully. Processing for each file needs to take 3-8 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
A.Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3 Glacier
Flexible Retrieval. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to
process the objects nightly from S3 Glacier Flexible Retrieval. Delete the objects after the job has processed
the objects.
B.Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic Block
Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the
job to process the files nightly from the EBS volume. Delete the files after the job has processed the files.
C.Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block Store
(Amazon EBS) volume. Configure a job queue in AWS Batch. Use an Amazon S3 event notification when each
file arrives to invoke the job in AWS Batch. Delete the files after the job has processed the files.
D.Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an
AWS Lambda function to process the files and to delete the files after they are processed. Use an S3 event
notification to invoke the Lambda function when the files arrive.
Answer: D
Explanation:
1. D is the best fit: Transfer Family provides the FTP endpoint with minimal changes to the clients, the S3
event notification triggers processing as soon as each file arrives, and the 3-8 minute processing time fits
comfortably within Lambda's 15-minute limit. The Lambda function can delete each file after it is processed
successfully.
2. AWS Transfer Family cannot be set up to store files on an EBS volume, which rules out C.
A.Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS managed key
for encryption.
B.Migrate the databases to Amazon RDS. Configure encryption at rest.
C.Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
D.Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.
Answer: B
Explanation:
B is correct: migrating to Amazon RDS is the right managed-database target, and enabling encryption at rest
protects the data stored in the database.
A.Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age parameter.
B.Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
C.Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct
listener ports.
D.Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the
different stages.
Answer: C
Explanation:
TCP and UDP traffic points to AWS Global Accelerator, which works at the transport layer and routes traffic
over the AWS global network to improve performance; it pairs naturally with the existing NLBs. CloudFront
and API Gateway (A and D) handle HTTP-based traffic only, and ALBs (B) do not support UDP.
Which solution will meet these requirements with the MOST operational efficiency?
A.Create a function URL for the Lambda function. Provide the Lambda function URL to the third party for the
webhook.
B.Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third
party for the webhook.
C.Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function.
Provide the public hostname of the SNS topic to the third party for the webhook.
D.Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function.
Provide the public hostname of the SQS queue to the third party for the webhook.
Answer: A
Explanation:
The key feature here is Lambda function URLs, which provide a dedicated HTTPS endpoint for a function
without any additional infrastructure: https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
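A sketch of enabling a function URL with boto3 (the function name is hypothetical; AuthType NONE makes the endpoint publicly callable, which a third-party webhook typically requires, so the payload should be validated inside the function):

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a public HTTPS endpoint for the function.
url_config = lambda_client.create_function_url_config(
    FunctionName="webhook-handler",   # hypothetical function name
    AuthType="NONE",                  # no IAM auth; validate the webhook payload in code
)

# Grant the public permission to invoke the function URL.
lambda_client.add_permission(
    FunctionName="webhook-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print(url_config["FunctionUrl"])      # hand this URL to the third party for the webhook
```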
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A.Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone
and a record in the zone that points to the API Gateway endpoint.
B.Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different
Region.
C.Create hosted zones for each customer as required in Route 53. Create zone records that point to the API
Gateway endpoint.
D.Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in
the same Region.
E.Create multiple API endpoints for each customer in API Gateway.
F.Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate
Manager (ACM).
Answer: ADF
Explanation:
The answer is ADF: A creates the wildcard custom domain record in Route 53 (the Amazon DNS service), D
requests a wildcard certificate from ACM in the same Region as the API, and F creates the custom domain
name in API Gateway and imports the certificate from ACM.
A.Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie
findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B.Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from
GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the
security team.
C.Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event
type from Macie findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the
security team.
D.Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from
GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security
team.
Answer: A
Explanation:
B and D are ruled out because Amazon Macie, not GuardDuty, is the service that identifies PII. Between A and
C, SNS is the better fit: as a pub/sub service it lets the security team subscribe and receive notifications
directly, whereas SQS is a queue that something would still have to poll.
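A sketch of wiring Macie findings to SNS with EventBridge; the topic ARN is hypothetical, and the event pattern below reflects the commonly documented Macie finding shape, so treat the source and detail-type strings as assumptions to verify.

```python
import json
import boto3

events = boto3.client("events")
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:security-alerts"  # example ARN

# Match Macie findings whose type starts with "SensitiveData".
events.put_rule(
    Name="macie-sensitive-data-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],                      # assumed event source for Macie
        "detail-type": ["Macie Finding"],             # assumed detail-type string
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="macie-sensitive-data-findings",
    Targets=[{"Id": "security-team-sns", "Arn": SNS_TOPIC_ARN}],
)
```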
A.Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that
directs Amazon S3 to delete objects after 90 days.
B.Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after
creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.
C.Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an
expiration action that directs Amazon S3 to delete objects after 90 days.
D.Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after
creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.
Answer: C
Explanation:
C is the most suitable and lowest-cost option. After 30 days the objects are kept only as backups with no
requirement for frequent access, so they can be transitioned to S3 Glacier Flexible Retrieval at that point.
Because the objects are deleted after 90 days, the answers that also transition objects at 90 days make no sense.
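A sketch of that lifecycle rule with boto3 (the bucket name is hypothetical; GLACIER is the storage-class identifier for S3 Glacier Flexible Retrieval):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="reports-archive-example",           # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},           # apply to every object in the bucket
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 90},
        }]
    },
)
```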
A.Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate,
and store all secrets in Amazon EKS.
B.Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption
on the Amazon EKS cluster.
C.Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS)
Container Storage Interface (CSI) driver as an add-on.
D.Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default
Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Answer: B
Explanation:
It is B, because the Kubernetes secrets need to be encrypted inside the EKS cluster (envelope encryption of
secrets stored in etcd), not just on the underlying storage. Enabling EKS secrets encryption with a KMS key
provides that encryption at rest.
Question: 536 CertyIQ
A company wants to provide data scientists with near real-time read-only access to the company's production
Amazon RDS for PostgreSQL database. The database is currently configured as a Single-AZ database. The data
scientists use complex queries that will not affect the production database. The company needs a solution that is
highly available.
A.Scale the existing production database in a maintenance window to provide enough power for the data
scientists.
B.Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary standby
instance. Provide the data scientists access to the secondary instance.
C.Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional read replicas
for the data scientists.
D.Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby instances.
Provide read endpoints to the data scientists.
Answer: C
Explanation:
C. The requirement for high availability calls for a Multi-AZ deployment, and a Multi-AZ instance deployment
is cheaper than a Multi-AZ cluster deployment (D). Read replicas are needed so that the data scientists'
complex queries run against the replicas in near real time without affecting the production database.
A.Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon
ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to
an Auto Scaling group that is in three Availability Zones.
B.Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon
ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate the web
server to an Auto Scaling group that is in three Availability Zones.
C.Migrate the MySQL database to Amazon DynamoDB Use DynamoDB Accelerator (DAX) to cache reads. Store
the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability
Zones.
D.Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon
ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to
an Auto Scaling group that is in three Availability Zones.
Answer: A
Explanation:
Memcached is best suited for simple caching, while Redis is better for data that needs to be persisted.
Because session data such as user profiles and application settings must be stored reliably and accessed
frequently, Redis is the better choice here.
Question: 538 CertyIQ
A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The company
wants to roll out content in a phased manner across multiple countries. The company needs to ensure that viewers
who are outside the countries to which the company rolls out content are not able to view the content.
A.Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error
message.
B.Set up a new URL for restricted content. Authorize access by using a signed URL and cookies. Set up a
custom error message.
C.Encrypt the data for the content that the company distributes. Set up a custom error message.
D.Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
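For reference, the relevant piece of a CloudFront distribution configuration for an allow list looks roughly like the snippet below (country codes are examples; in practice this block is merged into the full distribution config passed to an update call):

```python
# Fragment of a CloudFront DistributionConfig enforcing a geographic allow list.
# Viewers outside the listed countries are blocked and can be shown a custom
# error response instead of the default 403.
geo_restriction = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "whitelist",   # allow list
            "Quantity": 2,
            "Items": ["US", "CA"],            # example country codes
        }
    }
}
```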
A.Configure a multi-site active/active setup between the on-premises server and AWS by using Microsoft SQL
Server Enterprise with Always On availability groups.
B.Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database
Migration Service (AWS DMS) to use change data capture (CDC).
C.Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D.Use third-party backup software to capture backups every night. Store a secondary set of backups in Amazon
S3.
Answer: B
Explanation:
B is correct. C and D do not meet the requirement. Between A and B, B wins because Amazon RDS is a
managed service and a warm standby only needs to be scaled up when it is actually used, while AWS DMS
with change data capture (CDC) keeps the standby continuously in sync with the on-premises database.
A.Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS
Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
B.Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same
zone as the primary DB instance. Direct the reporting functions to the read replica.
C.Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the
reporting functions to use the reader instance in the cluster deployment.
D.Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct
the reporting functions to the reader instances.
Answer: C
Explanation:
A and B are ruled out. The choice is between C and D: D says to use Amazon RDS to create an Amazon Aurora
database, which makes no sense, so C is correct. A Multi-AZ cluster deployment provides high availability, and
directing the reporting functions to the reader instance keeps that load off the writer.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A.Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API
Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
B.Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to
retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs.
Send the API calls to the Lambda function.
C.Create an Amazon Cognito user pool to authenticate users.
D.Create an Amazon Cognito identity pool to authenticate users.
E.Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon
CloudFront configuration.
F.Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web
content.
Answer: ACE
Explanation:
Option B (Amazon ECS) is not the best option since the website "can be idle for a long time", so Lambda
(option A) is the more cost-effective choice. Option D is incorrect because user pools handle authentication
(identity verification) while identity pools handle authorization (access control). Option F is wrong because S3
static website hosting serves only static files such as HTML, CSS, and client-side JS; it cannot run server-side
code such as PHP.
Answer: B
Explanation:
Signed URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
A.From the AWS Account Management Console of the management account, turn on discount sharing from the
billing preferences section.
B.From the AWS Account Management Console of the account that purchased the existing Savings Plan, turn
on discount sharing from the billing preferences section. Include all accounts.
C.From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM) to share
the Savings Plan with other accounts.
D.Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts to join
the organization from the management account.
E.Create an organization in AWS Organizations in the existing AWS account with the existing EC2 instances and
Savings Plan. Invite the other AWS accounts to join the organization from the management account.
Answer: AE
Explanation:
A. From the AWS Account Management Console of the management account, turn on discount sharing from
the billing preferences section.
E. Create an organization in AWS Organizations in the existing AWS account that holds the EC2 instances and
Savings Plan, and invite the other AWS accounts to join the organization. Savings Plans cannot be shared with
AWS RAM; their discounts are shared automatically through consolidated billing once the accounts are in the
same organization and discount sharing is enabled.
Answer: A
Explanation:
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an
appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the
production stage. A canary release sends only a small percentage of users to the new version while the rest
continue to use the current one.
A.Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an
Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints.
B.Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in
an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C.Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static
error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for
the ALB.
D.Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic
to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the
health check does not pass.
Answer: B
Explanation:
B is correct. https://repost.aws/knowledge-center/fail-over-s3-r53
A.Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B.Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C.Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D.Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library
(VTL) interface.
Answer: D
Explanation:
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B.Use AWS Glue to deliver streaming data to Amazon S3.
C.Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D.Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Answer: A
Explanation:
Amazon Kinesis Data Firehose is a fully managed delivery service that streams data directly into Amazon S3
with no servers, scaling, or custom code to manage, so it meets the requirement with the least operational
overhead.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use AWS Systems Manager templates to control which AWS services each department can use.
B.Create organization units (OUs) for each department in AWS Organizations. Attach service control policies
(SCPs) to the OUs.
C.Use AWS CloudFormation to automatically provision only the AWS services that each department can use.
D.Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of
specific AWS services.
Answer: B
Explanation:
Organizing the accounts into OUs and attaching service control policies (SCPs) lets the company centrally
allow or deny specific AWS services for each department, with no per-account automation or provisioning to
maintain.
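A sketch of creating and attaching such a policy with boto3 (the denied service and the OU ID are examples):

```python
import json
import boto3

org = boto3.client("organizations")

# Deny a service (example: Redshift) for every account under one department's OU.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "redshift:*",
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Block Amazon Redshift for one department",
    Name="DenyRedshift",
    Type="SERVICE_CONTROL_POLICY",
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",   # example OU ID
)
```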
A.Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B.Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-
bound traffic to the NAT gateway.
C.Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-
bound traffic to the internet gateway.
D.Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct
internet-bound traffic to the virtual private gateway.
Answer: B
Explanation:
A NAT gateway deployed in a public subnet lets instances in private subnets initiate outbound connections to
the internet while blocking unsolicited inbound traffic. Pointing the private subnets' 0.0.0.0/0 route at the NAT
gateway provides the required access.
Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
Answer: BD
Explanation:
D. Allow the Lambda execution role in the AWS KMS key policy.
A.Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B.Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3
Standard-IA) after 7 days.
C.Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-
Infrequent Access (S3 Standard-IA) and S3 Glacier.
D.Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Answer: A
Explanation:
A is correct because S3 Glacier Flexible Retrieval can restore files with standard retrievals in 3-5 hours. D is
incorrect because S3 Glacier Deep Archive needs at least 12 hours to retrieve files. B and C are more
expensive than A and D.
Answer: B
Explanation:
1. The key considerations: the company needs the flexibility to change EC2 instance types and families every
2-3 months, which rules out Reserved Instances because they lock you into an instance type and family for
1-3 years. A Compute Savings Plan allows switching instance types and families freely within the term as
needed, and No Upfront is more flexible than All Upfront because the company only pays for usage monthly
without an upfront payment. A 1-year term balances commitment and flexibility better than a 3-year term
given the company's changing needs.
2. "Needs to change the type and family of its EC2 instances" points to B.
Which solution will meet these requirements with the LEAST operational overhead?
A.Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B.Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon
S3.
C.Configure Amazon Inspector to analyze the data that is in Amazon S3.
D.Configure Amazon GuardDuty to analyze the data that is in Amazon S3.
Answer: A
Explanation:
Amazon Macie is the service purpose-built to discover and report sensitive data such as PII in Amazon S3, so
enabling Macie in each Region and creating a classification job for the buckets meets the requirement with
the least operational overhead.
A.Use the compute optimized instance family for the application. Use the memory optimized instance family for
the database.
B.Use the storage optimized instance family for both the application and the database.
C.Use the memory optimized instance family for both the application and the database.
D.Use the high performance computing (HPC) optimized instance family for the application. Use the memory
optimized instance family for the database.
Answer: C
Explanation:
Use the memory optimized instance family for both the application and the database.
A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and
the SQS queue.
A.Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets.
Add to the endpoint a security group that has an inbound access rule that allows traffic from the EC2 instances
that are in the private subnets.
B.Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets.
Attach to the interface endpoint a VPC endpoint policy that allows access from the EC2 instances that are in
the private subnets.
C.Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets.
Attach an Amazon SQS access policy to the interface VPC endpoint that allows requests from only a specified
VPC endpoint.
D.Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach an IAM
role to the EC2 instances that allows access to the SQS queue.
Answer: A
Explanation:
A is correct. B and C are invalid because the interface endpoint should be placed in the private subnets with
the EC2 instances, not in public subnets. D is invalid because gateway endpoints exist only for Amazon S3 and
DynamoDB, not for SQS.
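A sketch of creating the interface endpoint with boto3 (the VPC, subnet, and security group IDs and the Region are examples):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # example VPC ID
    ServiceName="com.amazonaws.us-east-1.sqs",     # SQS endpoint service in this Region
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],  # the private subnets with the EC2 instances
    SecurityGroupIds=["sg-0abc1234"],              # allows inbound 443 from the instances
    PrivateDnsEnabled=True,                        # sqs.<region>.amazonaws.com resolves privately
)
```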
Question: 556 CertyIQ
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web
application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB
tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly
accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials
in the template.
A.Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by
referencing an instance profile.
B.Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the
role to the EC2 instance profile, and associate the instance profile with the application instances.
C.Use the parameter section in the AWS CloudFormation template to have the user input access and secret
keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB
tables.
D.Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write
from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to
the application instances through the user data.
Answer: B
Explanation:
Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the
role to the EC2 instance profile, and associate the instance profile with the application instances.
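Outside of CloudFormation, the same wiring can be sketched with boto3 (names and the instance ID are hypothetical); in a template the equivalent resources are AWS::IAM::Role and AWS::IAM::InstanceProfile referenced from the instances.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Role that EC2 can assume, with DynamoDB access (scope the policy down in practice).
iam.create_role(
    RoleName="AppDynamoDBRole",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)
iam.attach_role_policy(
    RoleName="AppDynamoDBRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)

# The instance profile is the container that attaches the role to EC2 instances.
iam.create_instance_profile(InstanceProfileName="AppDynamoDBProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="AppDynamoDBProfile", RoleName="AppDynamoDBRole"
)

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "AppDynamoDBProfile"},
    InstanceId="i-0123456789abcdef0",   # example instance ID
)
```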
A.Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to enrich the S3
data.
B.Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to enrich the S3
data.
C.Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3 data into
Amazon Redshift so that the data can be enriched.
D.Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data to enrich the
S3 data.
Answer: A
Explanation:
https://aws.amazon.com/blogs/architecture/reduce-archive-cost-with-serverless-data-archiving/
A.Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit
gateway for inter-VPC communication.
B.Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use
the VPN tunnel for inter-VPC communication.
C.Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC
peering connection for inter-VPC communication.
D.Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use
the Direct Connect connection for inter-VPC communication.
Answer: C
Explanation:
C is the correct answer. VPC peering is the most cost-effective way to connect two VPCs within the same
Region and AWS account; there are no charges for VPC peering beyond standard data transfer rates. Transit
Gateway and Site-to-Site VPN add hourly and data-processing charges that are unnecessary here, and Direct
Connect provides dedicated physical connectivity that is overkill for the relatively low inter-VPC data transfer
needs described, with high fixed costs on top of data transfer rates. For occasional inter-VPC communication
of moderate data volumes within the same Region and account, VPC peering provides simple private
connectivity without extra charges or network appliances.
The company wants more details about the cost for each product line from the consolidated billing feature in
Organizations.
Answer: BE
Explanation:
"Only a management account in an organization and single accounts that aren't members of an organization
have access to the cost allocation tags manager in the Billing and Cost Management console.
"https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
Question: 560 CertyIQ
A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The
solutions architect has organized the company's accounts into organizational units (OUs).
The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also needs
to notify the company's operations team of any changes.
Which solution will meet these requirements with the LEAST operational overhead?
A.Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the
changes to the OU hierarchy.
B.Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to identify the
changes to the OU hierarchy.
C.Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization trail to
identify the changes to the OU hierarchy.
D.Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on
a stack to identify the changes to the OU hierarchy.
Answer: A
Explanation:
AWS Control Tower is a fully managed service that simplifies multi-account setup, and its built-in account
drift notifications detect changes to the OU hierarchy automatically. This is more scalable and less complex
than building detection with Config rules or CloudTrail, comes with stronger governance guardrails than
custom options, and carries the lowest operational overhead of the choices.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A.Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B.Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read
requests through Redis.
C.Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all
read requests through Memcached.
D.Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate
Amazon ElastiCache. Route all read requests through ElastiCache.
Answer: A
Explanation:
A, because B, C, and D rely on ElastiCache, which requires significant application code changes to manage
the cache. DAX is API-compatible with DynamoDB, so it adds in-memory caching with minimal code changes
and the least operational overhead.
Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
Answer: AB
Explanation:
You can access Amazon DynamoDB from your VPC using gateway VPC endpoints. After you create the
gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to
DynamoDB.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B.Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C.Use AWS Systems Manager to collect and view the cluster information.
D.Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes
commands.
Answer: B
Explanation:
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and
visualize it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and
workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in
Amazon EKS console, but you can't manage them.
A.Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt
the data. Use an IAM instance role to restrict access.
B.Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side
encryption to encrypt the data.
C.Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to
encrypt the data. Use S3 bucket policies to restrict access.
D.Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use
Windows file permissions to restrict access.
Answer: B
Explanation:
With client-side encryption, specific fields can be encrypted before they reach the database and decrypted only by clients that have access to the KMS key, so the sensitive data is protected even from database administrators.
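As a simple illustration of field-level client-side encryption, a sketch that encrypts one sensitive column value with AWS KMS before it is written to MySQL; the key ARN and field are hypothetical, and a production design would more likely use envelope encryption (GenerateDataKey) or the AWS Encryption SDK:

import base64
import boto3

kms = boto3.client("kms")
KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"  # placeholder

def encrypt_field(plaintext: str) -> str:
    """Encrypt a single sensitive field client-side before INSERTing into MySQL."""
    resp = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode("utf-8"))
    return base64.b64encode(resp["CiphertextBlob"]).decode("ascii")

def decrypt_field(ciphertext_b64: str) -> str:
    """Decrypt a field read back from the database (caller must have kms:Decrypt)."""
    resp = kms.decrypt(CiphertextBlob=base64.b64decode(ciphertext_b64))
    return resp["Plaintext"].decode("utf-8")

# The base64 ciphertext string is what gets stored in the RDS for MySQL column,
# so a database administrator never sees the plaintext value.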
A.Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage
scaling.
B.Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the
Amazon Redshift cluster.
C.Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora
Auto Scaling.
D.Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure
an Auto Scaling policy.
Answer: C
Explanation:
C is correct. A is incorrect because RDS for MySQL does not automatically scale compute during periods of increased demand. B is incorrect because Redshift is a data warehousing service, not a drop-in replacement for a MySQL transactional database. D is incorrect because moving to DynamoDB would require application code changes.
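For context, Aurora Auto Scaling adjusts the number of Aurora Replicas through Application Auto Scaling; a hedged sketch with a hypothetical cluster name and target value:

import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target (cluster name is a placeholder).
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Scale the replica count based on average reader CPU utilization.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)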
A.Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B.Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2
instance.
C.Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume.
Attach the EBS volume to all the EC2 instances.
D.Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2
instance. Synchronize the EBS volumes across the different EC2 instances.
Answer: B
Explanation:
1. The key reasons: EFS provides a scalable, high-performance NFS file system that can be concurrently accessed from multiple EC2 instances. It supports the hierarchical directory structure needed by the applications. EFS is elastic, growing and shrinking automatically as needed, and it can be accessed from instances across AZs, meeting the shared storage requirement. S3 object storage (option A) lacks the file system semantics the apps need. EBS volumes (options C and D) attach to a single instance and would require replication and syncing to share across instances. EFS is purpose-built for a shared file system across Linux instances and aligns best with the performance, concurrency, and availability needs.
2. Going with B.
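A minimal boto3 sketch of provisioning the shared EFS file system and one mount target per Availability Zone; the subnet and security group IDs are placeholders, and in practice you would wait for the file system to become available before creating mount targets, then mount it over NFS from each instance:

import boto3

efs = boto3.client("efs")

# Create the shared, elastic NFS file system.
fs = efs.create_file_system(
    CreationToken="shared-app-fs",      # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ/subnet so instances in every AZ can mount it (IDs are placeholders).
for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc3333"],
    )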
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data,
and store the data in an Amazon DynamoDB table.
B.Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive
and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
C.Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data,
and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
D.Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive
and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to
store the processed data.
Answer: A
Explanation:
The key reasons are:
• API Gateway removes the need to manage servers to receive the HTTP requests from sensors.
• Lambda functions provide a serverless compute layer to process data as needed.
• DynamoDB is a fully managed NoSQL database that scales automatically.
• This serverless architecture has minimal operational overhead to manage.
• Options B, C, and D all require managing EC2 instances, which increases ops workload.
• Option C also adds SQL Server admin tasks and licensing costs.
• Option D uses EFS file storage, which requires capacity planning and management.
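To make the serverless flow concrete, a hedged sketch of a Lambda handler that API Gateway could invoke to store a sensor reading in DynamoDB; the table name and payload fields are assumptions:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table with partition key "sensor_id"

def handler(event, context):
    """Invoked by API Gateway; stores one sensor reading in DynamoDB."""
    body = json.loads(event.get("body") or "{}")
    table.put_item(
        Item={
            "sensor_id": body["sensor_id"],
            "timestamp": body["timestamp"],
            "temperature": str(body.get("temperature")),
        }
    )
    return {"statusCode": 201, "body": json.dumps({"status": "stored"})}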
The application design must support caching to minimize the amount of time that users wait for the engineering
drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
Answer: A
Explanation:
The answer appears to be A. B is wrong because Glacier is for archiving, not low-latency access. C is doubtful because EBS does not scale to petabytes for a single shared store (not entirely sure about that). D is incorrect because all application components will be deployed on the AWS infrastructure.
Answer: A
Explanation:
Option A is the most appropriate solution because Amazon EventBridge publishes metrics to Amazon CloudWatch. The relevant metrics are in the AWS/Events namespace, which lets you monitor the number of events matched by the rule and the number of invocations of the rule's target.
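As a hedged illustration, a boto3 sketch that reads a rule's metrics from the AWS/Events namespace; the rule name is a placeholder, and the exact metric names to chart (for example MatchedEvents and Invocations) should be confirmed in the CloudWatch console:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Pull hourly datapoints for an EventBridge rule (rule name is a placeholder).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="MatchedEvents",
    Dimensions=[{"Name": "RuleName", "Value": "my-event-rule"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])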
Which solution will meet these requirements with the LEAST operational overhead?
Answer: B
Explanation:
1. The key reasons: Auto Scaling scheduled actions allow defining specific dates and times to scale out or in, so the group can scale to 6 instances every Friday evening automatically. Scheduled scaling removes the need for manual intervention to scale up or down for the workload. EventBridge reminders and manual scaling require human involvement each week, adding overhead. Dynamic scaling responds to demand and may not align perfectly with scaling out every Friday without additional tuning. Scheduled Auto Scaling actions provide the automation needed for the weekly workload without ongoing operational overhead.
2. The workload occurs in a predictable period, so schedule the instances.
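A minimal boto3 sketch of the scheduled actions that scale the group to 6 instances on Friday evening and back down afterwards; the group name, times (UTC), and baseline size are assumptions:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out to 6 instances every Friday at 18:00 UTC (names and sizes are placeholders).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="friday-scale-out",
    Recurrence="0 18 * * 5",
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
)

# Scale back in every Saturday at 02:00 UTC once the weekly workload finishes.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="saturday-scale-in",
    Recurrence="0 2 * * 6",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
)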
Question: 571 CertyIQ
A company is creating a REST API. The company has strict requirements for the use of TLS. The company requires
TLSv1.3 on the API endpoints. The company also requires a specific public third-party certificate authority (CA) to
sign the TLS certificate.
A.Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into AWS
Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway with a custom domain. Configure the
custom domain to use the certificate.
B.Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an HTTP
API in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
C.Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an AWS Lambda function with a Lambda function URL.
Configure the Lambda function URL to use the certificate.
D.Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an AWS
Lambda function with a Lambda function URL. Configure the Lambda function URL to use the certificate.
Answer: B
Explanation:
AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy SSL/TLS
certificates for use with AWS services and your internal resources. By creating a certificate in ACM that is
signed by the third-party CA, the company can meet its requirement for a specific public third-party CA to
sign the TLS certificate.
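For illustration, a sketch of importing a certificate signed by the third-party CA into ACM and attaching it to an API Gateway HTTP API custom domain; the file names and domain are hypothetical, and the minimum-TLS-version setting should be verified against the current API Gateway security policies:

import boto3

acm = boto3.client("acm")
apigw = boto3.client("apigatewayv2")

# Import the certificate that the third-party public CA signed (file names are placeholders).
with open("cert.pem", "rb") as cert, open("key.pem", "rb") as key, open("chain.pem", "rb") as chain:
    cert_arn = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )["CertificateArn"]

# Attach the certificate to a custom domain for the HTTP API.
apigw.create_domain_name(
    DomainName="api.example.com",
    DomainNameConfigurations=[
        {
            "CertificateArn": cert_arn,
            "EndpointType": "REGIONAL",
            "SecurityPolicy": "TLS_1_2",  # confirm the policy that enforces the required TLS version
        }
    ],
)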
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use
auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
A.Provision an Amazon DynamoDB database with default read and write capacity settings.
B.Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C.Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
D.Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Answer: C
Explanation:
C seems to be the right answer. Instead of provisioning and managing database servers, you specify Aurora capacity units (ACUs). Each ACU is a combination of approximately 2 gigabytes (GB) of memory, corresponding CPU, and networking. Database storage automatically scales from 10 gibibytes (GiB) to 128 tebibytes (TiB), the same as storage in a standard Aurora DB cluster.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v1.how-it-works.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html
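A hedged sketch of provisioning an Aurora Serverless v2 cluster with a 1-ACU floor; the identifiers, engine choice, scaling ceiling, and credentials handling are placeholders:

import boto3

rds = boto3.client("rds")

# Create the Aurora cluster with Serverless v2 scaling limits (identifiers are placeholders).
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-sv2",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # let RDS/Secrets Manager manage the password
    ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": 16.0},
)

# Serverless v2 capacity is delivered through instances that use the special db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-sv2-writer",
    DBClusterIdentifier="app-aurora-sv2",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)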
Question: 573 CertyIQ
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce
startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements
for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up.
Answer: D
Explanation:
1. https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
2. D is correct. Lambda SnapStart for Java can improve startup performance for latency-sensitive applications
by up to 10x at no extra cost, typically with no changes to your function code. The largest contributor to
startup latency (often referred to as cold start time) is the time that Lambda spends initializing the function,
which includes loading the function's code, starting the runtime, and initializing the function code. With
SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker
microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the
snapshot, and caches it for low-latency access. When you invoke the function version for the first time, and as
the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead of
initializing them from scratch, improving startup latency.
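To show where SnapStart is switched on, a hedged boto3 sketch for an existing Java 11 function; the function name is a placeholder, and SnapStart takes effect only on published versions:

import boto3

lam = boto3.client("lambda")

# Enable SnapStart for published versions of a Java function (function name is a placeholder).
lam.update_function_configuration(
    FunctionName="orders-java-handler",
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publish a version; Lambda snapshots the initialized execution environment at this point,
# and invocations of the version resume from the cached snapshot.
version = lam.publish_version(FunctionName="orders-java-handler")
print(version["Version"], version.get("SnapStart"))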
A.Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database cluster.
B.Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C.Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL. Purchase an
instance reservation for the EC2 instance.
D.Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon ECS) cluster
that uses MySQL container images to run tasks.
Answer: B
Explanation:
B seems to be the correct answer. With a predictable workload, a provisioned Aurora cluster is the most cost-effective choice; with an unpredictable workload, Aurora Serverless would be more cost-effective because the database scales up and down with demand. For more information, read this article:
https://medium.com/trackit/aurora-or-aurora-serverless-v2-which-is-more-cost-effective-bcd12e172dcf
Question: 575 CertyIQ
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application
Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The
company wants the data in the database to be highly available. The company also needs increased capacity for
read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
Answer: C
Explanation:
1. RDS Multi-AZ DB cluster deployments provide high availability, automatic failover, and increased read capacity. A Multi-AZ DB cluster automatically replicates data across AZs within a single Region, which keeps operational efficiency high because replication is natively managed by RDS without external tooling. DynamoDB global tables involve complex provisioning and require application changes. RDS read replicas require manual setup and management of replication. RDS Multi-AZ DB clusters are purpose-built by AWS for highly available PostgreSQL deployments and for spreading read workloads.
2. Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower write latency when compared to Multi-AZ DB instance deployments.
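For reference, a hedged sketch of creating an RDS for PostgreSQL Multi-AZ DB cluster (one writer plus two readable standbys); the identifiers, instance class, and storage settings are assumptions, and the exact supported combinations should be checked in the RDS documentation:

import boto3

rds = boto3.client("rds")

# Multi-AZ DB cluster: one writer and two readable standby instances across three AZs.
rds.create_db_cluster(
    DBClusterIdentifier="app-postgres-mazcluster",
    Engine="postgres",
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,
    DBClusterInstanceClass="db.m6gd.large",   # the instance class applies to all cluster members
    StorageType="io1",
    Iops=3000,
    AllocatedStorage=200,
)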
Which type of endpoint should a solutions architect use to meet these requirements?
A.Private endpoint
B.Regional endpoint
C.Interface VPC endpoint
D.Edge-optimized endpoint
Answer: D
Explanation:
The correct answer is D. API Gateway endpoint types:
• Edge-Optimized (default): for global clients. Requests are routed through CloudFront edge locations, which improves latency. The API Gateway itself still lives in only one Region.
• Regional: for clients within the same Region. Can be manually combined with CloudFront for more control over caching strategies and distribution.
• Private: can only be accessed from your VPC through an interface VPC endpoint (ENI). Use a resource policy to define access.
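As a small illustration, a boto3 sketch of creating a REST API with an edge-optimized endpoint type (the API name is a placeholder):

import boto3

apigateway = boto3.client("apigateway")

# Edge-optimized is the default REST API endpoint type; stating it explicitly for clarity.
api = apigateway.create_rest_api(
    name="global-clients-api",
    endpointConfiguration={"types": ["EDGE"]},
)
print(api["id"])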
Which solution will meet these requirements with the MOST operational efficiency?
Answer: C
Explanation:
C seems to be correct
Which solution will meet these requirements with the LEAST operational overhead?
Answer: A
Explanation:
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon
DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—
even at millions of requests per second.
https://aws.amazon.com/dynamodb/dax/
A.Use the Instance Scheduler on AWS to configure start and stop schedules.
B.Turn off automatic backups. Create weekly manual snapshots of the database.
C.Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
D.Purchase All Upfront reserved DB instances.
Answer: A
Explanation:
1. A: https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
2. The Instance Scheduler does the job of starting and stopping the database on a defined schedule.
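For context, the Instance Scheduler on AWS solution associates resources with schedules through a tag (by default a tag key of Schedule); a hedged sketch of tagging an RDS instance so an assumed "office-hours" schedule controls it:

import boto3

rds = boto3.client("rds")

# Tag the database so the Instance Scheduler starts/stops it per the named schedule
# ("Schedule" is the solution's default tag key; the ARN and schedule name are placeholders).
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:123456789012:db:dev-reporting-db",
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)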
A.Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre file system to
run the application.
B.Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP2
volume to run the application.
C.Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for OpenZFS file system
to run the application.
D.Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP3
volume to run the application.
Answer: D
Explanation:
Migrate your Amazon EBS volumes from gp2 to gp3 and save up to 20% on costs.
My rationale: options A and C are based on an Auto Scaling group, which does not make sense for this scenario. That leaves Amazon EBS, and the choice is between gp2 and gp3. Because the requirement is the most cost-effective solution, gp3 is the better choice.
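A gp2 volume can typically be migrated in place; a minimal boto3 sketch with a placeholder volume ID:

import boto3

ec2 = boto3.client("ec2")

# Modify an existing gp2 volume to gp3 in place (volume ID is a placeholder);
# gp3 decouples IOPS/throughput from size and is generally cheaper per GiB.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="gp3",
)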
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The
solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
A.Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability
Zone and one On-Demand Instance in a second Availability Zone.
B.Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability
Zone and two On-Demand Instances in a second Availability Zone.
C.Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D.Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability
Zone and two Spot Instances in a second Availability Zone.
Answer: B
Explanation:
1. By setting the Auto Scaling group's minimum capacity to four and deploying two On-Demand Instances in each of two Availability Zones, the architect ensures that the application is highly available and fault tolerant: if one Availability Zone becomes unavailable, at least two instances keep running in the other Availability Zone.
2. While Spot Instances can reduce costs, they do not provide the same level of availability and guaranteed uptime as On-Demand Instances, so B is the answer rather than D.
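A hedged sketch of the Auto Scaling group settings this answer describes; the launch template and the two subnets (one per Availability Zone) are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Minimum of four On-Demand instances spread across two AZs (IDs are placeholders),
# so an AZ failure still leaves two running instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ha-web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
    MinSize=4,
    MaxSize=8,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)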
A.Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data center.
Send the traffic that is near eu-central-1 to eu-central-1.
B.Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and routes all
traffic that is near the on-premises datacenter to the on-premises data center.
C.Set up a latency routing policy. Associate the policy with us-west-1.
D.Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises data
center.
Answer: A
Explanation:
The key reasons: geolocation routing lets you route users to an endpoint based on their geographic location. Routing traffic that originates near us-west-1 to the on-premises data center minimizes latency for those users because the data center is located in that area, and routing traffic that originates near eu-central-1 to the eu-central-1 Region minimizes latency for users there. This sends users to the closest endpoint on a geographic basis, optimizing for low latency.
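For illustration, a hedged sketch of the two geolocation records; the hosted zone ID, domain, continent codes, and endpoint values are assumptions:

import boto3

route53 = boto3.client("route53")

# Two geolocation records for the same name: European users go to eu-central-1,
# North American users go to the on-premises data center (values are placeholders).
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "europe-users",
                    "GeoLocation": {"ContinentCode": "EU"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "alb-eu-central-1.example.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "north-america-users",
                    "GeoLocation": {"ContinentCode": "NA"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "onprem-gw.example.com"}],
                },
            },
        ]
    },
)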
A.Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS DataSync to
migrate the data to Amazon S3 Glacier Flexible Retrieval.
B.Use an on-premises backup application to read the data from the tapes and to write directly to Amazon S3
Glacier Deep Archive.
C.Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual tapes in
Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes to Amazon S3 Glacier
Deep Archive.
D.Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup software to
copy the physical tape to the virtual tape.
Answer: C
Explanation: