Question #1 Topic 1
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data
that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must
minimize operational complexity.
Which solution meets these requirements?
A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3
bucket.
B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3
bucket. Then remove the data from the origin S3 bucket.
C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-
Region Replication to copy objects to the destination S3 bucket.
D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon
EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS
volume in that Region.
Correct Answer: A
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket.
Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing
architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Correct Answer: C
Amazon Athena is a serverless interactive query service that allows you to analyze data directly from Amazon S3 using standard SQL
queries. It eliminates the need for infrastructure provisioning or data loading, making it a low-overhead solution.
Overall, Amazon Athena offers a straightforward and efficient solution for analyzing log files stored in JSON format, ensuring minimal
operational overhead and compatibility with simple on-demand queries.
upvoted 1 times
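As an illustration of option C, here is a minimal boto3 sketch of an on-demand Athena query over JSON logs in S3. The database name, table name, and results bucket are placeholders, and it assumes a table has already been defined over the logs.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Assumes a table (e.g. app_logs) is already defined over the JSON logs,
# for example via CREATE EXTERNAL TABLE with the JSON SerDe.
query = "SELECT status, COUNT(*) AS hits FROM app_logs WHERE status >= 500 GROUP BY status"

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logs_db"},                       # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if status == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=run["QueryExecutionId"])["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```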
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3
bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in
AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization
events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Correct Answer: A
aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to
listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an
organization, you can specify the organization ID in the Condition element.
aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The
aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified
organization path. A path is a text representation of the structure of an AWS Organizations entity.
upvoted 13 times
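For illustration, a minimal boto3 sketch of an S3 bucket policy that uses aws:PrincipalOrgID (option A); the bucket name and organization ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "project-reports-bucket"   # placeholder bucket name
org_id = "o-exampleorgid"           # placeholder organization ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Only principals from accounts in this organization match.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```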
It restricts access to the S3 bucket based on the organization ID, ensuring that only users within the organization can access the bucket.
This method is suitable if you want to restrict access at the organization level rather than individual departments or organizational units.
The operational overhead for Option A is also relatively low since it involves adding a global condition key to the S3 bucket policy. However,
it is important to note that the organization ID must be accurately configured in the bucket policy to ensure the desired access control is
enforced.
In summary, Option A is a valid solution with minimal operational overhead that can limit access to the S3 bucket to users within the
organization using the aws:PrincipalOrgID global condition key.
upvoted 1 times
The correct solution that meets these requirements with the least amount of operational overhead is Option A: Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
Option A involves adding the aws:PrincipalOrgID global condition key to the S3 bucket policy, which allows you to specify the organization
ID of the accounts that you want to grant access to the bucket. By adding this condition to the policy, you can limit access to the bucket to
only users of accounts within the organization.
upvoted 4 times
Option C involves using AWS CloudTrail to monitor certain events and updating the S3 bucket policy accordingly. While this option
could potentially work, it would require ongoing monitoring and updates to the policy, which could increase operational overhead.
upvoted 2 times
Overall, Option A is the most straightforward and least operationally complex solution for limiting access to the S3 bucket to only
users of accounts within the organization.
upvoted 1 times
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2
instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
A. Create a gateway VPC endpoint to the S3 bucket.
B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
C. Create an instance profile on Amazon EC2 to allow S3 access.
D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Correct Answer: A
A: Correct - A gateway VPC endpoint connects the VPC to the S3 bucket privately at no additional cost.
B: Incorrect - You can set up an interface VPC endpoint for CloudWatch Logs to get a private path from EC2 to CloudWatch, but from
CloudWatch to the S3 bucket, log data can take up to 12 hours to become available for export, and the requirement is only for EC2 to reach S3.
C: Incorrect - An instance profile only grants permissions; it does not give the EC2 instance a private network path to S3.
D: Incorrect - API Gateway acts as a proxy that receives requests from outside and forwards them to AWS Lambda, Amazon EC2, Elastic
Load Balancing products such as Application Load Balancers or Classic Load Balancers, Amazon DynamoDB, Amazon Kinesis, or any
publicly available HTTPS-based endpoint, but not to S3.
upvoted 23 times
https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html#:~:text=A-,VPC%20endpoint,-enables%20customers%20to
upvoted 2 times
A gateway VPC endpoint is a private way to connect to AWS services without using the internet. This is the best solution for the given
scenario because it will allow the EC2 instance to access the S3 bucket without any internet connectivity
upvoted 1 times
Gateway VPC Endpoint: A gateway VPC endpoint allows you to privately connect your VPC to supported AWS services. By creating a
gateway VPC endpoint for S3, you can establish a private connection between your VPC and the S3 service without requiring internet
connectivity.
Private network connectivity: The gateway VPC endpoint for S3 enables your EC2 instance within the VPC to access the S3 bucket over the
private network, ensuring secure and direct communication between the EC2 instance and S3.
No internet connectivity required: Since the requirement is to access the S3 bucket without internet connectivity, the gateway VPC
endpoint provides a private and direct connection to S3 without needing to route traffic through the internet.
Minimal operational complexity: Setting up a gateway VPC endpoint is a straightforward process. It involves creating the endpoint and
configuring the appropriate routing in the VPC. This solution minimizes operational complexity while providing the required private
network connectivity.
upvoted 2 times
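A minimal boto3 sketch of creating a gateway VPC endpoint for S3; the VPC and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",   # S3 service name for the Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],    # routes to S3 are added to these tables
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```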
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS
volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in
another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they
refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents
B. Configure the Application Load Balancer to direct a user to the server with the documents
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server
Correct Answer: C
The current architecture is using two separate EBS volumes, one for each EC2 instance. This means that each instance only has a subset of
the documents. When a user refreshes the website, the Application Load Balancer will randomly direct them to one of the two instances. If
the user's documents are not on the instance that they are directed to, they will not be able to see them.
upvoted 1 times
Solution: Amazon Elastic File System, see https://aws.amazon.com/efs/ . "Amazon EFS file system creation, mounting, and settings"
https://www.youtube.com/watch?v=Aux37Nwe5nc . "Amazon EFS overview" https://www.youtube.com/watch?v=vAV4ASDnbN0 .
upvoted 1 times
In summary, Option C, which involves copying the data to Amazon EFS and modifying the application to use Amazon EFS for document
storage, is the most appropriate solution to ensure users can see all their documents at once in the duplicated architecture. Amazon EFS
provides scalability, availability, and shared access, allowing both EC2 instances to access and synchronize the documents seamlessly.
upvoted 3 times
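A hedged boto3 sketch of creating the shared Amazon EFS file system described in option C; the creation token, subnet IDs, and security group are placeholders.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create the shared file system.
fs = efs.create_file_system(
    CreationToken="documents-efs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone so both EC2 instances can mount it.
# In practice, wait until the file system's LifeCycleState is "available"
# before creating mount targets.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
    )
```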
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The
total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the
video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3
bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the
device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a
new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the
S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a
public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point
the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Correct Answer: C
B. On a Snowball Edge device you can copy files with a speed of up to 100Gbps. 70TB will take around 5600 seconds, so very quickly, less
than 2 hours. The downside is that it'll take between 4-6 working days to receive the device and then another 2-3 working days to send it
back and for AWS to move the data onto S3 once it reaches them. Total time: 6-9 working days. Bandwidth used: 0.
C. File Gateway uses the Internet, so maximum speed will be at most 1Gbps, so it'll take a minimum of 6.5 days and you use 70TB of
Internet bandwidth.
D. You can achieve speeds of up to 10Gbps with Direct Connect. Total time 15.5 hours and you will use 70TB of bandwidth. However, what's
interesting is that the question does not specify what type of bandwidth is meant. Direct Connect does not use your Internet bandwidth, as you
will have a dedicated peer-to-peer connection between your on-premises network and the AWS Cloud, so technically you're not using your
"public" bandwidth.
The requirements are a bit too vague, but I think that B is the most appropriate answer, although D might also be correct if the bandwidth
usage refers strictly to your public connectivity.
upvoted 55 times
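A small Python sketch of the transfer-time arithmetic in the comment above, under the same idealized assumption that each link runs at full speed for the whole transfer:

```python
# Rough transfer-time comparison for 70 TB over different links.
DATA_TB = 70
data_gbits = DATA_TB * 8 * 1000          # 560,000 gigabits

for label, gbps in [("Snowball Edge copy (local)", 100),
                    ("Internet / File Gateway", 1),
                    ("Direct Connect", 10)]:
    seconds = data_gbits / gbps
    print(f"{label:28s}: {seconds / 3600:7.1f} hours ({seconds / 86400:.1f} days)")
```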
Data: 70 TB
= 70 TB × 8 b/B = 560 Tb
= 560 Tb × 1,000 Gb/Tb = 560,000 Gb
Speed: 100 Gb/s
Time = Data / Speed = 560,000 Gb / 100 Gb/s = 5,600 s
Time = 5,600 s / 3,600 s/hour ≈ 1.5 hours (assuming the maximum speed is sustained)
upvoted 2 times
AWS Snowball can transfer up to 80 TB per device without using network bandwidth, and data can be copied to the device at up to 100 Gbps.
upvoted 1 times
AWS Snowball Edge provides a physical storage device to transfer large local datasets to Amazon S3. This avoids using network bandwidth
for the data transfer.
Snowball Edge can transfer up to 80TB per device, so a single Snowball Edge job can handle the 70TB dataset.
The client transfers data to the Snowball Edge device on-premises, avoiding the need to copy the data over the network.
When AWS receives the device back, the data is imported to the S3 bucket.
This achieves fast data migration without using network bandwidth.
Option A would consume large amounts of network bandwidth for the data transfer.
Options C and D use S3 File Gateway, which still requires the data to be sent over the network to S3. This does not meet the goal of
minimizing network bandwidth.
So Option B with Snowball Edge is the right approach to migrate the large dataset quickly while using minimal network bandwidth.
upvoted 2 times
This solution is the most efficient way to migrate the video files to Amazon S3. The Snowball Edge device can transfer data at up to 100
Gbps, which is much faster than the company's current network bandwidth. The Snowball Edge device is also a secure way to transfer
data, as it is encrypted at rest and in transit.
upvoted 2 times
A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these
messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to
decouple the solution and increase scalability.
Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU
metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store
them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon
SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
Correct Answer: A
It's important to note that the maximum number of messages per second that a queue can handle is not the same as the maximum
number of requests per second that the SQS API can handle. The SQS API is designed to handle a high volume of requests per second,
so it can be used to send messages to your queue at a rate that exceeds the maximum message throughput of the queue.
upvoted 7 times
This solution uses Amazon SNS and SQS to publish and subscribe to messages respectively, which decouples the system and enables
scalability by allowing multiple consumer applications to process the messages in parallel. Additionally, using Amazon SQS with
multiple subscriptions can provide increased resiliency by allowing multiple copies of the same message to be processed in parallel.
upvoted 5 times
Amazon SNS allows you to publish messages to a topic, which can then fan out those messages to multiple subscribers.
By using Amazon SQS as a subscriber to the SNS topic, you can handle the message load in a decoupled and scalable way. SQS can store
messages until the consuming application is ready to process them, helping to smooth out the variance in message load.
This approach allows the company to effectively decouple the message producing applications from the consuming applications, and it
can easily scale to handle the high load of messages.
The number of messages (100,000 each second) might require careful configuration and sharding of SQS queues or use of FIFO queues to
ensure that they can handle the load.
Options A, B, and C have their own limitations:
upvoted 1 times
M0SHE 5 days, 14 hours ago
Selected Answer: D
D is the right answer
upvoted 1 times
Publishing messages to an SNS topic with multiple SQS subscriptions is a common AWS pattern for achieving both decoupling and
scalability in message-driven systems. SNS allows messages to be fanned out to multiple subscribers, which in this case would be SQS
queues. Each consumer application could then process messages from its SQS queue at its own pace, providing scalability and ensuring
that all messages are processed by all consumer applications.
A. Amazon Kinesis Data Analytics is primarily used for real-time analysis of streaming data. It's not designed to distribute messages to
multiple consumers.
upvoted 2 times
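A minimal boto3 sketch of the SNS fan-out to SQS pattern in option D; topic and queue names are placeholders, and the queue policy shown is required so SNS can deliver to each queue.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="orders-ingest")["TopicArn"]

# One queue per consumer application (names are placeholders).
for name in ["billing-consumer", "analytics-consumer"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Allow the topic to deliver messages to this queue.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "sns.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
            }],
        })},
    )

    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Every published message is fanned out to all subscribed queues.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"id": 1, "payload": "..."}))
```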
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary
server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes
resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the
compute nodes.
Correct Answer: C
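Option B describes scaling the compute nodes on the size of the SQS queue. As a hedged illustration, a minimal boto3 sketch of a target tracking policy on queue depth; the group name, queue name, and target value are placeholders, and AWS also documents a more precise backlog-per-instance custom metric for this pattern.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # aim for roughly 100 queued jobs
    },
)
```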
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after
the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available
storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle
management to avoid future storage issues.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier
Deep Archive after 7 days.
C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible
Retrieval after 7 days.
Correct Answer: D
D is wrong because the "utility" on each user's computer is vague, and there is no need for Flexible Retrieval storage.
upvoted 40 times
Both B and D can make more storage space available by sending data to S3 Glacier after 7 days, but with B you would lose low-latency access to the
most recently accessed files.
upvoted 1 times
Transition actions – These actions define when objects transition to another storage class. For example, you might choose to transition
objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier Flexible Retrieval storage class
one year after creating them. For more information, see Using Amazon S3 storage classes.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3
Glacier Deep Archive after 7 days.
upvoted 1 times
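A minimal boto3 sketch of the S3 Lifecycle rule described in option B; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-gateway-bucket",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to the whole bucket
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }],
    },
)
```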
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway
REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application
receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application
receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
C. Use an API Gateway authorizer to block any requests while the application processes an order.
D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the
application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Correct Answer: A
https://youtube.com/shorts/Je_Zc_qWoYE?feature=share
What is API gateway?
https://youtube.com/shorts/1IGqAHgpqEo?feature=share
upvoted 2 times
benacert 4 weeks ago
B it is..
upvoted 1 times
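As an illustration of option B, a minimal boto3 sketch of creating an SQS FIFO queue and sending an ordered message; the queue name and payload are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

# FIFO queue: the name must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages with the same MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"orderId": "1001", "items": ["sku-1"]}),
    MessageGroupId="orders",
)
```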
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the
database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of
credential management.
What should a solutions architect do to accomplish this goal?
C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate
the credential file to the S3 bucket. Point the application to the S3 bucket.
D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2
instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Correct Answer: B
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-secrets-manager.html
upvoted 1 times
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html
upvoted 1 times
Question #12 Topic 1
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static
data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce
latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the
CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that
has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the
custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the
other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Correct Answer: C
A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around
the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API
acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by
proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases,
such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or
deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
upvoted 21 times
CloudFront with Multiple Origins: CloudFront allows you to set up multiple origins for your distribution, so you can use both the ALB (for
dynamic content) and the S3 bucket (for static content) as origins. This means that both your dynamic and static content can be served
through CloudFront, which will cache content at edge locations to reduce latency.
Route 53 Integration with CloudFront: Amazon Route 53 can be easily configured to route traffic for your domain to a CloudFront
distribution. Users will access your domain, and Route 53 will direct them to the nearest CloudFront edge location.
upvoted 1 times
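A minimal boto3 sketch of pointing the Route 53 domain at the CloudFront distribution with an alias record; the hosted zone ID, domain, and distribution domain name are placeholders (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront aliases).

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # the Route 53 hosted zone for the domain
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",            # CloudFront alias zone
                    "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```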
https://repost.aws/knowledge-center/cloudfront-distribution-serve-content
upvoted 1 times
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the
credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets
Manager to rotate the secrets on a schedule.
B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the
required Regions. Configure Systems Manager to rotate the secrets on a schedule.
C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon
CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the
secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate
the secrets.
Correct Answer: A
This solution is the best option for meeting the requirements with the least operational overhead. AWS Secrets Manager is designed
specifically for managing and rotating secrets like database credentials. Using multi-Region secret replication, you can easily replicate the
secrets across the required AWS Regions. Additionally, Secrets Manager allows you to configure automatic secret rotation on a schedule,
further reducing the operational overhead.
upvoted 1 times
AWS Secrets Manager allows you to store, manage, and rotate secrets, such as database credentials, across multiple AWS Regions. By
enabling multi-Region secret replication, you can replicate the secrets across the required Regions to allow for seamless rotation of the
credentials during maintenance activities. Additionally, Secrets Manager provides automatic rotation of secrets on a schedule, which
would minimize the operational overhead of rotating the credentials on a monthly basis.
upvoted 2 times
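A minimal boto3 sketch of option A: create the secret with multi-Region replicas and turn on scheduled rotation. The secret name, Regions, credentials, and rotation Lambda ARN are placeholders.

```python
import json
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")

# Create the secret in the primary Region and replicate it to other Regions.
secret = sm.create_secret(
    Name="prod/mysql/credentials",
    SecretString=json.dumps({"username": "admin", "password": "CHANGE_ME"}),
    AddReplicaRegions=[{"Region": "eu-west-1"}, {"Region": "ap-southeast-1"}],
)

# Rotate on a schedule using a rotation Lambda function.
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-mysql-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```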
Option D, encrypting the credentials as secrets using KMS multi-Region customer managed keys and storing the secrets in a
DynamoDB global table, would not provide automatic rotation of secrets on a schedule and would require additional operational
overhead to retrieve the secrets from DynamoDB and use the RDS API to rotate the secrets.
upvoted 2 times
Zerotn3 9 months, 1 week ago
vote A !
upvoted 1 times
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2
Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce
application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions.
The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining
high availability.
Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Correct Answer: C
Aurora is a fully managed, MySQL-compatible relational database that is designed for high performance and high availability. Aurora
Multi-AZ deployments automatically maintain a synchronous standby replica in a different Availability Zone to provide high availability.
Additionally, Aurora Auto Scaling allows you to automatically scale the number of Aurora Replicas in response to read workloads, allowing
you to meet the demand of unpredictable read workloads while maintaining high availability. This would provide an automated solution
for scaling the database to meet the demand of the application while maintaining high availability.
upvoted 9 times
Option B, using Amazon RDS with a Single-AZ deployment and configuring RDS to add reader instances in a different Availability Zone,
would not provide high availability and would not automatically scale the number of reader instances in response to read workloads.
Option D, using Amazon ElastiCache for Memcached with EC2 Spot Instances, would not provide a database solution and would not
meet the requirements.
upvoted 2 times
A: Incorrect - Amazon Redshift uses columnar block storage, which is useful for data analytics and warehousing.
There are also issues when migrating from MySQL to Redshift (stored procedures, triggers, ...), and a single leader node does not maintain high
availability.
B: Incorrect - The requirement says to "automatically scale the database to meet the demand of unpredictable read workloads" ->
auto scaling is missing.
C: Correct - It resolves both the high availability and the auto scaling issues.
D: Incorrect - Instances that can be stopped do not maintain high availability.
Aurora Auto Scaling allows you to add or remove Aurora Replicas based on CPU utilization, connections, or custom metrics. This enables
you to automatically scale the read capacity of the database in response to application load. Aurora Replicas are read-only instances that
can offload read traffic from the primary instance. They are kept in sync with the primary instance using Aurora's distributed storage
architecture, which enables low-latency updates across the replicas.
upvoted 1 times
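A hedged boto3 sketch of Aurora Auto Scaling for Aurora Replicas (option C) using Application Auto Scaling; the cluster identifier, capacity limits, and target value are placeholders.

```python
import boto3

aas = boto3.client("application-autoscaling")

resource_id = "cluster:my-aurora-cluster"   # placeholder Aurora cluster identifier

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

aas.put_scaling_policy(
    PolicyName="aurora-reader-scaling",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
        },
        "TargetValue": 60.0,   # add/remove Aurora Replicas around 60% reader CPU
    },
)
```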
Question #15 Topic 1
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The
company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow
inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Correct Answer: C
AWS Network Firewall is a managed firewall service that provides filtering for both inbound and outbound network traffic. It allows you to
create rules for traffic inspection and filtering, which can help protect your production VPC.
Option A: Amazon GuardDuty is a threat detection service, not a traffic inspection or filtering service.
Option B: Traffic Mirroring is a feature that allows you to replicate and send a copy of network traffic from a VPC to another VPC or on-
premises location. It is not a service that performs traffic inspection or filtering.
Option D: AWS Firewall Manager is a security management service that helps you to centrally configure and manage firewalls across your
accounts. It is not a service that performs traffic inspection or filtering.
upvoted 48 times
In the context of the given scenario, AWS Network Firewall can be a suitable choice if the company wants to implement traffic inspection
and filtering directly within the VPC without the need for traffic mirroring. It provides an additional layer of security by enforcing specific
rules for traffic filtering, which can help protect the production environment.
upvoted 2 times
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a
reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team
should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate IAM roles.
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate users and groups.
C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce
reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS
for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the
reports.
Correct Answer: D
A - Incorrect: Amazon QuickSight only supports users (Standard edition) and groups (Enterprise edition). These users and groups exist only
within QuickSight; QuickSight does not share dashboards with IAM roles. We use users and groups to view the QuickSight dashboard.
B - Correct: As explained in answer A, and QuickSight is used to create dashboards from S3, RDS, Redshift, Aurora, Athena, OpenSearch,
Timestream.
C - Incorrect: This approach doesn't support visualization and doesn't mention how to process the RDS data.
D - Incorrect: This approach doesn't support visualization and doesn't mention how to combine the RDS and S3 data.
upvoted 24 times
Option B involves using Amazon QuickSight, which is a business intelligence tool provided by AWS for data visualization and reporting.
With this option, you can connect all the data sources within the data lake, including Amazon S3 and Amazon RDS for PostgreSQL. You can
create datasets within QuickSight that pull data from these sources.
The solution allows you to publish dashboards in Amazon QuickSight, which will provide the required data visualization capabilities. To
control access, you can use appropriate IAM (Identity and Access Management) roles, assigning full access only to the company's
management team and limiting access for the rest of the company. You can share the dashboards selectively with the users and groups
that need access.
upvoted 1 times
Answer keyword "Amazon QuickSight", "share the dashboards with the appropriate users and groups". Choose B.
upvoted 1 times
https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html
upvoted 1 times
Amazon QuickSight is a business intelligence (BI) tool provided by AWS that allows you to create interactive dashboards and reports. It
supports a variety of data sources, including Amazon S3 and Amazon RDS for PostgreSQL, which are the data sources in the company's
data lake.
Option A (Create an analysis in Amazon QuickSight and share with IAM roles) is incorrect because it suggests sharing with IAM roles,
which are more suitable for managing access to AWS resources rather than granting access to specific users or groups within QuickSight.
upvoted 3 times
Open the published dashboard and choose Share at upper right. Then choose Share dashboard.
In the Share dashboard page that opens, under Manage permissions, review the users and groups, and their roles and settings.
You can search to locate a specific user or group by entering their name or any part of their name in the search box at upper right.
Searching is case-sensitive, and wildcards aren't supported. Delete the search term to return the view to all users.
upvoted 1 times
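For illustration, a minimal boto3 sketch of the API equivalent of the console steps above, granting a QuickSight group read-only access to a dashboard; the account ID, dashboard ID, and group name are placeholders.

```python
import boto3

qs = boto3.client("quicksight", region_name="us-east-1")
account_id = "123456789012"   # placeholder AWS account ID

# Read-only access for a QuickSight group (e.g. the rest of the company).
group_arn = f"arn:aws:quicksight:us-east-1:{account_id}:group/default/company-readers"

qs.update_dashboard_permissions(
    AwsAccountId=account_id,
    DashboardId="data-lake-dashboard",
    GrantPermissions=[{
        "Principal": group_arn,
        "Actions": [
            "quicksight:DescribeDashboard",
            "quicksight:ListDashboardVersions",
            "quicksight:QueryDashboard",
        ],
    }],
)
```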
A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for
document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?
A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Correct Answer: A
An IAM role is an AWS resource that allows you to delegate access to AWS resources and services. You can create an IAM role that grants
access to the S3 bucket and then attach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the
documents stored within it.
Option B is incorrect because an IAM policy is used to define permissions for an IAM user or group, not for an EC2 instance.
Option C is incorrect because an IAM group is used to group together IAM users and policies, not to grant access to resources.
Option D is incorrect because an IAM user is used to represent a person or service that interacts with AWS resources, not to grant access
to resources.
upvoted 39 times
Option C is not the most appropriate choice because IAM groups are used to manage collections of IAM users and their permissions,
rather than granting access to specific resources like S3 buckets.
Option D is not the optimal solution because IAM users are intended for individual user accounts and are not the recommended approach
for granting access to resources within EC2 instances.
upvoted 3 times
big0007 4 months, 2 weeks ago
IAM Roles manage who/what has access to your AWS resources, whereas IAM policies control their permissions.
Therefore, a Policy alone is useless without an active IAM Role or IAM User.
upvoted 1 times
A: Correct - An IAM role is used to grant access for AWS services like EC2, Lambda, etc.
B: Incorrect - An IAM policy only defines permissions; it is attached to users, groups, or roles and cannot be attached to an EC2 instance (an AWS service).
C: Incorrect - An IAM group is used to group permissions and attach them to a list of users.
D: Incorrect - To make this work on EC2 we would need an access key and secret access key, not a user account. Even using a user's access key and
secret access key is not recommended, because anyone who can access the EC2 instance can get the access key and secret access key
and obtain all of the owner's permissions. The secure way is an IAM role, where we grant the EC2 instance just enough permissions.
upvoted 4 times
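A minimal boto3 sketch of option A: create the role with an EC2 trust policy, scope it to the bucket, and attach it to the instance through an instance profile. All names and IDs are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy that lets EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-s3-access", AssumeRolePolicyDocument=json.dumps(trust))

# Inline policy scoped to the document bucket.
bucket = "app-documents-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}
iam.put_role_policy(RoleName="app-s3-access", PolicyName="s3-documents",
                    PolicyDocument=json.dumps(policy))

# EC2 attaches roles via an instance profile.
iam.create_instance_profile(InstanceProfileName="app-s3-access")
iam.add_role_to_instance_profile(InstanceProfileName="app-s3-access",
                                 RoleName="app-s3-access")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-s3-access"},
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
)
```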
An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads
an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an
AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an
image is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS
message is successfully processed, delete the message in the queue.
C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text
file in memory and use the text file to keep track of the images that were processed.
D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue,
log the file name in a text file on the EC2 instance and invoke the Lambda function.
E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert
to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.
Correct Answer: AB
Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded.
This allows the application to decouple the image upload process from the image processing process and ensures that the image
processing process is triggered automatically when a new image is uploaded.
Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully
processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the
image is not processed multiple times.
upvoted 17 times
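A minimal boto3 sketch of options A and B: S3 notifies the SQS queue on each upload, and the queue is configured as the Lambda function's event source. Bucket, queue, and function names are placeholders, and the queue policy must already allow S3 to send messages.

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# S3 sends an event to the queue on every object upload.
s3.put_bucket_notification_configuration(
    Bucket="uploaded-images-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)

# The Lambda service polls the queue; messages processed without error
# are deleted from the queue automatically.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:image-uploads",
    FunctionName="compress-image",
    BatchSize=10,
)
```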
Option D is incorrect because it involves launching an EC2 instance to monitor the SQS queue, which is not a stateless solution.
Option E is incorrect because it involves using Amazon EventBridge (formerly Amazon CloudWatch Events) to send an alert to an
Amazon Simple Notification Service (Amazon SNS) topic, which is not related to the image processing process.
upvoted 12 times
Option B: Configuring the Lambda function to use the SQS queue as the invocation source allows it to retrieve messages from the queue
and process them in a stateless manner. After successfully processing the image, the Lambda function can delete the message from the
queue to avoid duplicate processing.
upvoted 1 times
miki111 2 months, 3 weeks ago
Options A and B meet the requirement.
upvoted 1 times
Option B is also correct because it ensures that Lambda is triggered by messages in SQS. Lambda can retrieve image information from
SQS, process and compress the image, and store the compressed image in a different S3 bucket. Once processing is successful, Lambda can delete
the processed message from SQS, indicating that the image has been processed.
Option C is not recommended because it introduces a stateful approach by using a text file to keep track of processed images.
Option D is not an optimal solution, as it introduces unnecessary complexity by involving an EC2 instance to monitor SQS and maintain a text file.
Option E is not directly related to the requirement of processing images automatically. Although EventBridge and SNS can be useful for event
notifications and further processing, they don't provide the same level of durability and scalability as SQS.
upvoted 3 times
A,B: Correct - SQS has a message retention function (it stores messages) for 4 days by default (configurable up to 14 days), so you can re-run the
Lambda function if there are any errors when processing the images.
C: Incorrect - A Lambda function just runs the request and then stops; the maximum timeout is 15 minutes, so we cannot store data in the memory
of a Lambda function.
D: Incorrect - We can trigger Lambda directly from SQS; there is no need for an EC2 instance in this case.
E: Incorrect - That is a manual step: the owner has to read the email and then process it.
upvoted 3 times
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application
servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance
from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the
web server.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and
forward the packets to the appliance.
Correct Answer: B
Gateway Load Balancers make it easy to deploy, scale, and manage third-party virtual appliances, such as security appliances.
https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html
upvoted 2 times
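A hedged boto3 sketch of option D: create a Gateway Load Balancer endpoint in the application VPC and route inbound traffic through it so the appliance inspects packets first. The endpoint service name, VPC, subnet, route table, and CIDR values are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway Load Balancer endpoint in the application VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-aaaa1111"],
)["VpcEndpoint"]

# Send inbound traffic for the web subnet through the endpoint for inspection.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # e.g. the ingress route table
    DestinationCidrBlock="10.0.1.0/24",      # the web subnet
    VpcEndpointId=endpoint["VpcEndpointId"],
)
```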
Option A is incorrect because a Network Load Balancer (NLB) is a regional service, and the appliance is deployed in an inspection VPC. This
means that the NLB would not be able to reach the appliance.
Option B is incorrect because an Application Load Balancer (ALB) is a regional service, and the appliance is deployed in an inspection VPC.
This means that the ALB would not be able to reach the appliance.
Option C is incorrect because a transit gateway is a global service, and the appliance is deployed in an inspection VPC. This means that the
transit gateway would not be able to reach the appliance.
upvoted 5 times
In this case, the appliance is used as a security system before the web tier.
upvoted 2 times
By creating a Network Load Balancer (NLB) in the public subnet, you can configure it to forward incoming traffic to the virtual firewall
appliance for inspection. The NLB operates at the transport layer (Layer 4) and can distribute traffic across multiple instances, including
the firewall appliance. This allows you to scale the inspection capacity if needed. The NLB can be associated with a target group that
includes the IP address of the firewall appliance, directing traffic to it before reaching the web servers.
Option B (Application Load Balancer) is not suitable for this scenario as it operates at the application layer (Layer 7) and does not provide
direct access to the IP packets for inspection.
Option C (Transit Gateway) and option D (Gateway Load Balancer) introduce additional complexity and overhead compared to using an
NLB. They are not necessary for achieving the requirement of inspecting traffic to the web application before reaching the web servers.
upvoted 7 times
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is
stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the
production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the
production EBS volumes to the EC2 instances in the test environment.
C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances
in the test environment before restoring the volumes from the production EBS snapshots.
D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the
snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Correct Answer: D
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates
the latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore
instantly deliver all of their provisioned performance.
upvoted 25 times
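A minimal boto3 sketch of option D: enable fast snapshot restore on the snapshot, then create and attach a test volume from it. The snapshot ID, instance ID, and Availability Zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot_id = "snap-0123456789abcdef0"   # placeholder production snapshot

# Enable fast snapshot restore so restored volumes are fully initialized.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=[snapshot_id],
)

# Create the test-environment volume from the snapshot and attach it.
volume = ec2.create_volume(
    SnapshotId=snapshot_id,
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",    # placeholder test instance
    Device="/dev/sdf",
)
```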
Enabling the EBS fast snapshot restore feature allows you to restore EBS snapshots into new EBS volumes almost instantly, without
needing to wait for the data to be fully copied from the snapshot. This significantly reduces the time required to clone the production
data.
By taking EBS snapshots of the production EBS volumes and restoring them into new EBS volumes in the test environment, you can
ensure that the cloned data is separate and does not affect the production environment. Attaching the new EBS volumes to the EC2
instances in the test environment allows you to access the cloned data.
upvoted 2 times
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24
hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the
distributions. Store the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application
Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the
Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for
MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin.
Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Correct Answer: D
Using Amazon S3 to host static content and Amazon CloudFront to distribute the content can provide high performance and scale for
websites with millions of requests each hour. Amazon API Gateway and AWS Lambda can be used to build scalable and highly available
backend APIs to support the website, and Amazon DynamoDB can be used to store the data. This solution requires minimal operational
overhead as it leverages fully managed services that automatically scale to meet demand.
upvoted 13 times
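As a small illustration of the DynamoDB part of option D, a hedged boto3 sketch of an on-demand table for the order data; the table and attribute names are placeholders.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# On-demand table so capacity scales with unpredictable, bursty traffic.
table = dynamodb.create_table(
    TableName="DealOrders",
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# A Lambda function behind API Gateway could write orders like this.
table.put_item(Item={"orderId": "1001", "productId": "deal-2024-01-01", "qty": 1})
```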
Option B is incorrect because deploying the full website on EC2 instances and using an Application Load Balancer (ALB) and an RDS
database would require more operational overhead to maintain and scale the infrastructure.
Option C is incorrect because while deploying the application in containers and hosting them on Amazon Elastic Kubernetes Service
(EKS) can provide high performance and scale, it would require more operational overhead to maintain and scale the infrastructure
compared to using fully managed services like S3 and CloudFront.
upvoted 7 times
Answer C: using Kubernetes adds quite a bit of overhead, and Amazon DynamoDB is faster than Amazon RDS for MySQL.
Answer D is the suitable technical architecture, with Amazon S3, Amazon CloudFront, Amazon API Gateway, AWS Lambda, and Amazon
DynamoDB, for the LEAST operational overhead (operational overhead, not migration/re-architecture overhead). Choose D.
upvoted 1 times
This solution leverages the scalability, low latency, and operational ease provided by AWS services.
This solution minimizes operational overhead because it leverages managed services, eliminating the need for manual scaling or
management of infrastructure. It also provides the required scalability and low-latency response times to handle peak-hour traffic
effectively.
Options A, B, and C involve more operational overhead and management responsibilities, such as managing EC2 instances, Auto Scaling
groups, RDS for MySQL, containers, and Kubernetes clusters. These options require more manual configuration and maintenance
compared to the serverless and managed services approach provided by option D.
upvoted 3 times
A: Incorrect - We cannot store all the data in S3 because our data is dynamic (each day features exactly one product on sale for a period
of 24 hours).
B: Incorrect - We don't have a cache to improve performance (one product on sale for a period of 24 hours). Auto Scaling groups and RDS
for MySQL need time to scale; they cannot scale immediately.
C: Incorrect - We don't have a cache to improve performance (one product on sale for a period of 24 hours). The Kubernetes Cluster Autoscaler
can scale better than Auto Scaling groups, but it also needs time to scale.
D: Correct - DynamoDB, S3, CloudFront, and API Gateway are managed services and they are highly scalable. CloudFront can cache static and
dynamic data.
upvoted 8 times
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to
the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions
architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
Correct Answer: B
Amazon S3 Intelligent Tiering is a storage class that automatically moves data to the most cost-effective storage tier based on access
patterns. It can store objects in two access tiers: the frequent access tier and the infrequent access tier. The frequent access tier is
optimized for frequently accessed objects and is charged at the same rate as S3 Standard. The infrequent access tier is optimized for
objects that are not accessed frequently and are charged at a lower rate than S3 Standard.
S3 Intelligent Tiering is a good choice for storing media files that are accessed frequently and infrequently in an unpredictable pattern
because it automatically moves data to the most cost-effective storage tier based on access patterns, minimizing storage and retrieval
costs. It is also resilient to the loss of an Availability Zone because it stores objects in multiple Availability Zones within a region.
upvoted 8 times
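For reference, objects can be placed directly into the Intelligent-Tiering class at upload time; a minimal boto3 sketch, where the bucket and object key are placeholders:

import boto3

s3 = boto3.client("s3")

# Bucket and object key are placeholder names.
s3.upload_file(
    "concert.mp4",
    "media-archive-example",
    "videos/concert.mp4",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)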
Option C, S3 Standard-Infrequent Access (S3 Standard-IA), is not a good choice because it is optimized for infrequently accessed objects
and does not offer the cost optimization of S3 Intelligent-Tiering.
Option D, S3 One Zone-Infrequent Access (S3 One Zone-IA), is not a good choice because it is not resilient to the loss of an Availability
Zone. It stores objects in a single Availability Zone, making it less durable than other storage classes.
upvoted 5 times
In the given scenario, where some media files are accessed frequently while others are rarely accessed in an unpredictable pattern, S3
Intelligent-Tiering can be a suitable choice. It automatically adjusts the storage tier based on the access patterns, ensuring that frequently
accessed files remain in the frequent access tier for fast retrieval, while rarely accessed files are moved to the infrequent access tier for
cost savings.
Compared to S3 Standard-IA, S3 Intelligent-Tiering provides more granular cost optimization and may be more suitable if the access
patterns of the media files fluctuate over time. However, it's worth noting that S3 Intelligent-Tiering may have slightly higher storage costs
compared to S3 Standard-IA due to the added flexibility and automation it offers.
upvoted 3 times
S3 Standard-IA is designed for infrequently accessed data, which is a good fit for the media files that are rarely accessed in an
unpredictable pattern. S3 Standard-IA stores objects redundantly across multiple Availability Zones, providing resilience to the loss of an
Availability Zone. Additionally, S3 Standard-IA has a lower storage cost than S3 Standard and S3 Intelligent-Tiering, which makes it a
cost-effective option for storing infrequently accessed data, although it adds a per-GB retrieval charge.
upvoted 1 times
Question #23 Topic 1
A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not
accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?
B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1
month.
D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1
month.
Correct Answer: B
Amazon S3 Glacier Deep Archive is a secure, durable, and extremely low-cost Amazon S3 storage class for long-term retention of data that
is rarely accessed and for which retrieval times of several hours are acceptable. It is the lowest-cost storage option in Amazon S3, making
it a cost-effective choice for storing backup files that are not accessed after 1 month.
You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
This will minimize the storage costs for the backup files that are not accessed frequently.
upvoted 8 times
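A minimal sketch of such a lifecycle rule with boto3, assuming a placeholder bucket name and treating "1 month" as 30 days:

import boto3

s3 = boto3.client("s3")

# Bucket name is a placeholder; 30 days approximates the "after 1 month" requirement.
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-files-example",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-one-month",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)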
Option C, transitioning objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month, is not a good choice
because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
Option D, transitioning objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month, is not a good
choice because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
upvoted 3 times
If the files need to be accessed frequently within the first month but not after that, transitioning them to S3 Glacier Deep Archive using an
S3 Lifecycle configuration can provide cost savings. However, keep in mind that retrieving the files from S3 Glacier Deep Archive will have a
significant time delay.
upvoted 3 times
A: Incorrect - We know the access pattern (accessed frequently for 1 month, not accessed after 1 month), so we can configure the
transition manually and reduce the cost as much as possible.
B: Correct - Glacier Deep Archive is the most cost-effective class for files that are rarely, if ever, accessed.
C: Incorrect - Standard-Infrequent Access is good for infrequent access but not for data that is rarely (or never) accessed.
D: Incorrect - One Zone-Infrequent Access reduces cost further than Standard-Infrequent Access, but it is still not the best option
compared to Glacier Deep Archive.
upvoted 3 times
A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types
for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth
analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?
A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a
source to generate an interactive graph based on instance types.
Correct Answer: C
The Billing and Cost Management console gives an OVERALL look at all of the costs within your AWS organization or billing account.
In the Billing and Cost Management console you do things such as add your credit card, enable or disable Regions, and change your
default currency. Cost Explorer is different: it is mainly for finding out what is being charged the most. You can review costs in both,
but the Billing console is mainly for configuration, while Cost Explorer can be used to filter costs and find the root cause of problems.
Amazon S3 Bucket: Storing the cost and usage data in an S3 bucket allows you to have a centralized and secure location for your data,
making it easily accessible for further analysis.
Amazon QuickSight: With Amazon QuickSight as a data visualization tool, you can easily connect to the data stored in the S3 bucket and
create interactive graphs and visualizations. QuickSight offers various chart types and filtering options to perform an in-depth analysis
based on instance types, cost trends, and usage patterns over the last 2 months.
upvoted 2 times
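If the comparison is done with the Cost Explorer API rather than the console, the same grouping by instance type is available programmatically; a minimal sketch, where the two-month date range is a placeholder:

import boto3

ce = boto3.client("ce")

# Dates are placeholders; Cost Explorer expects ISO-8601 day boundaries.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-03-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
)
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])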
To visualize and analyze the EC2 costs based on instance types, the architect can use Amazon QuickSight, a business intelligence tool
offered by AWS. QuickSight can directly access data stored in Amazon S3 and generate interactive graphs, charts, and dashboards for
detailed analysis. By connecting QuickSight to the S3 bucket containing the cost reports, the architect can easily create a graph comparing
the EC2 costs over the last 2 months based on instance types.
This approach minimizes operational overhead by leveraging AWS services (Cost and Usage Reports, Amazon S3, and QuickSight) to
automate data retrieval, storage, and visualization, allowing for efficient analysis of EC2 costs without the need for manual data gathering
and processing.
upvoted 2 times
Question #25 Topic 1
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to
store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the
company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the
configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native
Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point
the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Correct Answer: D
A: Incorrect - Lambda is serverless and scales automatically. With EC2 instances we would have to create a load balancer, an Auto Scaling
group, and many other components, and using native Java Database Connectivity (JDBC) drivers does not improve performance.
B: Incorrect - It requires many changes, and DynamoDB Accelerator is a read cache; it does not help with writes.
C: Incorrect - SNS is used to send notifications (e-mail, SMS) and does not buffer messages.
D: Correct - With SQS we can scale the application well by queuing the data.
upvoted 12 times
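A minimal sketch of the receiving function for option D, assuming the queue URL is supplied through an environment variable; the second function would be attached to the queue with an SQS event source mapping and would write to Aurora at its own pace:

import json
import os
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL; in practice this is injected via an environment variable.
QUEUE_URL = os.environ.get("QUEUE_URL", "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue")

def lambda_handler(event, context):
    # Forward the incoming payload; the loading function consumes it from the queue later.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}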
You can build chatbots using Lambda functions to process user input, execute business logic, and generate responses.
Scales automatically
They can be triggered in response to API events
Lambda functions can process files as they are uploaded to S3 buckets. This is often used for tasks like image resizing, data extraction, or
file validation.
upvoted 1 times
By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database,
you can independently scale and optimize each function based on their specific requirements. This approach allows for more efficient
resource allocation and reduces the potential impact of high volumes of data on the overall system.
Integrating the Lambda functions using an SQS adds another layer of scalability and reliability. The receiving function can push the
information to the SQS, and the loading function can retrieve messages from the queue and process them independently. This
asynchronous decoupling ensures that the receiving function can handle high volumes of incoming requests without overwhelming the
loading function. Additionally, SQS provides built-in retries and guarantees message durability, ensuring that no data is lost during
processing.
upvoted 5 times
Options C and D both propose an event-driven architecture using Lambda functions, but option D is better suited for this use case because
it uses an Amazon SQS queue to decouple receiving the information from loading it into the database. This provides better fault
tolerance and scalability, because messages are stored in the queue until they are processed by the second Lambda function. In contrast,
SNS does not buffer messages, so if the loading function cannot keep up, events may be missed once SNS exhausts its delivery retries.
upvoted 3 times
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon Cloud Watch Events).
Correct Answer: A
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config
to monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate
rules, you can ensure that your S3 buckets do not have unauthorized configuration changes.
upvoted 29 times
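As an illustration, one of the AWS managed rules that evaluates S3 bucket configuration can be enabled with a single API call; the rule name below is arbitrary and the managed rule chosen is only an example:

import boto3

config = boto3.client("config")

# S3_BUCKET_PUBLIC_READ_PROHIBITED is one of several AWS managed rules for S3 buckets.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)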
Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to
assess the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.
Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes
to your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
upvoted 21 times
https://aws.amazon.com/config/#:~:text=How%20it%20works-,AWS%20Config,-continually%20assesses%2C%20audits
upvoted 1 times
The solutions architect can enable AWS Config and configure rules specifically checking for S3 bucket settings like public access blocking,
encryption settings, access control lists, etc. AWS Config will record configuration changes to S3 buckets over time, allowing the company
to review changes and be alerted about any unauthorized modifications.
By Claude.ai
upvoted 1 times
While options B, C, and D offer valuable services for various aspects of AWS deployment, they are not specifically focused on preventing
unauthorized configuration changes in Amazon S3 buckets as effectively as enabling AWS Config.
upvoted 2 times
Abrar2022 4 months, 2 weeks ago
Don't be mistaken in thinking the answer is server access logs; those provide detailed records of requests made to S3. The answer is
AWS Config because it records configuration changes.
upvoted 1 times
AWS Config is a service that provides you with a detailed view of the configuration of your AWS resources. It continuously records
configuration changes to your resources and allows you to review, audit, and compare these changes over time. By turning on AWS Config
and enabling the appropriate rules, you can monitor the configuration changes to your Amazon S3 buckets and receive notifications when
unauthorized changes are made.
upvoted 1 times
Buruguduystunstugudunstuy 9 months, 2 weeks ago
AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not
monitor or record changes to the configuration of your S3 buckets.
upvoted 1 times
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product
manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide
access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a
shareable link for the dashboard to the product manager.
B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share
the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
C. Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login
credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in
the Dashboards section.
D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP
credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have
appropriate permissions to view the dashboard.
Correct Answer: B
Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates
their own password that they must enter to view the dashboard.
upvoted 62 times
"To help manage this information access, Amazon CloudWatch has introduced CloudWatch dashboard sharing. This allows customers to
easily and securely share their CloudWatch dashboards with people outside of their organization, in another business unit, or with those
with no access AWS console access. This blog will demonstrate how a dashboard can be shared across the enterprise via a SAML provider
in order to broker this secure access."
upvoted 1 times
Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates
their own password that they must enter to view the dashboard.
Share a single dashboard publicly, so that anyone who has the link can view the dashboard.
Share all the CloudWatch dashboards in your account and specify a third-party single sign-on (SSO) provider for dashboard access. All
users who are members of this SSO provider's list can access all the dashboards in the account. To enable this, you integrate the SSO
provider with Amazon Cognito. The SSO provider must support Security Assertion Markup Language (SAML).
upvoted 5 times
With this approach, the product manager can access the dashboard periodically by simply clicking on the provided link. They will be able
to view the application metrics without the need for an AWS account or IAM user credentials. This ensures that the product manager has
the necessary access while adhering to the principle of least privilege by not granting unnecessary permissions or creating additional IAM
users.
upvoted 3 times
https://aws.amazon.com/blogs/mt/share-your-amazon-cloudwatch-dashboards-with-anyone-using-aws-single-sign-on/
upvoted 1 times
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally
by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company
must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the
company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed
Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Correct Answer: A
Amazon EC2, Amazon RDS, and Amazon FSx will work with either a one-way or two-way trust.
upvoted 11 times
By implementing this solution, the company can achieve a single sign-on experience for their AWS accounts while maintaining central
control over user and group management in their on-premises Active Directory. The one-way trust ensures that user and group
information flows securely from the on-premises directory to AWS SSO, allowing for centralized access management and control across all
AWS accounts.
upvoted 5 times
Explanation:
Option A is the best solution as it enables AWS Single Sign-On (AWS SSO) from the AWS SSO console and creates a one-way forest trust or
a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service
for Microsoft Active Directory. This solution allows the company to manage users and groups in the on-premises Active Directory and
provides a single sign-on (SSO) experience across all the company's AWS accounts.
upvoted 2 times
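For context, the trust itself is created against the AWS Managed Microsoft AD directory; a minimal sketch following option A's one-way wording, in which the directory ID, domain name, trust password, and DNS addresses are all placeholders:

import boto3

ds = boto3.client("ds")

# Placeholder identifiers for an AWS Managed Microsoft AD directory and the on-premises forest.
ds.create_trust(
    DirectoryId="d-1234567890",
    RemoteDomainName="corp.example.com",
    TrustPassword="replace-with-trust-password",
    TrustDirection="One-Way: Outgoing",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],
)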
darkknight23 5 months, 1 week ago
I think it's A. From ChatGPT:
=========
Should this be one way or two way trust?
To integrate AWS SSO with an on-premises Microsoft Active Directory, a one-way trust relationship should be established.
In a one-way trust relationship, the on-premises Microsoft Active Directory trusts the AWS SSO directory, but the AWS SSO directory does
not trust the on-premises Microsoft Active Directory. This means that users and groups in the on-premises Microsoft Active Directory can
be mapped to AWS SSO users and groups, but not vice versa.
This is the recommended approach for security reasons, as it ensures that the on-premises Microsoft Active Directory is not exposed to
external entities. The one-way trust relationship also simplifies administration and reduces the risk of errors in configuration.
upvoted 2 times
Option A, which suggests creating a one-way trust relationship, would not enable synchronization of user and group information between
AWS SSO and the on-premises AD domain.
upvoted 1 times
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that
run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
NLB as an AWS Global Accelerator endpoint in each Region.
B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
ALB as an AWS Global Accelerator endpoint in each Region.
C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an
Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as
an origin.
D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create
an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted
record as an origin.
Correct Answer: C
Option A is not the best choice because using an NLB as an AWS Global Accelerator endpoint in each Region does not provide automated
failover between Regions.
Option B is also not ideal because using an Application Load Balancer (ALB) as an AWS Global Accelerator endpoint in each Region does
not provide automated failover between Regions.
upvoted 1 times
midriss 4 weeks ago
Option A suggests using Network Load Balancers (NLB) and AWS Global Accelerator, which can provide lower-latency routing, but it does
not inherently support automated failover between Regions.
upvoted 1 times
An NLB is a good choice for a VoIP service because it supports UDP traffic. An NLB also provides load balancing and fault tolerance for
your VoIP service within each Region.
An Auto Scaling group can automatically scale your VoIP service up or down based on demand. This ensures that you have the right
number of EC2 instances running to handle the load.
Create an Amazon Route 53 latency record that points to aliases for each NLB
A latency record in Amazon Route 53 routes traffic to the NLB that has the lowest latency. This ensures that your VoIP calls are routed to
the Region with the lowest latency.
Create an Amazon CloudFront distribution that uses the latency record as an origin
Amazon CloudFront is a content delivery network (CDN) that can deliver your VoIP traffic closer to your users. This can improve the
performance of your VoIP service.
upvoted 3 times
When using AWS Global Accelerator, it automatically routes traffic to the closest AWS edge location based on latency and network
conditions. In case of a failure in one Region, AWS Global Accelerator will automatically reroute traffic to the healthy endpoints in another
Region, providing automated failover.
So, option A does meet the requirement for automated failover between Regions, in addition to routing users to the Region with the
lowest latency using AWS Global Accelerator.
upvoted 3 times
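A rough sketch of the option A setup with boto3, assuming placeholder names, ports, and a placeholder NLB ARN; UDP is chosen on the listener because the VoIP service uses UDP:

import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="voip-accelerator", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5060, "ToPort": 5061}],
)
# One endpoint group per Region; the NLB ARN is a placeholder.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                      "loadbalancer/net/voip-nlb/abc123",
        "Weight": 128,
    }],
)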
Explanation:
Option A is the best solution as it deploys a Network Load Balancer (NLB) and an associated target group, and associates the target group
with the Auto Scaling group. The NLB can be used as an AWS Global Accelerator endpoint in each Region, allowing users to be routed to
the Region with the lowest latency. Additionally, the NLB can automatically failover between Regions to ensure service availability.
Option B is not the best solution as an Application Load Balancer (ALB) is designed for HTTP/HTTPS traffic and may not be suitable for the
company's VoIP service that uses UDP connections.
upvoted 1 times
Question #30 Topic 1
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights
enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of
running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
A. Stop the DB instance when tests are completed. Restart the DB instance when required.
B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.
Correct Answer: C
Not A - By stopping the DB instance you are not paying for DB instance hours, but you are still paying for provisioned IOPS and storage;
the storage cost for a stopped DB instance is higher than the cost of a snapshot of the underlying EBS volumes and automated backups.
Not D - Possible, but not the MOST cost-effective; there is no need to run the RDS instance when it is not needed.
upvoted 9 times
Explanation:
Stopping and starting a DB instance is the most cost-effective solution for scenarios where the database is not in use all the time. Amazon
RDS allows you to stop and start the database instances, and you are not charged for the instance hours while the database is stopped.
upvoted 1 times
By creating a snapshot and terminating the DB instance, you effectively stop incurring costs for the running instance. When you need to
run the tests again, you can restore the snapshot to create a new instance and resume testing. This approach allows you to save costs
during the periods when the tests are not running.
However, it's important to note that option C involves additional steps and may result in some downtime during the restoration process.
You need to consider the time required for snapshot creation, termination, and restoration when planning the testing schedule.
upvoted 3 times
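A minimal sketch of the snapshot/terminate/restore cycle in option C, with placeholder identifiers and instance class; the restored instance keeps the same compute and memory attributes by reusing the same DB instance class:

import boto3

rds = boto3.client("rds")

# After the 48-hour test window: delete the instance and keep a final snapshot.
rds.delete_db_instance(
    DBInstanceIdentifier="perf-test-db",
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier="perf-test-final",
)

# Before the next monthly test window: restore from that snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="perf-test-db",
    DBSnapshotIdentifier="perf-test-final",
    DBInstanceClass="db.m5.2xlarge",  # placeholder; keep the original instance class
)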
Abrar2022 3 months, 2 weeks ago
Selected Answer: C
Can't be A because you're still charged for provisioned storage even when it's stopped.
upvoted 1 times
By stopping the DB instance when the tests are completed, the company will only be charged for storage and not for compute resources
while the instance is stopped. This can result in significant cost savings as compared to running the instance continuously.
When the tests need to be run again, the company can simply start the DB instance, and it will be available for use. This solution is
straightforward and does not require any additional configuration or infrastructure.
upvoted 2 times
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift
clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to
periodically run the code.
Correct Answer: A
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config
managed rules, such as required-tags, to continuously detect resources that are missing the required tags as they are created or when
their configurations change.
To check tagging with AWS Config, you create a rule that specifies the tag keys (and optionally values) you want to require and the
resource types you want to evaluate. Once the rule is enabled, AWS Config automatically flags any in-scope resource that does not carry
the required tags, and you can optionally attach an automatic remediation action to apply the tags.
upvoted 13 times
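A minimal sketch of the required-tags rule described above, scoped to the three resource types in the question; the tag keys are examples:

import json
import boto3

config = boto3.client("config")

# Checks that EC2 instances, RDS DB instances, and Redshift clusters carry the required tag keys.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "Environment", "tag2Key": "Owner"}),
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
    }
)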
By leveraging AWS Config, the solution can automatically detect any resources that do not comply with the defined tagging requirements.
This approach eliminates the need for manual checks or periodic code execution, reducing operational overhead. Additionally, AWS Config
provides the ability to automatically remediate non-compliant resources by triggering Lambda or sending notifications, further
streamlining the configuration management process.
Option B (using Cost Explorer) primarily focuses on cost analysis and does not provide direct enforcement of proper tagging. Option C and
D (writing API calls and running them manually or through scheduled Lambda) require more manual effort and maintenance compared to
using AWS Config rules.
upvoted 3 times
lelouchjedai 3 months, 2 weeks ago
Selected Answer: A
The answer is A
upvoted 1 times
A solution architect can accomplish this by writing API calls to check all resources (EC2 instances, RDS DB instances, and Redshift clusters)
for proper tag allocation. Then, schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code. This way, the
check will be automated and it eliminates the need to manually check and configure the resources. The Lambda function can be triggered
periodically and will check all resources, this way it will minimize the effort of configuring and operating the check.
upvoted 2 times
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side
JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Correct Answer: B
Containerizing the website and hosting it in AWS Fargate (option A) would involve additional complexity and costs associated with
managing the container environment and scaling resources. Deploying a web server on an Amazon EC2 instance (option C) would require
provisioning and managing the EC2 instance, which may not be cost-effective for a static website. Configuring an Application Load
Balancer with an AWS Lambda target (option D) adds unnecessary complexity and may not be the most efficient solution for hosting a
static website.
upvoted 2 times
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The
company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications.
Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write.
Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda
integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every
transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis
data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before
updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume
transaction files stored in Amazon S3.
Correct Answer: C
Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, NewRelic,
Dynatrace, Sumologic, LogicMonitor, MongoDB, and HTTP End Point as destinations.
https://aws.amazon.com/kinesis/data-firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.
upvoted 50 times
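Because Firehose cannot deliver to DynamoDB, option C's pipeline starts with Kinesis Data Streams; a minimal producer sketch, where the stream name and record fields (including the sensitive field a Lambda consumer would strip) are placeholders:

import json
import boto3

kinesis = boto3.client("kinesis")

# Placeholder transaction; the Lambda consumer on the stream removes sensitive fields
# before writing the record to DynamoDB.
transaction = {"transactionId": "tx-1001", "amount": 42.5, "cardNumber": "4111-1111-1111-1111"}
kinesis.put_record(
    StreamName="transactions-stream",
    Data=json.dumps(transaction).encode("utf-8"),
    PartitionKey=transaction["transactionId"],
)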
Option A (storing transactions data in DynamoDB and using DynamoDB Streams) may not provide the same level of scalability and real-
time data sharing as Kinesis Data Streams. Option B (using Kinesis Data Firehose to store data in DynamoDB and S3) adds unnecessary
complexity and additional storage costs. Option D (storing batched transactions data in S3 and processing with Lambda) may not provide
the required near-real-time data sharing and low-latency retrieval compared to the streaming-based solution.
upvoted 3 times
https://dynobase.dev/dynamodb-faq/can-firehose-write-to-dynamodb/#:~:text=Answer,data%20to%20a%20DynamoDB%20table.
upvoted 1 times
A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration
changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.
Correct Answer: B
Use AWS Config to track configuration changes and AWS CloudTrail to record API calls (Answer B, the correct answer). Answer A reverses
the two services, so it is not accepted.
upvoted 2 times
Option A (using CloudTrail to track configuration changes and Config to record API calls) is incorrect because CloudTrail is specifically
designed to capture API call history, while Config is designed for tracking configuration changes.
Option C (using Config to track configuration changes and CloudWatch to record API calls) is not the recommended approach. While
CloudWatch can be used for monitoring and logging, it does not provide the same level of detail and compliance tracking as CloudTrail for
recording API calls.
Option D (using CloudTrail to track configuration changes and CloudWatch to record API calls) is not the optimal choice because CloudTrail
is the appropriate service for tracking configuration changes, while CloudWatch is not specifically designed for recording API call history.
upvoted 2 times
AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company's AWS resources. It records all
API activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call.
This information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity that might
occur on its AWS resources.
upvoted 3 times
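Recorded API activity can also be queried back out of the 90-day CloudTrail event history; a small sketch that looks for recent S3 bucket policy changes (the event name is one real example of a management API call):

import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "PutBucketPolicy"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "-"), event["EventName"])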
AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your
resources, including the identity of the caller, the time of the call, the source of the call, and the response element returned by the service.
upvoted 1 times
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a
VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a
solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?
Correct Answer: D
AWS Shield (https://aws.amazon.com/shield/) automatically detects and mitigates sophisticated network-level DDoS attacks.
Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and
vulnerabilities within EC2. It does not directly protect against DDoS attacks.
Option C is not the optimal choice because AWS Shield provides basic DDoS protection for resources such as Elastic IP addresses,
CloudFront, and Route53 hosted zones. However, it does not provide the advanced capabilities and assistance offered by AWS Shield
Advanced, which is better suited for protecting against large-scale DDoS attacks.
Therefore, option D with AWS Shield Advanced and assigning the ELB to it is the recommended solution to detect and protect against
large-scale DDoS attacks in the architecture described.
upvoted 6 times
AWS Shield is a service that provides DDoS protection for your AWS resources. There are two tiers of AWS Shield: AWS Shield Standard and
AWS Shield Advanced. AWS Shield Standard is included with all AWS accounts at no additional cost and provides protection against most
common network and transport layer DDoS attacks. AWS Shield Advanced provides additional protection against more complex and larger
scale DDoS attacks, as well as access to a team of DDoS response experts.
To detect and protect against large-scale DDoS attacks on a public-facing web application hosted on Amazon EC2 instances behind an
Elastic Load Balancer (ELB), you should enable AWS Shield Advanced and assign the ELB to it. This will provide advanced protection against
DDoS attacks targeting the ELB and the EC2 instances behind it.
upvoted 6 times
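For illustration, once a Shield Advanced subscription is active, protecting the ELB is a single API call; the resource ARN below is a placeholder:

import boto3

shield = boto3.client("shield")

# Requires an active AWS Shield Advanced subscription; the ELB ARN is a placeholder.
shield.create_protection(
    Name="web-app-elb-protection",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/abc123",
)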
Amazon Inspector is a security assessment service that analyzes the runtime behavior of your Amazon EC2 instances to identify
security vulnerabilities. It is not specifically designed for detecting and protecting against DDoS attacks.
Amazon Route 53 is a DNS service that routes traffic to your resources on the internet. It is not specifically designed for detecting and
protecting against DDoS attacks.
upvoted 3 times
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company
must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in
both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys
(SSE-S3). Configure replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets.
Configure the application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with
Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS
KMS keys (SSE-KMS). Configure replication between the S3 buckets.
Correct Answer: C
By using SSE-KMS, you can encrypt the data stored in the S3 buckets with a customer managed KMS key. This ensures that the data is
protected and allows you to have control over the encryption key. By creating an S3 bucket in each Region and configuring replication
between them, you can have data and key redundancy in both Regions.
upvoted 3 times
The correct answer is B because that's the only way to actually get the same key across multiple regions with minimal operational
overhead
upvoted 12 times
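A minimal sketch of how a multi-Region customer managed key is created and then replicated, so the same key (same key ID and key material) exists in both Regions; the Regions and description are placeholders:

import boto3

# Create a customer managed multi-Region primary key in one Region...
kms_east = boto3.client("kms", region_name="us-east-1")
primary = kms_east.create_key(Description="bucket-encryption-key", MultiRegion=True)

# ...then replicate it into the second Region.
kms_east.replicate_key(
    KeyId=primary["KeyMetadata"]["KeyId"],
    ReplicaRegion="eu-west-1",
)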
But for option B, the major issue is if you create KMS keys in 2 regions, they can not be the same.
upvoted 3 times
D. While this solution does use customer managed KMS keys, it creates separate KMS keys in each region. Although it uses SSE-KMS,
which would be closer to the requirement, it doesn't meet the requirement of using the same KMS key across two regions.
upvoted 1 times
(AWS-KMS) was mentioned only in a D option, then only D meets the requirements
upvoted 1 times
That being said, C and D are automatically INCORRECT because they start off with this "Create a customer managed KMS key and an S3
bucket in //each// Region" Creating two keys immediately fails the reqs of having one key.
A is INCORRECT because it creates an "AWS Managed Key", so that fails the reqs of having a "Cust Managed Key".
That leaves B, which has a Multi-Region, Cust Managed Key. Which Meets the REQUIREMENTS and is within the (admittedly vague)
parameters.
upvoted 3 times
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy
to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS
services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a
remote SSH session.
C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a
tunnel for administration of each instance.
D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the
instances by using SSH keys across the VPN tunnel.
Correct Answer: B
Repeatable: The process of attaching an IAM role to an EC2 instance and using Systems Manager Session Manager to establish a remote
SSH session is repeatable. This can be easily automated, so that new instances can be provisioned and administrators can connect to them
securely without any manual intervention.
upvoted 2 times
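The repeatable piece is largely just attaching the right AWS managed policy to the instance role; a minimal sketch with a placeholder role name (administrators then connect through the Session Manager console or the aws ssm start-session CLI command):

import boto3

iam = boto3.client("iam")

# Role name is a placeholder; AmazonSSMManagedInstanceCore is the AWS managed policy
# that lets the SSM agent on the instance register with Systems Manager.
iam.attach_role_policy(
    RoleName="ec2-ssm-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)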
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html#:~:text=RSS-,Session%20Manager,-is%20a%20fully
upvoted 1 times
james2033 2 months, 2 weeks ago
Selected Answer: B
Keyword "access and administer the instances remotely and securely" See "AWS Systems Manager Session Manager at "
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html .
upvoted 1 times
Option C adds operational overhead and introduces additional infrastructure that needs to be managed, monitored, and secured. It also
requires SSH key management and maintenance.
Option D is complex and may not be necessary for remote administration. It also requires administrators to connect from their local on-
premises machines, which adds complexity and potential security risks.
Therefore, option B is the recommended solution as it provides secure, auditable, and repeatable remote access using IAM roles and AWS
Systems Manager Session Manager, with minimal operational overhead.
upvoted 2 times
A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from
around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point
to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Correct Answer: C
Option B (provisioning accelerators in AWS Global Accelerator) can be more expensive as it adds an extra layer of infrastructure
(accelerators) and requires associating IP addresses with the S3 bucket. CloudFront already includes global edge locations and provides
similar acceleration capabilities.
Option D (enabling S3 Transfer Acceleration) can help improve upload speed to the S3 bucket but may not have a significant impact on
reducing latency for website visitors.
Therefore, option C is the most cost-effective solution as it leverages CloudFront's caching and global distribution capabilities to decrease
latency and improve website performance.
upvoted 13 times
CloudFront automatically caches and replicates content to its edge locations, resulting in faster delivery and lower latency for users
worldwide. This solution is highly effective in optimizing performance while keeping costs under control because CloudFront charges are
based on actual data transfer and requests, and the pay-as-you-go pricing model ensures that you only pay for what you use.
upvoted 3 times
This solution is also cost-effective as it only charges for the data transfer and requests made by users accessing the content from the
CloudFront edge locations. Additionally, this solution provides scalability and reliability benefits as CloudFront can automatically scale to
handle increased demand and provide high availability for the website.
upvoted 1 times
To decrease latency for users who access the static website hosted on Amazon S3, you can add an Amazon CloudFront distribution in front
of the S3 bucket and edit the Route 53 entries to point to the CloudFront distribution. This will allow CloudFront to cache the content of
the website at locations around the world, which will reduce the time it takes for users to access the website by serving it from the location
that is nearest to them.
upvoted 3 times
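Pointing the Route 53 record at the distribution is one alias-record change; a sketch with placeholder hosted zone, domain, and distribution values (Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder distribution
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)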
Answer B, (WRONG) - Provisioning accelerators in AWS Global Accelerator and associating the supplied IP addresses with the S3 bucket
would also be more expensive than using CloudFront, as it would require you to pay for the additional cost of the accelerators.
Answer D, (WRONG) - Enabling S3 Transfer Acceleration on the bucket and editing the Route 53 entries to point to the new endpoint
would not reduce latency for users who access the website from around the world, as it only speeds up the transfer of large files over
the public internet and does not have cache servers in multiple locations around the world.
upvoted 5 times
Question #39 Topic 1
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that
contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every
day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage
performance is the problem.
Which solution addresses this performance issue?
D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
Correct Answer: B
Option C (changing the DB instance to a burstable performance instance class) is suitable for workloads with varying usage patterns and
burstable performance needs, but it may not provide consistent and predictable performance for heavy write workloads.
Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is a solution for high availability and read
scaling but does not directly address the storage performance issue.
Therefore, option A is the most appropriate solution to address the performance issue by leveraging Provisioned IOPS SSD storage type,
which provides consistent and predictable I/O performance for the Amazon RDS for MySQL database.
upvoted 11 times
Option C (changing the DB instance to a burstable performance instance class) is not the optimal choice since burstable performance
instances are designed for workloads with bursty traffic patterns, and they may not provide the sustained performance needed for heavy
update operations.
Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is primarily used for high availability and read
scaling rather than addressing storage performance issues.
upvoted 2 times
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A
solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional
infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a
script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to
Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon
Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14
days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is
14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Correct Answer: A
C involves delivering the alerts to an Amazon OpenSearch Service cluster and manually managing snapshots and data deletion. This
introduces additional complexity and manual overhead compared to the simpler solution of using Kinesis Data Firehose and S3.
D suggests using SQS to ingest the alerts, but it does not provide the same level of data persistence and durability as storing the alerts
directly in S3. Additionally, it requires manual processing and copying of messages to S3, which adds operational complexity.
Therefore, A provides the most operationally efficient solution that meets the company's requirements by leveraging Kinesis Data Firehose
to ingest the alerts, storing them in an S3 bucket, and using an S3 Lifecycle configuration to transition data to S3 Glacier for long-term
archival, all without the need for managing additional infrastructure.
upvoted 5 times
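On the collector side, each alert is a single Firehose put; a minimal sketch in which the delivery stream name and alert fields are placeholders, with the stream itself configured to deliver to the S3 bucket that carries the lifecycle rule:

import json
import boto3

firehose = boto3.client("firehose")

# Placeholder alert record (~2 KB per alert in the scenario).
alert = {"deviceId": "edge-0042", "status": "TEMP_HIGH", "timestamp": "2023-06-01T12:00:00Z"}
firehose.put_record(
    DeliveryStreamName="edge-alerts",
    Record={"Data": (json.dumps(alert) + "\n").encode("utf-8")},
)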
This solution meets the company's requirements to minimize costs and not manage additional infrastructure while providing high
availability. Kinesis Data Firehose is a fully managed service that can automatically ingest streaming data and load it into Amazon S3,
Amazon Redshift, or Amazon Elasticsearch Service. By configuring the Firehose to deliver the alerts to an S3 bucket, the company can take
advantage of S3's high durability and availability. An S3 Lifecycle configuration can be set up to automatically transition data that is older
than 14 days to Amazon S3 Glacier, an extremely low-cost storage class for infrequently accessed data.
upvoted 2 times
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2
instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the
data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to
improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple
Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send
events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the
rule's target. Create a second EventBridge (Cloud Watch Events) rule to send events when the upload to the S3 bucket is complete. Configure
an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service
(Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS)
topic when the upload to the S3 bucket is complete.
Correct Answer: B
Ultimately, the choice between Option A and Option B depends on specific factors such as the existing architecture, the nature of data
transfers, and any potential advantages offered by using Amazon AppFlow for data integration.
If the primary concern is to improve performance for data uploads and user notifications without introducing new services, Option A (Auto
Scaling group with S3 event notifications) would likely be the simpler and more operationally efficient choice. However, if data integration
between SaaS sources and the S3 bucket is a critical aspect of the application, Option B might be a more suitable approach.
upvoted 1 times
Option C involves using Amazon EventBridge (CloudWatch Events) rules for data output and S3 uploads, but it introduces additional
complexity with separate rules and does not specifically address the slow application performance.
Option D suggests containerizing the application and using Amazon Elastic Container Service (Amazon ECS) with CloudWatch Container
Insights, which may involve more operational overhead and setup compared to the simpler solution provided by Amazon AppFlow.
Therefore, option B offers the most streamlined solution with the least operational overhead by utilizing Amazon AppFlow for data
transfer, configuring S3 event notifications for upload completion, and leveraging Amazon SNS for notifications without requiring
additional infrastructure management.
upvoted 4 times
https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
upvoted 2 times
Option B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event
notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Amazon AppFlow is a fully managed service that enables you to easily and securely transfer data between your SaaS applications and
Amazon S3. By creating an AppFlow flow to transfer the data between the SaaS sources and the S3 bucket, the company can improve the
performance of the application by offloading the data transfer process to a managed service.
upvoted 4 times
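A rough sketch of the notification half of option B, assuming the SNS topic policy already allows S3 to publish to it; the bucket name and topic ARN are hypothetical:
import boto3

s3 = boto3.client("s3")

# Publish an event to SNS whenever an AppFlow run finishes writing an object to the bucket.
s3.put_bucket_notification_configuration(
    Bucket="saas-landing-bucket",  # hypothetical AppFlow destination bucket
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:upload-complete",  # hypothetical topic
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)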
Option A is incorrect because creating an Auto Scaling group and configuring an S3 event notification does not address the root cause
of the slow application performance, which is related to the data transfer process.
Option C is incorrect because creating multiple EventBridge (CloudWatch Events) rules and configuring them to send events to an SNS
topic is more complex and involves additional operational overhead.
Option D is incorrect because creating a Docker container and hosting it on ECS does not address the root cause of the slow application
performance, which is related to the data transfer process.
upvoted 6 times
Question #42 Topic 1
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several
subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images
from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
Correct Answer: C
A suggests launching the NAT gateway in each AZ. While this can help with availability and redundancy, it does not address the issue of
data transfer charges, as the traffic would still traverse the NAT gateways and incur data transfer fees.
B suggests replacing the NAT gateway with a NAT instance. However, this solution still involves transferring data between the instances
and S3 through the NAT instance, which would result in data transfer charges.
D suggests provisioning an EC2 Dedicated Host to run the EC2 instances. While this can provide dedicated hardware for the instances, it does not
directly address the issue of data transfer charges.
upvoted 2 times
Bmarodi 4 months ago
Selected Answer: C
Option C is the answer.
upvoted 1 times
https://aws.amazon.com/privatelink/pricing/
https://aws.amazon.com/vpc/pricing/
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
upvoted 1 times
A VPC endpoint for Amazon S3 allows you to access Amazon S3 resources within your VPC without using the Internet or a NAT gateway.
This means that data transfer between your EC2 instances and S3 will not incur Regional data transfer charges.
Option A (wrong), launching a NAT gateway in each Availability Zone, would not avoid data transfer charges because the NAT gateway
would still be used to access S3.
Option B (wrong), replacing the NAT gateway with a NAT instance, would also not avoid data transfer charges as it would still require using
the Internet or a NAT gateway to access S3.
Option D (wrong), provisioning an EC2 Dedicated Host, would not affect data transfer charges as it only pertains to the physical host that
the EC2 instances are running on, not the data transfer charges for accessing S3.
upvoted 3 times
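For illustration, a gateway VPC endpoint for S3 can be created with one API call and associated with the private subnets' route tables; the VPC ID, route table IDs, and Region below are hypothetical:
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 are free and keep S3 traffic on the AWS network,
# bypassing the NAT gateway entirely.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],   # route tables of the private subnets
)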
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application
has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that
allows for both timely backups to Amazon S3 and with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Correct Answer: B
So the answer is B
upvoted 26 times
AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your on-premises data center
to AWS. This connection bypasses the public Internet and can provide more reliable, lower-latency communication between your on-
premises application and Amazon S3. By directing backup traffic through the AWS Direct Connect connection, you can minimize the
impact on your internet bandwidth and ensure timely backups to S3.
upvoted 15 times
Option C (wrong), using AWS Snowball devices, would keep the backups off the internet, but loading and shipping devices back to AWS every day adds significant delay and operational overhead, so it is not a long-term solution for timely backups.
Option D (wrong), submitting a support ticket to request the removal of S3 service limits, would not address the issue of internet
bandwidth limitations and would not ensure timely backups to S3.
upvoted 5 times
https://aws.amazon.com/directconnect/#:~:text=The-,AWS%20Direct%20Connect,-cloud%20service%20is
upvoted 1 times
While option A can provide a secure connection, it still utilizes internet bandwidth for data transfer and may not effectively address the issue of limited bandwidth.
While option C can work for occasional large data transfers, it may not be suitable for frequent backups and can introduce additional operational overhead.
D, submitting a support ticket to request removal of S3 service limits, does not address the issue of internet bandwidth limitations and is not a relevant solution for the given requirements.
upvoted 2 times
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Correct Answer: BD
https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/
Enabling MFA Delete adds an additional layer of protection by requiring an MFA device to be present when attempting to delete objects. This helps prevent accidental or unauthorized deletions by requiring an extra level of authentication.
C. Creating a bucket policy on S3 is more focused on defining access control and permissions for the bucket and its objects, rather than protecting against accidental deletion.
D. Enabling default encryption on S3 ensures that any new objects uploaded to the bucket are automatically encrypted. While encryption is important for data security, it does not directly address accidental deletion.
E. Creating a lifecycle policy for objects in S3 allows for automated management of objects based on predefined rules. While this can help with data retention and storage cost optimization, it does not directly protect against accidental deletion.
upvoted 4 times
B. Enable MFA Delete on the S3 bucket. MFA Delete requires the use of a multi-factor authentication (MFA) device to permanently delete
an object or suspend versioning on a bucket. This provides an additional layer of protection against accidental or malicious deletion of
objects.
upvoted 1 times
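A minimal sketch of enabling versioning together with MFA Delete; note that this call must be made with the bucket owner's (root) credentials and a current MFA code, and the bucket name and MFA device ARN below are hypothetical:
import boto3

s3 = boto3.client("s3")

# Versioning plus MFA Delete: permanently deleting a version or suspending
# versioning now requires the root user's MFA device.
s3.put_bucket_versioning(
    Bucket="critical-data-bucket",  # hypothetical bucket
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # device serial + current token
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)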
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
C. Increase the CPU and memory that are allocated to the Lambda function.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
Correct Answer: BE
E) Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
upvoted 1 times
C. Increasing CPU and memory allocated to the Lambda function may improve its performance but does not address the issue of
connectivity failures.
D. Increasing provisioned throughput for the Lambda function is not applicable as Lambda functions are automatically scaled by AWS and
provisioned throughput is not configurable.
Therefore, the correct combination of actions to ensure that the Lambda function ingests all data in the future is to create an SQS queue
and subscribe it to the SNS topic (option B) and modify the Lambda function to read from the SQS queue (option E).
upvoted 5 times
Bmarodi 4 months ago
Selected Answer: BE
The combination of actions a solutions architect should take to ensure that the Lambda function ingests all data in the future is to create an Amazon Simple Queue Service (Amazon SQS) queue and subscribe it to the SNS topic, and to modify the Lambda function to read from the SQS queue.
upvoted 1 times
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue. This will allow the Lambda function to
process the data from the SQS queue at its own pace, decoupling the data ingestion from the data delivery and providing more flexibility
and fault tolerance.
upvoted 3 times
By using an SQS queue as a buffer between the SNS topic and the Lambda function, the company can improve the reliability and resilience
of the ingestion workflow. This approach will help ensure that the Lambda function ingests all data in the future, even when there are
network connectivity issues.
upvoted 3 times
An Amazon Simple Queue Service (SQS) queue can be used to decouple the data ingestion workflow and provide a buffer for data
deliveries. By subscribing the SQS queue to the SNS topic, you can ensure that notifications about new data deliveries are sent to the
queue even if the Lambda function is unavailable or experiencing connectivity issues. When the Lambda function is ready to process the
data, it can read from the SQS queue and process the data in the order in which it was received.
upvoted 2 times
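A sketch of that wiring, assuming the queue's access policy already allows the SNS topic to send messages to it; the ARNs and function name are hypothetical:
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

# Buffer SNS notifications in SQS so nothing is lost if the function is briefly unreachable.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:data-delivery",   # hypothetical topic
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:ingest-queue",    # hypothetical queue
)

# Have the Lambda function poll the queue instead of being invoked directly by SNS.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:ingest-queue",
    FunctionName="ingest-function",                                # hypothetical function
    BatchSize=10,
)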
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The
stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of
the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not
have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger
an S3 Lifecycle policy to remove the objects that contain PII.
B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use
Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects
that contain PII.
D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle
policy to remove the objects that contain PII.
Correct Answer: B
Using an S3 bucket for secure transfer is good. Amazon Macie can scan the objects for PII automatically without custom development.
SNS can notify administrators if PII is detected, allowing them to handle remediation.
upvoted 1 times
If a file is larger than the applicable quota, Macie doesn't analyze any data in the file.
https://docs.aws.amazon.com/macie/latest/user/macie-quotas.html
upvoted 6 times
A. Using Amazon Inspector to scan the objects in S3 is not the optimal choice for scanning PII data. Amazon Inspector is designed for
host-level vulnerability assessment rather than content scanning.
C. Implementing custom scanning algorithms in an AWS Lambda function would require significant development effort to handle
scanning large files.
D. Using SES for notification and triggering S3 Lifecycle policy may add unnecessary complexity to the solution.
Therefore, the best option that meets the requirements with the least development effort is to use an S3 as a secure transfer point, utilize
Amazon Macie for PII scanning, and trigger an SNS notification to the administrators (option B).
upvoted 3 times
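As a rough illustration of the detection half of option B, a recurring Macie classification job can be pointed at the transfer bucket; the account ID, bucket, and job name are hypothetical, and Macie findings would then be routed to administrators (for example through EventBridge and SNS):
import boto3

macie = boto3.client("macie2")

# Scan objects in the SFTP transfer bucket for PII on a daily schedule.
macie.create_classification_job(
    name="scan-sftp-uploads-for-pii",        # hypothetical job name
    jobType="SCHEDULED",
    scheduleFrequency={"dailySchedule": {}},
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["sftp-transfer-bucket"]}  # hypothetical
        ]
    },
)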
Question #47 Topic 1
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will
last 1 week.
What should the company do to guarantee the EC2 capacity?
C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
Correct Answer: D
An On-Demand Capacity Reservation is a type of Amazon EC2 reservation that enables you to create and manage reserved capacity on
Amazon EC2. With an On-Demand Capacity Reservation, you can specify the Region and Availability Zones where you want to reserve
capacity, and the number of EC2 instances you want to reserve. This allows you to guarantee capacity in specific Availability Zones in a
specific Region.
***WRONG***
Option A, purchasing Reserved Instances that specify the Region needed, would not guarantee capacity in specific Availability Zones.
Option B, creating an On-Demand Capacity Reservation that specifies the Region needed, would not guarantee capacity in specific
Availability Zones.
Option C, purchasing Reserved Instances that specify the Region and three Availability Zones needed, would not guarantee capacity in
specific Availability Zones as Reserved Instances do not provide capacity reservations.
upvoted 14 times
On-Demand Capacity Reservations allow you to reserve EC2 capacity across specific Availability Zones for any duration. This guarantees
you will have access to those resources.
upvoted 1 times
A. Purchasing Reserved Instances that specify the Region needed does not guarantee capacity in specific Availability Zones.
B. Creating an On-Demand Capacity Reservation without specifying the Availability Zones would not guarantee capacity in the desired
zones.
C. Purchasing Reserved Instances that specify the Region and three Availability Zones is not necessary for a short-term event and involves
longer-term commitments.
upvoted 4 times
Abrar2022 4 months, 2 weeks ago
Reserved Instances are for the long term.
On-Demand Capacity Reservations enable you to choose a specific AZ for any duration.
upvoted 1 times
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly
available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Correct Answer: A
The instance store on an EC2 instance is ephemeral storage that does not provide the durability or availability needed for the catalog.
Amazon EFS provides a scalable, high-performance file system that can be shared between EC2 instances. Data on EFS is stored
redundantly across multiple Availability Zones, providing high durability and availability.
EFS is a better solution for the catalog storage than ElastiCache, S3 Glacier, or a larger EC2 instance store. Moving the catalog to EFS would
meet the requirements for high availability and durable storage.
upvoted 1 times
TariqKipkemei 1 month, 3 weeks ago
Selected Answer: D
Highly available and durable = Elastic File System (Amazon EFS)
upvoted 1 times
If you require data durability, you can enable the Redis append-only file feature (AOF). When this feature is enabled, the node writes all of
the commands that change cache data to an append-only file. When a node is rebooted and the cache engine starts, the AOF is
"replayed." The result is a warm Redis cache with all of the data intact.
AOF is disabled by default. To enable AOF for a cluster running Redis, you must create a parameter group with the appendonly parameter
set to yes. You then assign that parameter group to your cluster. You can also modify the appendfsync parameter to control how often
Redis writes to the AOF file.
upvoted 3 times
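A minimal sketch of the parameter-group steps described above; the group name is hypothetical, and the parameter group family must be one of the Redis versions on which ElastiCache still supports AOF:
import boto3

elasticache = boto3.client("elasticache")

# Create a custom parameter group and turn on the append-only file.
elasticache.create_cache_parameter_group(
    CacheParameterGroupName="redis-aof-on",       # hypothetical name
    CacheParameterGroupFamily="redis2.8",         # assumes a family where appendonly is supported
    Description="Enable AOF persistence",
)
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="redis-aof-on",
    ParameterNameValues=[{"ParameterName": "appendonly", "ParameterValue": "yes"}],
)
# The parameter group is then assigned to the cluster at creation time or via modify_cache_cluster.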
Option B would not address the requirement for high availability or durability. Instance stores are ephemeral storage attached to EC2
instances and are not durable or replicated.
Option C would provide durability but not high availability. S3 Glacier Deep Archive is designed for long-term archival storage, and
accessing the data from Glacier can have significant retrieval times and costs.
Therefore, option D is the most suitable choice to ensure high availability and durability for the company's catalog.
upvoted 3 times
A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files
infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-
old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?
A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year.
Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3
Glacier Select.
C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage.
Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata
from Amazon S3.
D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year.
Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Correct Answer: C
S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object
size or retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes,
data analytics, new applications, and user-generated content.
https://aws.amazon.com/fr/s3/storage-classes/intelligent-tiering/
upvoted 35 times
If the Intelligent-Tiering data transitions to Glacier after 180 days instead of 1 year, it would still be a cost-effective solution that meets
the requirements.
With files stored in Amazon S3 Intelligent-Tiering, the data is automatically moved to the appropriate storage class based on its access
patterns. In this case, if the data transitions to Glacier after 180 days, it means that files that are infrequently accessed beyond the
initial 180 days will be stored in Glacier, which is a lower-cost storage option compared to S3 Standard.
upvoted 4 times
S3 Intelligent-Tiering automatically moves files between frequent and infrequent access tiers based on actual access patterns, optimizing
cost.
Lifecycle policies can move older files to Glacier Flexible Retrieval after 1 year, which has higher latency and lower cost than S3.
Athena allows querying the metadata of files in S3 without retrieving the files themselves.
Glacier Select can directly query files in Glacier without needing to restore the entire file.
upvoted 2 times
Option C adds complexity by involving two storage classes and may not provide the most cost-effective solution.
Option D would require additional infrastructure with RDS for storing metadata and retrieval from S3 Glacier Deep Archive, which may not
be necessary and could incur higher costs.
Option B is the most suitable and cost-effective solution for optimizing file retrieval based on the access patterns described. Amazon S3
Intelligent-Tiering is a storage class that automatically moves objects between two access tiers: frequent access and infrequent access,
based on their access patterns. By storing the files in S3 Intelligent-Tiering, the files less than 1-year-old will be kept in the frequent access
tier, allowing for quick retrieval.
upvoted 3 times
Considering the above, it could happen that an object is required after day 180 of the first year; in that case the object is not immediately retrievable, so one of the requirements is not met.
A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The
company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?
A. Create an AWS Lambda function to apply the patch to all EC2 instances.
B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Correct Answer: D
Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. (Ref
https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html)
It seems that Patch Manager is meant for OS-level patches rather than third-party applications, so this falls in Run Command's wheelhouse: carrying out one-time configuration changes (updating the third-party application) at scale.
upvoted 39 times
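A hedged sketch of the Run Command call; the tag filter, concurrency limits, and patch script path are hypothetical and would come from the vendor's patching instructions:
import boto3

ssm = boto3.client("ssm")

# Run the vendor's patch script on all tagged instances, a rolling 10% at a time.
ssm.send_command(
    Targets=[{"Key": "tag:App", "Values": ["third-party-software"]}],  # hypothetical tag
    DocumentName="AWS-RunShellScript",
    MaxConcurrency="10%",
    MaxErrors="5%",
    Parameters={"commands": ["sudo /opt/vendor/bin/apply-patch.sh"]},  # hypothetical patch command
)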
Creating an AWS Lambda function to apply the patch to all EC2 instances would not be a suitable solution, as Lambda functions cannot directly run commands on EC2 instances. Configuring AWS Systems Manager Patch Manager to apply the patch to all EC2 instances would not be
a suitable solution, as Patch Manager is not designed to apply third-party software patches. Scheduling an AWS Systems Manager
maintenance window to apply the patch to all EC2 instances would not be a suitable solution, as maintenance windows are not designed
to apply patches to third-party software
upvoted 18 times
A suggests using a Lambda function to apply the patch. It requires additional development effort to create and manage the function, handle errors and retries, and scale the solution appropriately to handle a large number of instances.
C suggests scheduling an SSM maintenance window. While maintenance windows can be used to orchestrate patching activities, they may not provide the fastest patching time for all instances, as execution is scheduled within the defined maintenance window timeframe.
D suggests using Run Command to run a custom command for patching. While it can be used for executing commands on multiple instances, it requires manual execution and may not provide the scalability and automation capabilities that Patch Manager offers.
upvoted 2 times
I first copy and paste the question, then I write "the correct answer is [whatever answer the discussion determined to be correct]." I then get correct information on why the other answer choices are wrong and why the correct answer choice is correct.
So, using Patch Manager you can manage the deployment (with policies, patch groups, etc.), so it's the best and more secure way to do it.
upvoted 3 times
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the
shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every
morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to send the data to Amazon Kinesis Data Firehose.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API
for the data.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the
application's API for the data.
E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to
send the report by email.
Correct Answer: DE
B: Use SES to format data and send report by email. In Lambda, after retrieving shipping statistics, you can format data into an easy-to-
read HTML format using any HTML templating framework.
Options A, C, and E are not necessary for achieving the desired outcome. Option A is typically used for real-time streaming data ingestion
and delivery to data lakes or analytics services. Glue (C) is a fully managed extract, transform, and load (ETL) service, which may be an
overcomplication for this scenario. Storing the application data in S3 and using SNS (E) can be an alternative approach, but it adds
unnecessary complexity.
upvoted 2 times
D. By creating an Amazon EventBridge scheduled event that triggers an AWS Lambda function, you can automate the process of querying
the application's API for shipping statistics. The Lambda function can retrieve the data and perform any necessary formatting or
transformation before proceeding to the next step.
E. Storing the application data in Amazon S3 allows for easy accessibility and further processing. You can configure an S3 event
notification to trigger an Amazon Simple Notification Service (SNS) topic whenever new data is uploaded to the S3 bucket. The SNS topic
can be configured to send the report by email to the desired email addresses.
upvoted 1 times
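To make the D + B combination concrete, the schedule and the email step might look like the following sketch; the rule name, cron expression, Lambda ARN, email addresses, and HTML body are all hypothetical:
import boto3

events = boto3.client("events")
ses = boto3.client("ses")

# Trigger the report-building Lambda function every morning at 07:00 UTC.
# (A lambda add_permission call granting events.amazonaws.com invoke rights is also needed.)
events.put_rule(Name="daily-shipping-report", ScheduleExpression="cron(0 7 * * ? *)")
events.put_targets(
    Rule="daily-shipping-report",
    Targets=[{"Id": "report-lambda",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:build-report"}],
)

# Inside the Lambda function, the formatted HTML report is sent with SES.
ses.send_email(
    Source="reports@example.com",
    Destination={"ToAddresses": ["ops@example.com", "sales@example.com"]},
    Message={
        "Subject": {"Data": "Daily shipping statistics"},
        "Body": {"Html": {"Data": "<html><body><table>...</table></body></html>"}},
    },
)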
Bmarodi 3 months, 4 weeks ago
Selected Answer: BD
I go for BD options.
upvoted 1 times
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to
hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales
automatically, is highly available, and requires minimal operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store
(Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.
D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for
storage.
Correct Answer: C
Output files that vary in size from tens of gigabytes to hundreds of terabytes
A suggests using ECS for container orchestration and S3 for storage. ECS doesn't offer a native file system storage solution. S3 is an object
storage service and may not be the most suitable option for a standard file system structure.
B suggests using EKS for container orchestration and EBS for storage. Similar to A, EBS is block storage and not optimized for file system
access. While EKS can manage containers, it doesn't specifically address the file storage requirements.
D suggests using EC2 with EBS for storage. While EBS can provide block storage for EC2, it doesn't inherently offer a scalable file system
solution like EFS. You would need to manage and provision EBS volumes manually, which may introduce operational overhead.
upvoted 3 times
To meet the requirements, a solution that would allow the company to migrate its on-premises application to AWS and scale automatically,
be highly available, and require minimum operational overhead would be to migrate the application to Amazon Elastic Compute Cloud
(Amazon EC2) instances in a Multi-AZ (Availability Zone) Auto Scaling group.
upvoted 1 times
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be
archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during
the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?
A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10
years.
B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to
allow deletion.
C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in
compliance mode for a period of 10 years.
D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use
S3 Object Lock in governance mode for a period of 10 years.
Correct Answer: C
The S3 Lifecycle policy transitions the data to Glacier Deep Archive after 1 year for long-term archival.
S3 Object Lock in compliance mode prevents any user from deleting or overwriting objects for the specified retention period.
Glacier Deep Archive provides very high durability and the lowest storage cost for long-term archival.
Compliance mode ensures no one can override or change the retention settings even if policies change.
This meets all the requirements - immediate access for 1 year, archived for 9 years, unable to delete for 10 years, maximum resiliency
upvoted 2 times
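A sketch of the two controls in option C; the bucket name is hypothetical, Object Lock must be enabled at bucket creation time, and a CreateBucketConfiguration LocationConstraint would be needed outside us-east-1:
import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on when the bucket is created.
s3.create_bucket(Bucket="accounting-records-bucket", ObjectLockEnabledForBucket=True)

# Compliance mode: not even the root user can shorten or remove the retention.
s3.put_object_lock_configuration(
    Bucket="accounting-records-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# Move records to Glacier Deep Archive once they are a year old.
s3.put_bucket_lifecycle_configuration(
    Bucket="accounting-records-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)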
A: While S3 Glacier is suitable for long-term archival, it may not provide immediate accessibility for the first year as required.
B: Intelligent-Tiering may not offer the most cost-effective archival storage option for the extended 9-year period. Changing the IAM policy after 10 years to allow deletion also introduces manual steps and potential human error.
D: While S3 One Zone-IA can provide cost savings, it doesn't offer the same level of resiliency as S3 Glacier Deep Archive for long-term
archival.
upvoted 3 times
11pantheman11 5 months, 1 week ago
Selected Answer: C
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 3 times
Legal Hold: It is an On/Off setting on an object version, with no retention period. If you enable Legal Hold on a specific object version, you will not be able to delete or overwrite that specific object version. It requires the s3:PutObjectLegalHold permission.
upvoted 3 times
To meet the requirements, the company could use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep
Archive after 1 year. S3 Glacier Deep Archive is Amazon's lowest-cost storage class, specifically designed for long-term retention of data
that is accessed rarely. This would allow the company to store the records with maximum resiliency and at the lowest possible cost.
upvoted 3 times
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2
instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable
storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for
Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.
Correct Answer: C
***CORRECT***
D: Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.
upvoted 1 times
FSx for Windows provides fully managed Windows-native SMB file shares that are accessible from Windows clients.
It allows seamlessly migrating the existing Windows file shares to FSx shares without disrupting users.
The Multi-AZ configuration provides high availability and durability for file storage.
Users can continue to access files the same way over SMB without any changes.
It is optimized for Windows workloads and provides features like user quotas, ACLs, AD integration.
Data is stored on SSDs with automatic backups for resilience.
upvoted 1 times
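For illustration, a Multi-AZ FSx for Windows File Server file system might be created as follows; the capacity, subnet and security group IDs, throughput, and directory ID are hypothetical:
import boto3

fsx = boto3.client("fsx")

# Multi-AZ deployment: an active file server in one AZ and a hot standby in another.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                              # GiB, hypothetical
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # one subnet per AZ, hypothetical
    SecurityGroupIds=["sg-0ccc3333"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 32,                      # MB/s, hypothetical
        "PreferredSubnetId": "subnet-0aaa1111",
        "ActiveDirectoryId": "d-1234567890",           # hypothetical AWS Managed Microsoft AD
    },
)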
A: Although S3 is a highly durable object storage service, it is not designed to directly host Windows file shares. Implementing IAM authentication for file access would require significant changes to the existing user access method.
B: While an S3 File Gateway can provide access to Amazon S3 objects through standard file protocols, it may not be the ideal solution for preserving the existing user access method and maintaining Windows file shares.
D: Although Amazon EFS provides highly available and durable file storage, it does not support the SMB protocol that the existing Windows file shares use.
upvoted 3 times
11pantheman11 5 months, 1 week ago
Selected Answer: C
https://aws.amazon.com/fsx/windows/faqs/
Thousands of compute instances and devices can access a file system concurrently.
Amazon EFS is a fully managed, elastic file storage service that scales on demand. It is designed to be highly available, durable, and
secure, making it well-suited for hosting file shares. By using a Multi-AZ configuration, the file share will be automatically replicated across
multiple Availability Zones, providing high availability and durability for the data.
To migrate the data, you can use a variety of tools and techniques, such as Robocopy or AWS DataSync. Once the data has been migrated
to EFS, you can simply update the file share configuration on the existing EC2 instances to point to the EFS file system, and users will be
able to access the files in the same way they currently do.
upvoted 1 times
Option B, setting up an Amazon S3 File Gateway, would not provide the high availability and durability needed for hosting file shares.
Option C, extending the file share environment to FSx for Windows File Server, would provide the desired high availability and
durability, but would also require users to access the files in a different way.
upvoted 3 times
I am taking back my answer. "The correct answer is Option C". Extend the file share environment to Amazon FSx for Windows File
Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
upvoted 6 times
A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2
instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a
public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the
RDS databases.
Which solution will meet these requirements?
A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the
security group to the DB instances.
C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the
security group to the DB instances.
D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the
private subnets and the database subnets.
Correct Answer: C
Using security groups to control access between resources is a standard practice in VPCs.
The security group attached to the RDS DB instances can allow inbound traffic from the security group for the EC2 instances in the private
subnets.
This allows only those EC2 instances in the private subnets to connect to the databases, meeting the requirements.
Route tables, peering connections, and denying public subnet access would not achieve the needed selectivity of allowing only the private
subnet EC2 instances.
Security groups provide stateful filtering at the instance level for precise access control.
upvoted 1 times
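A minimal sketch of option C; the security group IDs are hypothetical and the port assumes a MySQL-compatible engine:
import boto3

ec2 = boto3.client("ec2")

# Allow the DB security group to accept traffic only from the private-subnet app security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000001",                     # SG attached to the RDS DB instances (hypothetical)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,                              # assumes MySQL/Aurora MySQL; use 5432 for PostgreSQL
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app000000000002"}],  # SG of the private-subnet EC2 instances
    }],
)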
A: While this approach may help control routing within the VPC, it does not address the specific access requirement between the EC2 instances and the RDS databases.
B: Using a deny rule in a security group can lead to complexities and potential misconfigurations. It is generally recommended to use
allow rules to explicitly define access permissions.
D: Peering connections enable communication between different VPCs or VPCs in different regions, and they are not necessary for
restricting access between subnets within the same VPC.
upvoted 3 times
In this solution, the security group applied to the DB instances allows inbound traffic from the security group assigned to instances in the
private subnets. This ensures that only EC2 instances running in the private subnets can have access to the RDS databases.
upvoted 3 times
Option B, creating a security group that denies inbound traffic from the security group assigned to instances in the public subnets and
attaching it to the DB instances, would not meet the requirements because it would allow all traffic from the private subnets to reach
the DB instances, not just traffic from the security group assigned to instances in the private subnets.
Option D, creating a new peering connection between the public subnets and the private subnets and a different peering connection
between the private subnets and the database subnets, would not meet the requirements because it would allow all traffic from the
private subnets to reach the DB instances, not just traffic from the security group assigned to instances in the private subnets.
upvoted 1 times
A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public
interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL
with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?
A. Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).
B. Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.
C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the
API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to
the API Gateway APIs. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.
Correct Answer: D
To design the API Gateway URL with the company's domain name and corresponding certificate, the company needs to do the following:
1. Create a Regional API Gateway endpoint: This will allow the company to create an endpoint that is specific to a region.
2. Associate the API Gateway endpoint with the company's domain name: This will allow the company to use its own domain name for the
API Gateway URL.
3. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region: This
will allow the company to use HTTPS for secure communication with its APIs.
4. Attach the certificate to the API Gateway endpoint: This will allow the company to use the certificate for securing the API Gateway URL.
5. Configure Route 53 to route traffic to the API Gateway endpoint: This will allow the company to use Route 53 to route traffic to the API
Gateway URL using the company's domain name.
upvoted 24 times
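A sketch of those five steps with boto3; the domain, certificate ARN, API ID, stage, and hosted zone ID are hypothetical, and the ACM certificate is assumed to already exist in ca-central-1:
import boto3

apigw = boto3.client("apigateway", region_name="ca-central-1")
route53 = boto3.client("route53")

# Regional custom domain backed by an ACM certificate in the same Region.
domain = apigw.create_domain_name(
    domainName="api.example.com",                                                     # hypothetical
    regionalCertificateArn="arn:aws:acm:ca-central-1:111122223333:certificate/abcd",  # hypothetical
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map the existing API's stage to the custom domain.
apigw.create_base_path_mapping(domainName="api.example.com", restApiId="a1b2c3d4", stage="prod")

# Alias record in Route 53 pointing at the Regional endpoint of the custom domain.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",      # hypothetical hosted zone for example.com
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": domain["regionalHostedZoneId"],
                "DNSName": domain["regionalDomainName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)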
Options A and D do not include the necessary steps to associate the API Gateway endpoint with the company's domain name and
attach the certificate to the endpoint.
Option B includes the necessary steps to associate the API Gateway endpoint with the company's domain name and attach the
certificate, but it imports the certificate into the us-east-1 Region instead of the ca-central-1 Region where the API Gateway is located.
upvoted 5 times
paniya93 23 hours, 53 minutes ago
Selected Answer: C
Can someone explain why this mentions a different Region that is not mentioned in the question?
upvoted 1 times
Option A: Using stage variables and importing certificates into ACM is not sufficient for achieving the requirement of associating a custom
domain and certificate with the API Gateway endpoint.
Option B: While it mentions importing the certificate into ACM, it doesn't address the need for a Regional API Gateway or the appropriate
region for the certificate.
Option D: Using certificates from the us-east-1 region for a Regional API Gateway might cause issues. Additionally, it doesn't provide clear
details on how to associate the domain name and certificate with the API Gateway endpoint.
upvoted 1 times
A. This approach does not involve using the company's domain name or a custom certificate. It does not provide a solution for enabling
HTTPS access with a corresponding certificate.
B. It suggests importing the certificate into ACM in the us-east-1 Region, which may not align with the desired ca-central-1 Region for this
scenario. It's important to use ACM in the same Region where API Gateway is deployed to simplify certificate management.
D. It suggests importing the certificate into ACM in the us-east-1 Region, which again does not align with the desired ca-central-1 Region.
Additionally, it mentions attaching the certificate to API Gateway, which is not necessary for achieving the desired outcome of enabling
HTTPS access for the API Gateway endpoint.
upvoted 2 times
When creating a custom domain name in Amazon API Gateway and attaching an ACM certificate to it, the region of the certificate does not
have to match the region of the API Gateway deployment. However, it's worth noting that there may be additional latency or costs
associated with using a certificate from a different region.
In summary, the solution I provided is still valid and meets the requirements of the question, even though it uses a different Region for ACM.
upvoted 1 times
Certificates are regional and have to be uploaded in the same AWS Region as the service you're using it for. (If you're using a certificate
with CloudFront, you have to upload it into US East (N. Virginia).)
https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
upvoted 3 times
A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The
company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development
effort.
What should a solutions architect do to meet these requirements?
A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.
B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
C. Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.
D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence
predictions.
Correct Answer: B
Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-
trained label detection model. It can identify a wide range of inappropriate content, including explicit or suggestive adult content, violent
content, and offensive language. The service provides high accuracy and low latency, making it a good choice for this use case.
upvoted 8 times
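A minimal sketch of the moderation check, with low-confidence results flagged for human review; the bucket, object key, and confidence thresholds are hypothetical:
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "user-uploads", "Name": "photo123.jpg"}},  # hypothetical object
    MinConfidence=50,
)

for label in response["ModerationLabels"]:
    if label["Confidence"] >= 90:
        print("Block:", label["Name"], label["Confidence"])
    else:
        print("Send to human review:", label["Name"], label["Confidence"])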
Option C, using Amazon SageMaker to detect inappropriate content, would require significant development effort to build and train a
custom machine learning model. It would also require a large dataset of labeled images to train the model, which may be time-
consuming and expensive to obtain.
Option D, using AWS Fargate to deploy a custom machine learning model, would also require significant development effort and a
large dataset of labeled images. It may not be the most efficient or cost-effective solution for this use case.
In summary, the best solution is to use Amazon Rekognition to detect inappropriate content in images, and use human review for low-
confidence predictions to ensure that all inappropriate content is detected.
upvoted 7 times
Amazon Rekognition is a good choice for this solution because it is a managed service, which means that the company does not have to
worry about managing the infrastructure or the machine learning model. Rekognition is also highly accurate, and it can be used to detect
a wide range of inappropriate content
upvoted 1 times
A. Amazon Comprehend is a natural language processing service provided by AWS, primarily focused on text analysis rather than image
analysis.
C. Amazon SageMaker is a comprehensive machine learning service that allows you to build, train, and deploy custom machine learning
models. It requires significant development effort to build and train a custom model. In addition, utilizing ground truth to label low-
confidence predictions would further add to the development complexity and maintenance overhead.
D. Similar to C, using AWS Fargate to deploy a custom machine learning model requires significant development effort.
upvoted 2 times
https://docs.aws.amazon.com/rekognition/latest/dg/a2i-rekognition.html
upvoted 1 times
Question #58 Topic 1
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus
on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying
infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?
B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
Correct Answer: C
https://aws.amazon.com/fr/fargate/
upvoted 19 times
A. This option would require manual provisioning and management of EC2, as well as installing and configuring Docker on those
instances. It would introduce additional overhead and responsibilities for maintaining the underlying infrastructure.
B. While this option leverages ECS to manage containers, it still requires provisioning and managing EC2 to serve as worker nodes. It adds
complexity and maintenance overhead compared to the serverless nature of Fargate.
D. This option still involves managing and provisioning EC2, even though an ECS-optimized AMI simplifies the process of setting up EC2 for
running ECS. It does not provide the level of serverless abstraction and ease of management offered by Fargate.
upvoted 3 times
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times
SilentMilli 8 months, 4 weeks ago
Selected Answer: C
ECS + Fargate
upvoted 3 times
AWS Fargate is a fully managed container execution environment that runs containers without the need to provision and manage
underlying infrastructure. This makes it a good choice for companies that want to focus on maintaining their critical applications and do
not want to be responsible for provisioning and managing the underlying infrastructure.
Option A involves installing Docker on Amazon EC2 instances, which would still require the company to manage the underlying
infrastructure. Option B involves using Amazon ECS on Amazon EC2 worker nodes, which would also require the company to manage the
underlying infrastructure. Option D involves using Amazon EC2 instances from an Amazon ECS-optimized Amazon Machine Image (AMI),
which would also require the company to manage the underlying infrastructure.
upvoted 2 times
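For illustration, a Fargate task definition and service remove worker-node management entirely; the names, image URI, role ARN, and subnet are hypothetical:
import boto3

ecs = boto3.client("ecs")

# Fargate task definitions declare CPU and memory; there are no EC2 worker nodes to manage.
ecs.register_task_definition(
    family="critical-app",                                                   # hypothetical
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/critical-app:latest",  # hypothetical
        "portMappings": [{"containerPort": 8080}],
    }],
)

ecs.create_service(
    cluster="prod",                                    # hypothetical cluster
    serviceName="critical-app",
    taskDefinition="critical-app",
    desiredCount=3,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0aaa1111"], "assignPublicIp": "DISABLED"}},
)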
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream
data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate
analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to
use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS
Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake.
Load the data in Amazon Redshift for analysis.
Correct Answer: D
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 16 times
Amazon Kinesis Data Streams is a highly scalable and durable service that enables real-time processing of streaming data at a high
volume and high rate. You can use Kinesis Data Streams to collect and process the clickstream data in real-time.
Amazon Kinesis Data Firehose is a fully managed service that loads streaming data into data stores and analytics tools. You can use
Kinesis Data Firehose to transmit the data from Kinesis Data Streams to an Amazon S3 data lake.
Once the data is in the data lake, you can use Amazon Redshift to load the data and perform analysis on it. Amazon Redshift is a fully
managed, petabyte-scale data warehouse service that allows you to quickly and efficiently analyze data using SQL and your existing
business intelligence tools.
upvoted 14 times
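As a small illustration of the ingestion side, each website could publish click events to the stream, with Kinesis Data Firehose (configured separately with the stream as its source) handling delivery to S3; the stream name and payload are hypothetical:
import json
import boto3

kinesis = boto3.client("kinesis")

# One click event; Kinesis Data Firehose reads from this stream and batches records into S3.
kinesis.put_record(
    StreamName="clickstream",                           # hypothetical stream
    Data=json.dumps({"site": "example.com", "page": "/home", "user": "u123"}).encode(),
    PartitionKey="u123",                                # spreads load across shards
)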
Option B, which involves creating an Auto Scaling group of Amazon EC2 instances to process the data and sending it to an Amazon S3
data lake for Amazon Redshift to use for analysis, is not the most appropriate solution because it does not involve a fully managed
service for transmitting the data from the processing layer to the data lake.
Option C, which involves caching the data to Amazon CloudFront, storing the data in an Amazon S3 bucket, and running an AWS
Lambda function to process the data for analysis when an object is added to the S3 bucket, is not the most appropriate solution
because it does not involve a scalable and durable service for collecting and processing the data in real-time.
upvoted 3 times
Kinesis Data Streams can continuously capture and ingest high volumes of clickstream data in real-time. This handles the large 30TB daily
data intake.
Kinesis Firehose can automatically load the streaming data into S3. This creates a data lake for further analysis.
Firehose can transform and analyze the data in flight before loading to S3 using Lambda. This enables real-time processing.
The data in S3 can be easily loaded into Amazon Redshift for interactive analysis at scale.
Kinesis auto scales to handle the high data volumes. Minimal effort is needed for infrastructure management.
upvoted 1 times
B. This option involves managing and scaling EC2 instances, which adds operational overhead. It is also not a real-time streaming solution. Additionally, using Redshift alone for analyzing clickstream data might not be the most efficient or cost-effective approach.
C. CloudFront is a CDN service and is not designed for real-time data processing or analytics. While using Lambda to process data can be an option, it may not be the most efficient solution for processing large volumes of clickstream data.
Therefore, collecting the data from Kinesis Data Streams, using Kinesis Data Firehose to transmit it to the S3 data lake, and loading it into Redshift for analysis is the recommended approach. This combination provides a scalable, real-time streaming solution with storage and analytics capabilities that can handle a high volume of clickstream data.
upvoted 2 times
Option D) is a valid answer too, however with Amazon Redshift Streaming Ingestion "you can connect to Amazon Kinesis Data Streams
data streams and pull data directly to Amazon Redshift without staging data in S3" https://aws.amazon.com/redshift/redshift-streaming-
ingestion. So in this scenario Kinesis Data Firehose and S3 are redundant.
upvoted 4 times
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and
HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
B. Create a rule that replaces the HTTP in the URL with HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Correct Answer: C
To meet the requirement of forwarding all requests to the website so that the requests will use HTTPS, a solutions architect can create a
listener rule on the ALB that redirects HTTP traffic to HTTPS. This can be done by creating a rule with a condition that matches all HTTP
traffic and a rule action that redirects the traffic to the HTTPS listener. The HTTPS listener should already be configured to accept HTTPS
traffic and forward it to the target group.
upvoted 13 times
Option B. Creating a rule that replaces the HTTP in the URL with HTTPS is not a valid solution because this would not redirect the traffic
to the HTTPS listener.
Option D. Replacing the ALB with a Network Load Balancer configured to use Server Name Indication (SNI) is not a valid solution
because it would not address the requirement to redirect HTTP traffic to HTTPS.
upvoted 9 times
Here is why:
ALB listener rules allow you to redirect traffic from one listener port (e.g., 80 for HTTP) to another (e.g., 443 for HTTPS). This achieves the
goal of forwarding all requests over HTTPS.
Network ACLs control traffic at the subnet level and cannot distinguish between HTTP and HTTPS requests to implement a redirect (option
A incorrect).
Replacing HTTP with HTTPS in the URL happens at the client side. It does not redirect at the ALB (option B incorrect).
Network Load Balancers work at the TCP level and do not understand HTTP or HTTPS protocols. So they cannot redirect in this manner
(option D incorrect).
upvoted 3 times
D. While NLB can handle SSL/TLS termination using SNI for routing requests to different services, replacing the ALB solely to enforce
HTTP-to-HTTPS redirection would be an unnecessary and more complex solution.
Therefore, the recommended approach is to create a listener rule on the ALB to redirect HTTP traffic to HTTPS. By configuring a listener
rule, you can define a redirect action that automatically directs HTTP requests to their corresponding HTTPS versions.
upvoted 3 times
A solutions architect should create listener rules to redirect HTTP traffic to HTTPS.
upvoted 1 times
Create a redirect rule on the ALB: The redirect rule should be configured to redirect all incoming HTTP requests to HTTPS. This can be
done by creating a redirect rule that redirects HTTP requests on port 80 to HTTPS requests on port 443.
Update the DNS record: The DNS record for the website should be updated to point to the ALB's DNS name, so that all traffic is routed
through the ALB.
Verify the configuration: Once the configuration is complete, the website should be tested to ensure that all requests are being redirected
to HTTPS. This can be done by accessing the website using HTTP and verifying that the request is redirected to HTTPS.
upvoted 1 times
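As a rough sketch of the accepted approach, the redirect listener can be created with boto3 as shown below; the load balancer ARN is a placeholder, and the existing HTTPS:443 listener continues to forward to the target group unchanged.

```python
import boto3

elbv2 = boto3.client("elbv2")

# HTTP:80 listener whose default action permanently redirects to HTTPS:443.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",  # placeholder
    Protocol="HTTP",
    Port=80,
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "StatusCode": "HTTP_301",  # permanent redirect
            },
        }
    ],
)
```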
A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2
instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The
company must also implement a solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled
AWS Lambda function that updates the RDS credentials and instance metadata at the same time.
B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch
Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the
same time. Use S3 Versioning to ensure the ability to fall back to previous values.
C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required
permission to the EC2 role to grant access to the secret.
D. Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the
encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.
Correct Answer: C
AWS Secrets Manager is a service that enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets
throughout their lifecycle. By storing the database credentials as a secret in Secrets Manager, you can ensure that they are not hardcoded
in the application and that they are automatically rotated on a regular basis. To grant the EC2 instance access to the secret, you can attach
the required permission to the EC2 role. This will allow the application to retrieve the secret from Secrets Manager as needed.
upvoted 9 times
Option B, storing the database credentials in a configuration file in an encrypted S3 bucket and rotating them with a scheduled Lambda
function, would not meet the requirement with the least overhead: the company would have to build and maintain the custom rotation
Lambda function and manage the configuration file itself.
Option D, storing the database credentials as encrypted parameters in AWS Systems Manager Parameter Store, would also not meet this
requirement, because Parameter Store does not provide built-in automatic rotation; rotation would again require a custom Lambda function.
upvoted 5 times
This approach minimizes operational overhead and provides a secure and managed solution for credential management.
upvoted 2 times
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
upvoted 3 times
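A minimal boto3 sketch of how the accepted answer might look in practice is shown below; the secret name and rotation Lambda ARN are placeholders, and the rotation function itself (for example, one of the AWS-provided RDS rotation templates) is assumed to exist already.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Application side: fetch the current credentials at runtime instead of hardcoding them.
secret = secrets.get_secret_value(SecretId="prod/app/rds-credentials")  # assumed secret name
creds = json.loads(secret["SecretString"])
db_user, db_password = creds["username"], creds["password"]

# One-time setup: turn on automatic rotation using a rotation Lambda function (ARN is a placeholder).
secrets.rotate_secret(
    SecretId="prod/app/rds-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRDSRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```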
Question #62 Topic 1
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application
needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be
rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?
A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to
automatically rotate the certificate.
B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.
C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon
CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Correct Answer: D
A & B: These options assume that the SSL/TLS certificate can be issued directly by ACM. However, since the requirement specifies that the
certificate should be issued by an external certificate authority (CA), this option is not suitable.
C: ACM Private Certificate Authority is used when you want to create your own private CA and issue certificates from it. It does not support
certificates issued by external CAs. Therefore, this option is not suitable for the given requirement.
upvoted 3 times
D is correct, since it's an external certificate
upvoted 1 times
This option meets the requirements because it uses an SSL/TLS certificate issued by an external CA and involves a manual rotation process
that can be done yearly before the certificate expires. The other options involve using AWS Certificate Manager to issue the certificate,
which does not meet the requirement of using an external CA.
upvoted 1 times
AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer
Security (SSL/TLS) certificates for use with AWS resources. ACM provides managed renewal for SSL/TLS certificates, which means that ACM
automatically renews your certificates before they expire.
To meet the requirements for the web application, you should use ACM to issue an SSL/TLS certificate and apply it to the Application Load
Balancer (ALB). Then, you can use the managed renewal feature to automatically rotate the certificate each year before it expires. This will
ensure that the web application is always encrypted at the edge with a valid SSL/TLS certificate.
upvoted 2 times
https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html
upvoted 4 times
Option C, using ACM Private Certificate Authority, is not necessary in this scenario because the requirement is to use a certificate issued
by an external certificate authority.
Option B, importing the key material from the certificate, is not a valid option because you cannot import external key material into a
certificate that ACM issues, and certificates imported into ACM are not eligible for managed renewal.
upvoted 1 times
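A rough boto3 sketch of answer D is shown below; the file paths and rule name are placeholders, and the EventBridge detail-type shown is the expiration event ACM is documented to emit (verify it in your account before relying on it).

```python
import boto3

acm = boto3.client("acm")
events = boto3.client("events")

# Import the externally issued certificate (file paths are hypothetical).
with open("cert.pem", "rb") as cert, open("key.pem", "rb") as key, open("chain.pem", "rb") as chain:
    acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# EventBridge rule matching ACM's expiration-approaching events, so an operator is notified
# and can re-import (manually rotate) the certificate each year.
events.put_rule(
    Name="acm-certificate-expiring",
    EventPattern='{"source": ["aws.acm"], "detail-type": ["ACM Certificate Approaching Expiration"]}',
)
```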
Question #63 Topic 1
A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company
intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the
original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over
time.
Which solution meets these requirements MOST cost-effectively?
A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store
them back in Amazon S3.
B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg
format and store them back in DynamoDB.
C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon
EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg
files in the EBS store.
D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon
EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the file to .jpg format. Save the .pdf files and the .jpg
files in the EFS store.
Correct Answer: A
C. Using Elastic Beanstalk with EC2 and EBS storage can work, but it may not be the most cost-effective solution. It involves managing the
underlying infrastructure and scaling manually.
D. Similar to C, using Elastic Beanstalk with EC2 and EFS storage can work, but it may not be the most cost-effective solution. EFS is a shared
file storage service and may not provide optimal performance for the conversion process, especially as demand and file sizes increase.
A. This option leverages Lambda and the scalable, cost-effective storage of S3. With Lambda, you pay only for the compute time actually used
during file conversion, and S3 provides durable, scalable storage for both the .pdf files and the .jpg files. The S3 PUT event triggers Lambda to
perform the conversion, eliminating the need to manage infrastructure and scaling, which makes this the most cost-effective solution for this scenario.
upvoted 4 times
In this solution, the .pdf files are saved to Amazon S3, which is an object storage service that is highly scalable, durable, and secure. S3 can
store unlimited amounts of data at a very low cost.
The S3 PUT event triggers an AWS Lambda function to convert the .pdf files to .jpg format. Lambda is a serverless compute service that
runs code in response to specific events and automatically scales to meet demand. This means that the conversion process can scale up or
down as needed, without the need for manual intervention.
The converted .jpg files are then stored back in S3, which allows the company to store both the original .pdf files and the converted .jpg
files in the same service. This reduces the complexity of the solution and helps to keep costs low.
upvoted 1 times
Option C is also a valid solution, but it may be more expensive due to the use of EC2 instances, EBS storage, and an Auto Scaling group.
These resources can add additional cost, especially if the demand for the conversion service grows rapidly.
Option D is less suitable because Amazon EFS is a shared file system whose per-GB storage price is considerably higher than Amazon S3, so
using it as the primary store for a rapidly growing set of .pdf and .jpg files would be significantly more expensive, and the solution would
still require managing EC2 instances for the conversion work.
upvoted 2 times
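A minimal sketch of the Lambda function behind answer A is shown below, assuming the function is subscribed to the bucket's PUT events; the actual PDF-to-JPG conversion is left as a placeholder since the question does not name a library.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 PUT event; converts the uploaded .pdf and writes a .jpg back to S3."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]          # e.g. "uploads/report.pdf"
        pdf_path = "/tmp/input.pdf"
        s3.download_file(bucket, key, pdf_path)

        jpg_path = "/tmp/output.jpg"
        convert_pdf_to_jpg(pdf_path, jpg_path)       # placeholder for a PDF-rendering library

        s3.upload_file(jpg_path, bucket, key.rsplit(".", 1)[0] + ".jpg")

def convert_pdf_to_jpg(src, dst):
    # Hypothetical helper: bundle a PDF-rendering library (e.g. via a Lambda layer) and implement here.
    raise NotImplementedError("plug in a PDF rendering library of your choice")
```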
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-
premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant
changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server.
Reconfigure the workloads to use FSx for Windows File Server on AWS.
B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-
premises workloads and the cloud workloads to use the S3 File Gateway.
C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to
use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.
D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move
the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-
premises workloads to use the FSx File Gateway.
Correct Answer: A
https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 1 times
Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud
storage.
Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.
Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS,
without taxing your networks or impacting the latencies experienced by your most demanding applications.
upvoted 1 times
Question #65 Topic 1
A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload
reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in
the reports.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.
B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.
C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
Correct Answer: C
Option C: Using Amazon Textract to extract the text from the reports, and Amazon Comprehend Medical to identify the PHI from the
extracted text, would be the most efficient solution as it would involve the least operational overhead. Textract is specifically designed for
extracting text from documents, and Comprehend Medical is a fully managed service that can accurately identify PHI in medical text. This
solution would require minimal maintenance and would not incur any additional costs beyond the usage fees for Textract and
Comprehend Medical.
upvoted 12 times
Option B: Using Amazon SageMaker to identify the PHI from the extracted text would involve additional operational overhead in terms
of setting up and maintaining a SageMaker model, as well as potentially incurring additional costs for using SageMaker.
Option D: Using Amazon Rekognition to extract the text from the reports would not be an effective solution. Rekognition can detect text in
images, but it does not accept PDF documents and is not optimized for dense document text extraction.
upvoted 4 times
Amazon Textract has built-in support to extract text from PDFs and images, eliminating the need to build this yourself with Python
libraries.
Amazon Comprehend Medical has pre-trained machine learning models to identify PHI entities out-of-the-box, avoiding the need to train
your own SageMaker model.
Using these fully managed AWS services minimizes operational overhead of maintaining machine learning models yourself.
upvoted 1 times
Once the text is extracted, the hospital can then use Comprehend Medical, a natural language processing service specifically designed for
medical text, to analyze it and identify PHI. It can recognize medical entities such as medical conditions, treatments, and patient information.
A. suggests using existing Python libraries, which would require the hospital to develop and maintain custom code for text extraction and PHI
identification.
B and D involve using Textract along with SageMaker or Rekognition, respectively, for PHI identification. While these options could work,
they introduce additional complexity by incorporating machine learning models and training.
upvoted 2 times
Key word: hospital!
upvoted 1 times
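A minimal boto3 sketch of the Textract plus Comprehend Medical flow is shown below; the bucket and object names are placeholders, and the synchronous Textract call shown works for JPEG/PNG images, while PDFs would need the asynchronous Textract API instead.

```python
import boto3

textract = boto3.client("textract")
medical = boto3.client("comprehendmedical")

# 1) Extract text from a report image stored in S3 (bucket/key are placeholders).
doc = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "report-uploads", "Name": "scan-0001.jpg"}}
)
text = " ".join(block["Text"] for block in doc["Blocks"] if block["BlockType"] == "LINE")

# 2) Detect protected health information in the extracted text.
phi = medical.detect_phi(Text=text)
for entity in phi["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 3))
```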
A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3.
Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files
contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are
rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after
object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from
object creation. Delete the files 4 years after object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Move the files to S3 Glacier 4 years after object creation.
Correct Answer: C
If the answer does not explicitly mention S3 Glacier Instant Retrieval, we should assume a standard Glacier storage class, which takes longer
to retrieve and therefore may not meet the immediate-accessibility requirement.
upvoted 59 times
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed.
https://aws.amazon.com/s3/storage-classes/#:~:text=S3%20Standard%2DIA)-,S3%20Standard%2DIA,-is%20for%20data
upvoted 2 times
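A minimal boto3 sketch of the lifecycle rule in answer C is shown below; the bucket name and rule ID are placeholders, and 4 years is approximated as 1460 days.

```python
import boto3

s3 = boto3.client("s3")

# Transition to Standard-IA after 30 days, delete roughly 4 years (1460 days) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="critical-business-files",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-after-30d-expire-after-4y",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```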
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to
an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not
contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
Correct Answer: D
The visibility timeout determines the amount of time that a message received from an SQS queue is hidden from other consumers while
the message is being processed. If the processing of a message takes longer than the visibility timeout, the message will become visible
to other consumers and may be processed again. By increasing the visibility timeout, the solutions architect can ensure that the message
is not made visible to other consumers until the processing is complete and the message can be safely deleted from the queue.
Option A (Use the CreateQueue API call to create a new queue) would not address the issue of duplicate message processing.
Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue.
Option C (Use the ReceiveMessage API call to set an appropriate wait time) is also not relevant to this issue.
upvoted 6 times
Option C (Use the ReceiveMessage API call to set an appropriate wait time) is not relevant to this issue because it is related to
configuring how long the ReceiveMessage API call should wait for new messages to arrive in the SQS queue before returning an
empty response. It does not address the issue of duplicate records in the RDS table.
upvoted 2 times
Option A, creating a new queue, does not address the issue of concurrent processing and duplicate records. It would only create a new
queue, which is not necessary for solving the problem.
Option B, adding permissions, also does not directly address the issue of duplicate records. Permissions are necessary for accessing the
SQS queue but not for preventing concurrent processing.
Option C, setting an appropriate wait time using the ReceiveMessage API call, does not specifically prevent duplicate records. It can help
manage the rate at which messages are received from the queue but does not address the issue of concurrent processing.
upvoted 4 times
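A minimal boto3 sketch of the visibility-timeout change is shown below; the queue URL and the 300-second value are placeholders, and the timeout should be set above the worst-case processing time observed for a message.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

# Raise the visibility timeout so an in-flight message stays hidden longer than processing takes,
# preventing another consumer from picking it up and writing a duplicate record.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "300"},  # seconds
)

# A consumer can also extend the timeout for a single message if processing runs long:
# sqs.change_message_visibility(QueueUrl=queue_url, ReceiptHandle=receipt_handle, VisibilityTimeout=600)
```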
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a
highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower
traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection
fails.
B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a
backup if the primary VPN connection fails.
C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if
the primary Direct Connect connection fails.
D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create
a backup connection if the primary Direct Connect connection fails.
Correct Answer: A
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 11 times
Option D suggests using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary
Direct Connect connection fails. While this approach can be automated, it does not provide the same level of immediate failover
capabilities as having a separate backup connection in place.
Therefore, option A, provisioning an AWS Direct Connect connection to a Region and provisioning a VPN connection as a backup, is the
most suitable solution that meets the company's requirements for connectivity, cost-effectiveness, and high availability.
upvoted 4 times
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 2 times
The main difference between a need and a requirement is that needs are the goals and objectives a business must achieve, whereas
requirements are the things that must be done in order to meet a need.
upvoted 2 times
Option D, which involves using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if
the primary Direct Connect connection fails, is not a valid option because the Direct Connect failover attribute is not available in the
AWS CLI.
upvoted 6 times
A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are
in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The
company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?
A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-
Region Replication.
B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy
instance for the database.
C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the
snapshots in the event of a failure.
D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event
Notifications to launch an AWS Lambda function to write the data to the database.
Correct Answer: B
Use an Auto Scaling group across multiple AZs for high availability of the EC2 instances.
Configure the Aurora DB as Multi-AZ for high availability, automatic failover, and minimum data loss.
Use RDS Proxy for connection pooling to the database to improve performance.
upvoted 2 times
C. While snapshots can be used for data backup and recovery, they do not provide real-time failover capabilities and can result in
significant data loss if a failure occurs between snapshots.
D. While this approach offers some decoupling and scalability benefits, it adds complexity to the data flow and introduces additional
overhead for data processing.
In comparison, option B provides a simpler and more streamlined solution by utilizing multiple AZs, Multi-AZ configuration for the
database, and RDS Proxy for improved connection management. It ensures high availability, minimal downtime, and minimum loss of
data with the least operational effort.
upvoted 4 times
This solution will meet the requirements of high availability with minimum downtime and minimum loss of data with the least operational
effort. By configuring the Auto Scaling group to use multiple Availability Zones, the web application will be able to withstand the failure of
one Availability Zone without any disruption to the service. By configuring the database as Multi-AZ, the database will automatically
failover to a standby instance in a different Availability Zone in the event of a failure, ensuring minimal downtime. Additionally, using an
RDS Proxy instance will help to improve the performance and scalability of the database.
upvoted 3 times
Option A: Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora
PostgreSQL Cross-Region Replication.
While this solution would provide high availability with minimum downtime, it would involve significant operational effort and may
result in data loss. Placing the EC2 instances in different Regions would require significant infrastructure changes and could impact the
performance of the application. Additionally, Aurora PostgreSQL Cross-Region Replication is designed to provide disaster recovery
rather than high availability, and it may result in some data loss during the replication process.
upvoted 4 times
Maybe because of the load balancer: an Application Load Balancer cannot span multiple Regions, so a different Region can't be the answer.
upvoted 2 times
A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling
group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances
that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
A. Enable HTTP health checks on the NLB, supplying the URL of the company's application.
B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will
restart.
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application.
Configure an Auto Scaling action to replace unhealthy instances.
D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to
replace unhealthy instances when the alarm is in the ALARM state.
Correct Answer: C
But NLB target groups only accept EC2 instances or IP addresses directly as targets. You can't supply a URL for your endpoints,
only a health check path (if you're using HTTP or HTTPS health checks).
upvoted 6 times
There just isn't an answer option that reflects that. My guess is that the question and/or answer options are outdated.
upvoted 4 times
Use an Application Load Balancer (ALB) instead of a Network Load Balancer (NLB) since ALBs support HTTP health checks.
Configure HTTP health checks on the ALB to monitor the application health.
Use an Auto Scaling action triggered by the ALB health checks to automatically replace unhealthy instances.
upvoted 1 times
Option C is the right answer.
upvoted 1 times
B. This approach involves custom scripting and manual intervention, which contradicts the requirement of not writing custom scripts or
code.
D. Since the NLB does not detect HTTP errors, relying solely on the UnhealthyHostCount metric may not accurately capture the health of
the application instances.
Therefore, C is the recommended choice for improving the application's availability without custom scripting or code. By replacing the NLB
with an ALB, enabling HTTP health checks, and configuring Auto Scaling to replace unhealthy instances, the company can ensure that only
healthy instances are serving traffic, enhancing the application's availability automatically.
upvoted 5 times
Application availability: an NLB cannot assure the availability of the application, because it bases its decisions solely on network- and
TCP-layer variables and has no awareness of the application itself. An NLB generally determines availability from TCP-level checks, such as
whether a server correctly completes the three-way handshake. An ALB goes much deeper and is capable of determining availability based on
not only a successful HTTP GET of a particular page but also verification that the content is as expected based on the input parameters.
upvoted 1 times
A solution architect can use Amazon EC2 Auto Scaling health checks to automatically detect and replace unhealthy instances in the EC2
Auto Scaling group. The health checks can be configured to check the HTTP errors returned by the application and terminate the
unhealthy instances. This will ensure that the application's availability is improved, without requiring custom scripts or code.
upvoted 1 times
Just a general tip: Medium is not a reliable resource. Anyone can create content there. Rely only on official AWS documentation.
upvoted 3 times
Option A - Enable HTTP health checks on the NLB, supplying the URL of the company's application.
This is the correct solution as it allows the NLB to automatically detect HTTP errors and take action.
upvoted 4 times
Option A suggests enabling HTTP health checks on the Network Load Balancer (NLB) by supplying the URL of the company's
application. While this can help the NLB detect if the application is accessible or not, it does not directly address the specific
requirement of automatically restarting the EC2 instances when HTTP errors occur.
upvoted 1 times
Option C - Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's
application. Configure an Auto Scaling action to replace unhealthy instances.
While this option may improve the availability of the application, it is not necessary to replace the NLB with an Application Load
Balancer in order to enable HTTP health checks. The NLB can support HTTP health checks as well, and replacing it may involve
additional effort and cost.
upvoted 3 times
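A rough boto3 sketch of answer C's moving parts is shown below; the target group ARN, Auto Scaling group name, and health check path are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# HTTP health check on the ALB target group (ARN and path are placeholders).
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    Matcher={"HttpCode": "200"},
)

# Tell the Auto Scaling group to use the load balancer's health checks, so instances
# failing the HTTP check are terminated and replaced automatically.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)
```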
A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions
architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?
A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.
B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.
D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the
DynamoDB table by using the EBS snapshot.
Correct Answer: B
Option D's premise is flawed: you cannot take an Amazon EBS snapshot of a DynamoDB table, because DynamoDB is a managed service that
does not expose its storage as EBS volumes. Even a scheduled export or backup job (for example, automated with AWS Data Pipeline) against a
large table can take a long time and consume significant bandwidth, which is why DynamoDB's built-in backup features, or global tables for
multi-Region redundancy, are the preferred mechanisms.
upvoted 2 times
To meet the RTO requirement of 1 hour, you can use the DynamoDB console, AWS CLI, or the AWS SDKs to enable PITR on your table. Once
enabled, PITR maintains continuous backups of your table for the preceding 35 days. You can then restore the table to any point in time
within that retention window, which also satisfies the 15-minute RPO.
***CORRECT***
Option B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
upvoted 6 times
Option C (exporting data to S3 Glacier) would not meet the RPO or RTO requirements, as S3 Glacier is a cold storage service with a
retrieval time of several hours.
Option D (scheduling EBS snapshots) would not meet the RPO requirement, as EBS snapshots are taken on a schedule, rather than
continuously. Additionally, restoring a DynamoDB table from an EBS snapshot can take longer than 1 hour, so it would not meet the
RTO requirement.
upvoted 3 times
DynamoDB point-in-time recovery can restore to any point in time within the last 35 days. This supports an RPO of 15 minutes.
Restoring from a point-in-time backup meets the 1 hour RTO.
Point-in-time recovery is specifically designed to restore DynamoDB tables with second-level granularity.
upvoted 1 times
C. Exporting and importing data on a daily basis does not align with the desired RPO of 15 minutes.
D. EBS snapshots do not apply to DynamoDB, which does not expose customer-managed EBS volumes, so they cannot provide the desired RPO
and RTO.
In comparison, option B, which uses DynamoDB's built-in point-in-time recovery functionality, provides the most straightforward and effective
way to meet the specified RPO of 15 minutes and RTO of 1 hour. By enabling PITR and restoring the table to the desired point in time, the
company can recover the customer information with minimal data loss and within the required time frame.
upvoted 3 times
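A minimal boto3 sketch of answer B is shown below; the table names and restore timestamp are placeholders.

```python
import datetime
import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery (continuous backups) on the table.
dynamodb.update_continuous_backups(
    TableName="CustomerTransactions",  # placeholder table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# After data corruption, restore to a timestamp just before the bad writes (RPO <= 15 minutes).
dynamodb.restore_table_to_point_in_time(
    SourceTableName="CustomerTransactions",
    TargetTableName="CustomerTransactions-restored",
    RestoreDateTime=datetime.datetime(2024, 1, 15, 18, 45, tzinfo=datetime.timezone.utc),
)
```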
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located
in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce
these costs.
How can the solutions architect meet this requirement?
A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
Correct Answer: D
By deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC,
eliminating the need for data transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the
application. The endpoint policy can be used to specify which S3 buckets the application has access to.
upvoted 23 times
Option B, deploying a NAT gateway into a public subnet and attaching an endpoint policy, would not address the issue of data transfer
fees either as the NAT gateway is used to enable outbound internet access for instances in a private subnet, rather than for connecting
to S3.
Option C, deploying the application into a public subnet and allowing it to route through an internet gateway, would not reduce data
transfer fees as the application would still be transferring data over the internet.
upvoted 6 times
B. Using a NAT gateway for accessing S3 introduces unnecessary data transfer costs as traffic would still flow over the internet.
C. This approach would incur data transfer fees as the traffic would go through the public internet.
In comparison, option D using an S3 VPC gateway endpoint provides a direct and cost-effective solution for accessing S3 buckets within
the same Region. By keeping the data transfer within the AWS network infrastructure, it helps reduce data transfer fees and provides
secure access to the S3 resources.
upvoted 2 times
Selected Answer: D
Option D is the correct answer.
upvoted 1 times
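A minimal boto3 sketch of answer D is shown below; the VPC, route table, Region, and bucket names are placeholders, and the endpoint policy is deliberately narrow as an example.

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: traffic to S3 from the VPC stays on the AWS network, avoiding
# NAT gateway and internet data-transfer charges. IDs and bucket name are placeholders.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",      # match your Region
    RouteTableIds=["rtb-0def5678"],
    PolicyDocument=(
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*",'
        '"Action":["s3:GetObject","s3:PutObject"],'
        '"Resource":["arn:aws:s3:::photo-bucket","arn:aws:s3:::photo-bucket/*"]}]}'
    ),
)
```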
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on
an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the
company's internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security
groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of
the bastion host.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of
the bastion host.
Correct Answer: CD
D. This step enables SSH connectivity from the bastion host to the application instances in the private subnet. By allowing inbound SSH
access only from the private IP address of the bastion host, you ensure that SSH access is restricted to the bastion host only.
upvoted 2 times
D does not make sense because the bastion host sits in a public subnet and is reached through the public IP address attached to it, not through
a private IP. The IP wanting to connect is public as well.
The bastion host in the public subnet allows the company's external IP range (via the internet) to access it. That then leaves granting access on
the application side: the application instances in the private subnet accept traffic coming from the bastion host by changing their security
group. C & E
upvoted 2 times
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the
company. This will allow the solutions architect to connect to the bastion host from the company's on-premises network through the
internet connection.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address
of the bastion host. This will allow the solutions architect to connect to the application instances through the bastion host using SSH.
Note: It's important to ensure that the security groups for the bastion host and application instances are configured correctly to allow the
desired inbound traffic, while still protecting the instances from unwanted access.
upvoted 2 times
A. Replacing the current security group of the bastion host with one that only allows inbound access from the application instances
would not allow the solutions architect to connect to the bastion host from the company's on-premises network through the internet
connection. The bastion host needs to be accessible from the external network in order to allow the solutions architect to connect to it.
B. Replacing the current security group of the bastion host with one that only allows inbound access from the internal IP range for the
company would not allow the solutions architect to connect to the bastion host from the company's on-premises network through the
internet connection. The internal IP range is not accessible from the external network.
upvoted 1 times
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public
subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the
company.
How should security groups be configured in this situation? (Choose two.)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
Correct Answer: AC
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html
upvoted 18 times
The security group for the database tier should allow inbound traffic on port 1433 from the security group for the web tier, so that the web
tier can connect to the database. There is no need to add outbound rules on the database tier toward the web tier: security groups are
stateful, so return traffic for allowed inbound connections is permitted automatically.
upvoted 1 times
C. By allowing inbound traffic on port 1433 (default port for Microsoft SQL Server) from the security group associated with the web tier,
you ensure that the database tier can only be accessed by the EC2 instances in the web tier. This provides a level of isolation and restricts
direct access to the database tier from external sources.
upvoted 2 times
For security purposes, it is best practice to limit inbound and outbound traffic as much as possible. In this case, the web tier should only
be able to access the database tier and not the other way around. Therefore, the security group for the web tier should only allow
outbound traffic to the security group for the database tier on the necessary ports. Similarly, the security group for the database tier
should only allow inbound traffic from the security group for the web tier on the necessary ports.
Answer C: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
This is correct because the web tier needs to be able to connect to the database on port 1433 in order to access the data.
upvoted 1 times
***WRONG***
Answer A: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. This is not correct because
the web tier should not allow inbound traffic from the internet. Instead, it should only allow outbound traffic to the security group for
the database tier.
upvoted 1 times
Answer D: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group
for the web tier. This is not correct because the database tier should not allow outbound traffic to the web tier. Instead, it should only
allow inbound traffic from the security group for the web tier on the necessary ports.
upvoted 1 times
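A minimal boto3 sketch of answers A and C is shown below; the security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Web tier: allow HTTPS from anywhere (answer A). Security-group IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-web00000000000001",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database tier: allow SQL Server (1433) only from the web tier's security group (answer C).
ec2.authorize_security_group_ingress(
    GroupId="sg-db000000000000002",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-web00000000000001"}],
    }],
)
```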
A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application
consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes
overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service
(Amazon SQS) as the communication layer between application services.
B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the
performance failures. Increase the size of the application server's Amazon EC2 instances to meet the peak requirements.
C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an
Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto
Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
Correct Answer: A
By using EC2 in ASG, the application can automatically scale based on the demand and the length of the SQS. This allows for efficient
utilization of resources and ensures that the application can handle increased workload and communication failures.
CloudWatch is used to monitor the length of SQS. When queue length exceeds a certain threshold, indicating potential communication
failures, the ASG can be configured to scale up by adding more instances to handle the load.
A. This solution utilizes Lambda and API Gateway, which can be a valid approach for building serverless applications. However, it may
introduce additional complexity and a larger refactoring effort compared to simply decoupling the existing multi-tiered application's
services with SQS.
upvoted 3 times
A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files
stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon
S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because
the data is considered sensitive.
Which solution offers the MOST reliable data transfer?
D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect
Correct Answer: B
AWS DataSync is a data transfer service that uses network optimization techniques to transfer data efficiently and securely between on-
premises storage systems and Amazon S3 or other storage targets. When used over AWS Direct Connect, DataSync can provide a
dedicated and secure network connection between your on-premises data center and AWS. This can help to ensure a more reliable and
secure data transfer compared to using the public internet.
upvoted 8 times
Option C, AWS Database Migration Service (DMS) over the public internet, is not a suitable solution for transferring large amounts of
data, as it is designed for migrating databases rather than transferring large amounts of data from a storage area network (SAN).
Option D, AWS DMS over AWS Direct Connect, is also not a suitable solution, as it is designed for migrating databases and may not be
efficient for transferring large amounts of data from a SAN.
upvoted 6 times
By leveraging Direct Connect, which establishes a dedicated network connection between the on-premises data center and AWS, the data
transfer is conducted over a private and dedicated network link. This approach offers increased reliability, lower latency, and consistent
network performance compared to transferring data over the public internet.
Database Migration Service is primarily focused on database migration and replication, and it may not be the most appropriate tool for
general-purpose data transfer like JSON files.
Transferring data over the public internet may introduce potential security risks and performance variability due to factors like network
congestion, latency, and potential interruptions.
upvoted 2 times
- A SAN presents storage devices to a host such that the storage appears to be locally attached. ( NFS is, or can be, a SAN -
https://serverfault.com/questions/499185/is-san-storage-better-than-nfs )
- AWS Direct Connect does not encrypt your traffic that is in transit by default. But the connection is private
(https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html)
upvoted 4 times
Question #77 Topic 1
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms
data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data
Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the
Kinesis Data Firehose delivery stream to send the data to Amazon S3.
B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use
AWS Glue to transform the data and to send the data to Amazon S3.
C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery
stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose
delivery stream to send the data to Amazon S3.
D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send
the data to Amazon S3.
Correct Answer: C
A. This adds operational overhead as you need to handle EC2 management, scaling, and maintenance. It is less efficient compared to
using a serverless solution like API Gateway.
B. It requires deploying and managing an EC2 instance to host the API and configuring Glue. This adds operational overhead, including EC2
management and potential scalability limitations.
D. It still requires managing and configuring Glue, which adds operational overhead. Additionally, it may not be the most efficient solution
as Glue is primarily used for ETL scenarios, and in this case, real-time data transformation is required.
upvoted 2 times
Kinesis Firehose is NEAR real-time, not real-time like your bots tell you.
upvoted 2 times
In Option C, you can use Amazon API Gateway as a fully managed service to create, publish, maintain, monitor, and secure APIs. This
means that you don't have to worry about the operational overhead of deploying and maintaining an EC2 instance to host the API.
Option C also uses Amazon Kinesis Data Firehose, which is a fully managed service for delivering real-time streaming data to destinations
such as Amazon S3. With Kinesis Data Firehose, you don't have to worry about the operational overhead of setting up and maintaining a
data ingestion infrastructure.
upvoted 1 times
Overall, Option C provides a fully managed solution for real-time data ingestion with minimal operational overhead.
upvoted 2 times
Option B is incorrect because it involves deploying an EC2 instance to host an API and disabling source/destination checking on the
instance. Source/destination checking only matters when an instance performs NAT or routing, so disabling it does nothing useful here,
and the option still adds the operational overhead of managing and securing the EC2 instance.
upvoted 2 times
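A rough boto3 sketch of the Firehose piece of answer C is shown below; every ARN is a placeholder, and the IAM roles are assumed to already grant Firehose permission to read the stream, invoke the function, and write to the bucket.

```python
import boto3

firehose = boto3.client("firehose")

# Kinesis data stream as the source, a Lambda transform in flight, and S3 as the destination.
firehose.create_delivery_stream(
    DeliveryStreamName="ingest-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/ingest",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-stream",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-s3",
        "BucketARN": "arn:aws:s3:::ingest-data-lake",
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                "Type": "Lambda",
                "Parameters": [{
                    "ParameterName": "LambdaArn",
                    "ParameterValue": "arn:aws:lambda:us-east-1:111122223333:function:transform-records",
                }],
            }],
        },
    },
)
```

The API Gateway side would then simply put incoming records onto the Kinesis data stream, leaving all servers out of the picture.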
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
B. Use AWS Backup to create backup schedules and retention policies for the table.
C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle
configuration for the S3 bucket.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to
back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
Correct Answer: B
AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS resources. It
allows you to create backup policies and schedules to automatically back up your DynamoDB tables on a regular basis. You can also
specify retention policies to ensure that your backups are retained for the required period of time. This solution is fully automated and
requires minimal maintenance, making it the most operationally efficient option.
upvoted 8 times
Option C, creating an on-demand backup of the table and storing it in an S3 bucket, is also a viable option but it requires manual
intervention and does not provide the automation and scheduling capabilities of AWS Backup.
Option D, using Amazon EventBridge (CloudWatch Events) and a Lambda function to back up the table and store it in an S3 bucket, is
also a viable option but it requires more complex setup and maintenance compared to using AWS Backup.
upvoted 7 times
This solution is more efficient compared to the other options because it provides a centralized and automated backup management
approach specifically designed for AWS services. It eliminates the need to manually configure and maintain backup processes, making it
easier to ensure data retention compliance without significant operational effort.
upvoted 2 times
AWS Backup is a fully managed service that makes it easy to centralize and automate the backup of data across AWS resources. It can be
used to create backup schedules and retention policies for DynamoDB tables, which will ensure that the data is retained for the desired
period of 7 years. This solution will provide the most operationally efficient method for meeting the requirements because it requires
minimal effort to set up and manage.
upvoted 3 times
A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not
be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very
quickly.
What should a solutions architect recommend?
D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
Correct Answer: A
With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore, if you can forecast your capacity requirements, you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further.
upvoted 1 times
Enabling auto scaling allows DynamoDB to automatically adjust the provisioned capacity up or down based on the actual usage. This is
beneficial in handling quick traffic spikes without manual intervention and ensuring that the required capacity is available to handle
increased load efficiently. Auto scaling helps to optimize costs by dynamically adjusting the capacity to match the demand, avoiding
overprovisioning during periods of low usage.
A. Creating a DynamoDB table in on-demand capacity mode may not be the most cost-effective solution in this scenario. On-demand capacity mode charges you based on the actual usage of read and write requests, which can be beneficial for sporadic or unpredictable workloads. However, it may not be the optimal choice if the table is not used on most mornings.
upvoted 7 times
beginnercloud 4 months, 1 week ago
Selected Answer: A
Correct answer is A
- You create new tables with unknown workloads.
- You have unpredictable application traffic.
- You prefer the ease of paying for only what you use.
upvoted 1 times
"This means that provisioned capacity is probably best for you if you have relatively predictable application traffic, run applications whose
traffic is consistent, and ramps up or down gradually.
Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic
and also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable
workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience."
upvoted 2 times
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 3 times
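For what it's worth, switching an existing table (hypothetical name below) to on-demand capacity is a single API call:

import boto3

dynamodb = boto3.client("dynamodb")

# Flip the billing mode so the table is charged per request instead of
# per provisioned read/write capacity unit.
dynamodb.update_table(
    TableName="EveningTrafficTable",
    BillingMode="PAY_PER_REQUEST",
)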
Question #80 Topic 1
A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A
solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI
is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt
EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?
A. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner's AWS account to use the key.
B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to allow
the MSP Partner's AWS account to use the key.
C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to trust a
new KMS key that is owned by the MSP Partner for encryption.
D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account. Encrypt the S3 bucket with a new KMS
key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.
Correct Answer: B
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 15 times
Additionally, modifying the key policy to allow the MSP Partner's account to use the KMS customer managed key that encrypts the EBS snapshots ensures that the MSP Partner has the necessary permissions to access and use the key for decryption.
upvoted 2 times
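A minimal sketch of option B with boto3 (the account ID, AMI ID, snapshot ID, and the exact set of KMS actions below are illustrative assumptions):

import boto3

ec2 = boto3.client("ec2")
PARTNER_ACCOUNT = "111122223333"  # hypothetical MSP Partner account ID

# Share the AMI with the partner account only (launchPermission).
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": PARTNER_ACCOUNT}]},
)

# The EBS snapshot(s) behind the AMI must be shared as well.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[PARTNER_ACCOUNT],
)

# Statement to merge into the customer managed key's policy so the
# partner account can use the key for the encrypted snapshots.
key_policy_statement = {
    "Sid": "AllowMSPPartnerUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant", "kms:ReEncrypt*"],
    "Resource": "*",
}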
Explanation why..
Making the AMI and snapshots publicly available is not a secure option, as it would allow anyone with access to the AMI to use it. Best practice would be to modify the launchPermission property of the AMI and share it with the MSP Partner's AWS account only. This ensures that the AMI is shared only with the MSP Partner and is encrypted with a key that they are authorised to use.
upvoted 1 times
The most secure way for the solutions architect to share the AMI with the MSP Partner's AWS account would be to modify the
launchPermission property of the AMI and share it with the MSP Partner's AWS account only. The key policy should also be modified to
allow the MSP Partner's AWS account to use the key. This ensures that the AMI is only shared with the MSP Partner and is encrypted with a
key that they are authorized to use.
upvoted 3 times
Option C, modifying the key policy to trust a new KMS key owned by the MSP Partner, is also not a secure option as it would involve
sharing the key with the MSP Partner, which could potentially compromise the security of the data encrypted with the key.
Option D, exporting the AMI to an S3 bucket in the MSP Partner's AWS account and encrypting the S3 bucket with a new KMS key
owned by the MSP Partner, is also not the most secure option as it involves sharing the AMI and a new key with the MSP Partner, which
could potentially compromise the security of the data.
upvoted 6 times
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while
adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The
solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
Correct Answer: C
Option B suggests using network usage as a scaling metric, which may not be directly related to the number of jobs to be processed. The number of items in the SQS queue provides a more accurate metric for scaling based on the workload.
upvoted 4 times
Bmarodi 4 months, 2 weeks ago
Selected Answer: C
C for sure
upvoted 1 times
This design satisfies the requirements of the application by using Amazon Simple Queue Service (SQS) as durable storage for the job items
and Amazon Elastic Compute Cloud (EC2) Auto Scaling to add and remove nodes based on the number of items in the queue. The
processor application can be run in parallel on multiple nodes, and the use of launch templates allows for flexibility in the configuration of
the EC2 instances.
upvoted 4 times
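A minimal sketch of the scaling policy in option C (the group name, queue name, and target value are placeholders; AWS's own guidance is to scale on a backlog-per-instance custom metric, which refines the same idea):

import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy that keeps the average number of visible messages
# in the job queue around the target value by adding/removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-processor-asg",
    PolicyName="scale-on-sqs-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "job-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)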
Option B is incorrect because it uses network usage as the scaling trigger instead of the number of items in the queue, and option A uses CPU usage.
Why use SQS instead of SNS? The question says the processes run in parallel, and SNS can fan out to multiple consumers. (The deciding requirement is that job items must be durably stored, which SQS provides and SNS does not.)
upvoted 1 times
A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into
AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?
A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30
days before any certificate will expire.
B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch
Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant
resource.
C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on
Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the
rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).
Correct Answer: D
Use the ACM API in Amazon EventBridge to configure the ACM Certificate Approaching Expiration event.
Create a custom EventBridge rule to receive email notifications when certificates are nearing the expiration date.
Use AWS Config to check for certificates that are nearing the expiration date.
If you use AWS Config for this resolution, then be aware of the following:
Before you set up the AWS Config rule, create the Amazon Simple Notification Service (Amazon SNS) topic and EventBridge rule. This
makes sure that all non-compliant certificates invoke a notification before the expiration date.
Activating AWS Config incurs an additional cost based on usage. For more information, see AWS Config pricing.
https://repost.aws/knowledge-center/acm-certificate-expiration
upvoted 2 times
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 4 times
https://aws.amazon.com/about-aws/whats-new/2021/03/aws-certificate-manager-provides-certificate-expiry-monitoring-
through-amazon-cloudwatch/
upvoted 1 times
The first of the two options I describe is to use the ACM built-in Certificate Expiration event, which is raised through Amazon EventBridge,
to invoke a Lambda function. In this option, the function is configured to publish the result as a finding in Security Hub, and also as an SNS
topic used for email subscriptions. As a result, an administrator can be notified of a specific expiring certificate, or an IT service
management (ITSM) system can automatically open a case or incident through email or SNS.
https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
upvoted 6 times
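A hedged sketch of option D's wiring (the rule name and Lambda ARN are placeholders; the detail-type shown is the event ACM emits as certificates approach expiration):

import json
import boto3

events = boto3.client("events")

# Match ACM's "approaching expiration" events and hand them to a Lambda
# function that filters for <= 30 days remaining and publishes to SNS.
events.put_rule(
    Name="acm-certificate-expiry",
    EventPattern=json.dumps({
        "source": ["aws.acm"],
        "detail-type": ["ACM Certificate Approaching Expiration"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="acm-certificate-expiry",
    Targets=[{
        "Id": "notify-security-team",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:cert-expiry-alert",
    }],
)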
A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it
wants to optimize site loading times for new European users. The site's backend must remain in the United States. The product is being launched
in a few days, and an immediate solution is needed.
What should the solutions architect recommend?
A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
B. Move the website to Amazon S3. Use Cross-Region Replication between Regions.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
Correct Answer: C
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML,
CSS, JavaScript, images, and videos. By using CloudFront, the company can distribute the content of their website from edge locations that
are closer to the users in Europe, reducing the loading times for these users.
To use CloudFront, the company can set up a custom origin pointing to their on-premises servers in the United States. CloudFront will
then cache the content of the website at edge locations around the world and serve the content to users from the location that is closest
to them. This will allow the company to optimize the loading times for their European users without having to move the backend of the
website to a different region.
upvoted 19 times
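As a trimmed, hypothetical sketch of option C (the domain name and IDs are placeholders; the cache policy ID shown is intended to be the managed CachingDisabled policy, which suits dynamic content):

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": f"eu-launch-{int(time.time())}",
        "Comment": "Dynamic site fronted by CloudFront, origin stays on premises",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "on-prem-origin",
                "DomainName": "www.example.com",  # public DNS name of the on-premises servers
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "on-prem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",  # assumed managed CachingDisabled policy
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
        },
    }
)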
Option B (move the website to Amazon S3 and use Cross-Region Replication between Regions) would not be an immediate solution as
it would require time to set up and migrate the website.
Option D (use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers) would not be suitable because it
would not improve the loading times for users in Europe.
upvoted 6 times
CloudFront can cache static content close to European users using edge locations, improving site performance.
The custom origin feature allows seamlessly integrating the CloudFront CDN with existing on-premises servers.
No changes are needed to the site backend or servers. CloudFront just acts as a globally distributed cache.
This can be set up very quickly, meeting the launch deadline.
Other options like migrating to EC2 or S3 would require more time and changes. CloudFront is an easier lift.
Route 53 geoproximity routing alone would not improve performance much without a CDN.
upvoted 1 times
B. Moving the website to S3 and implementing Cross-Region Replication would distribute the website's content across multiple regions, including Europe. However, S3 is primarily used for static content hosting, and it does not provide the server-side processing capabilities necessary for dynamic website functionality.
D. Using a geoproximity routing policy in Route 53 would allow you to direct traffic to the on-premises servers based on the geographic
location of the users. However, this option does not optimize site loading times for European users as it still requires them to access the
website from the on-premises servers in the United States. It does not leverage the benefits of content caching and edge locations for
improved performance.
upvoted 2 times
Bmarodi 4 months, 1 week ago
Selected Answer: C
C is best solution.
upvoted 1 times
https://www.examtopics.com/discussions/amazon/view/27898-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon
EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10%
CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans
to implement automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.
Correct Answer: B
Spot Instances are a cost-effective choice for non-critical, flexible workloads that can be interrupted. Since the development and test EC2 instances are needed for at least 8 hours each day and can be stopped when not in use, they would be a good fit for Spot Instances.
upvoted 2 times
Option A is the correct answer because it meets the company's requirements for cost-effectively running the development and test EC2
instances and the production EC2 instances.
upvoted 1 times
Option C is not the correct solution because Spot blocks are a variant of Spot Instances that offer a guaranteed capacity and
duration, but they are not available for all instance types and are not necessarily the most cost-effective option in all cases. In this
case, it would be more cost-effective to use Spot Instances for the development and test EC2 instances, as they can be interrupted
when not in use.
upvoted 1 times
https://www.examtopics.com/discussions/amazon/view/80956-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #85 Topic 1
A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new
regulatory requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?
A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.
B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically.
C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only.
D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-
only mode.
Correct Answer: A
S3 Versioning allows multiple versions of an object to be stored in the same bucket. This means that when an object is modified or
deleted, the previous version is preserved. S3 Object Lock adds additional protection by allowing objects to be placed under a legal hold or
retention period, during which they cannot be deleted or modified. Together, S3 Versioning and S3 Object Lock can be used to meet the
requirement of not allowing documents to be modified or deleted after they are stored.
upvoted 6 times
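A rough sketch of option A (the bucket name, retention mode, and period are placeholders): Object Lock has to be enabled at bucket creation, which also turns on versioning, and a default retention rule can then be applied.

import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created; this also enables versioning.
s3.create_bucket(
    Bucket="example-regulated-documents",
    ObjectLockEnabledForBucket=True,
)

# Default retention: every new object version is locked in COMPLIANCE mode,
# so it cannot be overwritten or deleted for the retention period.
s3.put_object_lock_configuration(
    Bucket="example-regulated-documents",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},  # placeholder period
    },
)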
Option C, storing the documents in an S3 bucket with S3 Versioning enabled and configuring an ACL to restrict all access to read-only,
would also not prevent the documents from being modified or deleted, since an ACL only controls access to the object and does not
prevent it from being modified or deleted.
Option D, storing the documents on an Amazon Elastic File System (Amazon EFS) volume and accessing the data in read-only mode,
would prevent the documents from being modified, but would not prevent them from being deleted.
upvoted 2 times
B. Configuring an S3 Lifecycle policy to archive documents periodically does not guarantee the prevention of document modification or
deletion after they are stored.
C. Enabling S3 Versioning alone does not prevent modifications or deletions of objects. Configuring an ACL does not guarantee the
prevention of modifications or deletions by authorized users.
D. Using EFS does not prevent modifications or deletions of the documents by users or processes with write permissions.
upvoted 2 times
Bmarodi 4 months, 1 week ago
Selected Answer: A
S3 Versioning and S3 Object Lock enabled meet the requirements, hence A is correct ans.
upvoted 2 times
A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a
secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.
Which solution meets these requirements?
A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS
Secrets Manager.
B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to
access OpsCenter.
C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to
retrieve credentials and access the database.
D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The
web server should be able to decrypt the files and access the database.
Correct Answer: A
AWS Secrets Manager is a service that helps you store, manage, and rotate secrets. Secrets Manager is a good choice for storing database
user credentials because it is secure and scalable.
IAM permissions can be used to grant web servers access to AWS Secrets Manager. This will allow the web servers to retrieve the database
user credentials from Secrets Manager and use them to connect to the database.
Rotation of user credentials can be automated using Secrets Manager. This will ensure that the database user credentials are rotated on a
regular basis, meeting the security requirement.
upvoted 1 times
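A minimal sketch of how the web servers might fetch the credentials (the secret name and JSON keys are assumptions; rotation itself is configured on the secret, for example with a managed rotation Lambda):

import json
import boto3

secrets = boto3.client("secretsmanager")

# The web server's instance role only needs secretsmanager:GetSecretValue
# on this one secret; no passwords live in code or config files.
response = secrets.get_secret_value(SecretId="prod/webapp/mysql")
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]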
C. Storing credentials in an S3 bucket may provide some level of security, but it lacks the additional features and security controls offered
by AWS Secrets Manager.
D. While using KMS for encryption is a good practice, managing credentials directly on the web server file system can introduce
complexities and potential security risks. It can be challenging to securely manage and rotate credentials across multiple web servers,
especially when considering scalability and automation.
In summary, option A is the recommended solution as it leverages AWS Secrets Manager, which is purpose-built for securely storing and
managing secrets, and provides the necessary IAM permissions to allow the web servers to access the credentials securely.
upvoted 3 times
Option A is correct because it meets the requirements specified in the question: a secure method for the web servers to connect to the
database while meeting a security requirement to rotate user credentials frequently. AWS Secrets Manager is designed specifically to store
and manage secrets like database credentials, and it provides an automated way to rotate secrets every time they are used, ensuring that
the secrets are always fresh and secure. This makes it a good choice for storing and managing the database user credentials in a secure
way.
upvoted 4 times
Option C, storing the database user credentials in a secure Amazon S3 bucket, is not a secure option because S3 buckets are not
designed to store secrets. While it is possible to store secrets in S3, it is not recommended because S3 is not a secure secrets
management service and does not provide the same level of security and automation as AWS Secrets Manager.
upvoted 3 times
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save
customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish
database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?
A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the
RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the
database.
C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to
the database.
D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the
queue and stores the customer data in the database.
Correct Answer: A
RDS Proxy minimizes application disruption from outages affecting the availability of your database by automatically connecting to a new
database instance while preserving application connections. When failovers occur, RDS Proxy routes requests directly to the new database
instance. This reduces failover times for Aurora and RDS databases by up to 66%.
upvoted 35 times
With RDS Proxy, you expose only a single endpoint for requests to hit, and any failure of the primary DB in a Multi-AZ configuration will be managed automatically by RDS Proxy, which points connections to the new primary DB. Hence RDS Proxy is the most efficient way of solving the issue, as no additional code change is required.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.howitworks.html
upvoted 8 times
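A rough sketch of the Lambda side of option A (the endpoint, credentials, and database name are placeholders; PyMySQL is just one common client library): the function connects to the proxy endpoint instead of the cluster endpoint, and the proxy pools and re-routes the underlying connections.

import os
import pymysql  # third-party MySQL client, bundled with the Lambda deployment package

# The only change in the application is the hostname: point at the RDS Proxy
# endpoint rather than the Aurora cluster endpoint.
connection = pymysql.connect(
    host=os.environ["RDS_PROXY_ENDPOINT"],  # e.g. my-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="customers",
    connect_timeout=5,
)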
Options C and D involve storing data locally or using Amazon SQS, but these approaches might not ensure data consistency and
availability during database upgrades, which is a critical requirement.
In summary, using Amazon RDS Proxy (Option A) is the best approach to address the challenge of maintaining data availability and
consistency during database upgrades for Lambda functions that interact with the Amazon Aurora MySQL database.
upvoted 1 times
zjcorpuz 2 months, 1 week ago
A. Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon
RDS for MariaDB, Amazon RDS..
https://aws.amazon.com/rds/proxy/
upvoted 1 times
This clearly states that if there is a standby instance, then RDS Proxy would help connect to that instance. In the question, it is not mentioned that the database is highly available or that it is in a Multi-AZ environment.
upvoted 1 times
B. Increasing the Lambda run time and implementing a retry mechanism can help mitigate some failures, but it does not provide a reliable
solution for storing customer data during database upgrades. The issue is not with the Lambda functions' execution time or retry logic,
but with the database connection failures during upgrades.
C. Lambda local storage is temporary and is not designed for durable data storage. It is not a reliable solution for persisting customer
data, especially during database upgrades.
In summary, option D is the recommended solution as it utilizes an SQS FIFO queue to store customer data. By decoupling the data
storage from the database connection, the Lambda can store the data reliably in the queue even during database upgrades. A separate
Lambda can then poll the queue and save the customer data to the database, ensuring no data loss during upgrade periods.
upvoted 2 times
RDS Proxy supports Aurora MySQL and Amazon RDS for MySQL. It is designed for exactly this use case.
upvoted 1 times
MostafaWardany 4 months, 1 week ago
Selected Answer: D
RDS Proxy helps with HA but is not suitable for storing data during a DB outage, so I think D is the correct answer.
upvoted 1 times
Question #88 Topic 1
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that
is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants
to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
B. Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.
D. Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.
Correct Answer: B
if they are looking to reduce overall data transfer cost, then B makes sense because the data does not leave the AWS network, thus data
transfer cost should be lower technically?
A. makes sense because the US company saves money, but the European company is paying for the charges so there is no overall saving
in cost when you look at the big picture
I will go for B because they are not explicitly stating that they want the other company to pay for the charges
upvoted 44 times
The key requirements are to minimize data transfer costs while sharing large amounts of data with the marketing firm.
S3 Cross-Region Replication will replicate objects from the source bucket to a destination bucket in a different region. This avoids any data
transfer charges for the company when the marketing firm accesses the replicated data in their own region
upvoted 1 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times
B. Enabling cross-region replication would copy the data from the company's S3 to the marketing firm's S3, but it would incur additional
data transfer costs. This solution doesn't focus on minimizing data transfer costs for the company.
D. Using S3 Intelligent-Tiering and syncing the bucket to the marketing firm's S3 may help optimize storage costs by automatically moving
objects to the most cost-effective storage class. However, it does not specifically address the goal of minimizing data transfer costs for the
company.
In summary, option C is the recommended solution as it allows the marketing firm to access the company's S3 through cross-account
access. This enables the marketing firm to retrieve the data directly from the company's bucket without incurring additional data transfer
costs. It ensures that the survey company retains control over its data and can minimize its own data transfer expenses.
upvoted 6 times
A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM
user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3
bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?
B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates.
D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS
key.
Correct Answer: A
MFA Delete requires additional authentication to permanently delete an object version. This prevents accidental deletion
upvoted 2 times
C. Adding an S3 Lifecycle policy to deny the delete action during audit dates would prevent intentional deletions during specific time
periods. However, it does not address accidental deletions that can occur at any time.
D. Using KMS for encryption and restricting access to the KMS key provides additional security for the data stored in the S3 bucket. However, it does not directly prevent accidental deletion of documents in the S3 bucket.
Enabling versioning and MFA Delete on the S3 bucket (option A) is the most appropriate solution for securing the audit documents. Versioning
ensures that multiple versions of the documents are stored, allowing for easy recovery in case of accidental deletions. Enabling MFA
Delete requires the use of multi-factor authentication to authorize deletion actions, adding an extra layer of protection against unintended
deletions.
upvoted 2 times
This will secure the audit documents by providing an additional layer of protection against accidental deletion. With versioning enabled,
any deleted or overwritten objects in the S3 bucket will be preserved as previous versions, allowing the company to recover them if
needed. With MFA Delete enabled, any delete request made to the S3 bucket will require the use of an MFA code, which provides an
additional layer of security.
upvoted 2 times
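A hedged sketch of option A (the bucket name, MFA device ARN, and code are placeholders; note that MFA Delete can only be enabled through the API/CLI by the bucket owner's root credentials):

import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete in one call. The MFA argument is the
# serial number (or ARN) of the root account's MFA device plus the current code.
s3.put_bucket_versioning(
    Bucket="example-audit-documents",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)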
Option C: Adding an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates,
which would not provide protection against accidental deletion outside of the specified audit dates.
Option D: Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from
accessing the KMS key, would not provide protection against accidental deletion.
upvoted 2 times
A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance.
A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must
report a final total during business hours.
The company's development team notices that the database performance is inadequate for development tasks when the script is running. A
solutions architect must recommend a solution to resolve this issue.
Which solution will meet this requirement with the LEAST operational overhead?
B. Create a read replica of the database. Configure the script to query only the read replica.
C. Instruct the development team to manually export the entries in the database at the end of each day.
D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Correct Answer: D
1 - Somewhat true.
2 - Not true for our case.
3 - Also not true. The data changes throughout the day.
To my understanding, caching has to do with millisecond results and high-performance reads. These are not the issues mentioned in the question, therefore B.
upvoted 11 times
Configuring the read replica is much easier than configuring and integrating a new service.
upvoted 1 times
C. Instructing the development team to manually export the entries in the database introduces manual effort and is not a scalable or
efficient solution.
D. While using ElastiCache for caching can improve read performance for common queries, it may not be the most suitable solution for
the scenario described. Caching is effective for reducing the load on the database for frequently accessed data, but it may not directly
address the performance issue during the script execution.
Creating a read replica of the database (option B) provides a scalable solution that offloads read traffic from the primary database. The
script can be configured to query the read replica, reducing the impact on the primary database during the script execution.
upvoted 4 times
The other way of looking at this question is: ElastiCache could be beneficial for development tasks (and hence improve the overall DB performance). But then, Option D mentions that the queries for the scripts are cached, and not the DB content (or metadata). This may not necessarily improve the performance of the DB.
A read replica is a fully managed database that is kept in sync with the primary database. Read replicas allow you to scale out read-heavy
workloads by distributing read queries across multiple databases. This can help improve the performance of the database and reduce the
impact on the primary database.
By configuring the script to query the read replica, the development team can continue to use the primary database for development
tasks, while the script's queries will be directed to the read replica. This will reduce the load on the primary database and improve its
performance.
upvoted 6 times
Option C (instructing the development team to manually export the entries in the database at the end of each day) would not be an
efficient solution as it would require manual effort and could lead to data loss if the export process is not done properly.
Option D (using Amazon ElastiCache to cache the common queries) could improve the performance of the script's queries, but it would
not address the issue of the script's queries impacting the primary database.
upvoted 4 times
Question #91 Topic 1
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and
read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
Correct Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 23 times
A gateway endpoint is a VPC endpoint that you can use to connect to Amazon S3 from within your VPC. Traffic between your VPC and
Amazon S3 never leaves the Amazon network, so it doesn't traverse the internet. This means you can access Amazon S3 without the need
to use a NAT gateway or a VPN connection.
***WRONG***
Option B (creating an S3 bucket in a private subnet) is not a valid solution because S3 buckets do not have subnets.
Option C (creating an S3 bucket in the same AWS Region as the EC2 instances) is not a requirement for meeting the given security
regulations.
Option D (configuring a NAT gateway in the same subnet as the EC2 instances) is not a valid solution because it would allow traffic to leave
the VPC and travel across the Internet.
upvoted 11 times
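A minimal sketch of option A (the VPC and route table IDs are placeholders): creating the gateway endpoint adds S3 routes to the selected route tables so the instances reach S3 over the AWS network.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; traffic to S3 from the associated route tables
# stays on the AWS network instead of going out through the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)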
C. Creating an S3 in the same Region as the EC2 does not inherently prevent traffic from traversing the internet.
D. Configuring a NAT gateway allows outbound internet connectivity for resources in private subnets, but it does not provide a direct and
secure connection to the S3 service. The traffic from the EC2 to the S3 API would still traverse the internet.
The most suitable solution is to configure an S3 gateway endpoint (option A). It provides a secure and private connection between the VPC
and the S3 service without requiring the traffic to traverse the internet. With an S3 gateway endpoint, the EC2 can access the S3 API
directly within the VPC, meeting the security requirement of preventing traffic from traveling across the internet.
upvoted 2 times
Bmarodi 4 months, 1 week ago
Selected Answer: A
Configure an S3 gateway endpoint is answer.
upvoted 1 times
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the
application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.
Correct Answer: AC
A VPC gateway endpoint allows private access to S3 from within the VPC without needing internet access. This keeps the traffic secure
within the AWS network.
The bucket policy should limit access to only the application tier, not make the objects public. This restricts access to the sensitive data to
only the authorized application tier.
upvoted 1 times
C) Create a bucket policy that limits access to only the application tier running in the VPC.
The key requirements are secure access to the S3 bucket from EC2 instances in the VPC.
A VPC endpoint for S3 allows connectivity from the VPC to S3 without needing internet access. The bucket policy should limit access only to
the VPC by whitelisting the VPC endpoint.
upvoted 2 times
B. It is important to restrict access to the bucket and its objects only to authorized entities.
C. This helps maintain the confidentiality of the sensitive user information by limiting access to authorized resources.
D. In this case, since the EC2 instances are accessing the S3 bucket from within the VPC, using IAM user credentials is unnecessary and can
introduce additional security risks.
E. a NAT instance to access the S3 bucket adds unnecessary complexity and overhead.
In summary, the recommended steps to provide secure access to the S3 from the application tier running on EC2 inside a VPC are to
configure a VPC gateway endpoint for S3 within the VPC (option A) and create a bucket policy that limits access to only the application tier
running in the VPC (option C).
upvoted 2 times
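A hedged sketch of the bucket policy side (the bucket name and endpoint ID are placeholders): denying requests that do not arrive through the VPC endpoint is one common way to express "only the application tier in the VPC".

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessUnlessFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-sensitive-user-data",
            "arn:aws:s3:::example-sensitive-user-data/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

# Caution: a broad Deny like this also blocks console/administrative access
# from outside the VPC, so scope it carefully before applying it.
s3.put_bucket_policy(Bucket="example-sensitive-user-data", Policy=json.dumps(policy))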
Bmarodi 4 months, 1 week ago
Selected Answer: AC
A & C the correct solutions.
upvoted 2 times
Additionally, you should create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance. This will allow the
EC2 instance to access the S3 bucket using the IAM user's permissions.
Option A, configuring a VPC gateway endpoint for Amazon S3 within the VPC, would not provide any additional security for the S3 bucket.
Option B, creating a bucket policy to make the objects in the S3 bucket public, would not provide sufficient security for sensitive user
information.
Option E, creating a NAT instance and having the EC2 instances use the NAT instance to access the S3 bucket, would not provide any
additional security for the S3 bucket
upvoted 1 times
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
upvoted 3 times
To provide secure access to the S3 bucket from the application tier running on Amazon EC2 instances inside the VPC, the solutions
architect should take the following combination of steps:
Option A: Configure a VPC gateway endpoint for Amazon S3 within the VPC.
Option C: Create a bucket policy that limits access to only the application tier running in the VPC.
Note that option A alone is not sufficient, because configuring a VPC gateway endpoint for Amazon S3 does not by itself control access to the S3 bucket; that is why it is combined with the bucket policy in option C.
Option B is incorrect because making the objects in the S3 bucket public would not provide secure access to the bucket.
Option E is incorrect because creating a NAT instance is not necessary to provide secure access to the S3 bucket from the application
tier running on EC2 instances in the VPC.
upvoted 1 times
Question #93 Topic 1
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase
the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development
team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience
unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend replacement architecture that alleviates the application latency issue. The replacement architecture also
must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and
restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging
database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a
backup and restore process that uses the mysqldump utility.
Correct Answer: B
To alleviate the application latency issue, the recommended solution is to use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for
production, and use database cloning to create the staging database on-demand. This allows the development team to continue using the
staging environment without delay, while also providing elasticity and availability for the production application.
Option C: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Using the standby instance for the
staging database is not the recommended solution because it does not give the development team the ability to continue using the
staging environment without delay. The standby instance is used for failover in case of a production instance failure, and it is not
intended for use as a staging environment.
upvoted 12 times
C. https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-
option/#:~:text=read%20replicas.-,Amazon%20RDS,-now%20offers%20Multi
B.https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html#:~:text=cloning%20works.-,Aurora%20
cloning,-is%20especially%20useful
upvoted 1 times
Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-
demand.
Database cloning creates an instantly available copy of the production database that can be used for staging. This avoids any export or restoration delays.
upvoted 1 times
B. With Aurora, you can create a clone of the production database quickly and efficiently, without the need for time-consuming backup
and restore processes. The development team can spin up the staging database on-demand, eliminating delays and allowing them to
continue using the staging environment without interruption.
C. Using the standby instance for the staging database would not provide the development team with the ability to use the staging
environment without delay. The standby instance is designed for failover purposes and may not be readily available for immediate use.
D. Relying on a backup and restore process using the mysqldump utility would still introduce delays and impact application latency during
the data population phase.
upvoted 2 times
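A rough sketch of option B's cloning step (the cluster identifiers and instance class are placeholders): Aurora clones are created through the point-in-time-restore API with a copy-on-write restore type, so the staging cluster is available almost immediately.

import boto3

rds = boto3.client("rds")

# Copy-on-write clone of the production cluster for the staging environment.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora-mysql",
    DBClusterIdentifier="staging-aurora-mysql",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# The clone still needs at least one DB instance to accept connections.
rds.create_db_instance(
    DBClusterIdentifier="staging-aurora-mysql",
    DBInstanceIdentifier="staging-aurora-mysql-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)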
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple
processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files.
On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances
to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda
function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is
uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.
Correct Answer: C
B. While using S3 event notifications and SQS for decoupling is a good approach, using EC2 to process the data would introduce
operational overhead in terms of managing and scaling the EC2.
D. Using EventBridge and Kinesis Data Streams for this use case would introduce additional complexity and operational overhead
compared to the other options. EventBridge and Kinesis are typically used for real-time streaming and processing of large volumes of
data.
In summary, option C is the recommended solution as it provides a serverless and scalable approach for processing uploaded files using
S3 event notifications, SQS, and Lambda. It offers low operational overhead, automatic scaling, and efficient handling of varying demand.
Storing the resulting JSON file in DynamoDB aligns with the requirement of saving the data for later analysis.
upvoted 5 times
SQS queues the notifications and Lambda scales automatically to handle spikes and drops in file uploads. There is no EMR cluster or EC2 fleet to manage.
upvoted 1 times
beginnercloud 4 months, 1 week ago
Selected Answer: C
Option C is correct - DynamoDB is a NoSQL database that natively supports JSON documents
upvoted 1 times
AWS Lambda is a serverless computing service that allows you to run code without the need to provision or manage infrastructure. When
a new file is uploaded to Amazon S3, it can trigger an event notification which sends a message to an SQS queue. The Lambda function
can then be set up to be triggered by messages in the queue, and it can process the data and store the resulting JSON file in Amazon
DynamoDB.
upvoted 3 times
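A minimal sketch of the Lambda function in option C (the table name and the transform itself are placeholders): each SQS record wraps an S3 event notification, and the handler reads the object, transforms it, and writes the JSON result to DynamoDB.

import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ProcessedFiles")  # hypothetical results table


def handler(event, context):
    for sqs_record in event["Records"]:
        # The SQS message body is the S3 event notification JSON.
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

            # Placeholder "simple processing" step.
            result = {"source_key": key, "length": len(body), "content": body}

            table.put_item(Item={"pk": key, "result": json.dumps(result)})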
An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB
instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions
architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?
A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.
B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.
C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.
D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.
Correct Answer: D
Creating read replicas allows the application to offload read traffic from the source database, improving its performance. The read replicas
should be configured with the same compute and storage resources as the source database to ensure that they can handle the read
workload effectively.
upvoted 12 times
Amazon RDS now offers Multi-AZ deployments with readable standby instances (also called Multi-AZ DB cluster deployments) . You should
consider using Multi-AZ DB cluster deployments with two readable DB instances if you need additional read capacity in your Amazon RDS
Multi-AZ deployment and if your application workload has strict transaction latency requirements such as single-digit milliseconds
transactions.
https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-
option/#:~:text=read%20replicas.-,Amazon%20RDS,-now%20offers%20Multi
upvoted 1 times
The key requirements are to quickly optimize performance by isolating reads from writes.
Read replicas allow read-only workloads to be directed to one or more replicas of the source RDS instance. This separates reporting or
analytics queries from transactional workloads.
The read replicas should have the same compute and storage as the source to provide equivalent performance for reads. Scaling down
the replicas would limit read performance.
Using Multi-AZ alone does not achieve read/write separation. The secondary AZ instance is for disaster recovery, not performance.
upvoted 4 times
B. The secondary instance in a Multi-AZ deployment is intended for failover and backup purposes, not for actively serving read traffic. It
operates in a standby mode and is not optimized for handling read queries efficiently.
C. Configuring the read replicas with half of the compute and storage resources as the source database might not be optimal. It's
generally recommended to configure the read replicas with the same compute and storage resources as the source database to ensure
they can handle the read workload effectively.
D. Configuring the read replicas with the same compute and storage resources as the source database ensures that they can handle the
read workload efficiently and provide the required performance boost.
upvoted 3 times
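A minimal sketch of option D (the identifiers and instance class are placeholders): the replica is created with the same instance class as the source so read performance matches.

import boto3

rds = boto3.client("rds")

# Read replica sized identically to the source DB instance; point the
# reporting script at the replica's endpoint once it is available.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="product-db-replica-1",
    SourceDBInstanceIdentifier="product-db",
    DBInstanceClass="db.m6g.large",  # assumed to match the source instance class
)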
Bmarodi 4 months, 1 week ago
Selected Answer: D
D meets the requiremets.
upvoted 1 times
To optimize the application's performance and separate read traffic from write traffic, the solutions architect should recommend
creating read replicas for the database and configuring them to serve read requests. Option C and D both suggest creating read
replicas, but option D is a better choice because it configures the read replicas with the same compute and storage resources as the
source database.
Option A and B suggest changing the existing database to a Multi-AZ deployment, which would provide high availability by replicating
the database across multiple Availability Zones. However, it would not separate read and write traffic, so it is not the best solution for
optimizing application performance in this scenario.
upvoted 4 times
https://www.examtopics.com/discussions/amazon/view/46461-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
upvoted 2 times
Question #96 Topic 1
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:
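The policy document itself is not reproduced in this dump (it was an image). Based on the discussion below, it appears to combine an Allow on ec2:TerminateInstances conditioned on the 10.100.100.0/24 source IP range with a Deny outside us-east-1; a hypothetical reconstruction for readability:

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.0/24"}},
        },
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
    ],
}

Read that way, a request from 10.100.100.254 matches the Allow, and the Deny only applies outside us-east-1, which is consistent with the listed answer C.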
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
Correct Answer: C
Wondering though; this policy also allows terminating EC2 instances in us-east-1 even if your source IP is not 10.100.100.254, right? The idea is that since I do not deny this for the other source IP addresses, the Allow action is obsolete?
upvoted 1 times
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
The policy allows users to terminate EC2 instances only when their source IP is within the range 10.100.100.0/24.
However, there is a Deny statement that blocks users from terminating any EC2 instance in regions other than us-east-1.
So, when a user tries to terminate an EC2 instance from the IP 10.100.100.254 in the us-east-1 region, the Deny statement will take effect,
and the action will be denied. However, if the user tries to terminate an instance from the 10.100.100.0/24 IP range in any region other
than us-east-1, the Deny statement will not apply, and the Allow statement will permit the action.
upvoted 4 times
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company
wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and
integrated with Active Directory for access control.
Which solution will satisfy these requirements?
A. Configure Amazon EFS storage and set the Active Directory domain for authentication.
B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Correct Answer: D
Amazon FSx for Windows File Server is a fully managed file storage service that is designed to be used with Microsoft Windows workloads.
It is integrated with Active Directory for access control and is highly available, as it stores data across multiple availability zones.
Additionally, FSx can be used to migrate data from on-premises Microsoft Windows file servers to the AWS Cloud. This makes it a good fit
for the requirements described in the question.
upvoted 14 times
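A trimmed sketch of option D (the directory ID, subnets, and sizes are placeholders): the file system is joined to the existing Active Directory at creation time and deployed across two Availability Zones.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB, placeholder
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # two AZs for MULTI_AZ_1
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD (or AD Connector) directory ID
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0aaa1111",
        "ThroughputCapacity": 32,  # MB/s, placeholder
    },
)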
B. It may introduce additional complexity for this use case. Creating an SMB file share using AWS Storage Gateway would require
maintaining the gateway and managing the synchronization between on-premises and AWS storage.
C. S3 does not natively provide the SMB file protocol required for MS SharePoint and Windows shared file storage. While it is possible to
mount an S3 bucket as a volume using third-party tools or configurations, it is not recommended.
D. FSx for Windows File Server is a fully managed, highly available file storage service that is compatible with Microsoft Windows shared file
storage requirements. It provides native integration with AD, allowing for seamless access control and authentication using existing AD
user accounts.
upvoted 2 times
Amazon FSx provides the ability to migrate data from on-premises file servers to the cloud, using tools like AWS DataSync, Robocopy or
PowerShell. Once the data is migrated, you can continue to use the same tools and processes to manage and access the file shares as you
would on-premises.
Amazon FSx also provides features such as automatic backups, data encryption, and native multi-Availability Zone (AZ) deployments for
high availability. It can be easily integrated with other AWS services, such as Amazon S3, Amazon EFS, and AWS Backup, for additional
functionality and backup options.
upvoted 2 times
https://www.examtopics.com/discussions/amazon/view/29780-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3
bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS)
standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users
through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are
invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window
timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
Correct Answer: A
Correct Answer: C
upvoted 1 times
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-
timeout.html#:~:text=SQS%20sets%20a-,visibility%20timeout,-%2C%20a%20period%20of
upvoted 1 times
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
By changing the SQS standard queue to an SQS FIFO (First-In-First-Out) queue, you can ensure that messages are processed in the order
they are received and that each message is processed only once. FIFO queues provide exactly-once processing and eliminate duplicates.
Using the message deduplication ID feature of SQS FIFO queues, you can assign a unique identifier (such as the S3 object key) to each
message. SQS will check the deduplication ID of incoming messages and discard duplicate messages with the same deduplication ID. This
ensures that only unique messages are processed by the Lambda function.
This solution requires minimal operational overhead as it mainly involves changing the queue type and using the deduplication ID feature,
without requiring modifications to the Lambda function or adjusting timeouts.
upvoted 3 times
B. Changing the queue type from standard to FIFO requires additional considerations and changes to the application architecture. It may
involve modifying the event configuration and handling message deduplication IDs, which can introduce operational overhead.
D. Deleting messages immediately after reading them may lead to message loss if the Lambda encounters an error or fails to process the
image successfully. It does not guarantee message processing and can result in data loss.
C. By setting the visibility timeout to a value greater than the total time required for the Lambda to process the image and send the email,
you ensure that the message is not made visible to other consumers during processing. This prevents duplicate invocations of the
Lambda for the same message.
upvoted 2 times
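If you go with option C, the fix is a single queue attribute change. Below is a minimal boto3 sketch with a placeholder queue name and timeout value; the timeout only needs to exceed the Lambda function timeout plus the batch window, and AWS generally suggests about six times the function timeout for Lambda event sources.

```python
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.get_queue_url(QueueName="image-upload-events")["QueueUrl"]  # placeholder queue name

# Raise the visibility timeout above (function timeout + batch window) so a message
# is not redelivered while the Lambda function is still working on it.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "360"},  # seconds, placeholder value
)
```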
Abrar2022 4 months, 1 week ago
FIFO is a solution, but it requires operational overhead.
Increasing the visibility timeout requires far less operational overhead.
upvoted 3 times
A company is implementing a shared storage solution for a gaming application that is hosted in an on-premises data center. The company needs
the ability to use Lustre clients to access data. The solution must be fully managed.
Which solution meets these requirements?
A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the
file share.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin
server. Connect the application server to the file system.
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Correct Answer: D
Amazon FSx for Lustre is a fully managed file system that is designed for high-performance workloads, such as gaming applications. It
provides a high-performance, scalable, and fully managed file system that is optimized for Lustre clients, and it is fully integrated with
Amazon EC2. It is the only option that meets the requirements of being fully managed and able to support Lustre clients.
upvoted 9 times
A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to
the file share.
By using AWS Storage Gateway in file gateway mode, you can extend your on-premises data center storage into the AWS cloud. The file
share created on AWS Storage Gateway can use the necessary client protocol (such as Lustre), which would allow the Lustre clients in your
on-premises data center to access the data stored on AWS Storage Gateway.
This solution enables you to use Lustre clients to access data, while still keeping the gaming application hosted in your on-premises data
center. AWS Storage Gateway provides a fully managed solution for this hybrid scenario, allowing seamless integration between on-
premises and AWS cloud storage.
upvoted 2 times
B. Creating a Windows file share on an EC2 Windows instance is suitable for Windows-based file sharing, but it does not provide the
required Lustre client access. Lustre is a high-performance parallel file system primarily used in high-performance computing (HPC)
environments.
C. EFS does not natively support Lustre client access. Although EFS is a managed file storage service, it is designed for general-purpose file
storage and is not optimized for Lustre workloads.
D. Amazon FSx for Lustre is a fully managed file system optimized for high-performance computing workloads, including Lustre clients. It
provides the ability to use Lustre clients to access data in a managed and scalable manner. By choosing this option, the company can
benefit from the performance and manageability of Amazon FSx for Lustre while meeting the requirement of Lustre client access.
upvoted 2 times
But the on-premises server couldn't access the EFS with good performance, so the question is a bit absurd!
upvoted 1 times
A company's containerized application runs on an Amazon EC2 instance. The application needs to download security certificates before it can
communicate with other business applications. The company wants a highly secure solution to encrypt and decrypt the certificates in near real
time. The solution also needs to store data in highly available storage after the data is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as needed. Control access to the data by
using fine-grained IAM access.
B. Create an AWS Lambda function that uses the Python cryptography library to receive and perform encryption operations. Store the function
in an Amazon S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon S3.
D. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon Elastic Block Store (Amazon EBS) volumes.
Correct Answer: D
https://aws.amazon.com/ebs/features/
upvoted 2 times
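To make the KMS flow concrete, here is a hedged boto3 sketch run under the EC2 instance role; the key alias, bucket name, file name, and object key are placeholders. The same encrypt call applies whether the ciphertext is then written to S3 (option C) or to an EBS volume (option D, the marked answer). Note that kms.encrypt works directly only for payloads up to 4 KB, which is usually enough for certificate material; larger data would use envelope encryption with a data key.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

with open("certificate.pem", "rb") as f:       # placeholder local file
    plaintext = f.read()

# Encrypt with a customer managed KMS key (placeholder alias); the EC2 role must be
# allowed to use this key for encrypt/decrypt operations.
encrypted = kms.encrypt(KeyId="alias/cert-key", Plaintext=plaintext)

# Store the ciphertext in highly available storage (placeholder bucket and key).
s3.put_object(
    Bucket="example-cert-bucket",
    Key="certs/certificate.pem.enc",
    Body=encrypted["CiphertextBlob"],
)

# Decrypt in near real time when the application needs the certificate.
obj = s3.get_object(Bucket="example-cert-bucket", Key="certs/certificate.pem.enc")
decrypted = kms.decrypt(CiphertextBlob=obj["Body"].read())["Plaintext"]
```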
A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet
and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the
public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.
What should the solutions architect do to enable Internet access for the private subnets?
A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to
the NAT gateway in its AZ.
B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic
to the NAT instance in its AZ.
C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic
to the private internet gateway.
D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC
traffic to the egress-only Internet gateway.
Correct Answer: A
NAT gateways allow private subnets to access the internet for things like software updates, without exposing those instances directly to
the internet. An egress-only internet gateway supports only IPv6 traffic, so it does not apply here because the VPC and subnets use IPv4
CIDR blocks.
upvoted 1 times
This question now has only A and B. The difference between A and B is "one NAT gateway for each public subnet in each AZ" (A) versus
"one NAT gateway for each private subnet in each AZ" (B).
Choose A.
upvoted 3 times
Additionally, a separate private route table should be created for each AZ. The private route tables should have a default route that
forwards non-VPC traffic (0.0.0.0/0) to the corresponding NAT gateway in the same AZ. This ensures that the private subnets use the
appropriate NAT gateway for Internet access.
B is incorrect because NAT instances require manual management and configuration compared to NAT gateways, which are a fully
managed service. NAT instances are also being deprecated in favor of NAT gateways.
C is incorrect because creating a second internet gateway on a private subnet is not a valid solution. Internet gateways are associated with
public subnets and cannot be directly associated with private subnets.
D is incorrect because egress-only internet gateways are used for IPv6 traffic.
upvoted 3 times
Jeeva28 4 months, 1 week ago
A NAT gateway is created in a public subnet and provides internet access to the private subnets.
upvoted 1 times
To enable Internet access for the private subnets, the solutions architect should create three NAT gateways, one for each public subnet in
each Availability Zone (AZ). NAT gateways allow private instances to initiate outbound traffic to the Internet but do not allow inbound
traffic from the Internet to reach the private instances.
The solutions architect should then create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. This
will allow instances in the private subnets to access the Internet through the NAT gateways in the public subnets.
upvoted 4 times
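A minimal boto3 sketch of option A for one AZ, with placeholder subnet and route table IDs; the same three calls are repeated for each of the three AZs.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in the PUBLIC subnet of this AZ.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",       # placeholder public subnet in AZ 1
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# In the PRIVATE route table for the same AZ, send non-VPC traffic to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",      # placeholder private route table for AZ 1
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```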
A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file
system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an
Amazon Elastic File System (Amazon EFS) file system.
Which combination of steps should a solutions architect take to automate this task? (Choose two.)
A. Launch the EC2 instance into the same Availability Zone as the EFS file system.
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
D. Manually use an operating system copy command to push the data to the EC2 instance.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
Correct Answer: AB
To automate the process of transferring the data from the on-premises SFTP server to an EC2 instance with an EFS file system, you can use
AWS DataSync. AWS DataSync is a fully managed data transfer service that simplifies, automates, and accelerates transferring data
between on-premises storage systems and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.
To use AWS DataSync for this task, you should first install an AWS DataSync agent in the on-premises data center. This agent is a
lightweight software application that you install on your on-premises data source. The agent communicates with the AWS DataSync
service to transfer data between the data source and target locations.
upvoted 22 times
Once you have created the location configuration for the on-premises SFTP server, you can use AWS DataSync to transfer the data to
the EC2 instance with the EFS file system. AWS DataSync handles the data transfer process automatically and efficiently, transferring
the data at high speeds and minimizing downtime.
upvoted 9 times
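Option E in sketch form with boto3; the agent ARN, on-premises hostname, export path, EFS ARN, subnet ARN, and security group ARN are all placeholders. The source is modeled as an NFS location because the SFTP server keeps its data on an NFS-based file system.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export behind the SFTP server, reached through the DataSync agent.
src = datasync.create_location_nfs(
    ServerHostname="sftp01.onprem.example.com",
    Subdirectory="/export/sftp-data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]},
)

# Destination: the Amazon EFS file system that the EC2 instance mounts.
dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"],
    },
)

# A task ties source and destination together; one execution moves the 200 GB.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="sftp-to-efs",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```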
A. Launch the EC2 instance into the same Availability Zone as the EFS file system.
This option is not wrong, but it is not directly related to automating the process of transferring the data from the on-premises SFTP
server to the EC2 instance with the EFS file system. Launching the EC2 instance into the same Availability Zone as the EFS file system
can improve the performance and reliability of the file system, as it reduces the latency between the EC2 instance and the file
system. However, it is not necessary for automating the data transfer process.
upvoted 5 times
This option is incorrect because Amazon EBS is a block-level storage service that is designed for use with Amazon EC2 instances.
It is not suitable for storing large amounts of data that need to be accessed by multiple EC2 instances, like in the case of the NFS-
based file system on the on-premises SFTP server. Instead, you should use Amazon EFS, which is a fully managed, scalable, and
distributed file system that can be accessed by multiple EC2 instances concurrently.
upvoted 3 times
This option is not wrong, but it is not the most efficient or automated way to transfer the data from the on-premises SFTP
server to the EC2 instance with the EFS file system. Manually transferring the data using an operating system copy command
would require manual intervention and would not scale well for large amounts of data. It would also not provide the same
level of performance and reliability as a fully managed service like AWS DataSync.
upvoted 3 times
E. Once the DataSync agent is installed, the solutions architect should configure it to create a suitable location configuration that specifies
the source location as the on-premises SFTP server and the target location as the EFS. AWS DataSync will handle the secure and efficient
transfer of the data from the on-premises server to the EC2 using EFS.
A. Launching EC2 into the same AZ as the EFS is not directly related to automating the migration task.
C. Creating a secondary EBS on the EC2 for the data is not necessary when using EFS. EFS provides a scalable, fully managed NFS-based
file system that can be mounted directly on the EC2, eliminating the need for separate EBS.
D. It would require manual intervention and could be error-prone, especially for large amounts of data.
upvoted 2 times
E* A location is a storage system or service that AWS DataSync reads from or writes to. Each DataSync transfer has a source and
destination location.
https://docs.aws.amazon.com/datasync/latest/userguide/configure-agent.html
upvoted 1 times
A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an
Amazon S3 bucket. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during
each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?
Correct Answer: A
Job bookmarks allow AWS Glue ETL jobs to track which data has already been processed during previous runs. This prevents reprocessing
of old data.
Deleting the data after processing would cause the data to be lost and unavailable for future processing. Reducing the number of workers
may improve performance but does not prevent reprocessing of old data. Using a FindMatches ML transform is used for record matching,
not preventing reprocessing.
So the solutions architect should enable job bookmarks in the AWS Glue job configuration. This will allow the ETL job to keep track of
processed data and only transform the new data added since the last run.
upvoted 1 times
B. Results in the permanent removal of the data from the S3, making it unavailable for future job runs. This is not desirable if the data
needs to be retained or used for subsequent analysis.
C.It would only affect the parallelism of the job but would not address the issue of reprocessing old data. It does not provide a mechanism
to track the processed data or skip already processed data.
D. It is not directly related to preventing Glue from reprocessing old data. The FindMatches transform is used for identifying and matching
duplicate or matching records in a dataset. While it can be used in data processing pipelines, it does not address the specific requirement
of avoiding reprocessing old data in this scenario.
upvoted 4 times
Job bookmarks in AWS Glue allow the ETL job to track the data that has been processed and to skip data that has already been processed.
This can prevent AWS Glue from reprocessing old data and can improve the performance of the ETL job by only processing new data. To
use job bookmarks, the solutions architect can edit the job and set the "Use job bookmark" option to "True". The ETL job will then use the
job bookmark to track the data that has been processed and skip data that has already been processed in subsequent runs.
upvoted 3 times
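A rough sketch of turning bookmarks on for an existing job with boto3; the job name, role ARN, and script location are placeholders. Inside the ETL script, each source read also needs a transformation_ctx, and the job must call job.commit() so the bookmark state is saved between runs.

```python
import boto3

glue = boto3.client("glue")

# Enable job bookmarks on the existing daily ETL job (name, role, and script path are placeholders).
glue.update_job(
    JobName="daily-xml-etl",
    JobUpdate={
        "Role": "arn:aws:iam::111122223333:role/GlueJobRole",
        "Command": {"Name": "glueetl", "ScriptLocation": "s3://example-scripts/daily_xml_etl.py"},
        "DefaultArguments": {"--job-bookmark-option": "job-bookmark-enable"},
    },
)
```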
A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on
Amazon EC2 instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from
thousands of IP addresses. Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Choose two.)
C. Configure the website to use Amazon CloudFront for both static and dynamic content.
D. Use an AWS Lambda function to automatically add attacker IP addresses to VPC network ACLs.
E. Use EC2 Spot Instances in an Auto Scaling group with a target tracking scaling policy that is set to 80% CPU utilization.
Correct Answer: AC
It provides always-on protection for Amazon EC2 instances, Elastic Load Balancers, and Amazon Route 53 resources. By using AWS Shield
Advanced, the solutions architect can help protect the website from large-scale DDoS attacks.
Option C. Configure the website to use Amazon CloudFront for both static and dynamic content.
CloudFront is a content delivery network (CDN) that integrates with other Amazon Web Services products, such as Amazon S3 and Amazon
EC2, to deliver content to users with low latency and high data transfer speeds. By using CloudFront, the solutions architect can distribute
the website's content across multiple edge locations, which can help absorb the impact of a DDoS attack and reduce the risk of downtime
for the website.
upvoted 8 times
C. CloudFront is a CDN service that can help mitigate DDoS attacks. By routing traffic through CloudFront, requests to the website are
distributed across multiple edge locations, which can absorb and mitigate DDoS attacks more effectively. CloudFront also provides
additional DDoS protection features, such as rate limiting, SSL/TLS termination, and custom security policies.
B. While GuardDuty can detect and provide insights into potential malicious activity, it is not specifically designed for DDoS mitigation.
D. Network ACLs are not designed to handle high-volume traffic or DDoS attacks efficiently.
E. Spot Instances are a cost optimization strategy and may not provide the necessary availability and protection against DDoS attacks
compared to using dedicated instances with DDoS protection mechanisms like Shield Advanced and CloudFront.
upvoted 2 times
Amazon GuardDuty is a threat detection service; it cannot take action directly and needs to work with Lambda (or other services) for remediation.
upvoted 1 times
Question #105 Topic 1
A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure
permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?
A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
B. Add an execution role to the function with lambda:InvokeFunction as the action and Service: lambda.amazonaws.com as the principal.
C. Add a resource-based policy to the function with lambda:* as the action and Service: events.amazonaws.com as the principal.
D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service: events.amazonaws.com as the
principal.
Correct Answer: D
The principle of least privilege requires that permissions are granted only to the minimum necessary to perform a task. In this case, the
Lambda function needs to be able to be invoked by Amazon EventBridge (Amazon CloudWatch Events). To meet these requirements, you
can add a resource-based policy to the function that allows the InvokeFunction action to be performed by the Service:
events.amazonaws.com principal. This will allow Amazon EventBridge to invoke the function, but will not grant any additional permissions
to the function.
upvoted 13 times
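Option D is a single resource-based policy statement on the function. A minimal boto3 sketch with placeholder function name, statement ID, and rule ARN; adding SourceArn narrows the grant even further, to one specific EventBridge rule.

```python
import boto3

lambda_client = boto3.client("lambda")

# Allow only the EventBridge service, and only this rule, to invoke the function.
lambda_client.add_permission(
    FunctionName="serverless-workload",                                    # placeholder
    StatementId="AllowEventBridgeRuleInvoke",                              # placeholder
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/workload-rule",  # placeholder
)
```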
Option A is incorrect because it grants the lambda:InvokeFunction action to any principal (*), which would allow any entity to invoke the
function and goes beyond the minimum permissions needed.
Option B is incorrect because it grants the lambda:InvokeFunction action to the Service: lambda.amazonaws.com principal, which
would allow any Lambda function to invoke the function and goes beyond the minimum permissions needed.
Option C is incorrect because it grants the lambda:* action to the Service: events.amazonaws.com principal, which would allow Amazon
EventBridge to perform any action on the function and goes beyond the minimum permissions needed.
upvoted 11 times
Option A is incorrect because it assigns the lambda:InvokeFunction action to all principals (*), which grants permission to invoke the
function to any entity, which is broader than necessary.
Option B is incorrect because it assigns the lambda:InvokeFunction action to the specific principal "lambda.amazonaws.com," which is the
service principal for AWS Lambda. However, the requirement is for the EventBridge service principal to invoke the function.
Option C is incorrect because it assigns the lambda:* action to the specific principal "events.amazonaws.com," which is the service
principal for Amazon EventBridge. However, it grants broader permissions than necessary, allowing any Lambda function action, not just
lambda:InvokeFunction.
upvoted 2 times
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key
usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
Correct Answer: D
SSE-KMS is the most secure way to encrypt data in Amazon S3. It uses AWS KMS, which is a highly secure key management service that is
managed by AWS. AWS KMS logs all key usage, so the company can meet its compliance requirements. AWS KMS also rotates keys
automatically, so the company does not have to worry about manually rotating keys.
upvoted 2 times
Additionally, SSE-KMS provides built-in audit logging for encryption key usage through CloudTrail, which captures API calls related to the
management and usage of KMS keys. This meets the requirement for logging key usage for auditing purposes.
Option A (SSE-C) requires customers to provide their own encryption keys, but it does not provide key rotation or built-in logging of key
usage.
Option B (SSE-S3) uses Amazon S3 managed keys for encryption, which simplifies key management but does not provide key rotation or
detailed key usage logging.
Option C (SSE-KMS with manual rotation) uses AWS KMS keys but requires manual rotation, which is less operationally efficient than the
automatic key rotation available with option D.
upvoted 3 times
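Option D in sketch form with boto3, using a placeholder bucket name: create the customer managed key, enable automatic rotation, and set SSE-KMS as the bucket's default encryption; key usage then appears in CloudTrail without any extra setup.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key with automatic (yearly) rotation.
key_id = kms.create_key(Description="confidential-data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default SSE-KMS encryption on the bucket (placeholder bucket name).
s3.put_bucket_encryption(
    Bucket="example-confidential-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```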
AWS Lambda is a serverless compute service that lets you run code in response to events or HTTP requests. You can use Lambda to write
the code that retrieves the location data from your data store and returns it to API Gateway as a response to API requests. This allows you
to scale the API to handle a large number of requests without the need to provision or manage any infrastructure.
upvoted 2 times
Buruguduystunstugudunstuy 9 months, 2 weeks ago
Selected Answer: D
The most operationally efficient solution that meets the requirements listed would be option D: Server-side encryption with AWS KMS keys
(SSE-KMS) with automatic rotation.
SSE-KMS allows you to use keys that are managed by the AWS Key Management Service (KMS) to encrypt your data at rest. KMS is a fully
managed service that makes it easy to create and control the encryption keys used to encrypt your data. With automatic key rotation
enabled, KMS will automatically create a new key for you on a regular basis, typically every year, and use it to encrypt your data. This
simplifies the key rotation process and reduces the operational burden on your team.
In addition, SSE-KMS provides logging of key usage through AWS CloudTrail, which can be used for auditing purposes.
upvoted 1 times
Option A: Server-side encryption with customer-provided keys (SSE-C) would require you to manage the encryption keys yourself, which
can be more operationally burdensome.
Option B: Server-side encryption with Amazon S3 managed keys (SSE-S3) does not allow for key rotation or logging of the key usage.
Option C: Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation would require you to manually initiate the key
rotation process, which can be more operationally burdensome compared to automatic rotation.
upvoted 3 times
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company
wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support
this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
Correct Answer: D
In this use case there will obviously be a ton of data, and you want real-time location data for the bicycles; to analyze all this
information, Kinesis is the option that makes the most sense here.
upvoted 38 times
Amazon API Gateway provides a REST API that can be used to ingest and retrieve the location data points. Kinesis Data Analytics can then
process and analyze those data streams in real-time. The results can be queried through the API Gateway, meeting the requirements.
upvoted 2 times
Options A, C, and D are not the most suitable options for storing and retrieving location data in this scenario:
Option A suggests using Amazon Athena with Amazon S3, which is a query service for data in S3 but does not provide a direct REST API
integration for real-time location data retrieval.
Option C suggests using Amazon QuickSight with Amazon Redshift, which is more suitable for data analytics and visualization rather than
real-time data retrieval through a REST API.
Option D suggests using Amazon API Gateway with Amazon Kinesis Data Analytics, which is more suitable for real-time streaming
analytics rather than data storage and retrieval for REST APIs.
upvoted 4 times
Question #108 Topic 1
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs
to be removed from the website and the data must be sent to multiple target systems.
Which design should a solutions architect recommend?
A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) queue for the targets to consume.
B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) FIFO queue for the targets to consume.
C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon
Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon
Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
Correct Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
BlueVolcano1 8 months, 2 weeks ago
To add to that though, A also states to only use SQS (no SNS to SQS fan-out), which doesn't seem right as the message needs to go
to multiple targets?
upvoted 4 times
Correct answer is A
upvoted 1 times
Options A and B: Lambda functions are not directly triggered by Amazon RDS updates.
Option C: SQS does not fan out to multiple SNS topics. It's the other way around; an SNS topic can fan out to multiple SQS queues.
upvoted 1 times
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded
to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in
the company's AWS account can have the ability to delete the objects.
What should a solutions architect do to meet these requirements?
A. Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3
bucket’s default retention mode for new objects.
C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects
from any backup versions that the company has.
D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold
permission to the IAM policies of users who need to delete the objects.
Correct Answer: D
Correct
upvoted 1 times
Once the legal hold is set on an object, it is in effect until the hold is removed by the user who applied it or an admin with the necessary
permissions. Other users, even if they have the s3:PutObjectLegalHold permission, won't be able to remove the hold unless they are
granted access by the user who originally applied it.
upvoted 2 times
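Option D, sketched with boto3 and placeholder bucket and key names: Object Lock has to be enabled when the bucket is created (which also turns on versioning), and the legal hold keeps the object immutable for an indefinite time until a user with s3:PutObjectLegalHold releases it.

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation; it implies versioning.
# (Outside us-east-1, a CreateBucketConfiguration with the Region is also required.)
s3.create_bucket(Bucket="example-locked-bucket", ObjectLockEnabledForBucket=True)

s3.put_object(Bucket="example-locked-bucket", Key="reports/q1.pdf", Body=b"report contents")

# Legal hold: no retention date, so the object stays unchangeable indefinitely.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="reports/q1.pdf",
    LegalHold={"Status": "ON"},
)

# Later, a user whose IAM policy allows s3:PutObjectLegalHold can release the hold.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="reports/q1.pdf",
    LegalHold={"Status": "OFF"},
)
```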
omoakin 4 months, 1 week ago
I go with option B as they still need some specific users to be able to make changes so Gov mode is the best choice and 100 yrs is like
infinity as well haha
upvoted 3 times
Correct answer is D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 3 times
S3 Object Lock is a feature of Amazon S3 that allows you to apply a retention period to objects in your bucket, during which time the
objects cannot be deleted or overwritten. By enabling versioning on the bucket, you can ensure that all versions of an object are retained,
including any deletions or overwrites. By setting a retention period of 100 years, you can ensure that the objects remain unchangeable for
a long time.
By using governance mode as the default retention mode for new objects, you can ensure that the retention period is applied to all new
objects that are uploaded to the bucket. This will prevent the objects from being deleted or overwritten until the retention period expires.
upvoted 2 times
Option C (using CloudTrail to track API events and restoring modified objects from backup versions) would not prevent the objects from
being changed in the first place.
Option D (adding a legal hold and the s3:PutObjectLegalHold permission to IAM policies) would not meet the requirement to prevent
the objects from being changed for a nonspecific amount of time.
upvoted 1 times
In addition, the s3:PutObjectLegalHold permission allows users to place a legal hold on an object, but it does not prevent the object
from being changed. To prevent the objects from being changed for a nonspecific amount of time, the solution architect should use
S3 Object Lock and set a longer retention period on the objects.
upvoted 3 times
A social media company allows users to upload images to its website. The website runs on Amazon EC2 instances. During upload requests, the
website resizes the images to a standard size and stores the resized images in Amazon S3. Users are experiencing slow upload requests to the
website.
The company needs to reduce coupling within the application and improve website performance. A solutions architect must design the most
operationally efficient process for image uploads.
Which combination of actions should the solutions architect take to meet these requirements? (Choose two.)
B. Configure the web server to upload the original images to Amazon S3.
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize uploaded
images.
Correct Answer: BD
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a pre-signed URL. This
will allow the application to upload images directly to S3 without having to go through the web server, which can reduce the load on the
web server and improve performance.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
This will allow the application to resize images asynchronously, rather than having to do it synchronously during the upload request,
which can improve performance.
upvoted 30 times
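Option C in sketch form, with placeholder bucket, key, and expiry: the web server only hands out a short-lived presigned URL, and the browser uploads the image bytes straight to S3.

```python
import boto3

s3 = boto3.client("s3")

# Server side: generate a short-lived URL instead of proxying the upload through EC2.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-image-uploads", "Key": "uploads/user123/photo.jpg"},
    ExpiresIn=900,  # seconds; the browser must start the PUT before this expires
)

# The browser then uploads directly, e.g. fetch(upload_url, {method: "PUT", body: file}).
print(upload_url)
```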
Option B, Configuring the webserver to upload the original images to Amazon S3, is not a recommended solution as it would not
reduce coupling within the application or improve performance.
Option E, Creating an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to
resize uploaded images, is not a recommended solution as it would not be able to resize images in a timely manner and would not
improve performance.
upvoted 3 times
And additional! "Configure the application to upload images directly from EACH USER'S BROWSER to Amazon S3 through the use of
a pre-signed URL"
I am not an expert, but I can't imagine that you can store an image that an user uploads in his browser etc.
upvoted 3 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-
url.html#:~:text=User%20Guide.-,Expiration%20time%20for%20presigned%20URLs,-A%20presigned%20URL
upvoted 3 times
A presigned URL remains valid for the period of time specified when the URL is generated. If you create a presigned URL with the Amazon
S3 console, the expiration time can be set between 1 minute and 12 hours. If you use the AWS CLI or AWS SDKs, the expiration time can be
set as high as 7 days.
upvoted 1 times
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an
Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the
messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low
operational complexity.
Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone.
Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another
Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in
another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2
instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
Correct Answer: D
Using Amazon MQ with active/standby brokers provides highly available message queuing across AZs.
Adding an Auto Scaling group for consumer EC2 instances across 2 AZs provides highly available processing.
This architecture provides high availability for all components of the system - queue, processing, and database.
upvoted 2 times
Adding an ASG for the consumer EC2 instances across two AZ provides redundancy and automatic scaling based on demand. If one
consumer instance becomes unavailable or if the message load increases, the ASG can automatically launch additional instances to
handle the workload.
Using RDS for MySQL with Multi-AZ enabled ensures high availability for the database. Multi-AZ automatically replicates the database to a
standby instance in another AZ. If a failure occurs, RDS automatically fails over to the standby instance without manual intervention.
This architecture combines high availability for the message broker (Amazon MQ), scalability and redundancy for the consumer EC2
instances (ASG), and high availability for the database (RDS Multi-AZ). It offers the highest availability with low operational complexity by
leveraging managed services and automated failover mechanisms.
upvoted 2 times
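A hedged sketch of the two managed pieces of option D with boto3; the broker name, engine version, instance sizes, identifiers, and credentials are all placeholders. The consumer Auto Scaling group across two AZs would be configured separately.

```python
import boto3

mq = boto3.client("mq")
rds = boto3.client("rds")

# Active/standby ActiveMQ broker spanning two Availability Zones.
mq.create_broker(
    BrokerName="message-broker",                     # placeholder
    EngineType="ACTIVEMQ",
    EngineVersion="5.17.6",                          # placeholder version
    HostInstanceType="mq.m5.large",
    DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "mqadmin", "Password": "change-me-1234"}],  # placeholder credentials
)

# Managed MySQL with a synchronous standby in another AZ and automatic failover.
rds.create_db_instance(
    DBInstanceIdentifier="results-db",               # placeholder
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-1234",             # placeholder credentials
    MultiAZ=True,
)
```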
Amazon MQ with active/standby brokers configured across two Availability Zones ensures that the message queue is available even if one
Availability Zone experiences an outage.
An Auto Scaling group for the consumer EC2 instances across two Availability Zones ensures that the consumer application is able to
continue processing messages even if one Availability Zone experiences an outage.
Amazon RDS for MySQL with Multi-AZ enabled ensures that the database is available even if one Availability Zone experiences an outage.
upvoted 3 times
Option B addresses some potential points of failure, but it does not address the potential for the database to become unavailable due
to an Availability Zone outage.
Option C addresses some potential points of failure, but it does not address the potential for the consumer application to become
unavailable due to an Availability Zone outage.
upvoted 1 times
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is
growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS
with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling.
Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming
requests.
C. Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use
Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming
requests at the appropriate scale.
Correct Answer: A
AWS Fargate removes the need to provision and manage servers. Fargate will automatically scale the application based on demand. This
removes a significant operational burden.
Using ECS along with Fargate provides a managed orchestration layer to easily run and scale the containerized application.
The Application Load Balancer handles distribution of traffic without additional effort.
No code changes are required to move the application to Fargate. The containers can run as-is.
upvoted 1 times
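Roughly, option A reduces to an ECS service with the FARGATE launch type registered behind an ALB target group; the cluster, task definition, subnets, security group, and target group ARN below are placeholders, and Service Auto Scaling would be attached afterwards through the Application Auto Scaling API.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="web-cluster",                    # placeholder cluster
    serviceName="containerized-web-app",
    taskDefinition="web-app:1",               # placeholder task definition using the existing container image
    desiredCount=2,
    launchType="FARGATE",                     # no EC2 instances to manage
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
)
```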
AWS Fargate removes the need to provision and manage servers, allowing you to focus on deploying and running applications. Fargate
will scale compute capacity up and down automatically based on application load. This removes the operational overhead of managing
servers.
upvoted 1 times
Option B (Amazon EC2 instances with an Application Load Balancer) requires manual management of EC2 instances, resulting in more
operational overhead compared to option A.
Option C (AWS Lambda with API Gateway) may require significant code changes and restructuring, introducing complexity and potentially
increasing development effort.
Option D (AWS ParallelCluster) is not suitable for a containerized web application and involves significant setup and configuration
overhead.
upvoted 2 times
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times
with A, you are just moving from an on-prem container to AWS container
upvoted 3 times
Question #113 Topic 1
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the
company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and
needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must
configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS
Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2
instance on AWS to run the transformation application.
Correct Answer: C
Technically option D will work but with the overhead of EC2, negating the requirement for LEAST ops.
upvoted 1 times
"A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud."
LEAST operational overhead -> just take app and put on EC2, instead of configuring Glue
upvoted 1 times
Option B (AWS Snowcone device with deployment) is designed for smaller workloads and may not have enough storage capacity for
transferring 50 TB of data. Additionally, deploying the transformation application on the Snowcone device could introduce complexity and
operational overhead.
Option D (AWS Snowball Edge with EC2 compute) involves transferring the data using a Snowball Edge device and then creating a new EC2
instance in AWS to run the transformation application. This option adds additional complexity and operational overhead of managing an
EC2 instance.
In comparison, option C offers a straightforward and efficient approach. The Snowball Edge Storage Optimized device can handle the
large data transfer without relying on network bandwidth. Once the data is transferred, AWS Glue can be used to create the
transformation job, ensuring the continuity of the application's processing in the AWS Cloud.
upvoted 4 times
“The data center does not have any available network bandwidth for additional workloads.”
upvoted 1 times
Ans D
upvoted 1 times
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload
images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and
Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary
significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the
growing user base.
Which solution meets these requirements?
A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store
the photos and metadata.
Correct Answer: A
Option A does not provide an appropriate solution for storing the photos, as DynamoDB is not suitable for storing large binary data like
images.
Option B is more focused on real-time streaming data processing and is not the ideal service for processing and storing photos and
metadata in this use case.
Option D involves manual scaling and management of EC2 instances, which is less flexible and more labor-intensive compared to the
serverless nature of Lambda. It may not efficiently handle the varying number of concurrent users and can introduce higher operational
overhead.
In conclusion, option C provides the best solution for scaling the application to meet the needs of the growing user base by leveraging the
scalability and durability of Lambda, S3, and DynamoDB.
upvoted 5 times
Option C provides a scalable solution for processing and storing the images and metadata. The application can use AWS Lambda to
process the photos and store the images in Amazon S3, which is a scalable and highly available storage service. The metadata can be
stored in DynamoDB, a scalable, high-performance database service that can handle a large number of concurrent requests.
upvoted 3 times
A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on
Amazon S3. The EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any
other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?
A. Create a NAT gateway. Configure the route table for the public subnets to send traffic to Amazon S3 through the NAT gateway.
B. Configure the security group for the EC2 instances to restrict outbound traffic so that only traffic to the S3 prefix list is permitted.
C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private
subnets.
D. Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to Amazon S3 over the Direct
Connect connection.
Correct Answer: C
Option B (configuring security groups) focuses on controlling outbound traffic using security groups. While it can restrict outbound traffic,
it doesn't provide a private route for accessing S3.
Option D (setting up Direct Connect) involves establishing a dedicated private network connection between the on-premises environment
and AWS. While it offers private connectivity, it is more suitable for hybrid scenarios and not necessary for achieving private access to S3
within the VPC.
In summary, option C provides a straightforward solution by moving the EC2 instances to private subnets, creating a VPC endpoint for S3,
and linking the endpoint to the route table for private subnets. This ensures that file transfer traffic between the EC2 instances and S3
remains within the private network without going over the internet.
upvoted 4 times
To meet the new requirement of transferring files over a private route, the EC2 instances should be moved to private subnets, which do
not have direct access to the internet. This ensures that the traffic for file transfers does not go over the internet.
To enable the EC2 instances to access Amazon S3, a VPC endpoint for Amazon S3 can be created. VPC endpoints allow resources within a
VPC to communicate with resources in other services without the traffic being sent over the internet. By linking the VPC endpoint to the
route table for the private subnets, the EC2 instances can access Amazon S3 over a private connection within the VPC.
upvoted 3 times
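Option C's endpoint piece in boto3, with placeholder VPC, Region, and route table IDs; a gateway endpoint adds an S3 prefix-list route to the private route tables so the traffic stays on the AWS network.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",  # adjust to the VPC's Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-private-a", "rtb-private-b"],  # placeholder private route tables
)
```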
Buruguduystunstugudunstuy 9 months, 2 weeks ago
Option A (Create a NAT gateway) would not work, as a NAT gateway is used to allow resources in private subnets to access the internet,
while the requirement is to prevent traffic from going over the internet.
Option B (Configure the security group for the EC2 instances to restrict outbound traffic) would not achieve the goal of routing traffic
over a private connection, as the traffic would still be sent over the internet.
Option D (Remove the internet gateway from the VPC and set up an AWS Direct Connect connection) would not be necessary, as the
requirement can be met by simply creating a VPC endpoint for Amazon S3 and routing traffic through it.
upvoted 1 times
The application must be moved to a private subnet. This is a prerequisite for using VPC endpoints with S3 in this scenario.
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 4 times
A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are
burdensome. The company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need
to have any dynamic content available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)
B. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality.
C. Create and deploy an AWS Lambda function to manage and serve the website content.
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
E. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.
Correct Answer: AD
B is out since an AWS WAF web ACL does not provide HTTPS functionality; it only protects HTTPS traffic.
upvoted 25 times
B does not make sense because you are not replacing the CDN with anything.
E works too, but it takes too much effort; compared to EC2, S3 still wins in terms of scalability. Plus, why use EC2 when you are only hosting
a static website?
upvoted 5 times
D. Deploying the website on an Amazon S3 bucket with static website hosting reduces operational overhead by eliminating server
maintenance and patching.
C. Using AWS Lambda introduces complexity and does not directly address patching and maintenance.
E. Managing EC2 instances and an Application Load Balancer increases operational overhead and does not minimize patching and
maintenance tasks.
In summary, configuring Amazon CloudFront for HTTPS and deploying on Amazon S3 with static website hosting provide security,
scalability, and reduced operational overhead.
upvoted 1 times
beginnercloud 3 months, 3 weeks ago
Selected Answer: AD
So my answer is AD.
upvoted 1 times
By deploying the website on an S3 bucket with static website hosting enabled, the company can take advantage of the high scalability and
cost-efficiency of S3 while also reducing the operational overhead of managing and patching a CMS.
By configuring Amazon CloudFront in front of the website, it will automatically handle the HTTPS functionality, this way the company can
have a secure website with very low operational overhead.
upvoted 1 times
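Option D's hosting piece as a minimal sketch, with a placeholder bucket name and documents; CloudFront in front of the bucket (option A) then supplies HTTPS and caching.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-corporate-site"  # placeholder

s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Publish a page; the site changes only about four times a year, so uploads are rare.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body>Corporate site</body></html>",
    ContentType="text/html",
)
```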
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
C. Create and deploy an AWS Lambda function to manage and serve the website content.
Option D (using Amazon S3 with static website hosting) would provide high scalability and enhanced security with minimal operational
overhead because it requires little maintenance and can automatically scale to meet increased demand.
Option C (using an AWS Lambda function) would also provide high scalability and enhanced security with minimal operational overhead.
AWS Lambda is a serverless compute service that runs your code in response to events and automatically scales to meet demand. It is
easy to set up and requires minimal maintenance.
upvoted 3 times
Option A (using Amazon CloudFront) and Option B (using an AWS WAF web ACL) would provide HTTPS functionality but would require
additional configuration and maintenance to ensure that they are set up correctly and remain secure.
Option E (using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer) would provide high scalability,
but it would require more operational overhead because it involves managing and maintaining EC2 instances.
upvoted 1 times
A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs
in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
B. Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon
Elasticsearch Service).
C. Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon
OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
D. Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure
Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Correct Answer: C
> You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in NEAR REAL-
TIME through a CloudWatch Logs subscription
This solution uses Amazon Kinesis Data Firehose, which is a fully managed service for streaming data to Amazon OpenSearch Service
(Amazon Elasticsearch Service) and other destinations. You can configure the log group as the source of the delivery stream and Amazon
OpenSearch Service as the destination. This solution requires minimal operational overhead, as Kinesis Data Firehose automatically scales
and handles data delivery, transformation, and indexing.
upvoted 14 times
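As a rough illustration of how a CloudWatch Logs subscription is wired to a Kinesis Data Firehose delivery stream (option C's ingestion path), here is a hedged boto3 sketch; all names, ARNs, and the IAM role are placeholders.

```python
# Hedged sketch (boto3): subscribe a CloudWatch Logs log group to a Kinesis
# Data Firehose delivery stream that targets OpenSearch Service.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/app/example-logs",                    # assumed log group name
    filterName="to-opensearch-via-firehose",
    filterPattern="",                                    # empty pattern = forward all events
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/example",
    roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",  # role that lets CWL write to Firehose
)
```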
Option B: Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service
(Amazon Elasticsearch Service) would also work, but it may require more operational overhead as you would need to set up and
manage the Lambda function and ensure that it scales to handle the incoming logs.
Option D: Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams.
Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service) would also work, but
it may require more operational overhead as you would need to install and configure the Kinesis Agent on each application server and
set up and manage the Kinesis Data Streams.
upvoted 2 times
Using Kinesis Data Firehose will allow near real-time delivery of the CloudWatch logs to Amazon Elasticsearch Service with the least
operational overhead compared to the other options.
Firehose can be configured to automatically ingest data from CloudWatch Logs into Elasticsearch without needing to run Lambda
functions or install agents on the application servers. This makes it the most operationally simple way to meet the stated requirements.
upvoted 1 times
You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services
such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading
to other systems.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
upvoted 1 times
Option C (Creating an Amazon Kinesis Data Firehose delivery stream) introduces an additional service (Kinesis Data Firehose) that may not
be necessary for this specific requirement, adding unnecessary complexity.
Option D (Installing and configuring Amazon Kinesis Agent) also introduces additional overhead in terms of manual installation and
configuration on each application server, which may not be needed if the logs are already stored in CloudWatch Logs.
In summary, option A is the correct choice as it provides a straightforward and efficient way to stream logs from CloudWatch Logs to
Amazon OpenSearch Service with minimal operational overhead.
upvoted 3 times
Correct answer is C
upvoted 1 times
answer is A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide
access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods
of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the
application at all times. The company is concerned about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon OpenSearch Service (Amazon Elasticsearch Service)
D. Amazon S3
Correct Answer: D
Option A (Amazon EBS) is block storage designed for individual EC2 instances and may not scale as seamlessly and cost-effectively as S3
for large amounts of data.
Option B (Amazon EFS) is a scalable file storage service, but it may not be the most cost-effective option compared to S3, especially for the
anticipated storage size of 900 TB.
Option C (Amazon OpenSearch Service) is a search and analytics service and may not be suitable as the primary storage solution for the
text documents.
In summary, Amazon S3 is the recommended choice as it offers high scalability, cost-effectiveness, and durability for storing the large
repository of text documents required by the web application.
upvoted 2 times
https://aws.amazon.com/es/opensearch-service/features/
upvoted 2 times
Dr_Chomp 6 months ago
EFS is a good option but expensive compared with S3, and the customer is concerned about cost - thus: S3 (D)
upvoted 2 times
Amazon S3 is an object storage service that can store and retrieve large amounts of data at any time, from anywhere on the web. It is
designed for high durability, scalability, and cost-effectiveness, making it a suitable choice for storing a large repository of text documents.
With S3, you can store and retrieve any amount of data, at any time, from anywhere on the web, and you can scale your storage up or
down as needed, which will help to meet the demand of the web application. Additionally, S3 allows you to choose between different
storage classes, such as standard, infrequent access, and archive, which will enable you to optimize costs based on your specific use case.
upvoted 1 times
Amazon S3 is an object storage service that is designed to store and retrieve large amounts of data from anywhere on the web. It is highly
scalable, highly available, and cost-effective, making it an ideal choice for storing a large repository of text documents that will experience
periods of high demand. S3 is a standalone storage service that can be accessed from anywhere, and it is designed to handle large
numbers of objects, making it well-suited for storing the 900 TB repository of text documents described in the scenario. It is also designed
to handle high levels of demand, making it suitable for handling periods of high demand.
upvoted 1 times
A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2
Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL
injection and cross-site scripting attacks.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.
B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
C. Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.
D. Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.
Correct Answer: A
Using AWS WAF has several benefits. Additional protection against web attacks using criteria that you specify. You can define criteria using
characteristics of web requests such as the following:
Presence of SQL code that is likely to be malicious (known as SQL injection).
Presence of a script that is likely to be malicious (known as cross-site scripting).
AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of
protections.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
upvoted 14 times
No, AWS Firewall Manager security policies are Region-specific. Each Firewall Manager policy can only include resources available in that
specified AWS Region. You can create a new policy for each Region where you operate.
So you could not centrally (i.e., in one place) configure policies; you would need to do this in each Region.
upvoted 2 times
Using AWS Firewall Manager to centrally configure AWS WAF rules provides the least administrative effort compared to the other options.
Firewall Manager allows centralized administration of AWS WAF rules across multiple accounts and Regions. WAF rules can be defined
once in Firewall Manager and automatically applied to APIs in all the required Regions and accounts.
upvoted 1 times
Option A (Setting up AWS WAF with Regional web ACLs) requires setting up and managing AWS WAF in each Region separately, which
increases administrative effort.
Option C (Setting up AWS Shield with Regional web ACLs) primarily focuses on DDoS protection and may not provide the same level of
protection against SQL injection and cross-site scripting attacks as AWS WAF.
Option D (Setting up AWS Shield in one Region) provides DDoS protection but does not directly address protection against SQL injection
and cross-site scripting attacks.
In summary, option B offers the most efficient and centralized approach by leveraging AWS Firewall Manager to configure AWS WAF rules
across multiple Regions, minimizing administrative effort while ensuring protection against SQL injection and cross-site scripting attacks.
upvoted 1 times
AWS Firewall Manager allows you to centrally configure and manage AWS WAF rules across multiple accounts and resources. By
setting up AWS Firewall Manager in both the us-east-1 and ap-southeast-2 Regions, you can apply consistent WAF rules to the API
Gateway instances in those regions without the need to individually configure WAF rules for each API Gateway.
upvoted 1 times
Option B is a centralized solution that uses AWS Firewall Manager to manage AWS WAF rules across multiple Regions. With this option,
the AWS WAF rules can be configured in a single place and applied uniformly to all relevant Regions. This solution can significantly
reduce the administrative effort compared with option A.
upvoted 3 times
https://docs.aws.amazon.com/waf/latest/developerguide/get-started-fms-create-security-policy.html
upvoted 5 times
Regarding "to protect API Gateways across multiple accounts": maybe it is extra information. Web ACLs work at the Regional level and
essentially filter HTTP messages irrespective of the account, i.e., they apply to all accounts.
upvoted 1 times
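To make the WAF piece concrete, here is a hedged boto3 sketch of a Regional web ACL that uses AWS managed rule groups covering SQL injection and common attacks such as cross-site scripting. With option B, Firewall Manager would push equivalent rules centrally across accounts and Regions; the names and priorities here are illustrative only.

```python
# Hedged sketch (boto3): a Regional web ACL with AWS managed rule groups for
# SQL injection and common attacks (includes XSS rules). Names are assumptions.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

def managed_rule(name, priority, group):
    # Reference an AWS managed rule group; OverrideAction "None" keeps its actions.
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {"ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": group}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

wafv2.create_web_acl(
    Name="loyalty-api-protection",          # assumed name
    Scope="REGIONAL",                       # REGIONAL scope is used for API Gateway stages
    DefaultAction={"Allow": {}},
    Rules=[
        managed_rule("sqli", 0, "AWSManagedRulesSQLiRuleSet"),
        managed_rule("common", 1, "AWSManagedRulesCommonRuleSet"),
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "loyalty-api-protection",
    },
)
```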
A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-
2 Region. Most of the company's users are located in the United States and Europe. The company wants to improve the performance and
availability of the solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as
targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution.
Use the Route 53 record as the distribution’s origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as
endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the
six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to
one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
Correct Answer: A
Global Accelerator: AWS Global Accelerator is designed to improve the availability and performance of applications by using static IP
addresses (Anycast IPs) and routing traffic over the AWS global network infrastructure.
Endpoint Groups: By creating endpoint groups in both the us-west-2 and eu-west-1 Regions, the company can effectively distribute traffic
to the NLBs in both Regions. This improves availability and allows traffic to be directed to the closest Region based on latency.
upvoted 1 times
AWS Global Accelerator allows routing traffic to endpoints in multiple AWS Regions. It uses the AWS global network to optimize availability
and performance.
Creating an accelerator with endpoint groups in us-west-2 and eu-west-1 allows traffic to be distributed across both regions.
Adding the NLBs in each region as endpoints allows the traffic to be routed to the EC2 instances behind them.
This provides improved performance and availability compared to just using Route 53 geolocation routing.
upvoted 3 times
MNotABot 2 months, 2 weeks ago
B
route requests to one of the two NLBs --> hence AD out / Attach Elastic IP addresses --> who will pay for it?
upvoted 1 times
Option A does not directly address the requirement of routing traffic to all EC2 instances. It focuses on routing based on geolocation and
using CloudFront as a distribution, which may not achieve the desired outcome.
Option C involves managing Elastic IP addresses and routing based on geolocation. However, it may not provide the same level of
performance and availability as AWS Global Accelerator.
Option D focuses on ALBs and latency-based routing. While it can be a valid solution, it does not utilize AWS Global Accelerator and may
require more configuration and management compared to option B.
upvoted 3 times
if it is self-managed DNS, you cannot use Route 53. There can be only 1 DNS service for the domain.
upvoted 1 times
Option B is the right solution because it lets the company use AWS Global Accelerator to route traffic to the NLBs in both Regions, so
traffic is automatically routed to the EC2 instances in both Regions. AWS Global Accelerator routes the traffic optimally across the AWS
global network to minimize latency and improve the performance and availability of the solution.
upvoted 3 times
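For illustration, a hedged boto3 sketch of option B follows: a standard accelerator with a listener and endpoint groups in both Regions, each pointing at an existing NLB. The NLB ARNs and the port are assumptions, and the Global Accelerator API is called through the us-west-2 endpoint.

```python
# Hedged sketch (boto3): standard accelerator + endpoint groups in two Regions.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="dns-accelerator", Enabled=True)
accel_arn = accel["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accel_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 53, "ToPort": 53}],   # assumed port for the DNS workload
)
listener_arn = listener["Listener"]["ListenerArn"]

# One endpoint group per Region, each containing that Region's NLB (placeholder ARNs).
nlbs = {
    "us-west-2": "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/usw2-nlb/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/euw1-nlb/def",
}
for region, nlb_arn in nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```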
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in
a Multi-AZ deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?
A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB
instance.
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing
DB instance.
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS)
managed keys (SSE-KMS).
Correct Answer: A
B. Creating a new encrypted EBS volume for snapshots does not address the encryption of the DB instance itself.
D. Copying snapshots to an encrypted S3 bucket only encrypts the snapshots, but does not address the encryption of the DB instance.
Option C is the most suitable as it involves copying and encrypting the snapshots using AWS KMS, ensuring encryption for both the
database and snapshots.
upvoted 2 times
This refers to creating a NEW DB instance (which is encrypted), never restoring into an existing one.
The RDS engine treats restoring from an encrypted snapshot as creating an encrypted NEW database.
upvoted 2 times
Target architecture
The destination RDS DB instance is created by restoring the DB snapshot copy of the source RDS DB instance.
An AWS KMS key is used for encryption while restoring the snapshot.
AWS KMS key for encryption – When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed
key for Amazon RDS to encrypt your DB instance. If you don't specify the key identifier for a customer managed key, Amazon RDS uses the
AWS managed key for your new DB instance. Amazon RDS creates an AWS managed key for Amazon RDS for your AWS account.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 2 times
Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS)
Restore encrypted snapshot to an existing DB instance.
This is the correct approach as it allows you to encrypt the existing snapshots and the existing DB instance using AWS KMS. This way, you
can ensure that all data stored in the DB instance and the snapshots are encrypted at rest, providing an additional layer of security.
upvoted 1 times
This option ensures that the database snapshots are encrypted at rest by copying them to an S3 bucket that is encrypted using SSE-KMS.
This option also provides the flexibility to restore the snapshots to a new RDS DB instance in the future, which will also be encrypted by
default.
upvoted 1 times
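A minimal boto3 sketch of the copy-and-encrypt pattern discussed above: copy the unencrypted snapshot with a KMS key and restore the copy as a new, encrypted DB instance. The identifiers and the key alias are assumptions.

```python
# Minimal sketch (boto3): encrypt an existing RDS snapshot by copying it with
# a KMS key, then restore the copy as a NEW encrypted DB instance.
import boto3

rds = boto3.client("rds")

# 1. Copy the latest snapshot and encrypt the copy with a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:mydb-2024-01-01",   # placeholder snapshot name
    TargetDBSnapshotIdentifier="mydb-encrypted-copy",
    KmsKeyId="alias/rds-encryption-key",                # assumed KMS key alias
)

# 2. Wait for the copy, then restore it as a new encrypted Multi-AZ instance.
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-copy")
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-copy",
    MultiAZ=True,
)
```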
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
A. Use multifactor authentication (MFA) to protect the encryption keys.
B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
C. Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.
Correct Answer: B
AWS KMS handles the encryption key management, rotation, and auditing. This removes the undifferentiated heavy lifting for developers.
KMS integrates natively with many AWS services like S3, EBS, RDS for encryption. This makes it easy to encrypt data.
KMS scales automatically as key usage increases. Developers don't have to worry about provisioning key infrastructure.
Fine-grained access controls are available via IAM policies and grants. KMS is secure by default.
Features like envelope encryption make compliance easier for regulated workloads.
AWS handles the hardware security modules (HSMs) for cryptographic key storage
upvoted 2 times
Option A (MFA), option C (ACM), and option D (IAM policy) are not directly related to reducing the operational burden of key management.
While these options may provide additional security measures or access controls, they do not specifically address the scalability and
management aspects of a key management infrastructure. AWS KMS is designed to simplify the key management process and is the most
suitable option for reducing the operational burden in this scenario.
upvoted 2 times
AWS KMS is a fully managed service that makes it easy to create and manage encryption keys. It allows developers to easily encrypt and
decrypt data in their applications, and it automatically handles the underlying key management tasks, such as key generation, key
rotation, and key deletion. This can help to reduce the operational burden associated with key management.
upvoted 4 times
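As a small illustration of the envelope-encryption workflow that AWS KMS enables, here is a hedged boto3 sketch; the key alias is an assumption, and the plaintext data key would normally be used by a client-side encryption library and then discarded.

```python
# Hedged sketch (boto3) of envelope encryption with AWS KMS: generate a data
# key under a KMS key, encrypt locally with the plaintext copy, store only the
# encrypted copy, and decrypt it later when needed.
import boto3

kms = boto3.client("kms")

# Generate a 256-bit data key under the application's KMS key (assumed alias).
resp = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]        # use for local encryption, then discard
encrypted_key = resp["CiphertextBlob"]   # safe to store alongside the ciphertext

# Later, recover the plaintext data key to decrypt the data.
plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
```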
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is on each
instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute
capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
B. Create an Amazon S3 bucket Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL
termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the
existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the
SSL certificate from ACM.
Correct Answer: D
An Application Load Balancer (ALB) can offload the SSL termination process from the EC2 instances, which can help to increase the
compute capacity available for the web application. By creating an ALB with an HTTPS listener and using the SSL certificate from ACM, the
ALB can handle the SSL termination process, leaving the EC2 instances free to focus on running the web application.
upvoted 8 times
Using an Application Load Balancer with an HTTPS listener allows SSL termination to happen at the load balancer layer.
The EC2 instances behind the load balancer receive only unencrypted traffic, reducing load on them.
Importing the custom SSL certificate into ACM allows the ALB to use it for HTTPS listeners.
This removes the need to install and manage SSL certificates on each EC2 instance.
ALB handles the SSL overhead and scales automatically. The EC2 fleet focuses on app logic.
Options A, B, C don't offload SSL overhead from the EC2 instances themselves.
upvoted 2 times
Option A suggests creating a new SSL certificate using ACM, but it does not address the SSL termination offloading and load balancing
capabilities provided by an ALB.
Option B suggests migrating the SSL certificate to an S3 bucket, but this approach does not provide the necessary SSL termination and
load balancing functionalities.
Option C suggests creating another EC2 instance as a proxy server, but this adds unnecessary complexity and management overhead
without leveraging the benefits of ALB's built-in load balancing and SSL termination capabilities.
Therefore, option D is the most suitable choice to increase the application's performance in this scenario.
upvoted 1 times
dejung 7 months, 3 weeks ago
Selected Answer: A
Why is A wrong?
upvoted 2 times
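A rough boto3 sketch of option D: import the company's existing certificate into ACM and attach it to an HTTPS listener on an Application Load Balancer so TLS terminates at the load balancer. File paths, ARNs, and the target group are placeholders.

```python
# Rough sketch (boto3): import a certificate into ACM and terminate HTTPS at an ALB.
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

with open("cert.pem", "rb") as c, open("key.pem", "rb") as k, open("chain.pem", "rb") as ch:
    cert_arn = acm.import_certificate(
        Certificate=c.read(), PrivateKey=k.read(), CertificateChain=ch.read()
    )["CertificateArn"]

# HTTPS listener on the ALB; the EC2 instances behind it no longer do TLS work.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/def",
    }],
)
```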
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be
started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has
asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?
Correct Answer: A
Answer >> A
upvoted 11 times
Spot can provide significant cost savings (up to 90%) compared to On-Demand.
Since the job is stateless and can be stopped/restarted anytime, the intermittent availability of Spot is not an issue.
Spot supports the same instance types as On-Demand, so optimal instance types can be chosen.
For a 60+ minute batch job, the chance of Spot interruption is low. But if it happens, the job can just be restarted.
Reserved Instances don't offer any advantage for a highly dynamic job like this.
Lambda is not a good fit given the long runtime requirement.
upvoted 3 times
EC2 Spot Instances allow users to bid on spare Amazon EC2 computing capacity and can be a cost-effective solution for stateless,
interruptible workloads that can be started and stopped at any time. Since the batch processing job is stateless, can be started and
stopped at any time, and typically takes upwards of 60 minutes to complete, EC2 Spot Instances would be a good fit for this workload.
upvoted 2 times
k1kavi1 9 months, 1 week ago
Selected Answer: A
Spot Instances should be good enough and cost effective because the job can be started and stopped at any given time with no negative
impact.
upvoted 1 times
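For reference, a hedged boto3 sketch of an Auto Scaling group that runs entirely on Spot capacity, which fits a stateless, interruption-tolerant batch job; the launch template name, subnets, and instance types are assumptions.

```python
# Hedged sketch (boto3): an all-Spot Auto Scaling group for an interruptible batch job.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="batch-spot-asg",
    MinSize=0,
    MaxSize=50,
    DesiredCapacity=10,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",            # placeholder subnet IDs
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "batch-template",    # assumed launch template
                "Version": "$Latest",
            },
            "Overrides": [{"InstanceType": t} for t in ("m5.large", "m5a.large", "m6i.large")],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,      # 100% Spot above base capacity
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```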
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The
database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The
EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly
available.
Which combination of configuration options will meet these requirements? (Choose two.)
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the
private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance
in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnets.
Correct Answer: CE
If the instances did not require access to the internet, then the answer could have been
(B) to use a private NAT gateway and keep it in the private subnets to communicate only to the VPCs.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
upvoted 15 times
darn 5 months, 2 weeks ago
your link is right but your voting is wrong, it should be AD, although that still doesn't explain why 2 NAT gateways
upvoted 3 times
Option A uses an Auto Scaling group to launch the EC2 instances in private subnets, ensuring they are not directly accessible from the
public internet. The RDS Multi-AZ DB instance is also placed in private subnets, maintaining security.
upvoted 1 times
A solutions architect needs to implement a solution to reduce a company's storage costs. All the company's data is in the Amazon S3 Standard
storage class. The company must keep all data for at least 25 years. Data from the most recent 2 years must be highly available and immediately
retrievable.
Which solution will meet these requirements?
B. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.
D. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately and to S3 Glacier Deep
Archive after 2 years.
Correct Answer: B
We have a hard requirement in the question that says the data should be retrievable immediately for the first 2 years, which cannot be
achieved with the Intelligent-Tiering archive tiers. So B is the correct option imho.
For the above reason, it is possible only with S3 Standard, combined with a lifecycle configuration to move objects to S3 Glacier Deep
Archive after 2 years.
upvoted 10 times
Option C is also incorrect because using S3 Intelligent-Tiering with archiving option would not meet the requirement of immediately
retrievable data for the most recent 2 years.
Option D is not the best choice because transitioning objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) and then to S3 Glacier
Deep Archive would not satisfy the requirement of immediately retrievable data for the most recent 2 years.
Option B is the correct solution. By setting up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years, the
company can keep all data for at least 25 years while ensuring that data from the most recent 2 years remains highly available and
immediately retrievable in the Amazon S3 Standard storage class. This solution optimizes storage costs by leveraging the Glacier Deep
Archive for long-term storage.
upvoted 1 times
S3 Standard covers the high availability/immediate retrieval requirement for the first 2 years. S3 Intelligent-Tiering would just incur an
additional monitoring cost, while the company states that it requires immediate retrieval at any moment and without risk to availability. So B.
upvoted 2 times
G3 8 months ago
C appears to be appropriate - good case for intelligent tiering
upvoted 1 times
S3 Intelligent Tiering supports changing the default archival time to 730 days (2 years) from the default 90 or 180 days. Other levels of
tiering are instant access tiers.
upvoted 2 times
S3 Lifecycle policies allow you to automatically transition objects to different storage classes based on the age of the object or other
specific criteria. In this case, the company needs to keep all data for at least 25 years, and the data from the most recent 2 years must be
highly available and immediately retrievable.
upvoted 2 times
Option C is not a good solution because S3 Intelligent-Tiering is designed to automatically move objects between two storage classes
(Standard and Infrequent Access) based on object access patterns. It does not provide a way to transition objects to S3 Glacier Deep
Archive, which is required for long-term storage.
Option D is the correct solution because it would transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately,
making the data from the most recent 2 years immediately retrievable. After 2 years, the objects would be transitioned to S3 Glacier
Deep Archive for long-term storage. This solution meets the requirements of the company to keep all data for at least 25 years and
make the data from the most recent 2 years immediately retrievable.
upvoted 1 times
S3 Intelligent-Tiering automatically stores objects in three access tiers: one tier optimized for frequent access, a lower-cost tier optimized
for infrequent access, and a very-low-cost tier optimized for rarely accessed data. For a small monthly object monitoring and automation
charge, S3 Intelligent-Tiering moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier for savings
of 40%; and after 90 days of no access, they're moved to the Archive Instant Access tier.
There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller than
128 KB are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be charged at the Frequent Access tier
rates and don’t incur the monitoring and automation charge
upvoted 1 times
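A minimal boto3 sketch of option B follows: a lifecycle rule that keeps objects in S3 Standard for two years and then transitions them to S3 Glacier Deep Archive. The bucket name is an assumption, and the optional expiration shown is set past the 25-year retention floor.

```python
# Minimal sketch (boto3): transition objects to Glacier Deep Archive after 2 years.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-reports-bucket",                        # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-2-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                       # apply to every object
            "Transitions": [{"Days": 730, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 9490},                   # ~26 years, past the 25-year minimum
        }]
    },
)
```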
A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the
maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet
requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?
A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
Correct Answer: A
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
upvoted 22 times
The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 1 times
D) Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival
storage
EC2 instance store provides the highest performance storage for I/O intensive video processing.
S3 provides durable, scalable object storage for the media content library.
Glacier provides the lowest cost archival storage for media no longer in active use.
EBS volumes don't offer the IOPS needed for video processing.
EFS file storage isn't as durable or cost effective for large media libraries as S3.
By matching each storage need with the optimal storage service - EC2, S3, Glacier - this combination meets the performance, durability,
and cost requirements for each storage use case.
upvoted 2 times
Options C and D suggest using Amazon EC2 instance store, which is ephemeral storage that is directly attached to an EC2 instance. While
it can provide high I/O performance, it is not as durable as Amazon EBS or Amazon S3 and does not meet the durability requirements for
long-term data storage.
Therefore, option A is the most suitable recommendation to meet the specified storage requirements for the media company.
upvoted 1 times
Option A suggests using Amazon EBS for maximum performance, but it may not deliver the same level of performance as instance store
for I/O-intensive workloads.
Option B recommends Amazon EFS for durable data storage, but it may not provide the required performance for video processing.
Option C suggests using Amazon EC2 instance store for maximum performance and Amazon EFS for durable data storage, but instance
store may not offer the durability and scalability required for the storage needs of the media company.
upvoted 2 times
The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 2 times
https://aws.amazon.com/es/ec2/instance-types/
upvoted 1 times
So my answer is D
upvoted 2 times
Question #128 Topic 1
A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the
underlying infrastructure. The company needs a solution that minimizes cost and operational overhead.
What should a solutions architect do to meet these requirements?
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Correct Answer: A
https://aws.amazon.com/eks/
I would not try to overthink this.
upvoted 1 times
Guru4Cloud 1 month, 2 weeks ago
Selected Answer: B
The key reasons are:
Option A suggests using Spot Instances in an EC2 Auto Scaling group, which is a valid approach. However, utilizing Amazon EKS provides a
more streamlined and managed environment for running containers.
Options C and D suggest using On-Demand Instances, which would provide stable capacity but may not be the most cost-effective
solution for minimizing costs, as On-Demand Instances typically have higher prices compared to Spot Instances.
upvoted 2 times
Amazon EKS is a fully managed service that makes it easy to run Kubernetes on AWS. By using a managed node group, the company can
take advantage of the operational benefits of Amazon EKS while minimizing the operational overhead of managing the Kubernetes
infrastructure. Spot Instances provide a cost-effective way to run stateless, fault-tolerant applications in containers, making them a good
fit for the company's requirements.
upvoted 5 times
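To illustrate option B, here is a hedged boto3 sketch of an EKS managed node group that uses Spot capacity; the cluster name, node role ARN, subnets, and instance types are placeholders.

```python
# Hedged sketch (boto3): an EKS managed node group backed by Spot instances.
import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="example-cluster",
    nodegroupName="spot-workers",
    capacityType="SPOT",                                   # Spot capacity for cost savings
    instanceTypes=["m5.large", "m5a.large", "m6i.large"],  # diversify to reduce interruptions
    scalingConfig={"minSize": 1, "maxSize": 20, "desiredSize": 3},
    subnets=["subnet-aaa", "subnet-bbb"],
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",
)
```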
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts
connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning
is limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Migrate the PostgreSQL database to Amazon Aurora.
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Correct Answer: AE
Migrating the database to Amazon Aurora provides a high performance, scalable PostgreSQL-compatible database with minimal
overhead.
Migrating the containerized web app to Fargate removes the need to provision and manage EC2 instances. Fargate auto-scales.
Together, Aurora and Fargate reduce operational overhead and complexity for the data and application tiers.
upvoted 1 times
E is the correct answer because migrating the web application to AWS Fargate with Amazon ECS eliminates the need for infrastructure
management, simplifies deployment, and improves resource utilization.
B. Migrating the web application to Amazon EC2 instances would not directly address the operational overhead and capacity planning
concerns mentioned in the scenario.
C. Setting up an Amazon CloudFront distribution improves content delivery but does not directly address the operational overhead or
capacity planning limitations.
D. Configuring Amazon ElastiCache improves performance but does not directly address the operational overhead or capacity planning
challenges mentioned.
Therefore, the correct answers are A and E as they address the requirements, while the incorrect answers (B, C, D) do not provide the
desired solutions.
upvoted 1 times
Amazon Aurora is a fully managed, scalable, and highly available relational database service that is compatible with PostgreSQL. Migrating
the database to Amazon Aurora would reduce the operational overhead of maintaining the database infrastructure and allow the
company to focus on building and scaling the application.
AWS Fargate is a fully managed container orchestration service that enables users to run containers without the need to manage the
underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can improve
the scalability and efficiency of the web application and reduce the operational overhead of maintaining the underlying infrastructure.
upvoted 1 times
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind
an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Correct Answer: B
A target tracking policy allows the Auto Scaling group to automatically adjust the number of EC2 instances in the group based on a target
value for a metric. In this case, the target value for the CPU utilization metric could be set to 40% to maintain the desired performance of
the application. The Auto Scaling group would then automatically scale the number of instances up or down as needed to maintain the
target value for the metric.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
upvoted 7 times
A target tracking policy allows defining a specific target metric value to maintain, in this case 40% CPU utilization.
Auto Scaling will automatically add or remove instances to keep utilization at the target level, without manual intervention.
This will dynamically scale the group to maintain performance as load changes.
A simple scaling policy only responds to breaching thresholds, not maintaining a target.
Scheduled actions and Lambda would require manual calculation and updates to track utilization.
Target tracking policies are the native Auto Scaling feature designed to maintain a metric at a target value.
upvoted 1 times
A suggests using a simple scaling policy, which scales based on a fixed threshold. However, it is not as effective as a target tracking
policy at dynamically adjusting capacity to maintain a specific CPU utilization level.
C suggests using a Lambda function to update the desired capacity. While this can be done programmatically, it would require custom
scripting and would not provide the same level of automation and responsiveness as a target tracking policy.
D suggests using scheduled scaling actions to scale the Auto Scaling group up and down at predefined times. This approach is not suitable
for maintaining the desired performance in real time based on actual CPU utilization.
upvoted 2 times
With a target tracking scaling policy, you can increase or decrease the current capacity of the group based on a target value for a specific
metric. This policy will help resolve the over-provisioning of your resources. The scaling policy adds or removes capacity as required to
keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking
scaling policy also adjusts to changes in the metric due to a changing load pattern.
upvoted 3 times
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
upvoted 4 times
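A minimal boto3 sketch of option B: a target tracking policy that keeps the group's average CPU utilization at 40%; the group name is an assumption.

```python
# Minimal sketch (boto3): target tracking policy holding average CPU at 40%.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",                        # assumed group name
    PolicyName="keep-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 40.0,
    },
)
```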
A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files
through an Amazon CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL.
What should a solutions architect do to meet these requirements?
A. Write individual policies for each S3 bucket to grant read permission for only CloudFront access.
B. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.
C. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon
Resource Name (ARN).
D. Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the
OAI has read permission.
Correct Answer: D
An OAI provides secure access between CloudFront and S3 without exposing the S3 bucket publicly.
The OAI is associated with the CloudFront distribution.
The S3 bucket policy limits access only to that OAI.
This ensures only CloudFront can access the objects, not direct S3 access.
Option A is complex to manage individual bucket policies.
Option B exposes credentials that aren't needed.
Option C works but OAI is the preferred method.
So using an origin access identity provides the most secure way to serve private S3 content through CloudFront. The OAI prevents direct
public access to the S3 bucket.
upvoted 2 times
Option A suggests writing individual policies for each S3 bucket, which can be cumbersome and difficult to manage, especially if there are
multiple buckets involved.
Option B suggests creating an IAM user and assigning it to CloudFront, but this does not address restricting direct access to the S3 bucket
URL.
Option C suggests writing an S3 bucket policy with CloudFront distribution ID as the Principal, but this alone does not provide the
necessary restrictions to prevent direct access to the S3 bucket URL.
upvoted 2 times
Per the AWS documentation, CloudFront origin access control (OAC) supports the following scenarios:
All Amazon S3 buckets in all AWS Regions, including opt-in Regions launched after December 2022
Amazon S3 server-side encryption with AWS KMS (SSE-KMS)
Dynamic requests (PUT and DELETE) to Amazon S3
OAI doesn't work for the scenarios in the preceding list, or it requires extra workarounds in those scenarios.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 1 times
Buruguduystunstugudunstuy 9 months, 1 week ago
Selected Answer: D
The correct answer is D. To meet the requirements, the solutions architect should create an origin access identity (OAI) and assign it to the
CloudFront distribution. The S3 bucket permissions should be configured so that only the OAI has read permission.
An OAI is a special CloudFront user that is associated with a CloudFront distribution and is used to give CloudFront access to the files in an
S3 bucket. By using an OAI, the company can serve the files through the CloudFront distribution while preventing direct access to the S3
bucket.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 3 times
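For illustration, a hedged boto3 sketch of the bucket-policy side of option D: only the CloudFront origin access identity may read objects. The bucket name and OAI ID are placeholders; note that new distributions would typically use origin access control (OAC) instead of an OAI.

```python
# Hedged sketch (boto3): bucket policy granting read access only to a CloudFront OAI.
import boto3
import json

s3 = boto3.client("s3")

BUCKET = "example-file-share-bucket"                       # assumed bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            # E2EXAMPLE is a placeholder OAI ID
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```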
A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the
company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the
fastest possible response time.
Which combination should a solutions architect recommend to meet these requirements?
A. Amazon CloudFront and Amazon S3
B. AWS Lambda and Amazon DynamoDB
C. Application Load Balancer with Amazon EC2 Auto Scaling
D. Amazon Route 53 with internal Application Load Balancers
Correct Answer: A
Option B is incorrect because AWS Lambda and Amazon DynamoDB are not the most suitable services for serving downloadable files and
meeting the website demands globally.
Option C is incorrect because using an Application Load Balancer with Amazon EC2 Auto Scaling may require more infrastructure
provisioning and management compared to the CloudFront and S3 combination. Additionally, it may not provide the same level of global
scalability and fast response times as CloudFront.
Option D is incorrect because while Amazon Route 53 is a global DNS service, it alone does not provide the caching and content delivery
capabilities required for serving the downloadable reports. Internal Application Load Balancers do not address the global scalability and
caching requirements specified in the scenario.
upvoted 4 times
By combining Amazon CloudFront and Amazon S3, the solutions architect can provide a scalable and cost-effective solution that limits the
provisioning of infrastructure resources and provides the fastest possible response time.
https://aws.amazon.com/cloudfront/
https://aws.amazon.com/s3/
upvoted 3 times
techhb 9 months, 1 week ago
A is correct
upvoted 1 times
A company runs an Oracle database on premises. As part of the company’s migration to AWS, the company wants to upgrade the database to the
most recent available version. The company also wants to set up disaster recovery (DR) for the database. The company needs to minimize the
operational overhead for normal operations and DR setup. The company also needs to maintain access to the database's underlying operating
system.
Which solution will meet these requirements?
A. Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.
B. Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to replicate the snapshots to another
AWS Region.
C. Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.
D. Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another Availability Zone.
Correct Answer: D
https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
upvoted 21 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html
and
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
upvoted 15 times
RDS Custom provides a fully managed Oracle database instance. This reduces operational overhead compared to EC2.
RDS Custom allows accessing the underlying OS which is required.
Creating a read replica in another Region provides a simple DR solution.
RDS Automated Backups are within a single region. Cross-region DR requires replication.
RDS standby in the same AZ doesn't provide geographic diversity for DR.
So RDS Custom meets the managed database, OS access, and simple DR needs. The cross-region read replica provides geographic
diversity for DR. This is the right fit based on the requirements.
upvoted 2 times
You can't create RDS Custom for Oracle replicas in read-only mode. However, you can manually change the mode of mounted replicas to
read-only, and from read-only to mounted. For more information, see the documentation for the create-db-instance-read-replica AWS CLI
command.
You can't change the value of the Oracle Data Guard CommunicationTimeout parameter. This parameter is set to 15 seconds for RDS
Custom for Oracle DB instances.
upvoted 2 times
This link clearly mentions that you can't create cross-Region replicas with RDS Custom for Oracle.
upvoted 2 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
upvoted 1 times
Option A suggests migrating the Oracle database to an Amazon EC2 instance and setting up database replication to a different AWS
Region. This approach requires more operational overhead and management compared to using a managed service like Amazon RDS.
Option B suggests migrating the Oracle database to Amazon RDS for Oracle and activating Cross-Region automated backups. While this
provides backups in another AWS Region, it does not provide the same level of disaster recovery and failover capabilities as a read replica
in another Region.
Option D suggests migrating the Oracle database to Amazon RDS for Oracle and creating a standby database in another Availability Zone.
However, this solution only provides availability within the same Region and does not meet the requirement of having disaster recovery
across AWS Regions.
upvoted 1 times
So, since access to the OS is needed and RDS Custom (which DOES give you OS access) is ruled out by the replica limitation, the answer is clearly A.
upvoted 3 times
Question #134 Topic 1
A company wants to move its application to a serverless solution. The serverless solution needs to analyze existing and new data by using SQL.
The company stores the data in an Amazon S3 bucket. The data requires encryption and must be replicated to a different AWS Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon Athena to query the data.
B. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon RDS to query the data.
C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon Athena to query the data.
D. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon RDS to query the data.
Correct Answer: A
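As a small illustration of the Athena side of option A, here is a hedged boto3 sketch that runs a SQL query directly against the data in S3; the database, table, and results location are assumptions.

```python
# Rough sketch (boto3): run an Athena SQL query against data stored in S3.
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT * FROM app_data WHERE event_date > date '2024-01-01' LIMIT 100",
    QueryExecutionContext={"Database": "analytics_db"},          # assumed Glue/Athena database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution id:", resp["QueryExecutionId"])
```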
A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's
VPC. According to the company’s security team, the connectivity must be private and must be restricted to the target service. The connection
must be initiated only from the company’s VPC.
Which solution will meet these requirements?
A. Create a VPC peering connection between the company's VPC and the provider's VPC. Update the route table to connect to the target
service.
B. Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the target service.
C. Create a NAT gateway in a public subnet of the company's VPC. Update the route table to connect to the target service.
D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.
Correct Answer: D
By asking the provider to create a VPC endpoint for the target service, the company can use AWS PrivateLink to connect to the target
service. This enables the company to access the service privately and securely over an Amazon VPC endpoint, without requiring a NAT
gateway, VPN, or AWS Direct Connect. Additionally, this will restrict the connectivity only to the target service, as required by the
company's security team.
Option A, a VPC peering connection, may not meet the security requirement, as it can allow communication between all resources in both VPCs.
Option B, asking the provider to create a virtual private gateway in its VPC and using AWS PrivateLink to connect to the target service, is
not the optimal solution because it may require the provider to make changes, and you may still face security issues.
Option C, creating a NAT gateway in a public subnet of the company's VPC, can expose the traffic to the internet, which would not meet
the security requirements.
upvoted 6 times
Ask the provider to create a VPC endpoint for the target service
Use AWS PrivateLink to connect to the target service
The reasons are:
PrivateLink provides private connectivity between VPCs without using public internet.
The provider creates a VPC endpoint in their VPC for the target service.
The company uses PrivateLink to securely access the endpoint from their VPC.
Connectivity is restricted only to the target service.
The connection is initiated only from the company's VPC.
Options A, B, C would expose the connection to the public internet or require infrastructure changes in the provider's VPC.
PrivateLink enables private, restricted connectivity to the target service without VPC peering or public exposure.
upvoted 1 times
A. VPC peering does not restrict access only to the target service.
B. A virtual private gateway is used for VPN connectivity; it is not how PrivateLink exposes a service in a provider's VPC.
C. NAT gateway does not provide a private and restricted connection to the target service.
Option D is the correct choice as it uses AWS PrivateLink and VPC endpoint to establish a private and restricted connection from the
company's VPC to the target service in the provider's VPC.
upvoted 2 times
VPC Endpoint:
Allows access to a specific service or application. Only the ECSs and load balancers in the VPC for which VPC endpoint services are created
can be accessed.
upvoted 1 times
https://www.tinystacks.com/blog-post/aws-vpc-peering-vs-privatelink-which-to-use-and-when/
upvoted 1 times
* Ask the provider to create a VPC endpoint for the target service.
* Use AWS PrivateLink to connect to the target service.
Option D involves asking the provider to create a VPC endpoint for the target service, which is a private connection to the service that is
hosted in the provider's VPC. This ensures that the connection is private and restricted to the target service, as required by the company's
security team. The company can then use AWS PrivateLink to connect to the target service over the VPC endpoint. AWS PrivateLink is a
fully managed service that enables you to privately access services hosted on AWS, on-premises, or in other VPCs. It provides secure and
private connectivity to services by using private IP addresses, which ensures that traffic stays within the Amazon network and does not
traverse the public internet.
https://support.huaweicloud.com/intl/en-us/vpcep_faq/vpcep_04_0004.html
upvoted 1 times
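To make the PrivateLink flow described above concrete, here is a minimal boto3 sketch of the consumer side: the company creates an interface VPC endpoint that points at the endpoint service the provider exposes. The service name, VPC, subnet, and security group IDs are placeholders, not values from the question.

import boto3

ec2 = boto3.client("ec2")

# Interface endpoint in the company's VPC that connects to the provider's
# endpoint service (the provider shares the service name after creating it).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                                          # placeholder consumer VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0abc1234def567890",  # placeholder service name
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
    SecurityGroupIds=["sg-0333ccc4444ddd555"],
    PrivateDnsEnabled=False,
)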
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and
accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)
D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.
Correct Answer: CD
An ongoing DMS replication task keeps the source and target databases synchronized during the migration.
The DMS replication server manages and executes the replication tasks.
Together, these will continuously replicate changes from on-prem to Aurora to keep them in sync.
A database backup alone wouldn't maintain synchronization.
upvoted 1 times
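As an illustration of the DMS approach described above, a replication task with MigrationType set to full-load-and-cdc first copies the existing data and then keeps applying ongoing changes, so the on-premises database stays online while Aurora remains synchronized. The ARNs and table mapping below are placeholders.

import json

import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus change data capture keeps source and target in sync.
dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-postgres-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)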
PS: if the question was just asking us something related to the DB migration process alone, all options would be correct.
upvoted 2 times
G3 8 months ago
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-postgresql-database-to-aurora-postgresql.html
This link talks about using DMS. I saw the other link pointing to SCT, so I'm not sure which one is correct.
upvoted 1 times
D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT): The AWS SCT can be used to convert the schema of
the on-premises database to a format that is compatible with Aurora. This will ensure that the data can be properly migrated and that the
Aurora database can be used with the same applications and queries as the on-premises database.
upvoted 2 times
Option C. Create an AWS Database Migration Service (AWS DMS) replication server. This will allow the architect to use AWS DMS to migrate
data from the on-premises database to the Aurora database. AWS DMS can also be used to continuously replicate data between the two
databases to keep them synchronized.
upvoted 3 times
techhb 9 months, 1 week ago
Selected Answer: CD
C & D. SCT is required; it's a mandate, not an option.
upvoted 2 times
Question #137 Topic 1
A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit's account
independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The
company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?
A. Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to
all users in the organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts.
Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and
forwarding those alerts to the appropriate groups.
D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account
alternate contacts in the AWS Organizations console or programmatically.
Correct Answer: D
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
The answer is B.
D would be best if it said that the email address you configure as the "root user email address" will be a distribution list.
The phrase "all future notifications are not missed" points to D, because it says:
".. and all newly created accounts to use the same root user email address"
so any future account that is created will be covered by the business policy.
Use an email address that forwards received messages directly to a list of senior business managers. In the event that AWS needs to
contact the owner of the account, for example, to confirm access, the email is distributed to multiple parties. This approach helps to
reduce the risk of delays in responding, even if individuals are on vacation, out sick, or leave the business.
upvoted 2 times
Option D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS
account alternate contacts in the AWS Organizations console or programmatically.
By configuring all AWS accounts to use the same root user email address and setting up AWS account alternate contacts, the company can
ensure that all notifications are sent to a single email address that is monitored by one or more administrators. This will allow the
company to ensure that all notifications are received and responded to promptly, without the risk of notifications being missed.
upvoted 3 times
bullrem 8 months, 1 week ago
Option D would not meet the requirement of limiting the notifications to account administrators. Instead, it is better to use option B,
which is to configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to
alerts. This way, the company can ensure that the notifications are received by the appropriate people and that they are not missed.
Additionally, AWS account alternate contacts can be configured in the AWS Organizations console or programmatically, which allows
the company to have more granular control over who receives the notifications.
upvoted 4 times
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2
instance in a single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This
application stores the details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?
A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database
Correct Answer: B
A. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and the PostgreSQL database.
C. Incorrect because it does not provide redundancy for the RabbitMQ queue and does not address the high availability requirement for
the PostgreSQL database.
D. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and does not provide redundancy for
the application instances.
upvoted 2 times
* By migrating the queue to Amazon MQ, the architect can take advantage of the built-in high availability and failover capabilities of the
service, which will help ensure that messages are delivered reliably and without interruption.
* By creating a Multi-AZ Auto Scaling group for the EC2 instances that host the application, the architect can ensure that the application is
highly available and able to handle increased traffic without the need for manual intervention.
* By migrating the database to a Multi-AZ deployment of Amazon RDS for PostgreSQL, the architect can take advantage of the built-in
high availability and failover capabilities of the service, which will help ensure that the database is always available and able to handle
increased traffic.
A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3
bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in
larger sizes to the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also wants
to use AWS Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a
pipeline in Amazon SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification for the analysis S3 bucket. Configure
Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket to send event notifications to
Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda
and SageMaker Pipelines as targets for the rule.
C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and
SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
D. Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge
(Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker
Pipelines as targets for the rule.
Correct Answer: A
C and D do exactly the same thing as A and B, but they do not require setting up a Lambda function, which should be more efficient.
The question says the team is manually copying the files; automatically replicating the files should be the most efficient method versus
copying them manually or with a Lambda function.
upvoted 20 times
S3 event notifications can only be sent to SQS, SNS, and Lambda, but NOT to SageMaker.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
S3 event notifications can go to SNS, SQS, and Lambda, but not to SageMaker Pipelines, which is why EventBridge is needed as the intermediary.
upvoted 8 times
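A small boto3 sketch of the pattern the comments point at: enable EventBridge delivery on the analysis bucket, then route the Object Created events to both Lambda and the SageMaker pipeline from an EventBridge rule (S3 event notifications alone cannot target SageMaker). The bucket name, rule name, ARNs, and role are placeholders.

import json

import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Turn on EventBridge delivery for the analysis bucket.
s3.put_bucket_notification_configuration(
    Bucket="analysis-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Rule that matches new objects landing in that bucket.
events.put_rule(
    Name="analysis-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["analysis-bucket"]}},
    }),
)

# Fan the event out to the Lambda function and the SageMaker pipeline.
events.put_targets(
    Rule="analysis-object-created",
    Targets=[
        {"Id": "pattern-matcher",
         "Arn": "arn:aws:lambda:us-east-1:123456789012:function:pattern-matcher"},
        {"Id": "sm-pipeline",
         "Arn": "arn:aws:sagemaker:us-east-1:123456789012:pipeline/reporting-pipeline",
         "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-invoke-pipeline"},
    ],
)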
A solutions architect needs to help a company optimize the cost of running an application on AWS. The application will use Amazon EC2
instances, AWS Fargate, and AWS Lambda for compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2
instances can be interrupted at any time. The application front end will run on Fargate, and Lambda will serve the API layer. The front-end
utilization and API layer utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this application? (Choose two.)
C. Purchase a 1-year Compute Savings Plan for the front end and API layer.
D. Purchase 1-year All Upfront Reserved instances for the data ingestion layer.
E. Purchase a 1-year EC2 instance Savings Plan for the front end and API layer.
Correct Answer: AC
C) Purchase a 1-year Compute Savings Plan for the front end and API layer
Spot Instances provide the greatest savings for flexible, interruptible EC2 workloads like data ingestion.
Savings Plans offer significant discounts for predictable usage like the front end and API layer.
All Upfront and Partial/No Upfront RIs don't align well with the sporadic EC2 usage.
On-Demand is more expensive than Spot for flexible EC2 workloads.
By matching purchasing options to the workload patterns, Spot for unpredictable EC2 and Savings Plans for steady-state usage, the
solutions architect optimizes cost efficiency.
upvoted 1 times
Purchasing a 1-year Compute Savings Plan for the front end and API layer will provide cost savings for predictable utilization over the
course of a year (Option C).
Option B is less cost-effective as it suggests using On-Demand Instances for the data ingestion layer, which does not take advantage of
cost-saving opportunities.
Option D suggests purchasing 1-year All Upfront Reserved instances for the data ingestion layer, which may not be optimal for sporadic
and unpredictable workloads.
Option E suggests purchasing a 1-year EC2 Instance Savings Plan for the front end and API layer, but EC2 Instance Savings Plans apply only to
EC2 usage; the front end runs on Fargate and the API layer on Lambda, which only a Compute Savings Plan covers.
upvoted 3 times
Abrar2022 4 months ago
Spot Instances for the data ingestion layer, because the tasks can be terminated at any time and tolerate disruption. A Compute Savings Plan
also covers Fargate and Lambda usage, which an EC2 Instance Savings Plan does not.
upvoted 1 times
Therefore, the most cost-effective solution for hosting this application would be to use Spot Instances for the data ingestion layer and to
purchase a 1-year Compute Savings Plan for the front-end and API layer.
upvoted 1 times
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each
user a personalized view by using mixture of static and dynamic content. Content is served over HTTPS through an API server running on an
Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the
world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB
as an origin.
B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the
closest Region.
C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content
directly from the ALB.
D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in
the closest Region.
Correct Answer: B
CloudFront reduces latency if it's only static content, which is not the case here.
For dynamic content, CloudFront can't cache the content, so it sends the traffic over the AWS network, which does reduce latency, but the traffic
still has to travel to another Region.
For the case with two Regions and Route 53 latency routing, Route 53 detects the nearest resource (with the lowest latency) and routes the traffic
there. Because the traffic does not have to travel to resources far away, this should have the least latency.
upvoted 8 times
A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the
ALB as an origin.
Here's why:
Option A (Single AWS Region, Amazon CloudFront for both static and dynamic content):
Deploying the application stack in a single AWS Region helps reduce complexity and avoids the potential data synchronization issues that might
arise from using multiple Regions.
upvoted 1 times
https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 1 times
A gaming company is designing a highly available architecture. The application runs on a modified Linux kernel and supports only UDP-based
traffic. The company needs the front-end tier to provide the best possible user experience. That tier must have low latency, route traffic to the
nearest edge location, and provide static IP addresses for entry into the application endpoints.
What should a solutions architect do to meet these requirements?
A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for the application in AWS Application
Auto Scaling.
B. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for the application in an AWS Application
Auto Scaling group.
C. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.
D. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.
Correct Answer: C
* AWS Global Accelerator is a service that routes traffic to the nearest edge location, providing low latency and static IP addresses for the
front-end tier. It supports UDP-based traffic, which is required by the application.
* A Network Load Balancer is a layer 4 load balancer that can handle UDP traffic and provide static IP addresses for the application
endpoints.
* An EC2 Auto Scaling group ensures that the required number of Amazon EC2 instances is available to meet the demand of the
application. This will help the front-end tier to provide the best possible user experience.
Option A is not a valid solution because an Application Load Balancer does not support UDP traffic, and Lambda is not suited to this workload.
Option B is not a valid solution because Amazon CloudFront does not support UDP traffic.
Option D is not a valid solution because Amazon API Gateway does not support UDP traffic.
upvoted 5 times
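For illustration, a boto3 sketch of option C under the assumption that a UDP Network Load Balancer already exists: Global Accelerator provides the static anycast IP addresses and routes users to the nearest edge, and the NLB is registered as the endpoint. Names, the game port, and the NLB ARN are placeholders.

import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-frontend", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7777, "ToPort": 7777}],   # placeholder game port
)

# Register the existing UDP Network Load Balancer as the endpoint.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/abc123",
        "Weight": 128,
    }],
)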
Option A suggests using AWS Lambda for the application, but Lambda is not suitable for long-running UDP-based applications and may
not provide the required low latency.
Option B suggests using CloudFront, which is primarily designed for HTTP/HTTPS traffic and does not have native support for UDP-based
traffic.
Option D suggests using API Gateway, which is primarily used for RESTful APIs and does not support UDP-based traffic.
upvoted 2 times
A company wants to migrate its existing on-premises monolithic application to AWS. The company wants to keep as much of the front-end code
and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage
each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as
targets.
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the
target.
Correct Answer: D
ECS allows running Docker containers, so the existing monolithic app can be containerized and run on ECS with minimal code changes.
The app can be broken into smaller microservices by containerizing different components and managing them separately.
ECS provides auto scaling capabilities to scale each microservice independently.
Using an Application Load Balancer with ECS enables distributing traffic across containers and auto scaling.
ECS has minimal operational overhead compared to managing EC2 instances directly.
Serverless options like Lambda and API Gateway would require significant code refactoring which is not ideal for migrating an existing
app.
upvoted 2 times
A is more suitable for event-driven and serverless workloads. It may not be the ideal choice for migrating a monolithic application and
maintaining the existing codebase.
B integrates with Lambda and API Gateway, it may not provide the required flexibility for breaking the application into smaller applications
and managing them independently.
C would involve managing the infrastructure and scaling manually. It may result in higher operational overhead compared to using a
container service like ECS.
upvoted 2 times
antropaws 4 months ago
Selected Answer: D
I was confused about this, but actually Amazon ECS service can be configured to use Elastic Load Balancing to distribute traffic evenly
across the tasks in your service.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-application-load-balancer.html
upvoted 1 times
If it is ECS on EC2, there is still some operational overhead, and it can only scale up to the EC2 capacity assigned to the cluster.
upvoted 2 times
Hosting the application on Amazon ECS would allow the company to break the monolithic application into smaller, more manageable
applications that can be managed by different teams. Amazon ECS is a fully managed container orchestration service that makes it easy to
deploy, run, and scale containerized applications. By setting up an Application Load Balancer with Amazon ECS as the target, the company
can ensure that the solution is highly scalable and minimizes operational overhead.
upvoted 1 times
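A minimal boto3 sketch of how one of the smaller services carved out of the monolith could be wired to the ALB, assuming a cluster, task definition, and target group already exist; each team would own its own service defined this way. The Fargate launch type, names, subnets, and ARNs are assumptions/placeholders.

import boto3

ecs = boto3.client("ecs")

# One microservice registered behind the ALB through its own target group,
# so it can be managed and scaled independently of the other services.
ecs.create_service(
    cluster="storefront-cluster",
    serviceName="catalog-service",
    taskDefinition="catalog-service:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0aaa", "subnet-0bbb"],
        "securityGroups": ["sg-0ccc"],
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/catalog/abc123",
        "containerName": "catalog",
        "containerPort": 8080,
    }],
)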
Option A is not a valid solution because AWS Lambda is not suitable for hosting long-running applications.
Option B is not a valid solution because AWS Amplify is a framework for building, deploying, and managing web applications, not a
hosting solution.
Option C is not a valid solution because Amazon EC2 instances are not fully managed container orchestration services. The company will
need to manage the EC2 instances, which will increase operational overhead.
upvoted 3 times
career360guru 9 months, 2 weeks ago
Selected Answer: D
It can be C or D depending on how easy it would be to containerize the application. If the application needs a persistent local data store, then C
would be a better choice.
Also, from the use-case description it is not clear whether the application is HTTP-based, though all options use an ALB, so we can safely
assume that it is an HTTP-based application.
upvoted 2 times
A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers
report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the
ReadIOPS and CPUUtilization metrics are spiking when monthly reports run.
What is the MOST cost-effective solution?
Correct Answer: B
A is wrong because migrating to Amazon Redshift introduces additional costs and complexity, and it may not be necessary to switch to a
separate data warehousing service for this specific issue.
C is wrong because simply increasing the instance class of the Aurora database may not be the most cost-effective solution if the
performance issue can be resolved by offloading the reporting workload to an Aurora Replica.
D is wrong because increasing the Provisioned IOPS alone may not address the issue of spikes in CPUUtilization during large reports, as it
primarily focuses on storage performance rather than overall database performance.
upvoted 3 times
Option A: Migrating the monthly reporting to Amazon Redshift may not be cost-effective because it involves creating a new data store
and potentially significant data migration and ETL costs.
Option C: Migrating the Aurora database to a larger instance class may not be cost-effective because it involves changing the
underlying hardware of the database and potentially incurring additional costs for the larger instance.
Option D: Increasing the Provisioned IOPS on the Aurora instance may not be cost-effective because it involves paying for additional
I/O capacity that may not be necessary for other workloads on the database.
upvoted 5 times
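A short boto3 sketch of the Aurora Replica approach mentioned above: add a reader instance to the existing cluster and point the monthly reports at it, so the heavy ReadIOPS and CPU stay off the writer. The cluster identifier, instance class, and the MySQL-compatible engine are assumptions; adjust to the actual cluster.

import boto3

rds = boto3.client("rds")

# Add an Aurora Replica (reader) to the existing cluster; reporting jobs
# should then connect to the cluster's reader endpoint instead of the writer.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-reporting",
    DBClusterIdentifier="ecommerce-aurora-cluster",   # placeholder cluster name
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",                            # assumption: MySQL-compatible edition
)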
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses
a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The
application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the
application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second
EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.
B. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2
On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the
instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.
D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template.
Create an Auto Scaling group with the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer
to the Auto Scaling group.
Correct Answer: D
Migrating the database to Amazon Aurora MySQL provides a scalable, high performance database to support the application.
Creating an AMI of the web application and using it in an Auto Scaling group with Spot instances allows cheap and efficient scaling of the
web tier.
The Application Load Balancer distributes traffic across the Auto Scaling group.
Spot instances in an Auto Scaling group allow cost-optimized automatic scaling based on demand.
This approach provides high availability and seamless scaling without manual intervention.
upvoted 2 times
A is incorrect because adding a single second EC2 instance behind an Application Load Balancer does not scale seamlessly with demand the
way an Auto Scaling group does.
B is incorrect because weighted routing in Amazon Route 53 distributes traffic based on fixed weights, which may not dynamically adjust
to the changing load.
C is incorrect because using AWS Lambda to stop and change the instance type based on CPU utilization is not an efficient way to handle
scaling for a web application. Auto Scaling is a better approach for dynamic scaling.
upvoted 2 times
None of A-C give seamless scalability. A and B are about adding a second instance (which I assume does not count as "scaling seamlessly"). C
is about changing the instance type.
Migrating the database to Amazon Aurora MySQL will allow the database to scale automatically, so it can handle an increase in traffic
without manual intervention. Creating an AMI of the web application and using a launch template will allow the company to quickly and
easily launch new instances of the application, which can then be added to an Auto Scaling group. This will allow the application to
automatically scale up and down based on demand, ensuring that there are enough resources to handle busy times without incurring the
cost of running idle resources.
Using a Spot Fleet to launch the instances will allow the company to take advantage of Amazon's spare capacity and get a discount on
their EC2 instances. Attaching an Application Load Balancer to the Auto Scaling group will allow the load to be distributed across all of the
available instances, improving the performance and reliability of the application.
upvoted 3 times
* it uses an Auto Scaling group with a launch template and a Spot Fleet to automatically scale the number of EC2 instances based on the
workload.
* using a Spot Fleet allows the company to take advantage of the lower prices of Spot Instances while still providing the required
performance and availability for the application.
* using an Aurora MySQL database instance allows the company to take advantage of the scalability and performance of Aurora.
upvoted 2 times
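A condensed boto3 sketch of the Auto Scaling piece of option D, assuming the AMI is already baked into a launch template and an ALB target group exists; a mixed instances policy is one way to run the group on Spot capacity. Names, subnets, and ARNs are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="analytics-web-asg",
    MixedInstancesPolicy={
        "LaunchTemplate": {"LaunchTemplateSpecification": {
            "LaunchTemplateName": "analytics-web",      # launch template built from the AMI
            "Version": "$Latest",
        }},
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,   # run the group entirely on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",        # spread across Availability Zones
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/analytics/abc123"],
)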
A company runs a stateless web application in production on a group of Amazon EC2 On-Demand Instances behind an Application Load Balancer.
The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight.
Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?
B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capacity that the application needs.
C. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.
D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional capacity that the application needs.
Correct Answer: B
On-Demand Instances provide stable, reliable baseline capacity for the normal workload.
Spot Instances can provide the additional capacity needed during peak periods at a much lower hourly rate compared to On-Demand.
The stateless nature of the application allows taking advantage of Spot without affecting availability. If Spot is interrupted, the baseline
On-Demand capacity remains available.
Reserved Instances require upfront commitment and may not match the variable workload.
Dedicated Instances are more expensive than On-Demand for baseline capacity.
Using only Spot Instances risks interruption during peak times if capacity is not available.
upvoted 2 times
With a Reserved Instance, on the other hand, we are locked in for a year but at a 60% discount. Assuming an On-Demand price of $100 per hour,
that means we'll be paying $40 per hour. Running it for a day: $40 * 24 = $960
upvoted 3 times
cookieMr 3 months, 1 week ago
Selected Answer: B
B is correct because it combines the use of Reserved Instances and Spot Instances to minimize EC2 costs while ensuring availability.
Reserved Instances provide cost savings for the baseline level of usage during the heavy usage period, while Spot Instances are utilized
for any additional capacity needed during peak times, taking advantage of their cost-effectiveness.
A is incorrect because relying solely on Spot Instances for the entire workload can result in potential interruptions and instability during
peak usage periods.
C is incorrect because using On-Demand Instances for the baseline level of usage does not provide the cost savings and long-term
commitment benefits that Reserved Instances offer.
D is incorrect because using Dedicated Instances for the baseline level of usage incurs additional costs without significant benefits for this
scenario. Dedicated Instances are typically used for compliance or regulatory requirements rather than cost optimization.
upvoted 2 times
* Using Reserved Instances for the baseline level of usage will provide a discount on the EC2 costs for steady overnight and weekend
usage.
* Using Spot Instances for any additional capacity that the application needs during peak usage times will allow the company to take
advantage of spare capacity in the region at a lower cost than On-Demand Instances.
upvoted 4 times
A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past
month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?
A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep
Archive.
Correct Answer: B
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html
AWS Backup allows you to backup your S3 data stored in the following S3 Storage Classes:
• S3 Standard
• S3 Standard - Infrequently Access (IA)
• S3 One Zone-IA
• S3 Glacier Instant Retrieval
• S3 Intelligent-Tiering (S3 INT)
upvoted 7 times
A is incorrect because using AWS Backup to move logs to S3 Glacier Deep Archive can incur additional costs and complexity compared to
using S3 Lifecycle policies directly.
C adds unnecessary complexity and costs by involving CloudWatch Logs and AWS Backup when direct management through S3 is
sufficient.
D is incorrect because using S3 Lifecycle policies to move logs from CloudWatch Logs to S3 Glacier Deep Archive is not a valid option.
CloudWatch Logs and S3 have separate storage mechanisms, and S3 Lifecycle policies cannot be applied directly to CloudWatch Logs.
upvoted 2 times
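For reference, a boto3 sketch of the lifecycle rule described in option B: transition log objects to S3 Glacier Deep Archive after 30 days and expire them after the 10-year retention period. The bucket name and the expiration rule are assumptions for illustration.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="critical-app-logs",                     # placeholder bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                   # apply to every object
        "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
        "Expiration": {"Days": 3650},               # delete after roughly 10 years
    }]},
)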
This solution would allow the application team to quickly access the logs from the past month for troubleshooting, while also providing a
cost-effective storage solution for the logs that are rarely accessed and need to be retained for 10 years.
upvoted 1 times
career360guru 9 months, 2 weeks ago
Selected Answer: B
Option B is the most cost-effective. Moving logs to CloudWatch Logs may incur additional costs.
upvoted 1 times
A company has a data ingestion workflow that includes the following components:
An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries
An AWS Lambda function that processes and stores the data
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested
unless the company manually reruns the job.
What should a solutions architect do to ensure that all notifications are eventually processed?
A. Configure the Lambda function for deployment across multiple Availability Zones.
B. Modify the Lambda function's configuration to increase the CPU and memory allocations for the function.
C. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time between retries.
D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process
messages in the queue.
Correct Answer: D
To ensure that all notifications are eventually processed, the solutions architect can set up an Amazon SQS queue as the on-failure
destination for the Amazon SNS topic. This way, when the Lambda function fails due to network connectivity issues, the notification will be
sent to the queue instead of being lost. The Lambda function can then be modified to process messages in the queue, ensuring that all
notifications are eventually processed.
upvoted 3 times
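A minimal boto3 sketch of option D's wiring: an on-failure destination routes events that the Lambda function could not process (after its asynchronous retries) to an SQS queue, from which they can be reprocessed later. The function name and queue ARN are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Failed asynchronous invocations (for example, during a network outage) are
# sent to the SQS queue instead of being dropped.
lambda_client.put_function_event_invoke_config(
    FunctionName="ingest-and-store",
    MaximumRetryAttempts=2,
    DestinationConfig={"OnFailure": {
        "Destination": "arn:aws:sqs:us-east-1:123456789012:ingest-failures",
    }},
)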
A company has a service that produces event data. The company wants to use AWS to process the event data as it is received. The data is written
in a specific order that must be maintained throughout processing. The company wants to implement a solution that minimizes operational
overhead.
How should a solutions architect accomplish this?
A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process
messages from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
AWS Lambda function as a subscriber.
C. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process
messages from the queue independently.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
Correct Answer: A
B is incorrect because Amazon Simple Notification Service (Amazon SNS) topics are not designed to preserve message order. SNS is a
publish-subscribe messaging service and does not guarantee the order of message delivery.
C is incorrect because using an SQS standard queue does not guarantee the order of message processing. SQS standard queues provide
high throughput and scale, but they do not guarantee strict message ordering.
D is incorrect because configuring an SQS queue as a subscriber to an SNS topic does not ensure message ordering. SNS topics distribute
messages to subscribers independently, and the order of message processing is not guaranteed.
upvoted 2 times
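To illustrate why A preserves ordering, a short boto3 sketch: a FIFO queue delivers messages that share a MessageGroupId in the exact order they were sent. The queue name and payload are placeholders.

import boto3

sqs = boto3.client("sqs")

# FIFO queues require the .fifo suffix; content-based deduplication avoids
# computing a MessageDeduplicationId for every send.
queue_url = sqs.create_queue(
    QueueName="event-stream.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages with the same MessageGroupId are processed strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"event_id": 1, "type": "sensor-reading"}',
    MessageGroupId="sensor-42",
)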
Option B is incorrect because using Amazon Simple Notification Service (Amazon SNS) does not guarantee the order in which messages
are delivered.
Option C is incorrect because using an Amazon SQS standard queue does not guarantee the order in which messages are processed.
Option D is incorrect because using an Amazon SQS queue as a subscriber to an Amazon SNS topic does not guarantee the order in which
messages are processed.
upvoted 3 times
techhb 9 months, 1 week ago
Only A is the right option here.
upvoted 1 times
A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a
solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more
than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time,
the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?
B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
C. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
D. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.
Correct Answer: A
Composite alarms allow defining alarms with multiple metrics and conditions, like high CPU AND high read IOPS in this case.
Composite alarms can avoid false positives triggered by a single metric spike.
Dashboards help visualize but won't take automated action. Synthetics tests application availability but doesn't address the metrics.
Single metric alarms with multiple thresholds can't correlate across metrics and may still trigger false positives.
Composite alarms allow acting quickly when both CPU and IOPS are high, per the stated need.
upvoted 2 times
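A boto3 sketch of the composite alarm idea, assuming the two child metric alarms are defined as shown; the exact read-IOPS metric and thresholds depend on the instance and volume type (DiskReadOps and the numbers here are assumptions), and the instance ID and SNS topic are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

for name, metric, threshold in [
    ("high-cpu", "CPUUtilization", 50),        # percent
    ("high-read-iops", "DiskReadOps", 5000),   # placeholder read-IOPS threshold
]:
    cloudwatch.put_metric_alarm(
        AlarmName=name,
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
    )

# The composite alarm only fires when BOTH child alarms are in ALARM state,
# which suppresses false alarms from short CPU bursts alone.
cloudwatch.put_composite_alarm(
    AlarmName="cpu-and-read-iops",
    AlarmRule="ALARM(high-cpu) AND ALARM(high-read-iops)",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)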
B can help in monitoring the overall health and performance of the application. However, it does not directly address the specific
requirement of triggering an action when CPU utilization and read IOPS exceed certain thresholds simultaneously.
C. Creating CloudWatch Synthetics canaries is useful for actively monitoring the application's behavior and availability. However, it does
not directly address the specific requirement of monitoring CPU utilization and read IOPS to trigger an action.
D. Creating single CloudWatch metric alarms with multiple metric thresholds where possible can be an option, but it does not address the
requirement of triggering an action only when both CPU utilization and read IOPS exceed their respective thresholds simultaneously.
upvoted 3 times
In contrast, Option B, creating Amazon CloudWatch dashboards, would not directly address the requirement to trigger an alarm when
both CPU utilization is high and read IOPS on the disk are high at the same time. Dashboards can be useful for visualizing metric data
and identifying trends, but they do not have the capability to trigger alarms based on multiple metric thresholds.
Option C, using Amazon CloudWatch Synthetics canaries, may not be the best choice for this scenario, as canaries are used for
synthetic testing rather than for monitoring live traffic. Canaries can be useful for monitoring the availability and performance of an
application, but they may not be the most effective way to monitor the specific metric thresholds and conditions described in this
scenario.
upvoted 2 times
The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms. Using composite
alarms can reduce alarm noise.
upvoted 3 times
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use
only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-
northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all
AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the
use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed
outside of ap-northeast-3.
Correct Answer: AC
Option B is incorrect because using rules in AWS WAF alone does not address the requirement of denying access to all AWS Regions except
ap-northeast-3.
Option D is incorrect because configuring outbound rules in network ACLs and IAM policies for users can help restrict traffic and access,
but it does not enforce the company's requirement of denying access to all Regions except ap-northeast-3.
Option E is incorrect because using AWS Config and managed rules can help detect and alert for specific resources and configurations, but
it does not directly enforce the restriction of internet access or deny access to specific Regions.
upvoted 3 times
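To make the SCP option (C) concrete, a hedged boto3 sketch that creates one policy denying actions outside ap-northeast-3 and denying internet gateway creation/attachment, then attaches it to an OU. The exempted global services, policy name, and OU ID are assumptions.

import json

import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApNortheast3",
            "Effect": "Deny",
            # Global services that are not Region-scoped are exempted (assumption).
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["ap-northeast-3"]}},
        },
        {
            "Sid": "DenyInternetGateways",
            "Effect": "Deny",
            "Action": ["ec2:CreateInternetGateway", "ec2:AttachInternetGateway"],
            "Resource": "*",
        },
    ],
}

policy = org.create_policy(
    Name="region-and-internet-guardrails",
    Description="Deny Regions other than ap-northeast-3 and deny internet gateways",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",   # placeholder organizational unit
)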
Abrar2022 3 months, 3 weeks ago
Didn't know that SCPs (service control policies) could be used to deny users internet access. Good to know. I always thought they were only for
controlling who can and cannot access AWS services.
upvoted 1 times
A - CANNOT BE!!! AWS Control Tower is not available in ap-northeast-3! Check the Region availability list.
B - for sure no.
C - SCPs (service control policies) - for sure.
D - A deny outbound rule would have to be placed in every VPC, plus an IAM policy to deny users from creating services outside ap-northeast-3.
E - it only creates an alert, which means the violation still happens and merely triggers a notification, so I think it's not good either.
upvoted 2 times
https://www.aws-services.info/controltower.html
upvoted 1 times
Config: cannot enforce.
AWS Config can help you detect violations of restrictions on internet access and Region usage using AWS Config Rules.
However, AWS Config is a monitoring service that provides continuous assessment of your AWS resources against desired configurations.
While AWS Config can alert you when a configuration change occurs, it cannot directly restrict access to resources or enforce specific
policies. For that, you need other AWS services such as AWS Identity and Access Management (IAM), AWS Firewall Manager, or AWS
Organizations.
upvoted 3 times
A company uses a three-tier web application to provide training to new employees. The application is accessed for only 12 hours every day. The
company is using an Amazon RDS for MySQL DB instance to store information and wants to minimize costs.
What should a solutions architect do to meet these requirements?
A. Configure an IAM policy for AWS Systems Manager Session Manager. Create an IAM role for the policy. Update the trust relationship of the
role. Set up automatic start and stop for the DB instance.
B. Create an Amazon ElastiCache for Redis cache cluster that gives users the ability to access the data from the cache when the DB instance
is stopped. Invalidate the cache after the DB instance is started.
C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS. Attach the role to the EC2 instance. Configure a
cron job to start and stop the EC2 instance on the desired schedule.
D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon CloudWatch Events) scheduled rules
to invoke the Lambda functions. Configure the Lambda functions as event targets for the rules.
Correct Answer: D
It is option D. Option A could have been applicable had it been AWS Systems Manager State Manager & not AWS Systems Manager
Session Manager
upvoted 26 times
https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/
upvoted 1 times
Option A is not the most suitable solution because it refers to IAM policies for AWS Systems Manager Session Manager, which is primarily
used for interactive shell access to EC2 instances and does not directly address the requirement of starting and stopping the DB instance.
Option B is not the most suitable solution because it suggests using Amazon ElastiCache for Redis as a cache cluster, which may not
provide the desired cost optimization for the DB instance.
Option C is not the most suitable solution because launching an EC2 instance and configuring cron jobs to start and stop it does not
directly address the requirement of minimizing costs for the Amazon RDS DB instance.
upvoted 2 times
This post presents a solution using AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and
start the idle databases with specific tags to save on compute costs. The second post presents a solution that accomplishes stop and start
of the idle Amazon RDS databases using AWS Systems Manager.
upvoted 2 times
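A small sketch of option D under the assumption of a 12-hour weekday usage window: one Lambda handler stops or starts the DB instance, and two EventBridge scheduled rules invoke it. The instance identifier, schedule, and function ARN are placeholders.

import json

import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Start or stop the training DB instance based on the event payload."""
    if event.get("action") == "stop":
        rds.stop_db_instance(DBInstanceIdentifier="training-db")   # placeholder identifier
    else:
        rds.start_db_instance(DBInstanceIdentifier="training-db")

# Scheduled rules (created once, outside the handler): start at 07:00 UTC
# and stop at 19:00 UTC on weekdays.
events = boto3.client("events")
events.put_rule(Name="start-training-db", ScheduleExpression="cron(0 7 ? * MON-FRI *)")
events.put_rule(Name="stop-training-db", ScheduleExpression="cron(0 19 ? * MON-FRI *)")
for rule, action in [("start-training-db", "start"), ("stop-training-db", "stop")]:
    events.put_targets(
        Rule=rule,
        Targets=[{
            "Id": "rds-scheduler",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:rds-scheduler",  # placeholder
            "Input": json.dumps({"action": action}),
        }],
    )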
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at
least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to
save money on storage while keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 Inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90
days.
Correct Answer: D
https://aws.amazon.com/s3/pricing/?nc=sn&loc=4
upvoted 1 times
S3 Lifecycle policies can automatically transition objects from S3 Standard to S3 Standard-IA after 90 days.
S3 Standard provides high performance for frequently accessed newer files.
S3 Standard-IA has roughly half the per-GB storage price of S3 Standard for infrequently accessed files.
This matches access patterns - high performance for new files, cost savings for older files.
S3 Intelligent Tiering has higher request costs and complexity for this simple access pattern.
S3 Inventory lists objects and their properties but does not directly transition objects.
Lifecycle policies provide automated transitions without manual intervention.
upvoted 1 times
A is not the most cost-effective solution because it doesn't consider the requirement of keeping the most accessed files readily available.
S3 Standard-IA is designed for data that is accessed less frequently, but it still incurs higher costs compared to Intelligent-Tiering.
C is not the most suitable solution for reducing storage costs. S3 Inventory provides a list of objects and their metadata, but it does not
offer direct cost-optimization features.
D is not the most cost-effective solution because it only moves objects from S3 Standard to S3 Standard-IA after 90 days. It doesn't take
advantage of the benefits of Intelligent-Tiering, which automatically optimizes costs based on access patterns.
upvoted 3 times
kelvintoys93 3 months, 4 weeks ago
Selected Answer: D
128 KB is just a trap.
It cannot be B because:
1. Intelligent-Tiering requires no configuration for class transitions - your only option is whether to opt into the Archive/Deep Archive Access
tiers, which does not make sense for the requirement. Those two classes are cheapest in terms of storage but charge high fees for retrieval.
2. Nowhere does it mention that the access pattern is unpredictable. If we really have to assume, I would rather assume that new songs
have higher access frequency. In that case, you don't really benefit from the auto-transition feature that Intelligent-Tiering provides. You will be
paying the same rate as the S3 Standard class plus an additional monitoring fee for using Intelligent-Tiering. Since the requirement is the most
cost-efficient solution, D is the answer.
upvoted 1 times
A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files
and must restrict all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company
must keep every file in the repository for a minimum of 1 year after its creation date.
Which solution will meet these requirements?
B. Use S3 Object Lock in compliance mode with a retention period of 365 days.
C. Use an IAM role to restrict all users from deleting or changing objects in the S3 bucket. Use an S3 bucket policy to only allow the IAM role.
D. Configure the S3 bucket to invoke an AWS Lambda function every time an object is added. Configure the function to track the hash of the
saved object so that modified objects can be marked accordingly.
Correct Answer: B
A does not provide the same level of protection as compliance mode. In governance mode, there is a possibility for authorized users to
remove the legal hold, potentially allowing objects to be modified or deleted.
C can restrict users from deleting or changing objects, but it does not enforce the retention period requirement. It also does not provide
the same level of immutability and protection against accidental or malicious modifications.
D does not address the requirement of preventing users from modifying or deleting files. It provides a mechanism for tracking changes
but does not enforce the desired access restrictions or retention period.
upvoted 3 times
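A minimal boto3 sketch of answer B, assuming the bucket was created with Object Lock enabled (a prerequisite): compliance mode with a 365-day default retention means no user, including the root user, can overwrite or delete an object version during that year. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# The bucket must have been created with ObjectLockEnabledForBucket=True.
s3.put_object_lock_configuration(
    Bucket="medical-trial-results",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)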
To meet all the requirements, you should use S3 Object Lock in governance mode and use IAM policies to control access to the objects in
the bucket. This would allow you to specify a legal hold with a retention period of at least 1 year and to restrict all users except a few
scientists to read-only access.
upvoted 3 times
notacert 5 months, 3 weeks ago
Legal hold needs to be removed manually.
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal
hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period
and remains in effect until removed. "
upvoted 1 times
Governance:
- Most users can't overwrite or delete an object version or alter its lock settings
- Some users have special permissions to change the retention or delete the object
upvoted 3 times
A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the
world will have reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless
of where the requests originate geographically.
Which solution will meet these requirements?
B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application.
D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
Correct Answer: C
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world.
Connecting the S3 buckets containing the media files to CloudFront will cache the content at global edge locations.
This provides fast reliable access to users everywhere by serving content from the nearest edge location.
CloudFront integrates tightly with S3 for secure, durable storage.
Global Accelerator improves availability and performance for TCP/UDP traffic, not HTTP-based content delivery.
DataSync and SQS are not technologies for a global CDN like CloudFront.
upvoted 1 times
A. is a data transfer service that is not designed for caching or content delivery. It is used for transferring data between on-premises
storage systems and AWS services.
B. is a service that improves the performance and availability of applications for global users. While it can provide fast and reliable access,
it is not specifically designed for caching media files or connecting directly to S3.
D. is a message queue service that is not suitable for caching or content delivery. It is used for decoupling and coordinating message-
based communication between different components of an application.
Therefore, the correct solution is option C: deploying CloudFront to serve the S3 content from CloudFront edge locations.
upvoted 2 times
A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and
application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the
incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business
intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
C. Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.
D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon
OpenSearch Service (Amazon Elasticsearch Service) clusters.
E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract
the data, and load the data into Amazon S3 in Apache Parquet format.
Correct Answer: AC
AWS Lake Formation and Glue provide automated data lake creation with minimal coding. Glue crawlers identify sources and ETL jobs load
to S3.
Athena allows ad-hoc queries directly on S3 data with no infrastructure to manage.
QuickSight provides easy cloud BI for dashboards.
Options C and D require significant custom coding for ETL and queries.
Redshift and OpenSearch would require additional setup and management overhead.
upvoted 3 times
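For the one-time queries in option A, a short boto3 sketch assuming a Glue crawler has already registered the staged S3 data as a table; the database, table, and output location are placeholders.

import boto3

athena = boto3.client("athena")

# Ad-hoc query against the data lake table that Glue cataloged in S3.
response = athena.start_query_execution(
    QueryString="SELECT sensor_type, COUNT(*) AS events FROM staged_events GROUP BY sensor_type",
    QueryExecutionContext={"Database": "analytics_lake"},                         # placeholder Glue database
    ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},      # placeholder results bucket
)
print(response["QueryExecutionId"])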
Using Amazon Kinesis Data Analytics, as mentioned in option B, would be a better choice for running one-time queries on streaming data,
as it is specifically designed to process data in real-time and can automatically scale to match the incoming data rate.
upvoted 2 times
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data
after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has
automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
E. Use AWS Backup to take the backups and to keep the backups for 5 years.
Correct Answer: BE
Configuring the automated backups for the Aurora PostgreSQL DB cluster to retain backups for 5 years will meet the requirement to store
all data for that duration.
Exporting the database logs to CloudWatch Logs will capture the audit logs of actions performed in the database. CloudWatch Logs
retention can be configured to store logs indefinitely.
This meets the need to keep audit logs available beyond the 5 year data retention period.
Taking additional manual snapshots or using AWS Backup is not necessary since automated backups are already enabled.
A lifecycle policy is useful for transitioning storage classes but does not apply here for a set 5 year retention.
upvoted 2 times
neverdie 6 months, 1 week ago
Selected Answer: AD
Automated backups are limited to 35 days.
upvoted 1 times
If you want to retain a backup beyond the backup retention period, you can also take a snapshot of the data in your cluster volume.
Because Aurora retains incremental restore data for the entire backup retention period, you only need to create a snapshot for data that
you want to retain beyond the backup retention period. You can create a new DB cluster from the snapshot.
upvoted 3 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 2 times
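For reference, a manual cluster snapshot that outlives the automated backup window is a single API call; the cluster and snapshot identifiers below are placeholders.

import boto3

rds = boto3.client("rds")

# Manual snapshots are retained until you delete them, unlike automated backups (35-day maximum).
rds.create_db_cluster_snapshot(
    DBClusterIdentifier="aurora-postgres-cluster",          # placeholder cluster name
    DBClusterSnapshotIdentifier="aurora-postgres-5yr-2024",  # placeholder snapshot name
)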
A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then
will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
A. Amazon CloudFront
C. Amazon Route 53
Correct Answer: A
You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases
that specifically require static IP addresses
upvoted 23 times
CloudFront is a content delivery network (CDN) that caches content at edge locations around the world.
Caching the video content globally brings it closer to viewers, reducing latency.
This improves performance for both live streaming and on-demand playback for the global audience.
Route 53 provides DNS resolution but does not cache content locally.
Global Accelerator improves TCP traffic routing performance but is not a caching CDN.
S3 Transfer Acceleration optimizes uploads to S3 over long distances but does not help with content delivery.
upvoted 1 times
B. AWS Global Accelerator: Global Accelerator is more suitable for non-HTTP use cases or when static IP addresses are required.
C. Amazon Route 53: Route 53 is a DNS service and not designed specifically for streaming video.
D. Amazon S3 Transfer Acceleration: S3 Transfer Acceleration improves upload speeds to Amazon S3 but does not directly enhance
streaming performance.
upvoted 2 times
Jeeva28 4 months, 1 week ago
Selected Answer: A
Serve video on demand or live streaming video
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.
For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft
Smooth Streaming, and CMAF, to any device.
For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the
fragments in the right order can be combined, to reduce the load on your origin server.
upvoted 1 times
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html
upvoted 2 times
A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application’s traffic
recently spiked due to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Choose two.)
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Correct Answer: CD
A) Remember the question: "...block requests from unauthorized users?" An API key is part of an authorization process. It is not the most secure mechanism, but it is better than a totally anonymous one. If you don't know the key, you can't authenticate. So the bots, at least for the first days or weeks, could not access the service (eventually they will, since the key will be spread informally). So it's CORRECT.
B) Implementing logic in the Lambda function to detect fraudulent IPs is almost impossible, because it is a dynamic, constantly changing pattern that you cannot handle easily.
E) Creating a role for each user is not going to make the API more protected from unauthorized requests, because a role is a principal; it is not by itself an authorization control.
upvoted 5 times
An API key with a usage plan limits access to only authorized apps and users. This prevents general public access.
WAF rules can identify and block malicious bot traffic through pattern matching and IP reputation lists.
Together, the API key and WAF provide preventative and detective controls against unauthorized requests.
The other options add complexity or are reactive. Creating an IAM role per user is not feasible for a public API.
Ignoring requests in Lambda and changing DNS are response actions after an attack.
upvoted 2 times
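As a rough boto3 sketch of the WAF half of this answer, the example below creates a rate-based rule and associates the web ACL with an API Gateway stage; the ACL name, rate limit, and stage ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Rate-based rule: block source IPs that exceed 2,000 requests in a 5-minute window.
acl = wafv2.create_web_acl(
    Name="api-botnet-protection",      # placeholder name
    Scope="REGIONAL",                  # REGIONAL scope covers API Gateway stages
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit",
            "Priority": 0,
            "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-botnet-protection",
    },
)

# Associate the web ACL with an API Gateway stage (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)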
It's essential to note that while API keys are commonly associated with private APIs, they can also be used in conjunction with public APIs.
In some cases, even public APIs may require API keys to control usage and monitor how the API is being utilized. The API provider might
enforce usage limits, track API usage, or monitor for potential misuse, all of which can be managed effectively using API keys.
In summary, API keys are not exclusive to private APIs and can be used for both private and public APIs, depending on the specific
requirements and use case of the API provider.
upvoted 1 times
API keys are alphanumeric string values that you distribute to application developer customers to grant access to your API. You can use
API keys together with Lambda authorizers, IAM roles, or Amazon Cognito to control access to your APIs.
upvoted 1 times
Don't use API keys for authentication or authorization for your APIs. If you have multiple APIs in a usage plan, a user with a valid API key
for one API in that usage plan can access all APIs in that usage plan. Instead, use an IAM role, a Lambda authorizer, or an Amazon Cognito
user pool.
API keys are intended for software developers wanting to access an API from their application. This link then goes on to say an IAM role
should be used instead.
upvoted 1 times
I will go with CD
upvoted 3 times
An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data
is stored in JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds
if it is needed, and the data must be kept for 30 days.
B. Amazon S3 Glacier
C. Amazon S3 Standard
Correct Answer: C
A. OpenSearch Service (Elasticsearch Service): While it offers fast data retrieval, it may incur higher costs compared to storing data directly
in S3, especially considering the amount of data being generated.
B. S3 Glacier: While it provides long-term archival storage at a lower cost, it does not meet the requirement of immediate access in
milliseconds. Retrieving data from Glacier typically takes several hours.
D. RDS for PostgreSQL: While it can be used for data storage, it may be overkill and more expensive for a backup and disaster recovery
solution compared to S3 Standard, which is more suitable and cost-effective for storing and retrieving data.
upvoted 2 times
joehong 3 months, 3 weeks ago
Selected Answer: B
https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
upvoted 2 times
D. Amazon RDS for PostgreSQL is a relational database service and may not be the most cost-effective solution.
upvoted 3 times
A company has a small Python application that processes JSON documents and outputs the results to an on-premises SQL database. The
application runs thousands of times each day. The company wants to move the application to the AWS Cloud. The company needs a highly
available solution that maximizes scalability and minimizes operational overhead.
A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2 instances to process the documents.
Store the results in an Amazon Aurora DB cluster.
B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents
as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.
C. Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume
to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS
DB instance.
D. Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages. Deploy the Python code as a container
on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to
process the SQS messages. Store the results on an Amazon RDS DB instance.
Correct Answer: D
A. This option requires manual management and scaling of EC2 instances, resulting in higher operational overhead and complexity.
C. This approach still involves manual management and scaling of EC2 instances, increasing operational complexity and overhead.
D. This solution requires managing and scaling an ECS cluster, adding operational overhead and complexity. Utilizing SQS adds complexity
to the system, requiring custom handling of message consumption and processing in the Python code.
upvoted 2 times
A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company’s HPC workloads run
on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are
ultimately stored in persistent storage for analytics and long-term future use.
The company seeks a cloud storage solution that permits the copying of on-premises data to long-term persistent storage to make data available
for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read
and write datasets and output files.
C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)
D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume
Correct Answer: A
In the absence of EFS, it should be FSx. Amazon FSx for Lustre provides a high-performance, parallel file system for hot data.
upvoted 5 times
Amazon FSx for Lustre provides a high-performance, scalable file system optimized for compute-intensive workloads like HPC. It has
native integration with Amazon S3.
Data can be copied from on-premises to an S3 bucket, acting as persistent long-term storage.
The FSx for Lustre file system can then access the S3 data for high speed processing of datasets and output files.
FSx for Lustre is designed for the Linux environments used in this HPC workload.
upvoted 1 times
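As a rough boto3 sketch of the S3 integration, the example below creates a scratch FSx for Lustre file system linked to an S3 bucket; the bucket, subnet ID, and capacity are placeholders, and persistent deployment types use data repository associations instead of import/export paths.

import boto3

fsx = boto3.client("fsx")

# Minimal sketch: a scratch FSx for Lustre file system backed by an S3 bucket.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                             # GiB, smallest scratch increment
    SubnetIds=["subnet-0123456789abcdef0"],           # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-input-data",          # lazily load datasets from S3
        "ExportPath": "s3://hpc-input-data/results",  # write output files back to S3
    },
)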
Option C, S3 Glacier integrated with EBS, is not the best choice as it is a low-cost archival storage service and not optimized for high-
performance file system requirements.
Option D, using an S3 bucket with a VPC endpoint integrated with an Amazon EBS General Purpose SSD (gp2) volume, does not provide
the required high-performance file system capabilities for HPC workloads.
upvoted 2 times
Bmarodi 4 months, 1 week ago
Selected Answer: A
Option A is the right answer.
upvoted 1 times
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands
of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy
the containerized application in a highly available architecture that minimizes operational overhead.
A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across
multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.
D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group
across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold
is breached.
Correct Answer: C
Option B is not the best choice because using the EC2 launch type requires managing and scaling EC2 instances, which increases
operational overhead.
Option C is not the optimal solution as it involves managing the container repository on an EC2 instance and manually launching EC2
instances, which adds complexity and operational overhead.
Option D also requires managing EC2 instances, configuring ASGs, and setting up manual scaling rules based on CloudWatch alarms,
which is not as efficient or scalable as using Fargate in combination with ECS.
upvoted 4 times
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon
EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This
removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 1 times
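To illustrate the target tracking part of option A, the sketch below registers a Fargate service's desired count with Application Auto Scaling and attaches a CPU-based target tracking policy; the cluster name, service name, and capacity limits are placeholders.

import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/app-cluster/app-service"  # placeholder cluster and service names

# Let Application Auto Scaling manage the Fargate service's desired task count.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking: keep average CPU utilization around 60 percent.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)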
A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended
to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The
sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to
process, they must be retained so that they do not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the
messages, respectively.
B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the
Kinesis Client Library (KCL).
C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue
to collect the messages that failed to process.
D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process.
Integrate the sender application to write to the SNS topic.
Correct Answer: C
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 7 times
SQS can handle the sending and processing of 1,000 messages per hour
Messages can be retained for up to 14 days to allow the full 2 days for processing
Using a dead-letter queue will retain failed messages without impacting other processing
SQS requires minimal operational overhead compared to running your own message queue server
upvoted 2 times
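As a rough boto3 sketch of that setup, the example below creates a queue with 14-day retention and a dead-letter queue; the queue names and maxReceiveCount are placeholders.

import boto3, json

sqs = boto3.client("sqs")

# Dead-letter queue that retains messages which repeatedly fail processing.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: 14-day retention (the maximum) comfortably covers the 2-day processing window.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "MessageRetentionPeriod": "1209600",  # 14 days, in seconds
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",           # move to the DLQ after 5 failed receives
        }),
    },
)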
A is not the optimal choice as it involves managing and configuring an EC2 instance running Redis, which adds operational overhead and maintenance requirements.
B is not the most operationally efficient solution as it introduces additional complexity by using Amazon Kinesis data streams and
integrating with the Kinesis Client Library for message processing.
D, using SNS, is not the best fit for the scenario as it is more suitable for pub/sub messaging and broadcasting notifications rather than
the specific requirement of message processing between two applications.
upvoted 3 times
The only problem with this is that the visibility timeout limit is 12 hours max. As the second application takes 2 days to process, there could be duplicate processing of messages in the queue. This might complicate things.
upvoted 2 times
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
vs
https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 4 times
"KCL helps you consume and process data from a Kinesis data stream by taking care of many of the complex tasks associated with
distributed computing."
https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 2 times
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company’s
security policy requires that all website traffic be inspected by AWS WAF.
A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.
B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.
D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF
on the distribution.
Correct Answer: D
AWS WAF can then be enabled on the CloudFront distribution to inspect all incoming traffic.
The correct answer is D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3
bucket. Enable AWS WAF on the distribution.
upvoted 1 times
CloudFront's Origin Access Identity (OAI) is a special CloudFront user that you can associate with your distribution. By applying an OAI to
your S3 bucket, you're able to securely lock down all direct access to your S3 files and require all requests to come through CloudFront.
AWS WAF (Web Application Firewall) is a security service that helps protect your resources against common exploits. You can configure
AWS WAF directly on your CloudFront distribution to inspect incoming requests to your web application.
upvoted 2 times
Option A is not the correct choice as configuring an S3 bucket policy to accept requests from the AWS WAF ARN only would bypass the
inspection of traffic by AWS WAF. It does not ensure that all website traffic is inspected.
Option C is not the optimal solution as it focuses on controlling access to S3 using a security group. Although it associates AWS WAF with
CloudFront, it does not guarantee that all incoming requests are inspected by AWS WAF.
Option D is not the recommended solution as configuring an OAI in CloudFront and restricting access to the S3 bucket does not ensure
that all website traffic is inspected by AWS WAF. The OAI is used for restricting direct access to S3 content, but the traffic should still pass
through AWS WAF for inspection.
upvoted 4 times
Regarding answer D, from what I can tell when you use OAI (or OAC) you don't use WAF, and the question specifically asks for us to use
WAF.
upvoted 1 times
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from
users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective
solution.
Correct Answer: D
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML
pages, images, and videos. By using CloudFront, the HTML pages will be served to users from the edge location that is closest to them,
resulting in faster delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests
expected for the global event, ensuring that the HTML pages are available and accessible to users around the world.
upvoted 7 times
CloudFront is a content delivery network (CDN) that caches content at edge locations around the world. This brings content closer to users
for fast performance.
For high traffic global events with millions of viewers, a CDN is necessary for effective distribution.
Using the S3 bucket as the origin, CloudFront can fetch the files once and cache them globally.
upvoted 1 times
A would allow temporary access to the files, but it does not address the scalability and performance requirements of serving millions of
views globally.
B is not necessary for this scenario as the goal is to distribute the static HTML pages efficiently to users worldwide, not replicate the files
across multiple Regions.
C is primarily used for routing DNS traffic based on the geographic location of users, but it does not provide the caching and content
delivery capabilities required for this use case.
upvoted 2 times
A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and
processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually
process messages without any downtime.
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
Correct Answer: C
While using On-Demand Instances for the unpredictable workloads would mean that capacity is always available when required, this question asks for the most cost-effective solution, not the most operationally efficient one.
upvoted 1 times
C is correct.
upvoted 1 times
Question #168 Topic 1
A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization
in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or actions.
Correct Answer: D
Option A and option B are not suitable for controlling access across multiple accounts in AWS Organizations. ACLs and security groups are
typically used for managing network traffic and access within a single account or a specific resource.
Option C is not the recommended approach. Cross-account roles are used for granting access, and denying access through cross-account
roles can be complex and less manageable compared to using SCPs.
upvoted 4 times
Service control policies (SCPs) let you centrally control the maximum available permissions for the AWS accounts in your organization. SCPs can be attached to the organization root, to organizational units (OUs), or to individual accounts, and they specify which actions are allowed or denied for the accounts within the scope of the policy. By creating an SCP in the root organizational unit, the security team can set permissions for all of the accounts in the organization from a single location, ensuring that the permissions are consistently applied across all accounts.
upvoted 4 times
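For illustration, the sketch below creates a deny SCP and attaches it at the organization root using boto3; the policy name and the denied actions are placeholders.

import boto3, json

org = boto3.client("organizations")

# Hypothetical SCP that denies specific services/actions for every account in scope.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["dynamodb:*", "ec2:DeleteVpc"],  # placeholder restricted actions
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-restricted-services",
    Description="Block services and actions the security team has not approved",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attaching at the root applies the guardrail to every account in the organization.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)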
career360guru 9 months, 2 weeks ago
Selected Answer: D
Option D
upvoted 1 times
A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load
Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application.
Correct Answer: C
A is not related to DDoS protection. Amazon Inspector is a security assessment service that helps identify vulnerabilities and security issues in applications and EC2 instances.
B is also not the appropriate solution. Macie is a service that uses machine learning to discover, classify, and protect sensitive data stored
in AWS. It focuses on data security and protection, not specifically on DDoS prevention.
D is not the most effective solution. GuardDuty is a threat detection service that analyzes events and network traffic to identify potential
security threats and anomalies. While it can provide insights into potential DDoS attacks, it does not actively prevent or mitigate them.
upvoted 2 times
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that helps protect web applications running on AWS from
DDoS attacks. AWS Shield Advanced is an additional layer of protection that provides enhanced DDoS protection capabilities, including
proactive monitoring and automatic inline mitigations, to help protect against even the largest and most sophisticated DDoS attacks. By
enabling AWS Shield Advanced, the solutions architect can help protect the application from DDoS attacks and reduce the risk of
disruption to the application.
upvoted 4 times
A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy,
which now requires the application to be accessed from one specific country only.
D. Configure the network ACL for the subnet that contains the EC2 instances.
Correct Answer: C
Option A and option B focus on network-level access control and do not provide country-specific filtering capabilities.
Option D is not the ideal solution for restricting access based on country. Network ACLs primarily control traffic at the subnet level based
on IP addresses and port numbers, but they do not have built-in capabilities for country-based filtering.
upvoted 3 times
AWS WAF is a web application firewall service that helps protect web applications from common web exploits that could affect application
availability, compromise security, or consume excessive resources. AWS WAF allows you to create rules that block or allow traffic based on
the values of specific request parameters, such as IP address, HTTP header, or query string value. By configuring AWS WAF on the
Application Load Balancer and creating rules that allow traffic from a specific country, the company can ensure that the web application is
only accessible from that country.
upvoted 4 times
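As a rough boto3 sketch of such a rule, the example below builds a web ACL that blocks everything except traffic from one country; the ACL name and country code are placeholders.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Default action blocks all traffic; a geo-match rule allows one country only.
wafv2.create_web_acl(
    Name="single-country-access",   # placeholder name
    Scope="REGIONAL",               # REGIONAL scope covers Application Load Balancers
    DefaultAction={"Block": {}},
    Rules=[
        {
            "Name": "allow-only-de",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},  # placeholder country
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-only-de",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "single-country-access",
    },
)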
A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger
number of inquiries during the holiday season only, which causes slower response times. A solutions architect needs to design a solution that is
scalable and elastic.
A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.
B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax
computations.
C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received
item names.
D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and
passes the item names to the EC2 instance for tax computations.
Correct Answer: D
API Gateway handles creating the REST API frontend to receive requests
Lambda functions scale automatically to handle spikes in traffic during peak seasons
No servers to manage for the computations, providing high scalability
upvoted 1 times
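To make option B concrete, a minimal Lambda handler behind an API Gateway proxy integration might look like the sketch below; the request shape and flat tax rate are hypothetical.

import json

TAX_RATE = 0.08  # hypothetical flat tax rate

def lambda_handler(event, context):
    # Expects a JSON body like {"items": [{"name": "book", "price": 12.50}, ...]}.
    # Lambda scales out automatically during seasonal traffic spikes.
    body = json.loads(event.get("body") or "{}")
    results = [
        {"name": item["name"], "price": item["price"], "tax": round(item["price"] * TAX_RATE, 2)}
        for item in body.get("items", [])
    ]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"results": results}),
    }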
Option C (creating an Application Load Balancer with EC2 instances for tax computations) also involves manual management of the
instances and does not offer the same level of scalability and elasticity as a serverless solution.
Option D (designing a REST API using API Gateway and connecting it with an API hosted on an EC2 instance) adds unnecessary complexity
and management overhead. It is more efficient to directly integrate API Gateway with AWS Lambda for tax computations.
Therefore, designing a REST API using Amazon API Gateway and integrating it with AWS Lambda (option B) is the recommended approach
to achieve a scalable and elastic solution for the company's API during the holiday season.
upvoted 2 times
Bmarodi 4 months, 1 week ago
Selected Answer: B
Option B is the solution that is scalable and elastic, hence this meets requirements.
upvoted 1 times
API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. By designing
a REST API using API Gateway, the solutions architect can create an API that is scalable, flexible, and easy to use. The API Gateway can
accept and pass the item names to the EC2 instance for tax computations, and the EC2 instance can perform the required computations
when the API request is made.
upvoted 2 times
Option B (designing a REST API using API Gateway that passes item names to Lambda for tax computations) would not be a suitable
solution as it may not be suitable for computations that require a larger amount of resources or longer execution times.
Option C (creating an Application Load Balancer with two EC2 instances behind it) would not be a suitable solution as it may not
provide the necessary scalability and elasticity. Additionally, it would not provide the benefits of using API Gateway, such as API
management and monitoring capabilities.
upvoted 1 times
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is
sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire
application stack, and access to the information should be restricted to certain applications.
D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
Correct Answer: A
Option D (configuring CloudFront with the Origin Protocol Policy set to HTTPS Only for the Viewer Protocol Policy) is related to enforcing
HTTPS communication between CloudFront and the viewer (end-user). While important for security, it doesn't address the specific
requirement of protecting sensitive data within the application stack.
upvoted 1 times
Field-level encryption allows you to encrypt sensitive information at the edge before distributing content through CloudFront. It provides
an additional layer of security for sensitive user-submitted data.
Option D ensures that communication between the viewer and CloudFront is encrypted with HTTPS. However, it does not specifically
address the protection and encryption of sensitive information within the application stack.
Therefore, the most appropriate action to protect sensitive information throughout the entire application stack and restrict access to
certain applications is to configure a CloudFront field-level encryption profile (Option C).
upvoted 2 times
"Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive
information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application
stack".
upvoted 2 times
I concur. Why? CloudFront's field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys
(which you supply) before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed
by certain components or services in your application stack.
upvoted 2 times
CloudFront signed cookies can be used to protect sensitive information by requiring users to authenticate with a signed cookie before
they can access content that is served through CloudFront. This can be used to restrict access to certain applications and ensure that the
sensitive information is protected throughout the entire application stack.
Option A, Configure a CloudFront signed URL, would also provide an additional layer of security by requiring users to authenticate with a
signed URL before they can access content served through CloudFront. However, this option would not protect the sensitive information
throughout the entire application stack.
upvoted 2 times
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that
are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files
to the users while reducing the load on the origin.
C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers.
D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.
Correct Answer: B
CloudFront is the most cost-effective solution for this use case because:
CloudFront can cache static assets like videos and images at edge locations closer to users. This improves performance.
Serving files from the CloudFront cache reduces load on the S3 origin.
CloudFront pricing is very low for data transfer and requests.
upvoted 1 times
Options C and D are used for caching frequently accessed data in-memory to improve application performance. However, they are not
specifically designed for caching and serving media files like CloudFront, and therefore, may not provide the same cost-effectiveness and
scalability for this use case.
Hence, deploying a CloudFront web distribution in front of the S3 bucket is the most cost-effective solution for delivering media files to millions of users worldwide while reducing the load on the origin.
upvoted 3 times
Amazon CloudFront supports dynamic content from HTTP and WebSocket protocols, which are based on the Transmission Control
Protocol (TCP) protocol. Common use cases include dynamic API calls, web pages and web applications, as well as an application's static
files such as audio and images. It also supports on-demand media streaming over HTTP.
AWS Global Accelerator supports both User Datagram Protocol (UDP) and TCP-based protocols. It is commonly used for non-HTTP use
cases, such as gaming, IoT and voice over IP. It is also good for HTTP use cases that need static IP addresses or fast regional failover
upvoted 2 times
LuckyAro 8 months, 2 weeks ago
Selected Answer: C
The company wants to provide the files to the users while reducing the load on the origin.
CloudFront speeds up content delivery, but I'm not sure it reduces the load on the origin.
Some form of caching would cache content and deliver it to users without going to the origin for each request.
upvoted 1 times
CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as images and videos,
to users. By using CloudFront, the media files will be served to users from the edge location that is closest to them, resulting in faster
delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests expected from the
millions of users, ensuring that the media files are available and accessible to users around the world.
upvoted 3 times
A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone
behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the
application.
Which architecture should the solutions architect choose that provides high availability?
A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
Correct Answer: B
Option C (creating an Auto Scaling template for another Region) suggests multi-region redundancy, which may not be the most
straightforward solution for achieving high availability without modifying the application.
Option D (changing the ALB to a round-robin configuration) does not provide the desired high availability. Round-robin configuration
alone does not ensure fault tolerance and does not leverage multiple Availability Zones for resilience.
Hence, modifying the Auto Scaling group to use three instances across each of two Availability Zones is the appropriate choice to provide
high availability for the multi-tier application.
upvoted 4 times
This option would provide high availability by distributing the front-end web servers across multiple Availability Zones. If there is an issue
with one Availability Zone, the other Availability Zone would still be available to serve traffic. This would ensure that the application
remains available and highly available even if there is a failure in one of the Availability Zones.
upvoted 4 times
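For reference, spreading the existing Auto Scaling group across two Availability Zones is a single API call; the group name and subnet IDs below are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Spread the existing group across subnets in two Availability Zones.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",              # placeholder group name
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,                                # roughly three instances per AZ
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets in two AZs
)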
An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application
stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some
customers experienced timeouts, and the application did not process the orders of those customers.
A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open
connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.
A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the
database endpoint.
C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read
replica.
D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda
function to use the DynamoDB table.
Correct Answer: B
Option C (creating a read replica in a different AWS Region) introduces additional data replication and management complexity, which
may not be necessary to address the timeout errors.
Option D (migrating to Amazon DynamoDB) involves a significant change in the data storage technology and requires modifying the
application to use DynamoDB instead of Aurora PostgreSQL. This may not be the most suitable solution when the goal is to make minimal
changes to the application.
Therefore, using Amazon RDS Proxy and modifying the Lambda function to use the RDS Proxy endpoint is the recommended solution to
prevent timeout errors and reduce the impact on the database during peak loads.
upvoted 3 times
Using Amazon RDS Proxy can help reduce the number of open connections to the database and improve the performance of the application. RDS Proxy maintains a connection pool and routes requests to available connections in that pool, which keeps the number of connections on the database itself low. The Lambda function only needs to be modified to use the RDS Proxy endpoint instead of the database endpoint to take advantage of this improvement.
upvoted 1 times
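As a rough sketch of how small that change is, the Lambda function below connects through the proxy; it assumes the pg8000 driver is packaged with the function and uses placeholder environment variables and SQL.

import os
import pg8000  # lightweight pure-Python PostgreSQL driver (assumed to be bundled with the function)

# The only application change is pointing the connection at the RDS Proxy endpoint
# (placeholder values supplied through environment variables).
PROXY_ENDPOINT = os.environ["DB_PROXY_ENDPOINT"]  # e.g. my-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com

def lambda_handler(event, context):
    conn = pg8000.connect(
        host=PROXY_ENDPOINT,
        port=5432,
        database=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        cursor = conn.cursor()
        cursor.execute("INSERT INTO orders (payload) VALUES (%s)", (event.get("order"),))
        conn.commit()
    finally:
        conn.close()
    return {"statusCode": 200}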
Option C is not a valid solution because creating a read replica in a different Region does not address the issue of high CPU utilization
and memory utilization on the database.
Option D is not a valid solution because migrating the data from Aurora PostgreSQL to DynamoDB would require significant changes to
the application and may not be the best solution for this particular problem.
upvoted 2 times
An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table.
What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?
Correct Answer: D
Option D (using the internet gateway attached to the VPC) would require routing traffic through the internet gateway, which would result
in the traffic leaving the AWS network.
Therefore, the recommended and most secure approach is to use a VPC endpoint for DynamoDB to ensure private and secure access to
the DynamoDB table from your EC2 instances in private subnets, without the need to traverse the internet or leave the AWS network.
upvoted 4 times
markw92 3 months, 2 weeks ago
VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to
use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP
addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to
control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
upvoted 1 times
Amazon VPC Endpoints enable private communication between Amazon EC2 instances in a VPC and Amazon services such as DynamoDB,
without the need for an internet gateway, NAT device, or VPN connection. When you create a VPC endpoint for DynamoDB, traffic from the
EC2 instances to the DynamoDB table remains within the AWS network and does not traverse the public internet.
upvoted 1 times
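For reference, creating the gateway endpoint is a single API call; the Region, VPC ID, and route table ID below are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint keeps DynamoDB traffic on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                     # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],           # route tables of the private subnets
)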
A VPC endpoint for DynamoDB allows you to privately connect your VPC to the DynamoDB service without requiring an Internet Gateway,
VPN connection, or AWS Direct Connect connection. This ensures that the traffic between the application and the DynamoDB table stays
within the AWS network and is not exposed to the public Internet.
upvoted 2 times
Option C, using a NAT instance in a private subnet, would also allow the traffic to leave the AWS network but would require you to
manage the NAT instance yourself.
Option D, using the internet gateway attached to the VPC, would also expose the traffic to the public Internet.
upvoted 2 times
Question #177 Topic 1
An entertainment company is using Amazon DynamoDB to store media metadata. The application is read intensive and experiencing delays. The
company does not have staff to handle additional operational overhead and needs to improve the performance efficiency of DynamoDB without
reconfiguring the application.
Correct Answer: B
https://aws.amazon.com/dynamodb/dax/#:~:text=Amazon%20DynamoDB%20Accelerator%20(-,DAX),-is%20a%20fully
upvoted 1 times
C. Replicating data with DynamoDB global tables would require additional configuration and operational overhead.
D. Using Amazon ElastiCache for Memcached with Auto Discovery enabled would also require application code modifications and is not
specifically designed for improving DynamoDB performance.
In contrast, option B, using Amazon DynamoDB Accelerator (DAX), is the recommended solution as it is purpose-built for enhancing
DynamoDB performance without the need for application reconfiguration. DAX provides a managed caching layer that significantly
reduces read latency and offloads traffic from DynamoDB tables.
upvoted 4 times
DAX is a fully managed, in-memory cache that can be used to improve the performance of read-intensive workloads on DynamoDB. DAX
stores frequently accessed data in memory, allowing the application to retrieve data from the cache rather than making a request to
DynamoDB. This can significantly reduce the number of read requests made to DynamoDB, improving the performance and reducing the
latency of the application.
upvoted 3 times
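To show how little the application changes, the sketch below uses the DAX client library (assumed to be the amazon-dax-client package) with a placeholder cluster endpoint and table name; the DynamoDB table API itself stays the same.

import amazondax  # assumed: the DAX client library for Python

# Point reads at the DAX cluster endpoint (placeholder); the table interface is unchanged.
dax = amazondax.AmazonDaxClient.resource(
    endpoint_url="dax://media-cache.abc123.dax-clusters.us-east-1.amazonaws.com"
)
table = dax.Table("media_metadata")  # placeholder table name

response = table.get_item(Key={"media_id": "12345"})  # served from the in-memory cache when possible
print(response.get("Item"))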
Option C, replicating data using DynamoDB global tables, would not directly improve the performance of reading requests and would
require additional operational overhead to maintain the replication.
Option D, using Amazon ElastiCache for Memcached with Auto Discovery enabled, would also not be a good fit because it is not
specifically designed for use with DynamoDB and would require reconfiguring the application to use it.
upvoted 1 times
A company’s infrastructure consists of Amazon EC2 instances and an Amazon RDS DB instance in a single AWS Region. The company wants to
back up its data in a separate Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Backup to copy EC2 backups and RDS backups to the separate Region.
B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.
C. Create Amazon Machine Images (AMIs) of the EC2 instances. Copy the AMIs to the separate Region. Create a read replica for the RDS DB
instance in the separate Region.
D. Create Amazon Elastic Block Store (Amazon EBS) snapshots. Copy the EBS snapshots to the separate Region. Create RDS snapshots.
Export the RDS snapshots to Amazon S3. Configure S3 Cross-Region Replication (CRR) to the separate Region.
Correct Answer: A
Option B is incorrect because Amazon Data Lifecycle Manager (Amazon DLM) is not designed for directly copying RDS backups to a
separate region.
Option C is incorrect because creating Amazon Machine Images (AMIs) and read replicas adds complexity and operational overhead
compared to a dedicated backup solution.
Option D is incorrect because using Amazon EBS snapshots, RDS snapshots, and S3 Cross-Region Replication (CRR) involves multiple
manual steps and additional configuration, increasing complexity.
upvoted 3 times
Option C, creating AMIs of the EC2 instances and read replicas of the RDS DB instance in the separate Region, would require more manual
effort to manage the backup and disaster recovery process, as it requires manual creation and management of AMIs and read replicas.
upvoted 2 times
Using AWS Backup is a simple and efficient way to backup EC2 instances and RDS databases to a separate region. It requires minimal
operational overhead and can be easily managed through the AWS Backup console or API. AWS Backup can also provide automated
scheduling and retention management for backups, which can help ensure that backups are always available and up to date.
upvoted 2 times
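As a rough boto3 sketch of option A, the example below defines a backup plan with a cross-Region copy action; the vault names, destination ARN, schedule, and retention are placeholders, and resources would still need to be assigned to the plan with a backup selection.

import boto3

backup = boto3.client("backup")

# Daily backups that are automatically copied to a vault in a second Region.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "cross-region-dr",
        "Rules": [
            {
                "RuleName": "daily-with-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            }
        ],
    }
)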
Amazon DLM is a fully managed service that helps automate the creation and retention of Amazon EBS snapshots and RDS DB snapshots.
It can be used to create and manage backup policies that specify when and how often snapshots should be created, as well as how long
they should be retained. With Amazon DLM, you can easily and automatically create and manage backups of your EC2 instances and RDS
DB instances in a separate Region, with minimal operational overhead.
upvoted 1 times
Option C, creating AMIs of the EC2 instances and copying them to the separate Region, and creating a read replica for the RDS DB
instance in the separate Region, would work, but it may require more manual effort to set up and maintain.
Option D, creating EBS snapshots and copying them to the separate Region, creating RDS snapshots, and exporting them to Amazon
S3, and configuring S3 CRR to the separate Region, would also work, but it would involve multiple steps and may require more manual
effort to set up and maintain. Overall, using Amazon DLM is likely to be the easiest and most efficient option for meeting the
requirements with the least operational overhead.
upvoted 1 times
A solutions architect needs to securely store a database user name and password that an application uses to access an Amazon RDS DB
instance. The application that accesses the database runs on an Amazon EC2 instance. The solutions architect wants to create a secure
parameter in AWS Systems Manager Parameter Store.
A. Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS
KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance.
B. Create an IAM policy that allows read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service
(AWS KMS) key that is used to encrypt the parameter. Assign this IAM policy to the EC2 instance.
C. Create an IAM trust relationship between the Parameter Store parameter and the EC2 instance. Specify Amazon RDS as a principal in the
trust policy.
D. Create an IAM trust relationship between the DB instance and the EC2 instance. Specify Systems Manager as a principal in the trust policy.
Correct Answer: A
To securely store a database user name and password in AWS Systems Manager Parameter Store and allow an application running on an
EC2 instance to access it, the solutions architect should create an IAM role that has read access to the Parameter Store parameter and
allow Decrypt access to an AWS KMS key that is used to encrypt the parameter. The solutions architect should then assign this IAM role to
the EC2 instance.
This approach allows the EC2 instance to access the parameter in the Parameter Store and decrypt it using the specified KMS key while
enforcing the necessary security controls to ensure that the parameter is only accessible to authorized parties.
upvoted 6 times
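To make this concrete, the sketch below shows the kind of permissions the instance role needs and how the application reads the SecureString parameter; the parameter name and ARNs are placeholders.

import json
import boto3

# Policy attached to the EC2 instance role (placeholder parameter and KMS key ARNs).
role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:GetParameter",
            "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db/credentials",
        },
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/abcd-1234",
        },
    ],
}
print(json.dumps(role_policy, indent=2))

# On the instance, the application reads the SecureString and SSM decrypts it with the KMS key.
ssm = boto3.client("ssm")
secret = ssm.get_parameter(Name="/prod/db/credentials", WithDecryption=True)
db_password = secret["Parameter"]["Value"]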
Option C, would not be a valid solution, as the Parameter Store parameter and the EC2 instance are not entities that can be related
through an IAM trust relationship.
Option D, would not be a valid solution, as the trust policy would not allow the EC2 instance to access the parameter in the Parameter
Store or decrypt it using the specified KMS key.
upvoted 4 times
The IAM role should have read access to the Parameter Store parameter and Decrypt access to an AWS KMS key that is used to encrypt the
parameter to ensure that the parameter is protected at rest.
upvoted 1 times
A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a
Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The
company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS
attacks.
Correct Answer: BC
AWS WAF can be associated with Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync.
upvoted 3 times
AWS Shield Advanced provides expanded DDoS protection against larger and more sophisticated attacks
Using it with the NLB helps protect against network floods
WAF still provides critical protection against exploits at the API layer.
upvoted 1 times
C. AWS WAF is designed to provide protection at the application layer, making it suitable for securing the API Gateway against web exploits
like SQL injection.
A. AWS WAF is not compatible with NLB as it operates at the application layer, whereas NLB operates at the transport layer.
D. While GuardDuty helps detect threats, it does not directly protect against web exploits or DDoS attacks. Shield Standard focuses on
edge resources, not specifically NLBs.
E. Shield Standard provides basic DDoS protection for edge resources, but it does not directly protect the NLB or address web exploits at
the application layer.
upvoted 2 times
That is why WAF is only available for Application Load Balancer in the ELB portfolio. NLB does not terminate the TLS session therefore WAF
is not capable of acting on the content. I would consider using AWS Shield at Layer 3/4.
https://repost.aws/questions/QU2fYXwSWUS0q9vZiWDoaEzA/nlb-need-to-attach-aws-waf
upvoted 4 times
YES. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running in AWS.
The doubt is: why apply the protection on the NLB when the front of the application is the API Gateway? Shield should sit in front of the communications, not behind them.
Nevertheless, this is the best option.
You can protect regional resources in all Regions where AWS WAF is available. You can see the list at AWS WAF endpoints and quotas in the Amazon Web Services General Reference.
You can use AWS WAF to protect regional resource types such as Application Load Balancers, Amazon API Gateway REST APIs, AWS AppSync GraphQL APIs, and Amazon Cognito user pools.
You can only associate a web ACL with an Application Load Balancer that's within AWS Regions. For example, you cannot associate a web ACL with an Application Load Balancer that's on AWS Outposts.
upvoted 1 times
AWS WAF is a web application firewall that helps protect web applications from common web exploits such as SQL injection and cross-site
scripting attacks. By using AWS WAF to protect the NLB and Amazon API Gateway, the company can provide an additional layer of
protection for its cloud communications platform against these types of web exploits.
upvoted 1 times
Sophisticated DDoS = Shield Advanced (DDoS attacks the front!). What happens if your load balancer goes down?
Your API Gateway sits further back, behind the NLB. SQL injection? Protect against that with WAF.
AWS Shield Advanced is a managed DDoS protection service that provides additional protection for Amazon EC2 instances, Amazon
RDS DB instances, Amazon Elastic Load Balancers, and Amazon CloudFront distributions. It can help detect and mitigate large,
sophisticated DDoS attacks, "but it does not provide protection against web exploits like SQL injection."
Amazon GuardDuty is a threat detection service that uses machine learning and other techniques to identify potentially malicious
activity in your AWS accounts. It can be used in conjunction with AWS Shield Standard, which provides basic DDoS protection for
Amazon EC2 instances, Amazon RDS DB instances, and Amazon Elastic Load Balancers. However, neither Amazon GuardDuty nor AWS
Shield Standard provides protection against web exploits like SQL injection.
Overall, the combination of using AWS WAF to protect the NLB and Amazon API Gateway provides the most protection against web
exploits and large, sophisticated DDoS attacks.
upvoted 1 times
BENICE 9 months, 2 weeks ago
Option B and C
upvoted 1 times
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results
does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased
demand is to increase the size of the instances.
The company’s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service
(Amazon ECS).
What should a solutions architect recommend for communication between the microservices?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to
the data consumers to process data from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic.
Add code to the data consumers to subscribe to the topic.
C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add
code to the data consumers to receive a data object that is passed from the Lambda function.
D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to
the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
Correct Answer: A
Option C, using an AWS Lambda function to pass messages, would not be suitable for this use case, as it would require the data producers
and data consumers to have a direct connection and invoke the Lambda function, rather than being decoupled through a message queue.
Option D, using an Amazon DynamoDB table with DynamoDB Streams, would not be suitable for this use case, as it would require the
data consumers to continuously poll the DynamoDB Streams API to detect new table entries, rather than being notified of new data
through a message queue.
upvoted 11 times
Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code
to the data consumers to process data from the queue.
upvoted 3 times
For asynchronous communication between decoupled microservices, an SQS queue is the most appropriate service to use.
SQS provides a scalable, highly available queue to buffer messages between producers and consumers.
The order of processing does not matter, so a queue model fits well.
The consumers can scale independently to process messages from the queue.
upvoted 2 times
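As a rough sketch of the decoupling described above: a producer microservice sends work to the queue and a consumer long-polls it. The queue name, message body, and processing stub are made up for illustration:

```python
import boto3

sqs = boto3.client("sqs")

# Queue name and message body are made up for illustration.
queue_url = sqs.create_queue(QueueName="data-processing-queue")["QueueUrl"]

# Producer microservice: enqueue one unit of work.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')


def process(body: str) -> None:
    """Stand-in for the consumer microservice's own processing logic."""
    print("processing", body)


# Consumer microservice: long-poll, process, then delete on success so the
# message is not redelivered after the visibility timeout expires.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```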
B. Amazon SNS is more suitable for pub/sub messaging, where multiple subscribers receive the same message. It may not be the best fit
for sequential data processing.
C. Using AWS Lambda functions for communication introduces unnecessary complexity and may not be the optimal solution for
sequential data processing.
D. Amazon DynamoDB with DynamoDB Streams is primarily designed for real-time data streaming and change capture scenarios. It may
not be the most efficient choice for sequential data processing in a microservices architecture.
upvoted 4 times
omoakin 4 months, 1 week ago
BBBBBBBBB
upvoted 1 times
A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that
significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that
minimizes data loss and stores every transaction on at least two nodes.
A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to
an Amazon RDS MySQL DB instance.
Correct Answer: B
Enabling Multi-AZ functionality in Amazon RDS ensures synchronous replication of data to a standby replica in a different Availability Zone.
This provides high availability and minimizes data loss in the event of a database outage.
A. Creating an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones would provide even higher
availability but is not necessary for the stated requirements.
C. Creating a read replica in a separate AWS Region would provide disaster recovery capabilities but does not ensure synchronous
replication or meet the requirement of storing every transaction on at least two nodes.
D. Using an EC2 instance with a MySQL engine and triggering an AWS Lambda function for replication introduces unnecessary complexity
and is not the most suitable solution for ensuring reliable and synchronous replication.
upvoted 2 times
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments with a single standby DB
instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 3 times
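A minimal boto3 sketch of enabling Multi-AZ when the DB instance is created; the identifier, instance class, and credentials are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Identifier, instance class, and credentials are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,  # synchronous replication to a standby in another AZ
)
```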
Option B is not a correct answer because it creates an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, which only
provides failover capabilities. It does not enable synchronous replication to multiple nodes, which is required in this scenario.
upvoted 2 times
By creating an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, the solutions architect will ensure that data is
automatically synchronously replicated across multiple AZs within the same Region. This provides high availability and data durability,
minimizing the risk of data loss and ensuring that every transaction is stored on at least two nodes.
upvoted 1 times
A company is building a new dynamic ordering website. The company wants to minimize server maintenance and patching. The website must be
highly available and must scale read and write capacity as quickly as possible to meet changes in user demand.
A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with
on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.
B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora
Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.
C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.
D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.
Correct Answer: A
https://aws.amazon.com/blogs/database/how-to-determine-if-amazon-dynamodb-is-appropriate-for-your-needs-and-then-plan-your-
migration/
upvoted 1 times
A. Using DynamoDB with on-demand capacity may provide scalability, but it does not offer the same level of flexibility and performance as
Aurora. Additionally, it does not address the hosting of dynamic content using serverless technologies.
C. Hosting all the website content on EC2 instances requires server maintenance and patching. While using ASG and an ALB helps with
availability and scalability, it does not minimize server maintenance as requested.
D. Hosting all the website content on EC2 instances introduces server maintenance and patching. Using Aurora with Aurora Auto Scaling is
a good choice for the database, but it does not address the need to minimize server maintenance and patching for the overall
infrastructure.
upvoted 1 times
Option A would also meet the company's requirements of minimizing server maintenance and patching and providing high
availability and quick scaling for read and write capacity. However, there are a few reasons why option B is a more optimal solution:
In option A, it uses Amazon DynamoDB with on-demand capacity for the database, which may not provide the same level of scalability and
performance as using Amazon Aurora with Aurora Auto Scaling.
Amazon Aurora offers additional features such as automatic failover, read replicas, and backups that makes it a more robust and resilient
option than DynamoDB. Additionally, the auto scaling feature is better suited to handle the changes in user demand.
Additionally, option B provides a more cost-effective solution, as Amazon Aurora can be more cost-effective than Amazon DynamoDB for high
read and write workloads, and it also provides more features.
upvoted 2 times
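For reference, Aurora replica auto scaling is configured through Application Auto Scaling. A hedged sketch, assuming a hypothetical cluster name and target value:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Cluster name is hypothetical; the scalable dimension is the number of
# Aurora replicas in the cluster.
resource_id = "cluster:ordering-aurora-cluster"

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

autoscaling.put_scaling_policy(
    PolicyName="aurora-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # add/remove replicas around 60% average reader CPU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```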
A company has an AWS account used for software engineering. The AWS account has access to the company’s on-premises data center through a
pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway.
A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access
a database that runs in a private subnet in the company’s data center.
A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect.
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network
interface.
Correct Answer: A
Note:
If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it
internet access or a public IP address.
upvoted 12 times
C is wrong because there is no help adding routes to VPC without configuring your lambda to vpc.
upvoted 1 times
Option B is incorrect because setting up a VPN connection and routing the traffic from the Lambda function through the VPN would add
unnecessary complexity and overhead.
Option C is incorrect because updating the route tables in the VPC to allow access to the on-premises data center through Direct Connect
would affect the entire VPC's routing, potentially exposing other resources to the on-premises network.
Option D is incorrect because creating an Elastic IP address and sending traffic through it without an elastic network interface is not a
valid configuration for accessing resources in a private subnet.
upvoted 3 times
A is wrong as it says to configure the Lambda function in the VPC; the requirement is to reach the database that is on premises.
upvoted 4 times
When you create the Lambda function, you need to choose the VPC and then the security group inside that VPC.
https://www.youtube.com/watch?v=beV1AYyhgYA&ab_channel=DigitalCloudTraining
upvoted 3 times
By configuring the Lambda function to run in the VPC, the function will have access to the private subnets in the company's data center
through the Direct Connect connections. Additionally, security groups can be used to control inbound and outbound traffic to and from
the Lambda function, ensuring that only the necessary traffic is allowed.
upvoted 2 times
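A minimal sketch of attaching an existing function to the VPC so that its traffic follows the VPC route tables (and therefore the Direct Connect path); the function name, subnet IDs, and security group ID are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Function name, subnet IDs, and security group ID are placeholders.
lambda_client.update_function_configuration(
    FunctionName="orders-db-reader",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```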
Option C is not recommended as updating the route tables to allow the Lambda function to access the on-premises data center
through Direct Connect would allow all VPC traffic to route through the data center, which may not be desirable and could potentially
create security risks.
Option D is not a viable solution for accessing resources in the on-premises data center as Elastic IP addresses are only used for
outbound internet traffic from an Amazon VPC, and cannot be used to communicate with resources in an on-premises data center.
upvoted 2 times
Note that " All non-VPC traffic routes to the virtual gateway" meaning if traffic not meant for the VPC, it routes to on-prem (C answer
invalid). For the Lambda function to access the on-prem database you have to configure the Lambda function in the VPC and use
appropriate SG outbound.
Option C, updating the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct
Connect, is the correct solution to meet the requirements.
upvoted 2 times
Option B, setting up a VPN connection from AWS to the data center and routing the traffic from the Lambda function through the VPN,
is not the correct solution because it would not be the most efficient solution, as the traffic would need to be routed over the public
internet, potentially increasing latency.
Option D, creating an Elastic IP address and configuring the Lambda function to send traffic through the Elastic IP address without an
elastic network interface, is not a valid solution because Elastic IP addresses are used to assign a static public IP address to an instance
or network interface, and do not provide a direct connection to an on-premises data center.
upvoted 4 times
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API
calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Correct Answer: B
Option B, creating an IAM role with S3 permissions and specifying that role as the taskRoleArn in the task definition, is the correct solution
to meet the requirement.
upvoted 6 times
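A hedged sketch of registering a task definition with a task role; the role ARN, image, and names are invented for illustration. Containers in the task then call the S3 API with the permissions attached to that role:

```python
import boto3

ecs = boto3.client("ecs")

# Role ARN, image, and names are invented for illustration. The containers in
# this task call the S3 API with the permissions attached to taskRoleArn.
ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn="arn:aws:iam::123456789012:role/ImageResizerS3AccessRole",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)
```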
Option C, creating a security group that allows access from ECS to S3 and updating the launch configuration used by the ECS cluster, is
not the correct solution because security groups are used to control inbound and outbound traffic to resources, and do not grant
permissions to access resources.
Option D, creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster while logged in as this
account, is not the correct solution because it is generally considered best practice to use IAM roles rather than IAM users to grant
permissions to resources.
upvoted 4 times
Option A is incorrect because updating the S3 role in IAM and relaunching the container does not associate the updated role with the ECS
task.
Option C is incorrect because creating a security group that allows access from Amazon ECS to Amazon S3 does not grant the necessary
permissions to the ECS task.
Option D is incorrect because creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster does not
associate the IAM user with the ECS task.
upvoted 2 times
A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system
attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.
A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the
file system within the volume to each Windows instance.
Correct Answer: B
Option A is incorrect because AWS Storage Gateway in volume gateway mode is not designed for shared file systems.
Option C is incorrect because while Amazon EFS can be mounted to multiple instances, it is a Linux-based file system and may not be
suitable for Windows applications.
Option D is incorrect because attaching and mounting an Amazon EBS volume to multiple instances simultaneously is not supported.
upvoted 2 times
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/wfsx-volumes.html
upvoted 1 times
This option is incorrect because AWS Storage Gateway is not a file storage service. It is a hybrid storage service that allows you to store
data in the cloud while maintaining low-latency access to frequently accessed data. It is designed to integrate with on-premises storage
systems, not to provide file storage for Amazon EC2 instances.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
This is the correct answer. Amazon FSx for Windows File Server is a fully managed file storage service that provides a native Windows file
system that can be accessed over the SMB protocol. It is specifically designed for use with Windows-based applications, and it can be
easily integrated with existing applications by mounting the file system to each EC2 instance.
upvoted 3 times
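For illustration only, a sketch of creating a Multi-AZ FSx for Windows File Server file system; the subnet, security group, and Managed Microsoft AD IDs are placeholders. Each Windows instance then maps the share over SMB using the file system's DNS name:

```python
import boto3

fsx = boto3.client("fsx")

# Subnet, security group, and Managed Microsoft AD IDs are placeholders.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",    # standby file server in a second AZ
        "PreferredSubnetId": "subnet-0abc1234",
        "ThroughputCapacity": 32,          # MB/s
        "ActiveDirectoryId": "d-1234567890",
    },
)
```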
This option is incorrect because Amazon EFS is a file storage service that is designed for use with Linux-based applications. It is not
compatible with Windows-based applications, and it cannot be accessed over the SMB protocol.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume.
Mount the file system within the volume to each Windows instance.
This option is incorrect because Amazon EBS is a block storage service, not a file storage service. It is designed for storing raw block-
level data that can be accessed by a single EC2 instance at a time. It is not designed for use as a shared file system that can be accessed
by multiple instances.
upvoted 1 times
A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational
database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.
B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.
C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
Correct Answer: AD
A) Creating an RDS DB instance in Multi-AZ mode provides automatic failover to a standby replica in another Availability Zone, providing
high availability.
D) Using ECS Fargate removes the need to provision and manage EC2 instances, allowing the service to scale dynamically based on
demand. ECS handles load balancing and availability out of the box.
upvoted 1 times
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
Fargate abstracts the underlying infrastructure, automatically scaling and managing the containers, making it a highly available and low-
maintenance option.
Option B is not the best choice as it only creates replicas in another Availability Zone without the automatic failover capability provided by
Multi-AZ mode.
Option C is not the best choice as managing a Docker cluster on EC2 instances requires more manual intervention compared to using the
serverless capabilities of Fargate in option D.
Option E is not the best choice as it uses the EC2 launch type, which requires managing and scaling the EC2 instances manually. Fargate,
as mentioned in option D, provides a more automated and scalable solution.
upvoted 2 times
studynoplay 4 months, 2 weeks ago
Selected Answer: AD
little manual intervention = Serverless
upvoted 1 times
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs
to implement a highly available SFTP solution that minimizes operational overhead.
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the
destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway
endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run
a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an
SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the
S3 data lake.
Correct Answer: A
AWS Transfer Family provides a fully managed SFTP service that can integrate directly with S3. It handles scaling, availability, and security
automatically with minimal overhead.
upvoted 1 times
cookieMr 3 months, 1 week ago
This solution provides a highly available SFTP solution without the need for manual management or operational overhead. AWS Transfer
Family allows you to easily set up an SFTP server with authentication, authorization, and integration with S3 as the storage backend.
Option B is not the best choice as it suggests using Amazon S3 File Gateway, which is primarily used for file-based access to S3 storage
over NFS or SMB protocols, not for SFTP access.
Option C is not the best choice as it requires manual management of an EC2 instance, VPN setup, and cron job script for uploading files,
introducing operational overhead and potential complexity.
Option D is not the best choice as it also requires manual management of EC2 instances, Network Load Balancer, and cron job scripts for
file uploads. It is more complex and involves additional components compared to the simpler and fully managed solution provided by
AWS Transfer Family in option A.
upvoted 2 times
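A minimal sketch of the Transfer Family setup described above, assuming a hypothetical IAM role, bucket path, and partner user:

```python
import boto3

transfer = boto3.client("transfer")

# Create a service-managed SFTP endpoint that writes directly to Amazon S3.
server_id = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)["ServerId"]

# User name, IAM role, bucket path, and SSH key are placeholders.
transfer.create_user(
    ServerId=server_id,
    UserName="partner-upload",
    Role="arn:aws:iam::123456789012:role/TransferS3AccessRole",
    HomeDirectory="/data-lake-bucket/partner-uploads",
    SshPublicKeyBody="ssh-rsa AAAAB3Nza...",
)
```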
A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the
documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically
every year.
Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Choose two.)
A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.
B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.
C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.
D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.
E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Correct Answer: BD
D) Use AWS KMS customer managed keys for encryption, and configure automatic annual rotation.
Compliance mode provides the protection against overwriting/deletion needed for the full contract duration. And KMS customer managed
keys allow automated key rotation each year.
upvoted 1 times
D. By using server-side encryption with AWS KMS customer managed keys, the documents are encrypted with a customer-controlled key.
Enabling key rotation ensures that a new encryption key is generated automatically at the defined rotation interval, enhancing security.
Option A: S3 Object Lock in governance mode does not provide the required immutability for the documents, allowing potential
modifications or deletions.
Option C: Server-side encryption with SSE-S3 alone does not fulfill the requirement of encryption key rotation, which is explicitly specified.
Option E: Server-side encryption with customer-provided (imported) keys (SSE-C) is not necessary when AWS KMS customer managed keys
(Option D) can be used, which provide a more integrated and manageable solution.
upvoted 4 times
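A rough boto3 sketch combining the two steps (bucket and key names are placeholders); note that Object Lock must be enabled when the bucket is created:

```python
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

BUCKET = "contract-documents-example"  # placeholder bucket name

# Object Lock can only be turned on when the bucket is created.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)

# Customer managed KMS key with automatic rotation, set as the bucket's
# default encryption key.
key_id = kms.create_key(Description="Contract documents key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```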
You're uploading or accessing S3 objects using AWS Identity and Access Management (IAM) principals that are in the same AWS account
as the AWS KMS key.
You don't want to manage policies for the KMS key.
You want to create, rotate, disable, or define access controls for the key.
You want to grant cross-account access to your S3 objects. You can configure the policy of a customer managed key to allow access from
another account.
https://repost.aws/knowledge-center/s3-object-encryption-keys
upvoted 1 times
A company has a web application that is based on Java and PHP. The company plans to move the application from on premises to AWS. The
company needs the ability to test new site features frequently. The company also needs a highly available and managed solution that requires
minimum operational overhead.
A. Create an Amazon S3 bucket. Enable static web hosting on the S3 bucket. Upload the static content to the S3 bucket. Use AWS Lambda to
process all dynamic content.
B. Deploy the web application to an AWS Elastic Beanstalk environment. Use URL swapping to switch between multiple Elastic Beanstalk
environments for feature testing.
C. Deploy the web application to Amazon EC2 instances that are configured with Java and PHP. Use Auto Scaling groups and an Application
Load Balancer to manage the website’s availability.
D. Containerize the web application. Deploy the web application to Amazon EC2 instances. Use the AWS Load Balancer Controller to
dynamically route traffic between containers that contain the new site features for testing.
Correct Answer: B
A. Suggests using S3 for static content hosting and Lambda for dynamic content. While it offers simplicity for static content, it does not
provide the necessary flexibility and dynamic functionality required by a Java and PHP-based web application.
C. Involves manual management of EC2, ASG, and ELB, which requires more operational overhead and may not provide the desired level
of availability and ease of testing.
D. Introduces containerization, which adds complexity and operational overhead for managing containers and infrastructure, making it
less suitable for a requirement of minimum operational overhead.
upvoted 6 times
Using AWS Elastic Beanstalk provides a fully managed platform to deploy the web application. Elastic Beanstalk will handle provisioning
EC2 instances, load balancing, auto scaling, and application health monitoring.
Elastic Beanstalk's ability to support multiple environments and swap URLs allows easy testing of new features before swapping into
production. This requires minimal overhead compared to managing infrastructure directly.
upvoted 1 times
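For illustration, the URL swap between a production environment and a test environment is a single API call; the environment names here are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Environment names are hypothetical: "blue" serves production traffic,
# "green" runs the new site features under test. The swap exchanges their CNAMEs.
eb.swap_environment_cnames(
    SourceEnvironmentName="webapp-blue",
    DestinationEnvironmentName="webapp-green",
)
```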
oguzbeliren 2 months ago
The correct answer is D.
AWS Elastic Beanstalk is a service that makes it easy to deploy and manage web applications in the AWS cloud. However, it is not a good
solution for testing new site features frequently, as it can be difficult to switch between multiple Elastic Beanstalk environments.
upvoted 1 times
A company has an ordering application that stores customer information in Amazon RDS for MySQL. During regular business hours, employees
run one-time queries for reporting purposes. Timeouts are occurring during order processing because the reporting queries are taking a long time
to run. The company needs to eliminate the timeouts without preventing employees from performing queries.
B. Create a read replica. Distribute the ordering application to the primary DB instance and the read replica.
Correct Answer: B
Creating an RDS MySQL read replica will allow the reporting queries to be isolated and run without affecting performance of the primary
ordering application.
Read replicas allow read-only workloads to be scaled out while eliminating contention with the primary write workload.
upvoted 2 times
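A minimal sketch of creating the reporting replica; the instance identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Instance identifiers are placeholders. Reporting tools connect to the
# replica's endpoint; the ordering application keeps using the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-mysql-reporting",
    SourceDBInstanceIdentifier="orders-mysql",
    DBInstanceClass="db.m6g.large",
)
```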
Answer B: creating a read replica is fine, but pointing the ordering application at the read replica is incorrect.
B. While this can provide some level of load distribution, it does not specifically address the issue of timeouts caused by reporting queries
during order processing.
C. While DynamoDB offers scalability and performance benefits, it may require significant changes to the application's data model and
querying approach.
D. While this approach can help alleviate the impact on order processing, it does not address the requirement of eliminating timeouts
without preventing employees from performing queries.
upvoted 3 times
steev 3 months, 3 weeks ago
Selected Answer: A
correct
upvoted 1 times
Option B is not a good solution because distributing the ordering application to the primary DB instance and the read replica does not
address the issue of long-running reporting queries causing timeouts during order processing.
upvoted 1 times
A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new
documents each day. The hospital’s data team will scan the documents and will upload the documents to the AWS Cloud.
A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an
application can run SQL queries on the data. The solution must maximize scalability and operational efficiency.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Write the document information to an Amazon EC2 instance that runs a MySQL database.
B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.
C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the
medical information.
D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw
text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.
E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text.
Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.
Correct Answer: BE
B is correct because storing the scanned documents in Amazon S3 provides highly scalable and durable storage. Amazon Athena allows
running SQL queries directly against the data in S3 without needing to load the data into a database.
E is correct because using Lambda functions triggered by uploads provides a serverless approach to automatically process each
document. Amazon Textract and Comprehend Medical can extract text and medical information without needing to manage server
upvoted 2 times
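A hedged sketch of the Lambda handler described in option E, assuming an S3 upload trigger and a hypothetical results bucket:

```python
import json
import urllib.parse

import boto3

textract = boto3.client("textract")
comprehend_medical = boto3.client("comprehendmedical")
s3 = boto3.client("s3")

RESULTS_BUCKET = "medical-records-extracted"  # placeholder bucket


def handler(event, context):
    """Triggered by S3 upload events for newly scanned documents."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # OCR the scanned page into raw text.
        blocks = textract.detect_document_text(
            Document={"S3Object": {"Bucket": bucket, "Name": key}}
        )["Blocks"]
        text = "\n".join(b["Text"] for b in blocks if b["BlockType"] == "LINE")

        # Extract medical entities (conditions, medications, and so on).
        entities = comprehend_medical.detect_entities_v2(Text=text)["Entities"]

        # Store structured output in S3 so Athena can query it with SQL.
        s3.put_object(
            Bucket=RESULTS_BUCKET,
            Key=f"extracted/{key}.json",
            Body=json.dumps({"source": key, "entities": entities}),
        )
```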
E is correct because it involves creating an AWS Lambda function triggered by new document uploads. Amazon Textract is used to convert
the documents to raw text, and Amazon Comprehend Medical extracts relevant medical information from the text.
A is incorrect because writing the document information to an Amazon EC2 instance with a MySQL database is not a scalable or efficient
solution for analysis.
C is incorrect because creating an Auto Scaling group of Amazon EC2 instances for processing scanned files and extracting information
would introduce unnecessary complexity and management overhead.
D is incorrect because using an EC2 instance with a MySQL database for storing document information is not the optimal solution for
scalability and efficient analysis.
upvoted 2 times
A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases.
The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while
ensuring high availability.
Correct Answer: A
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times
Adding Amazon RDS read replicas is a commonly used strategy to offload read traffic from the primary database, thereby reducing the
number of database reads. Read replicas provide high availability and can distribute read queries across multiple instances, improving
overall read performance.
While options B and D suggest using Amazon ElastiCache for Redis or Memcached, these caching solutions are more focused on
improving read performance by caching frequently accessed data, but they do not inherently reduce the number of reads on the RDS
database. They can complement the solution by serving cached data, but they are not a direct way to reduce the reads on the database.
upvoted 1 times
Modulopi 2 days, 13 hours ago
A for Availability
upvoted 1 times
With ElastiCache, read hits will be served from the cache, thus achieving the stated goal.
upvoted 1 times
Using ElastiCache for Redis allows caching the data from the RDS databases. This reduces the number of reads required from the
databases by serving repeated reads from the Redis cache instead.
A is incorrect because RDS read replicas only help scale reads and do not reduce the overall reads from the primary database.
upvoted 2 times
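As a sketch of the read-through pattern (the endpoint, key naming, and the RDS lookup stub are assumptions for illustration; the redis client is a third-party library assumed to be available):

```python
import json

import redis  # third-party client, assumed available in the application image

# Endpoint is a placeholder for the ElastiCache for Redis primary endpoint.
cache = redis.Redis(host="orders-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)


def load_from_rds(record_id: int) -> dict:
    """Stand-in for the application's existing RDS query logic."""
    return {"id": record_id, "status": "shipped"}


def get_record(record_id: int) -> dict:
    """Read-through cache: serve repeated reads from Redis, not the database."""
    key = f"record:{record_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    record = load_from_rds(record_id)
    cache.setex(key, 300, json.dumps(record))  # cache for 5 minutes
    return record
```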
A company needs to run a critical application on AWS. The company needs to use Amazon EC2 for the application’s database. The database must
be highly available and must fail over automatically if a disruptive event occurs.
A. Launch two EC2 instances, each in a different Availability Zone in the same AWS Region. Install the database on both EC2 instances.
Configure the EC2 instances as a cluster. Set up database replication.
B. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use AWS CloudFormation to automate provisioning of the EC2 instance if a disruptive event occurs.
C. Launch two EC2 instances, each in a different AWS Region. Install the database on both EC2 instances. Set up database replication. Fail
over the database to a second Region.
D. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use EC2 automatic recovery to recover the instance if a disruptive event occurs.
Correct Answer: C
However, the likelihood of a failure in two different Regions at the same time is effectively zero. Therefore, to me it seems that C is the better option to
cater for the HA requirement.
In addition, like A, option C does state that the database is installed on the EC2 instances.
upvoted 19 times
Cross-region redundancy provides the highest level of availability and disaster recovery. If one entire region goes down, the database can
fail over across regions.
Database replication ensures data is consistent between regions at all times.
Manual failover gives the flexibility to fail over on-demand in case of regional issues.
upvoted 1 times
"Different Region" is least condition for against "disruptive event", not "different Availability Zone".
Spoiler alert: it is not technically doable. Ergo, the only viable remaining option is C, as full of flaws as it may be... I choose the lesser evil.
upvoted 2 times
B. Launching a single EC2 instance and using an AMI for backup and provisioning automation does not provide automatic failover or high
availability.
C. Launching EC2 instances in different AWS Regions and setting up database replication is a multi-Region setup, which can provide
disaster recovery capabilities but does not provide automatic failover within a single Region.
D. Using EC2 automatic recovery can recover the instance if it fails due to hardware issues, but it does not provide automatic failover or
high availability across multiple instances or Availability Zones.
upvoted 2 times
antropaws 4 months, 2 weeks ago
Selected Answer: C
Cluster EC2s cannot span between AZs, which invalidates option A.
upvoted 10 times
A company’s order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders
in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that
can process orders automatically if a system outage occurs.
A. Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon
Elastic Container Service (Amazon ECS) task.
B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages
to the ALB endpoint.
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS
topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by
using AWS Systems Manager Run Command.
Correct Answer: C
Using an Auto Scaling group ensures the EC2 instances that process orders are highly available and scalable.
With SQS, the orders are decoupled from the instances that process them via asynchronous queuing.
If instances fail or go down, the orders remain in the queue until new instances can pick them up. This provides automated resilience.
Any failed processing can retry by resending messages back to the queue
upvoted 4 times
Using an Auto Scaling group with EC2 instances behind a load balancer provides high availability and scalability.
Sending the orders to an SQS queue decouples the ordering system from the processing system. The EC2 instances can poll the queue for
new orders and process them even during an outage. Any failed orders will go back to the queue for reprocessing.
upvoted 1 times
A. Using an ASG with an EventBridge rule targeting an ECS task does not provide the necessary decoupling and message queueing for
automatic order processing during outages.
B. Moving the EC2 instances into an ASG behind an
ALB does not address the need for message queuing and automatic processing during outages.
D. Using SNS and Lambda can provide notifications and orchestration capabilities, but it does not provide the necessary message
queueing and consumption for automatic order processing during outages. Additionally, using Systems Manager Run Command to send
commands for order processing adds complexity and does not provide the desired level of automation.
upvoted 2 times
pisica134 3 months, 1 week ago
D is so unnecessary .... this confuses people
upvoted 1 times
To meet the requirements of the company, a solutions architect should ensure that the system is resilient and can process orders
automatically in the event of a system outage. To achieve this, moving the EC2 instances into an Auto Scaling group is a good first step.
This will enable the system to automatically add or remove instances based on demand and availability.
upvoted 2 times
Finally, the EC2 instances can be configured to consume messages from the queue, process the orders and then store them in the
database on Amazon RDS. This approach ensures that orders are not lost and can be processed automatically if a system outage
occurs. Therefore, option C is the correct answer.
upvoted 2 times
Option B is incorrect because it suggests moving the EC2 instances into an Auto Scaling group behind an Application Load Balancer
(ALB) and updating the order system to send messages to the ALB endpoint. While this approach can provide resilience and
scalability, it does not address the issue of order processing and the need to ensure that orders are not lost if a system outage
occurs.
Option D is incorrect because it suggests using Amazon Simple Notification Service (SNS) to send messages to an AWS Lambda
function, which will then send a command to the EC2 instances to process the messages by using AWS Systems Manager Run
Command. While this approach may work, it is more complex than necessary and does not take advantage of the durability and
availability of SQS.
upvoted 2 times
In this solution, the EC2 instances are placed in an Auto Scaling group, which ensures that the number of instances can be automatically
scaled up or down based on demand. The ordering system is configured to send messages to an SQS queue, which acts as a buffer and
stores the messages until they can be processed by the EC2 instances. The EC2 instances are configured to consume messages from the
queue and process them. If a system outage occurs, the messages in the queue will remain available and can be processed once the
system is restored.
upvoted 2 times
A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB
table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a
solution that minimizes cost and development effort.
A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the
original stack.
B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon
DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items
that have a timestamp that is older than 30 days.
C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda
function to delete items in the table that are older than 30 days.
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the
table. Configure DynamoDB to use the attribute as the TTL attribute.
Correct Answer: D
The DynamoDB TTL feature allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the
date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
upvoted 26 times
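A minimal sketch of the TTL approach; the table, key, and attribute names are placeholders:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "application-entries"  # placeholder table name

# One-time setup: tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# On every write, stamp the item with "now + 30 days" as an epoch timestamp.
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName=TABLE,
    Item={
        "entry_id": {"S": "order-42"},
        "payload": {"S": "..."},
        "expires_at": {"N": str(expires_at)},
    },
)
```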
Using DynamoDB's built-in TTL functionality is the most direct way to handle data expiration.
It avoids the complexity of triggers, streams, and lambda functions in option C.
Modifying the application code to add the TTL attribute is relatively simple and minimizes operational overhead
upvoted 1 times
A. Redeploying the CloudFormation stack every 30 days and deleting the original stack introduces unnecessary complexity and
operational overhead.
B. Using an EC2 instance with a monitoring application and a script to delete items older than 30 days adds additional infrastructure and
maintenance efforts.
C. Configuring DynamoDB Streams to invoke a Lambda function to delete items older than 30 days adds complexity and requires
additional development and operational effort compared to using the built-in TTL feature of DynamoDB.
upvoted 2 times
TTL is useful if you store items that lose relevance after a specific time.
upvoted 1 times
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times
In this solution, the application is extended to add an attribute that has a value of the current timestamp plus 30 days to each new item
that is created in the table. DynamoDB is then configured to use this attribute as the TTL attribute, which causes items to be automatically
deleted from the table when their TTL value is reached. This solution requires minimal changes to the existing application and
infrastructure and does not require any additional resources or a complex setup.
upvoted 1 times
Option B involves using an EC2 instance and a monitoring application to delete items that are older than 30 days, but this requires
additional infrastructure and maintenance effort.
Option C involves using DynamoDB Streams and a Lambda function to delete items that are older than 30 days, but this requires
additional infrastructure and maintenance effort.
upvoted 1 times
techhb 9 months, 1 week ago
Selected Answer: D
TTL does the trick
upvoted 1 times
A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle
Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the
application. The AWS application environment should be highly available.
Which combination of actions should the company take to meet these requirements? (Choose two.)
A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Correct Answer: BE
Rehosting the application in Elastic Beanstalk with the .NET platform can minimize development changes. Multi-AZ deployment of Elastic
Beanstalk will increase the availability of application, so it meets the requirement of high availability.
Using AWS Database Migration Service (DMS) to migrate the database to Amazon RDS Oracle will ensure compatibility, so the application
can continue to use the same database technology, and the development team can use their existing skills. It also migrates to a managed
service, which will handle the availability, so the team do not have to worry about it. Multi-AZ deployment will increase the availability of
the database.
upvoted 9 times
E) Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ
deployment.
° Rehosting in Elastic Beanstalk allows lifting and shifting the .NET application with minimal code changes. Multi-AZ deployment provides
high availability.
° Using DMS to migrate the Oracle data to RDS Oracle in Multi-AZ deployment minimizes changes for the database while achieving high
availability.
° Together this "lift and shift" approach minimizes refactoring needs while providing HA on AWS.
upvoted 1 times
A. would require significant development changes and may not provide the same level of compatibility as rehosting or replatforming
options.
C. would still require changes to the application and the underlying infrastructure, whereas rehosting with EBS minimizes the need for
modification.
D. would likely require significant changes to the application code, as DynamoDB is a NoSQL database with a different data model
compared to Oracle.
upvoted 3 times
markw92 3 months, 2 weeks ago
Answer is BE. No idea why D was chosen. That requires development work, and the question clearly states to minimize development changes;
changing the DB from Oracle to DynamoDB is a LOT of development.
upvoted 1 times
To minimize development changes while moving the application to AWS and to ensure a high level of availability, the company can rehost
the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment. This will allow the application to run in a highly
available environment without requiring any changes to the application code.
The company can also use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Oracle on Amazon RDS in a
Multi-AZ deployment. This will allow the company to maintain the existing database platform while still achieving a high level of
availability.
upvoted 3 times
A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database
for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are
possible at this time. The company needs a solution that minimizes operational overhead.
A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data
storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB
compatibility) for data storage.
Correct Answer: D
A. would require managing and scaling the EC2 instances manually, which increases operational overhead.
B. would require significant changes to the application code as DynamoDB is a NoSQL database with a different data model compared to
MongoDB.
C. would also require code changes to adapt to DynamoDB's different data model, and managing EC2 worker nodes increases operational
overhead.
upvoted 2 times
AWS Fargate is a fully-managed container execution environment that allows you to run containerized applications without the need to
manage the underlying EC2 instances.
Amazon DocumentDB is a fully-managed document database service that supports MongoDB workloads, allowing the company to use the
same database platform as in their on-premises environment without having to make any code changes.
upvoted 4 times
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker
recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files
must be stored for 7 years for auditing purposes.
A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for
transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file
analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file
analysis.
Correct Answer: B
Amazon Transcribe is a service that automatically transcribes spoken language into written text. It can handle multiple speakers and can
generate transcript files in real-time or asynchronously. These transcript files can be stored in Amazon S3 for long-term storage.
Amazon Athena is a query service that allows you to analyze data stored in Amazon S3 using SQL. You can use it to analyze the transcript
files and identify patterns in the data.
Option A is incorrect because Amazon Rekognition is a service for analyzing images and videos, not transcribing spoken language.
Option C is incorrect because Amazon Translate is a service for translating text from one language to another, not transcribing spoken
language.
Option D is incorrect because Amazon Textract is a service for extracting text and data from documents and images, not transcribing
spoken language.
upvoted 13 times
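A minimal sketch of starting a transcription job with speaker labels enabled; the job name, media location, and output bucket are placeholders. The JSON transcripts land in S3, where Athena can query them and an S3 Lifecycle rule can retain them for 7 years:

```python
import boto3

transcribe = boto3.client("transcribe")

# Job name, media location, and output bucket are placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-05-01-0001",
    Media={"MediaFileUri": "s3://call-recordings/2024/05/01/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts",
    Settings={
        "ShowSpeakerLabels": True,  # multiple speaker recognition (diarization)
        "MaxSpeakerLabels": 2,
    },
)
```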
B is the answer because Transcribe is the right service for processing voice calls.
Athena querying is just SQL querying; it cannot help much with recognizing business patterns, and for that a text analysis service like
Comprehend would be needed.
Unless we use Transcribe not only to transcribe, but also to recognize some keywords, and then create a DB/S3 record with multiple
fields, e.g. for a telemarketing questionnaire, record the answer to each question. Then SQL querying might be useful.
upvoted 1 times
A. Amazon Rekognition is for image and video analysis, not audio transcription.
C. Amazon Translate is for language translation, not speaker recognition or transcript analysis. Amazon Redshift may not be the best
choice for storing and querying transcript files.
D. Amazon Rekognition is for image and video analysis, and Amazon Textract is for document extraction, not suitable for audio
transcription or analysis. Storing the transcript files in S3 is appropriate, but the analysis requires a different service like Amazon Athena.
upvoted 1 times
https://aws.amazon.com/transcribe/
Amazon Transcribe
Automatically convert speech to text
upvoted 1 times
A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the
application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an
AWS managed solution that will control access to the REST API to reduce development efforts.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
C. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email
address has proper access.
D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
Correct Answer: D
To control access to the REST API and reduce development efforts, the company can use an Amazon Cognito user pool authorizer in API
Gateway. This will allow Amazon Cognito to validate each request and ensure that only authenticated users can access the API. This
solution has the LEAST operational overhead, as it does not require the company to develop and maintain any additional infrastructure or
code.
Option D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
upvoted 6 times
Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
º Cognito user pool authorizers allow seamless integration between Cognito and API Gateway for access control.
º API Gateway handles validating the access tokens from Cognito automatically without any custom code.
º This is a fully managed solution with minimal ops overhead.
upvoted 1 times
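A hedged sketch of creating the COGNITO_USER_POOLS authorizer on a REST API; the API ID and user pool ARN are placeholders. Each API method is then configured to use this authorizer, and clients pass their Cognito token in the Authorization header:

```python
import boto3

apigateway = boto3.client("apigateway")

# REST API ID and user pool ARN are placeholders.
apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
    ],
    identitySource="method.request.header.Authorization",
)
```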
A. Configuring an AWS Lambda function as an authorizer in API Gateway would require custom implementation and management of the
authorization logic.
B. Creating and assigning an API key for each user would require additional management and validation logic in an AWS Lambda function.
C. Sending the user's email address in the header and validating it with an AWS Lambda function would also require custom
implementation and management of the authorization logic.
Option D, using an Amazon Cognito user pool authorizer, provides a streamlined and managed solution for controlling access to the REST
API with minimal operational overhead.
upvoted 1 times
Bmarodi 4 months, 1 week ago
Selected Answer: D
solution will meet these requirements with the LEAST operational overhead is option D.
upvoted 1 times
It starts "As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use
an Amazon Cognito user pool to control who can access your API in Amazon API Gateway."
This suggests that Amazon Cognito user pools CAN be used for Authorization, which you say above cannot be done.
"To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then
configure an API method to use that authorizer"
So whilst A is a valid approach, it looks like D achieves the same with "the LEAST operational overhead".
upvoted 7 times
Use the Amazon Cognito console, CLI/SDK, or API to create a user pool—or use one that's owned by another AWS account
upvoted 1 times
A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages
with Short Message Service (SMS) to its users. The users must be able to reply to the SMS messages. The company must store the responses for
a year for analysis.
A. Create an Amazon Connect contact flow to send the SMS messages. Use AWS Lambda to process the responses.
B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.
C. Use Amazon Simple Queue Service (Amazon SQS) to distribute the SMS messages. Use AWS Lambda to process the responses.
D. Create an Amazon Simple Notification Service (Amazon SNS) FIFO topic. Subscribe an Amazon Kinesis data stream to the SNS topic for
analysis and archiving.
Correct Answer: B
A. Creating an Amazon Connect contact flow is primarily focused on customer support and engagement, and it lacks the capability to
store and process SMS responses for analysis.
C. Using SQS is a message queuing service and is not specifically designed for handling SMS responses or capturing them for analysis.
D. Creating an SNS FIFO topic and subscribing a Kinesis data stream is not the most appropriate solution for capturing and storing SMS
responses, as SNS is primarily used for message publishing and distribution.
In summary, option B is the best choice as it leverages Pinpoint to send SMS messages and captures user responses for analysis and
archiving using a Kinesis data stream.
upvoted 2 times
You can use Amazon Pinpoint to create targeted groups of customers, and then send them campaign-based messages. You can also use
Amazon Pinpoint to send direct messages, such as appointment confirmations, order updates, and one-time passwords.
upvoted 2 times
This is not the correct solution because, while Amazon Pinpoint allows you to send SMS and email campaigns, as well as handle push notifications
to a user base, it doesn't provide an SMS sending feature by itself. Furthermore, it's a service mainly focused on sending and tracking
marketing campaigns, not on managing two-way SMS communication and the reception of replies.
upvoted 3 times
To meet the requirements of the company, a solutions architect can build an Amazon Pinpoint journey and configure Amazon Pinpoint to
send events to an Amazon Kinesis data stream for analysis and archiving. The Kinesis data stream can be configured to store the data for
a year, allowing the company to analyze the responses over time.
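As a rough illustration of option B, the snippet below enables event streaming from a Pinpoint project to a Kinesis data stream with boto3; the project ID, stream ARN, and IAM role ARN are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder values -- use your own Pinpoint project, Kinesis stream, and IAM role.
APPLICATION_ID = "exampleprojectid"
STREAM_ARN = "arn:aws:kinesis:us-east-1:111122223333:stream/sms-responses"
ROLE_ARN = "arn:aws:iam::111122223333:role/pinpoint-to-kinesis"

# Stream Pinpoint events (including inbound two-way SMS events) to Kinesis,
# where they can be archived and analyzed for the one-year retention period.
pinpoint.put_event_stream(
    ApplicationId=APPLICATION_ID,
    WriteEventStream={
        "DestinationStreamArn": STREAM_ARN,
        "RoleArn": ROLE_ARN,
    },
)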
A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the
encryption key must be automatically rotated every year.
Which solution will meet these requirements with the LEAST operational overhead?
A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation
behavior of SSE-S3 encryption keys.
B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket’s default
encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket’s default encryption behavior to use the
customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year.
D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS)
key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.
Correct Answer: B
AWS managed CMK. This is a free CMK generated only for your account. You can only view its policies and audit its usage, but not manage it.
Rotation is automatic, once per 1095 days (3 years).
Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default, but if you enable it,
the key will be automatically rotated every year. This variant can also use key material that you import. If you create such a key with
imported material, there is no automated rotation, only manual rotation.
SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it.
upvoted 23 times
To encrypt the data when it is stored in the S3 bucket and automatically rotate the encryption key every year with the least operational
overhead, the company can use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). SSE-S3 uses keys that are
managed by Amazon S3, and the built-in key rotation behavior of SSE-S3 encryption keys automatically rotates the keys every year.
To meet the requirements of the company, the solutions architect can move the data to the S3 bucket and enable server-side encryption
with SSE-S3. This solution requires no additional configuration or maintenance and has the least operational overhead.
Option A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Use the built-in
key rotation behavior of SSE-S3 encryption keys.
upvoted 22 times
Additionally, where is the reference that SSE-S3 will rotate keys every year (which is the question's requirement)?
upvoted 1 times
LuckyAro 8 months, 1 week ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html
upvoted 1 times
Option C involves using a customer-managed AWS KMS key, but this requires the company to manually rotate the key every year, which
introduces additional operational overhead.
Option D involves encrypting the data with customer key material and creating a KMS key without key material, but this requires the
company to manage the customer key material and import it into the KMS key, which introduces additional operational overhead.
upvoted 2 times
For A there is no reference to how often these keys are rotated, and to rotate to a new key, you need to upload it, which is
operational overhead. So not only does it not necessarily meet the 'rotate keys every year' requirement, but every year it requires
operational overhead.
More importantly, the question states move the objects first, and then configure encryption, but ..."There is no change to the
encryption of the objects that existed in the bucket before default encryption was enabled." from
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html
So A is clearly wrong.
For B, whilst you have to set up KMS once, you then don't have to do anything else, which I would say is the LEAST operational overhead.
upvoted 13 times
https://saturncloud.io/blog/how-does-amazon-sses3-key-rotation-
work/#:~:text=Once%20you%20enable%20SSE%2DS3,your%20objects%20at%20any%20time.
upvoted 1 times
Going back to the question, since we cannot confirm the frequency of SSE-S3's key rotation, the best answer for this question is B. It might
have higher operational overhead compared to A, but it is the only one that fulfills the requirements.
upvoted 1 times
Option B also provides a valid solution, but it involves more manual configuration and management of a customer-managed AWS Key
Management Service (AWS KMS) key, including enabling and configuring automatic key rotation.
upvoted 1 times
B. By using a customer managed key in AWS KMS with automatic key rotation enabled, and setting the S3 bucket's default encryption
behavior to use this key, the data stored in the S3 bucket will be encrypted and the encryption key will be automatically rotated every year.
C. This answer is not the most optimal solution as it suggests manually rotating the KMS key every year, which introduces manual
intervention and increases operational overhead.
D. This answer is not the most suitable option as it involves encrypting the data with customer key material and managing the key rotation
manually. It adds complexity and management overhead compared to using AWS KMS for key management and encryption.
upvoted 4 times
You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3)
(the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
WilliamHoac 4 months, 3 weeks ago
B is the correct answer.
KEYWORDS: LEAST operational overhead, and the encryption key must be automatically rotated every year.
SSE-S3: rotation cannot be configured by the customer.
Based on the AWS site: if you need more control over your keys, such as managing key rotation and access policy grants, you can choose to use
server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
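For reference, option B boils down to three steps. The boto3 sketch below shows them; the bucket name is a placeholder.

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

BUCKET = "example-data-bucket"  # placeholder bucket name

# 1. Create a customer managed KMS key and turn on automatic annual rotation.
key_id = kms.create_key(Description="S3 data encryption key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# 2. Set the bucket's default encryption to SSE-KMS with that key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)

# 3. Move the data into the bucket; new objects are encrypted with the automatically rotating key.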
Question #203 Topic 1
The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on
Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS)
queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation
email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB
database.
As the company expands, customers report that their meeting invitations are taking longer to arrive.
B. Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.
C. Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.
D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth
of the SQS queue.
Correct Answer: D
To resolve the issue of longer delivery times for meeting invitations, the solutions architect can recommend adding an Auto Scaling group
for the application that sends meeting invitations and configuring the Auto Scaling group to scale based on the depth of the SQS queue.
This will allow the application to scale up as the number of appointment requests increases, improving the performance and delivery
times of the meeting invitations.
upvoted 8 times
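As a sketch of answer D, a target tracking policy can scale the fulfillment Auto Scaling group on the SQS queue depth. The group and queue names below are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names for the invitation-sending Auto Scaling group and its SQS queue.
ASG_NAME = "meeting-invitation-senders"
QUEUE_NAME = "appointment-requests"

# Try to keep the visible message count around 100; the group scales out as the
# queue depth grows and scales in as the backlog drains.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": QUEUE_NAME}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)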
A. Adding a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database would improve read performance for DynamoDB,
but it does not directly address the issue of delayed meeting invitations.
B. Adding an API Gateway API in front of the web application that accepts the appointment requests may help with request handling and
management, but it does not directly address the issue of delayed meeting invitations.
C. Adding an CloudFront distribution with the web application as the origin would improve content delivery and caching, but it does not
directly address the issue of delayed meeting invitations.
upvoted 4 times
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects
purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability
to manage fine-grained permissions for the data and must minimize operational overhead.
A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon
Athena to query the data. Use S3 policies to limit access.
C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake
Formation. Use Lake Formation access controls to limit access.
D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to
Amazon Redshift. Use Amazon Redshift access controls to limit access.
Correct Answer: C
A. Directly writing purchase data to Amazon RDS with RDS access controls lacks comprehensive permissions management for both S3 and
RDS data.
B. Periodically copying data from RDS to S3 using Lambda and using AWS Glue and Athena for querying does not offer fine-grained
permissions management and introduces data synchronization complexities.
D. Creating a Redshift cluster and copying data from S3 and RDS to Redshift adds complexity and operational overhead without the
flexibility of Lake Formation's permissions management capabilities.
upvoted 3 times
pisica134 3 months, 1 week ago
Answer is C. AWS Lake Formation provides a comprehensive solution for building and managing a data lake. It simplifies data ingestion,
organization, and access control. By creating a data lake using AWS Lake Formation, you can centralize and govern access to your data
across multiple sources.
upvoted 1 times
To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS
Lake Formation. This will allow the company to centralize all the data in one place and use fine-grained access controls to manage access
to the data.
To meet the requirements of the company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS
Glue JDBC connection to Amazon RDS, and register the S3 bucket in Lake Formation. The solutions architect can then use Lake Formation
access controls to limit access to the data. This solution will provide the ability to manage fine-grained permissions for the data and
minimize operational overhead.
upvoted 3 times
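A minimal boto3 sketch of the Lake Formation pieces of option C is shown below; the bucket ARN, database, table, and analyst role are hypothetical names used only for illustration.

import boto3

lakeformation = boto3.client("lakeformation")

# Placeholder identifiers.
BUCKET_ARN = "arn:aws:s3:::purchase-data-bucket"
ANALYST_ROLE_ARN = "arn:aws:iam::111122223333:role/analytics-team"

# Register the S3 location with Lake Formation so it can govern access.
lakeformation.register_resource(ResourceArn=BUCKET_ARN, UseServiceLinkedRole=True)

# Grant fine-grained (table-level) permissions to the analytics team.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ANALYST_ROLE_ARN},
    Resource={"Table": {"DatabaseName": "sales", "Name": "purchases"}},
    Permissions=["SELECT"],
)

The AWS Glue JDBC connection to Amazon RDS and the crawlers that populate the Data Catalog are set up separately; Lake Formation then controls who can query which databases, tables, and columns.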
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An
administrator updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront
distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront
origin.
A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an
SFTP client.
B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP
client.
C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website
content by using the AWS CLI.
D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content
by using the SFTP client.
Correct Answer: C
Why not C?
The user is already uploading content via SFTP; option C eliminates that workflow and forces use of the AWS CLI. The solution in C does
not meet the requirement of keeping SFTP.
upvoted 1 times
A. Hosting the website on a Lightsail virtual server would introduce additional management overhead and costs compared to using S3
directly for static content hosting.
B. Using an AWS Auto Scaling group with EC2 instances and an ALB is not necessary for serving static website content. It would add unnecessary
complexity and cost.
D. While using AWS Transfer for SFTP allows for SFTP uploads, it introduces additional costs and complexity compared to directly
uploading content to an S3 bucket using the AWS CLI. Additionally, hosting the website content in a public S3 bucket may not be desirable from a
security standpoint.
upvoted 3 times
eugene_stalker 4 months, 1 week ago
Selected Answer: D
D - SFTP client to upload new documents.
upvoted 1 times
"D: Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website
content by using the SFTP client." questions says that the company has decided to use Amazon Cloudfront and this answer does not
reference using CF and setting S3 as the Origin
"C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload
website content by using the AWS CLI." - mentions CloudFront and the origin, and the AWS CLI does in fact support transfer by SFTP (which was the
part I originally doubted, but this link evidences that it does):
https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html
upvoted 2 times
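To make the private-bucket option concrete, the bucket policy in option C looks roughly like the following (applied here with boto3). The bucket name and OAI ID are placeholders.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-marketing-site"  # placeholder bucket name
OAI_ID = "E2EXAMPLE1OAI"           # placeholder CloudFront OAI ID

# Allow only the CloudFront origin access identity to read objects;
# the bucket itself stays private and content is updated via the AWS CLI.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))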
Option A involves using Amazon Lightsail to create a virtual server, which may not be the most cost-effective solution compared to using
S3. Option B involves using an Auto Scaling group with EC2 instances and an Application Load Balancer, which may be more expensive and
complex than using S3. Option C involves creating a private S3 bucket, which may not allow CloudFront to access the website content.
upvoted 1 times
Option D: Creating a public Amazon S3 bucket, configuring AWS Transfer for SFTP, configuring the S3 bucket for website hosting, and
uploading website content by using the SFTP client will meet these requirements with the most cost-effective and resilient architecture.
Configuring AWS Transfer for SFTP allows the company to securely upload content to the S3 bucket using the SFTP client, which the
administrator is already familiar with. This eliminates the need to change the administrator’s workflow or learn new tools.
upvoted 1 times
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were
created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API
operation is called within the company’s account.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to
Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple
Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda
function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Correct Answer: C
With option C, it won't satisfy "The company needs to design an application that captures AWS API calls." It only reacts to the CreateImage API
event; we need to store the AWS API calls as well.
upvoted 1 times
cookieMr 3 months ago
EventBridge (formerly CloudWatch Events) is a fully managed event bus service that allows you to monitor and respond to events within
your AWS environment. By creating an EventBridge rule specifically for the CreateImage API call, you can easily detect and capture this
event. Configuring the target as an SNS topic allows you to send an alert whenever a CreateImage API call occurs. This solution requires
minimal operational overhead as EventBridge and SNS are fully managed services.
A. While using an Lambda to query CloudTrail logs and send an alert can achieve the desired outcome, it introduces additional operational
overhead compared to using EventBridge and SNS directly.
B. Configuring CloudTrail with an SNS notification and using Athena to query on CreateImage API calls would require more setup and
maintenance compared to using EventBridge and SNS.
D. Configuring an SQS FIFO queue as a target for CloudTrail logs and using a function to send an alert to an SNS topic adds unnecessary
complexity to the solution and increases operational overhead. Using EventBridge and SNS directly is a simpler and more efficient
approach.
upvoted 3 times
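A minimal sketch of option C with boto3: an EventBridge rule that matches the CreateImage call recorded by CloudTrail and publishes to an SNS topic. The topic ARN is a placeholder, and CloudTrail must be enabled for the API call to appear as an event.

import json
import boto3

events = boto3.client("events")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:createimage-alerts"  # placeholder

# Match the EC2 CreateImage API call as recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",
    EventPattern=json.dumps(
        {
            "source": ["aws.ec2"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {
                "eventSource": ["ec2.amazonaws.com"],
                "eventName": ["CreateImage"],
            },
        }
    ),
)

# Send matching events to the SNS topic that delivers the alert.
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert", "Arn": SNS_TOPIC_ARN}],
)

The SNS topic's access policy must also allow events.amazonaws.com to publish to it.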
Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications,
integrated Software as a Service (SaaS) applications, and AWS services. By creating an EventBridge rule for the CreateImage API call, the
company can set up alerts whenever this operation is called within their account. The alert can be sent to an SNS topic, which can then be
configured to send notifications to the company's email or other desired destination.
This solution does not require the company to create a Lambda function or query CloudTrail logs, which makes it the most cost-effective
and efficient option.
upvoted 7 times
A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate
microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes
Amazon DynamoDB to store user requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is
losing user requests.
What should a solutions architect do to address this issue without impacting existing users?
C. Create a secondary index in DynamoDB for the table with the user requests.
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
Correct Answer: D
A. Throttling limits can help control the request rate, but they may lead to an increase in errors and affect the user experience. Throttling alone may
not be sufficient to address the availability issues and prevent the loss of requests.
B. It can improve read performance but does not directly address the availability issues and loss of requests. It focuses on optimizing read
operations rather than buffering writes.
C. It may help with querying the user requests efficiently, but it does not directly solve the availability issues or prevent the loss of
requests. It is more focused on data retrieval rather than buffering writes.
upvoted 2 times
DAX is not ideal for the following types of applications:
Applications that require strongly consistent reads (or that cannot tolerate eventually consistent reads).
Applications that do not require microsecond response times for reads, or that do not need to offload repeated read activity from
underlying tables.
Applications that are write-intensive, or that do not perform much read activity.
Applications that are already using a different caching solution with DynamoDB, and are using their own client-side logic for working
with that caching solution.
upvoted 2 times
Question states: "The company provisioned as much DynamoDB throughput as its budget allows"
upvoted 3 times
By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and
improve the overall scalability and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue
accepting user requests even if the processing microservices are experiencing high workloads or are temporarily unavailable. The Lambda
function can then retrieve requests from the SQS queue and write them to DynamoDB, ensuring that all user requests are stored and
processed. This approach allows the company to scale the processing microservices independently from the API front end, ensuring that
the API remains available to users even during periods of high demand.
upvoted 4 times
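A simplified sketch of the buffering Lambda function in option D is shown below; the table name is a placeholder and the request body is assumed to be JSON.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserRequests")  # placeholder table name


def handler(event, context):
    # Each SQS message carries one user request; write it to DynamoDB.
    # Failed batches are returned to the queue and retried, so requests
    # are not lost when DynamoDB briefly throttles writes.
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)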
"When you’re developing against DAX, instead of pointing your application at the DynamoDB endpoint, you point it at the DAX endpoint,
and DAX handles the rest. As a read-through/write-through cache, DAX seamlessly intercepts the API calls that an application normally
makes to DynamoDB so that both read and write activity are reflected in the DAX cache."
https://aws.amazon.com/es/blogs/database/amazon-dynamodb-accelerator-dax-a-read-throughwrite-through-cache-for-dynamodb/
upvoted 1 times
"Whereas both read-through and write-through caches address read-heavy workloads, a write-back (or write-behind) cache is designed
to address write-heavy workloads. Note that DAX is not a write-back cache currently"
upvoted 1 times
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data
are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.
A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket
to only allow the EC2 instance’s IAM role for access.
B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security
groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route
in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the
EC2 instance’s IAM role for access.
D. Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket’s service API endpoint. Create a
route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow
the EC2 instance’s IAM role for access.
Correct Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
upvoted 2 times
Option C is incorrect because using the nslookup tool to obtain the private IP address of the S3 bucket's service API endpoint would not
provide a secure connection between the EC2 instance and the S3 bucket.
Option D is incorrect because using the ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not
a secure method to connect the EC2 instance to the S3 bucket.
upvoted 2 times
Gateway endpoint vs. Interface endpoint
An Interface endpoint:
1) Helps you to securely connect to AWS services EXCEPT FOR Amazon S3 and DynamoDB
2) Powered by PrivateLink (keeps network traffic within the AWS network)
3) Needs an elastic network interface (ENI) (entry point for traffic)
upvoted 16 times
Option A , which recommends creating an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located
and attaching a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access, is the correct solution for the
given scenario. It meets the requirement to ensure that no API calls and no data are routed through public internet routes and that
only the EC2 instance can have access to upload data to the S3 bucket.
upvoted 2 times
Options C and D suggest alternative approaches using DNS resolution and VPC route tables, but these options may not provide the same
level of security and isolation as the interface VPC endpoint in option A. Additionally, these options are more complex to set up and
maintain.
upvoted 1 times
To meet the requirements of no public internet access and only allowing the EC2 instance access, the solution is to:
Create a gateway VPC endpoint for S3 in the subnet where the EC2 instance is located. This keeps S3 access within the VPC and does not
route via the internet.
Attach appropriate security groups to the endpoint to control access.
Use a S3 bucket resource policy to only allow access from the EC2 instance IAM role.
upvoted 2 times
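As an illustration only (the exam options differ on the details), creating a gateway endpoint and locking the bucket down to the instance role and that endpoint could look like the following; all IDs and ARNs are placeholders.

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Placeholder identifiers.
VPC_ID = "vpc-0abc1234"
ROUTE_TABLE_ID = "rtb-0abc1234"
BUCKET = "example-upload-bucket"
INSTANCE_ROLE_ARN = "arn:aws:iam::111122223333:role/uploader-instance-role"

# Gateway endpoints are attached to route tables (not subnets or AZs) and keep
# S3 traffic on the AWS network at no additional charge.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)["VpcEndpoint"]

# Allow uploads only from the instance role and only through this endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": INSTANCE_ROLE_ARN},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringEquals": {"aws:sourceVpce": endpoint["VpcEndpointId"]}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))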
TariqKipkemei 1 week, 6 days ago
Selected Answer: A
You can provision interface endpoints for s3.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#:~:text=With-,AWS%20PrivateLink,-
for%20Amazon%20S3
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html
https://repost.aws/knowledge-center/connect-s3-vpc-endpoint
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
https://digitalcloud.training/vpc-interface-endpoint-vs-gateway-endpoint-in-aws/
upvoted 3 times
Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your
VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not
allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you
must use an interface endpoint, which is available for an additional cost.
The question states that no API calls and no data should be routed over the public internet to S3,
so a Gateway endpoint should be the better fit.
upvoted 2 times
C. Running nslookup or creating a specific route in the VPC route table does not provide the desired level of security and privacy, as the
traffic may still traverse public internet routes.
D. Using the publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not a
recommended approach, as IP addresses can change over time, and it does not provide the same level of security as using VPC endpoints.
upvoted 2 times
abhishek2021 3 months, 2 weeks ago
Selected Answer: A
A security group cannot be associated with a Gateway endpoint, so the answer is A.
upvoted 1 times
Question #209 Topic 1
A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2
On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently
throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session
data management. The company is willing to make changes to code if needed.
What should the solutions architect do to ensure that the architecture supports distributed session data management?
B. Use session affinity (sticky sessions) of the ALB to manage session data.
C. Use Session Manager from AWS Systems Manager to manage the session.
D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
Correct Answer: A
In order to support distributed session data management in this scenario, it is necessary to use a distributed data store such as Amazon
ElastiCache. This will allow the session data to be stored and accessed by multiple EC2 instances across multiple Availability Zones, which
is necessary for a scalable and highly available architecture.
Option B, using session affinity (sticky sessions) of the ALB, would not be sufficient because this would only allow the session data to be
stored on a single EC2 instance, which would not be able to scale across multiple Availability Zones.
Options C and D, using Session Manager and the GetSessionToken API operation in AWS STS, are not related to session data management
and would not be appropriate solutions for this scenario.
upvoted 16 times
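In application code, option A usually means pointing a session store at the ElastiCache endpoint, for example with the redis-py client; the endpoint and key names below are placeholders.

import json

import redis

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)


def save_session(session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
    # Any instance behind the ALB can write the session...
    cache.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))


def load_session(session_id: str):
    # ...and any other instance, in any Availability Zone, can read it back,
    # so scale-in events do not destroy session state.
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None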
Option B, using session affinity (sticky sessions) of the ALB, is not the best choice for distributed session data management because it ties
each session to a specific EC2 instance. As the instances scale up and down frequently, it can lead to uneven load distribution and may not
provide optimal scalability.
Options C and D are not applicable for managing session data. AWS Systems Manager's Session Manager is primarily used for secure
remote shell access to EC2 instances, and the AWS STS GetSessionToken API operation is used for temporary security credentials and not
session data management.
upvoted 1 times
Options C and D are not applicable for managing session data. AWS Systems Manager's Session Manager is primarily used for secure
remote shell access to EC2 instances, and the AWS STS GetSessionToken API operation is used for temporary security credentials and not
session data management.
upvoted 2 times
Abrar2022 3 months, 2 weeks ago
Selected Answer: A
A. Use Amazon ElastiCache to manage and store session data.
- Correct. - Session data is managed at the application-layer, and a distributed cache should be used
B. Use session affinity (sticky sessions) of the ALB to manage session data.
- Wrong. This tightly couples the individual EC2 instances to the session data, and requires additional logic in the ALB. When scale-in
happens, the session data stored on individual EC2 instances is destroyed
upvoted 1 times
A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing
scaling problems during peak traffic hours. The current architecture includes the following:
• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.
A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic
hours. The solution must optimize utilization of the company’s AWS resources.
A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group’s
minimum capacity according to peak workload values.
B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke
an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.
D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups
based on this metric.
Correct Answer: D
B. While this approach incorporates alarms to trigger additional Auto Scaling groups, it lacks the decoupling and reliable message
processing provided by using SQS queues. It may lead to inefficient scaling and potential data loss.
C. Although using SQS queues is a step in the right direction, scaling solely based on queue notifications may not provide optimal resource
utilization. It does not consider the backlog per instance and does not allow for fine-grained control over scaling.
Overall, option D, which involves using SQS queues for order collection and fulfillment, creating a metric based on backlog per instance
calculation, and scaling the Auto Scaling groups accordingly, is the most suitable solution to address the scaling problems while
optimizing resource utilization and ensuring reliable message processing.
upvoted 2 times
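A sketch of the backlog-per-instance calculation from option D, published as a custom CloudWatch metric that a scaling policy can track; the queue URL and group name are placeholders.

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/order-fulfillment"  # placeholder
ASG_NAME = "order-fulfillment-asg"  # placeholder

# Backlog per instance = visible messages / in-service instances.
visible = int(
    sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )["Attributes"]["ApproximateNumberOfMessages"]
)
group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])[
    "AutoScalingGroups"
][0]
in_service = sum(1 for i in group["Instances"] if i["LifecycleState"] == "InService") or 1

cloudwatch.put_metric_data(
    Namespace="OrderProcessing",
    MetricData=[
        {
            "MetricName": "BacklogPerInstance",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
            "Value": visible / in_service,
        }
    ],
)

A target tracking policy on BacklogPerInstance then scales the group toward the number of messages each instance can work through within the acceptable latency.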
studynoplay 4 months, 2 weeks ago
Selected Answer: D
C is incorrect: "based on notifications that the queues send" is wrong because SQS does not send notifications.
upvoted 2 times
A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS,
Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company
resources are tagged with a tag name of “application” and a value that corresponds to each application. A solutions architect must provide the
quickest solution for identifying all of the tagged components.
A. Use AWS CloudTrail to generate a list of resources with the application tag.
B. Use the AWS CLI to query each service across all Regions to report the tagged components.
C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.
D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.
Correct Answer: D
B involves manually querying each service using the AWS CLI, which can be time-consuming and cumbersome, especially when dealing
with multiple services and Regions. It is not the most efficient solution for quickly identifying tagged components.
C is focused on analyzing logs rather than directly identifying the tagged components. While CloudWatch Logs Insights can help extract
information from logs, it may not provide a straightforward and quick way to gather a consolidated list of all tagged components across
different services and Regions.
D is the quickest solution as it leverages the Resource Groups Tag Editor, which is specifically designed for managing and organizing
resources based on tags. It offers a centralized and efficient approach to generate a report of tagged components across multiple services
and Regions.
upvoted 4 times
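The same report can be produced programmatically with the Resource Groups Tagging API; a rough sketch (the tag value and Region list are placeholders) is:

import boto3

# The Tagging API is regional, so loop over the Regions the application uses.
for region in ["us-east-1", "eu-west-1"]:  # placeholder Region list
    tagging = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate(
        TagFilters=[{"Key": "application", "Values": ["order-service"]}]  # placeholder tag value
    ):
        for resource in page["ResourceTagMappingList"]:
            print(region, resource["ResourceARN"])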
A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5
GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up
to 3 months. The company needs the most cost-effective solution that will not increase retrieval time.
Which S3 storage class should the company use to meet these requirements?
A. S3 Intelligent-Tiering
C. S3 Standard
Correct Answer: A
Option B is optimized for long-term archival storage and may not provide the immediate accessibility required by the company. Retrieving
data from Glacier storage typically incurs a longer retrieval time compared to other storage classes.
Option C is the appropriate choice for immediate availability and quick access to the data. It offers high durability, availability, and low
latency access, making it suitable for the company's needs. However, it is not the most cost-effective option for long-term storage.
Option D is a more cost-effective storage class compared to S3 Standard, especially for data that is accessed less frequently. However,
since the access pattern for the data is variable and changes rapidly, S3 Standard-IA may not be the most cost-effective solution, as it
incurs additional retrieval fees for frequent access.
upvoted 2 times
markw92 3 months, 2 weeks ago
Answer A: S3 Intelligent-Tiering is the recommended storage class for data with unknown, changing, or unpredictable access patterns,
independent of object size or retention period, such as data lakes, data analytics, and new applications.
upvoted 1 times
On the other hand, S3 Standard-Infrequent Access (S3 Standard-IA) provides low cost storage with low latency and high throughput
performance. It is designed for infrequently accessed data that can be recreated if lost, and can be retrieved in a timely manner if
required. It is a cost-effective solution that meets the requirement of immediately available data and remains accessible for up to 3
months.
upvoted 2 times
S3 Standard-IA is the most cost-effective storage class that meets the company's requirements. It provides immediate access to the data,
and the data remains accessible for up to 3 months. S3 Standard-IA is optimized for infrequently accessed data, which is suitable for the
company's use case of exporting the database once a day. This storage class also has a lower retrieval fee compared to S3 Glacier, which is
important for the company as the S3 access pattern for the data is variable and changes rapidly. S3 Intelligent-Tiering and S3 Standard are
not the best choice in this case because they are designed for frequently accessed data and have higher retrieval fees
upvoted 2 times
A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB)
against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational
staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment.
A. Configure AWS WAF rules and associate them with the ALB.
C. Deploy AWS Shield Advanced and add the ALB as a protected resource.
D. Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then passes the traffic to the current
ALB.
Correct Answer: A
Option B does not provide the necessary security and traffic filtering capabilities to protect against application-level attacks. It is more
suitable for hosting static content rather than implementing security measures.
Option C is focused on DDoS protection rather than application-level attacks like XSS or SQL injection. While AWS Shield Advanced adds DDoS
protections, it does not address the specific requirements mentioned in the scenario.
Option D involves maintaining and securing additional infrastructure, which goes against the requirement of reducing responsibility and
relying on minimal operational staff.
upvoted 3 times
Since there is no mention of protection against DDoS attacks, C is more costly and not useful here.
upvoted 2 times
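For illustration, option A amounts to creating a regional web ACL that uses AWS managed rule groups (which cover common SQL injection and XSS patterns) and associating it with the ALB. The names and ARN below are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123"  # placeholder

visibility = {
    "SampledRequestsEnabled": True,
    "CloudWatchMetricsEnabled": True,
    "MetricName": "mobile-app-waf",
}

# Managed rule groups are maintained by AWS, so there are no rule servers to patch.
web_acl = wafv2.create_web_acl(
    Name="mobile-app-web-acl",
    Scope="REGIONAL",  # REGIONAL scope is required for ALBs
    DefaultAction={"Allow": {}},
    VisibilityConfig=visibility,
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {**visibility, "MetricName": "common-rule-set"},
        }
    ],
)["Summary"]

wafv2.associate_web_acl(WebACLArn=web_acl["ARN"], ResourceArn=ALB_ARN)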
A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache
Parquet format and must store the files in a transformed data bucket.
Which solution will meet these requirements with the LEAST development effort?
A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS)
to write files to the transformed data bucket.
B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify
the transformed data bucket in the output step.
C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use
the job definition to submit a job. Specify an array job as the job type.
D. Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification
for the S3 bucket. Specify the Lambda function as the destination for the event notification.
Correct Answer: B
Option A requires more development effort as it involves writing a Spark application to transform the data. It also introduces additional
infrastructure management with the EMR cluster.
Option C requires writing and managing custom Bash scripts for data transformation. It requires more manual effort and does not
provide the built-in capabilities of AWS Glue for data transformation.
Option D requires developing and managing a custom Lambda for data transformation. While Lambda can handle the transformation, it
requires more effort compared to AWS Glue, which is specifically designed for ETL operations.
Therefore, option B provides the easiest and least development effort by leveraging AWS Glue's capabilities for data discovery,
transformation, and output to the transformed data bucket.
upvoted 3 times
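A typical Glue ETL script for option B is only a few lines of PySpark. The catalog database and table names below assume a crawler has already run and are placeholders.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the .csv files through the table the crawler created in the Data Catalog.
reports = glue_context.create_dynamic_frame.from_catalog(
    database="reporting_db", table_name="raw_csv_reports"
)

# Write the same records to the transformed data bucket in Parquet format.
glue_context.write_dynamic_frame.from_options(
    frame=reports,
    connection_type="s3",
    connection_options={"path": "s3://transformed-data-bucket/reports/"},
    format="parquet",
)

job.commit()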
https://docs.aws.amazon.com/pt_br/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-
parquet.html
upvoted 1 times
A company has 700 TB of backup data stored in network attached storage (NAS) in its data center. This backup data need to be accessible for
infrequent regulatory requests and must be retained 7 years. The company has decided to migrate this backup data from its data center to AWS.
The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available
for data transfer.
What should a solutions architect do to migrate and store the data at the LOWEST cost?
A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3
Glacier.
C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to
Amazon S3 Glacier Deep Archive.
D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises
NAS storage to Amazon S3 Glacier.
Correct Answer: A
Option B would require continuous data transfer over the public internet, which could be time-consuming and costly given the large
amount of data. It may also require significant bandwidth allocation.
Option C would involve additional costs for provisioning and maintaining the dedicated connection, which may not be necessary for a one-
time data migration.
Option D could be a viable option, but it may incur additional costs for deploying and managing the DataSync agent.
Therefore, option A is the recommended choice as it provides a secure and efficient data transfer method using Snowball devices and
allows for cost optimization through lifecycle policies by transitioning the data to S3 Glacier Deep Archive for long-term storage.
upvoted 2 times
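The lifecycle part of option A is a single rule on the destination bucket; a sketch with boto3 (bucket name and prefix are placeholders, 2,555 days is roughly 7 years) is:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="backup-archive-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "nas-backups/"},  # placeholder prefix
                # Transition as soon as possible to the cheapest archive tier...
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
                # ...and expire the objects after the 7-year retention period.
                "Expiration": {"Days": 2555},
            }
        ]
    },
)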
arjundevops 5 months, 2 weeks ago
A is the correct answer.
even though they have 500 Mbps of internet speed, it will take around 130 days to transfer the data from on premises to AWS
AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and
compute power that provides select AWS services for use in edge locations. Snowball Edge comes in two options, Storage Optimized and
Compute Optimized, to support local data processing and collection in disconnected environments such as ships, windmills, and remote
factories. Learn more about its features here.
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for
data transfer.
For data transfer needs now, please select the Snowball Edge Storage Optimized devices.
upvoted 1 times
You cannot copy files directly from on premises to S3 Glacier with DataSync. The data should land in S3 Standard first, and then an S3 Lifecycle
rule should transition it to Glacier => exclude D.
upvoted 1 times
A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses the S3 bucket as the origin for an
Amazon CloudFront distribution. The company did not set encryption on the S3 bucket before the objects were loaded. A solutions architect needs
to enable encryption for all existing objects and for all objects that are added to the S3 bucket in the future.
Which solution will meet these requirements with the LEAST amount of effort?
A. Create a new S3 bucket. Turn on the default encryption settings for the new S3 bucket. Download all existing objects to temporary local
storage. Upload the objects to the new S3 bucket.
B. Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory feature to create a .csv file that lists the unencrypted
objects. Run an S3 Batch Operations job that uses the copy command to encrypt those objects.
C. Create a new encryption key by using AWS Key Management Service (AWS KMS). Change the settings on the S3 bucket to use server-side
encryption with AWS KMS managed encryption keys (SSE-KMS). Turn on versioning for the S3 bucket.
D. Navigate to Amazon S3 in the AWS Management Console. Browse the S3 bucket’s objects. Sort by the encryption field. Select each
unencrypted object. Use the Modify button to apply default encryption settings to every unencrypted object in the S3 bucket.
Correct Answer: B
A. This solution involves creating a new S3 bucket and manually downloading and uploading all existing objects. It requires significant effort and
time to transfer millions of objects, making it a less efficient solution.
C. While enabling SSE with AWS KMS is a valid approach to encrypt objects in an S3 bucket, it does not address the requirement of encrypting
existing objects. It only applies encryption to new objects added to the bucket.
D. Manually modifying each object in the S3 bucket to apply default encryption settings is a labor-intensive and error-prone process. It would
require individually selecting and modifying each unencrypted object, which is impractical for a large number of objects.
upvoted 3 times
https://catalog.us-east-1.prod.workshops.aws/workshops/05f16f1a-0bbf-45a7-a304-4fcd7fca3d1f/en-US/s3-track/module-2
You're welcome
upvoted 3 times
For S3 buckets with a large number of objects (millions to billions), use Amazon S3 Inventory to get a list of the unencrypted objects, and
Amazon S3 Batch Operations to encrypt the large number of old, unencrypted files.
upvoted 3 times
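What the S3 Batch Operations copy job does per object is essentially an in-place copy that adds server-side encryption. Conceptually, for each key listed in the Inventory report, it performs something like the following (the bucket and keys are placeholders, and the real Batch Operations job runs this at scale from a manifest with an IAM role rather than a local loop):

import boto3

s3 = boto3.client("s3")

BUCKET = "serverless-website-bucket"  # placeholder bucket name
unencrypted_keys = ["assets/logo.png", "index.html"]  # normally read from the S3 Inventory CSV

for key in unencrypted_keys:
    # Copying an object onto itself with a new encryption setting rewrites it
    # as an encrypted object while keeping the same key and content.
    s3.copy_object(
        Bucket=BUCKET,
        Key=key,
        CopySource={"Bucket": BUCKET, "Key": key},
        ServerSideEncryption="AES256",
        MetadataDirective="COPY",
    )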
When you overwrite an S3 object, it results in a new object version in the bucket. However, this will not remove the old unencrypted
versions of the object. If you do not delete the old version of your newly encrypted objects, you will be charged for the storage of both
versions of the objects.
S3 Lifecycle
If you want to remove these unencrypted versions, use S3 Lifecycle to expire previous versions of objects. When you add a Lifecycle
configuration to a bucket, the configuration rules apply to both existing objects and objects added later. C is missing this step, which I
believe is what makes B the better choice. B includes the functionality of encrypting the old unencrypted objects via Batch Operations,
whereas, Versioning does not address the old unencrypted objects.
upvoted 1 times
A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon
Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The
solution does not need to handle the load when the primary infrastructure is healthy.
A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create
an Aurora Replica in a second AWS Region.
B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover.
Create an Aurora Replica in the second Region.
C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora
database that is restored from the latest snapshot.
D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to
configure active-passive failover. Create an Aurora second primary instance in the second Region.
Correct Answer: A
Option B is not necessary in this case, as the solution does not need to handle the load when the primary infrastructure is healthy, and it
may involve higher complexity and costs.
Option C, may introduce additional complexity and potential data loss, as the standby database might not be up-to-date with the primary
database.
Option D, may be suitable for backup and recovery scenarios but may not provide the required failover and downtime tolerance specified
in the requirements.
upvoted 1 times
antropaws 3 months, 4 weeks ago
Selected Answer: D
I vote D, because option A is not highly available. In option A, you can't configure active-passive failover because you haven't created a
backup infrastructure.
upvoted 1 times
If you have strict RTO and RPO requirements, you should consider a different DR strategy, such as Amazon Aurora Global Database .
https://aws.amazon.com/blogs/database/cost-effective-disaster-recovery-for-amazon-aurora-databases-using-aws-backup/
upvoted 1 times
A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is
assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server
accessible from everywhere on port 443.
A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination
0.0.0.0/0.
Correct Answer: AE
Here, it says that the client chooses the ephemeral port, and it can start from 1024. Only Linux clients have the range starting at 32768
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports
Unless the destination advertises the ephemeral ports, which I don't think is the case
upvoted 1 times
A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2
instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the
application performance degraded. Users are reporting delays when the users attempt to access the application.
Which solution will resolve these issues in the MOST operationally efficient way?
A. Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group. Make the changes by using the AWS Management
Console.
B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum
capacity of the Auto Scaling group manually when an increase is necessary.
C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory
metrics to track the application performance for future capacity planning.
D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2
instances to generate custom application latency metrics for future capacity planning.
Correct Answer: D
"in-memory tasks" => need the "R" EC2 instance type to archive memory optimization. So we are concerned about C & D.
Because EC2 instances don't have built-in memory metrics to CW by default. As a result, we have to install the CW agent to archive the
purpose.
upvoted 16 times
In addition, deploying the CloudWatch agent on the EC2 instances allows for the generation of custom application latency metrics, which
can provide valuable insights into the application's performance.
This solution addresses the performance issues efficiently by leveraging the appropriate instance types and collecting custom application
metrics for better monitoring and future capacity planning.
A. Replacing with T3 instances may not provide enough memory capacity for in-memory tasks.
B. Manually increasing the capacity of the ASG does not directly address the performance issues.
C. Relying solely on built-in EC2 memory metrics may not provide enough granularity for optimizing in-memory tasks.
The most efficient solution is to modify the CloudFormation templates, replace with R5 instances, and deploy the CloudWatch agent for
custom metrics.
upvoted 2 times
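As an illustration of the "custom application latency metrics" idea in option D, here is a minimal boto3 sketch; the namespace, metric name, and dimension are hypothetical:

import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_latency(operation: str, latency_ms: float) -> None:
    """Publish a custom application latency metric to CloudWatch."""
    cloudwatch.put_metric_data(
        Namespace="MyApp",                    # hypothetical namespace
        MetricData=[{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Operation", "Value": operation}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )

start = time.time()
# ... in-memory task runs here ...
report_latency("ProcessOrder", (time.time() - start) * 1000)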
A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly
variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed
within a few seconds after a request is made.
Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?
Correct Answer: B
A. Glue is a fully managed ETL service. It is designed for data processing and transformation tasks rather than serving API requests. It may
not be suitable for handling variable request volumes and delivering responses within a few seconds.
C. While EKS provides scalability and flexibility, it may introduce additional complexity and overhead for managing and scaling the
infrastructure for handling variable API request volumes.
D. Similar to the previous option, using ECS with EC2 would require additional effort for infrastructure management and scaling, which
may not be necessary for handling intermittent and variable API request volumes.
upvoted 2 times
A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log
files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently.
D. Amazon S3
Correct Answer: D
B. EFS is a scalable file storage service that can be mounted on multiple EC2 instances concurrently. While it provides concurrent access to
files, it may not be the most cost-effective option for long-term retention due to its higher pricing compared to S3.
C. The instance store is a temporary storage option that is physically attached to the EC2 instance. It does not provide the durability and
long-term retention required for compliance purposes. Additionally, the instance store is not accessible outside of the specific EC2
instance it is attached to, so concurrent access by the reporting tool would not be possible.
Therefore, considering the requirements for long-term retention, concurrent access, and cost-effectiveness, S3 is the most suitable and
cost-effective storage solution.
upvoted 4 times
A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an automated tool that is hosted in an
AWS account that the vendor owns. The vendor does not have IAM access to the company’s AWS account.
A. Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role for
the permissions that the vendor requires.
B. Create an IAM user in the company’s account with a password that meets the password complexity requirements. Attach the appropriate
IAM policies to the user for the permissions that the vendor requires.
C. Create an IAM group in the company’s account. Add the tool’s IAM user from the vendor account to the group. Attach the appropriate IAM
policies to the group for the permissions that the vendor requires.
D. Create a new identity provider by choosing “AWS account” as the provider type in the IAM console. Supply the vendor’s AWS account ID and
user name. Attach the appropriate IAM policies to the new provider for the permissions that the vendor requires.
Correct Answer: A
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 7 times
By attaching the appropriate IAM policies to the role, you can define the precise permissions that the vendor requires for their tool to
perform its tasks. This ensures that the vendor has the necessary access without granting them direct IAM access to the company's
account.
B is incorrect because creating an IAM user with a password would require sharing the credentials with the vendor, which is not
recommended for security reasons.
C is incorrect because adding the vendor's IAM user to an IAM group in the company's account would not provide a direct and controlled
way to delegate access to the vendor's tool.
D is incorrect because creating a new identity provider for the vendor's AWS account would not provide a straightforward way to delegate
access to the vendor's tool. Identity providers are typically used for federated access using external identity systems.
upvoted 3 times
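A minimal boto3 sketch of the role delegation described in answer A, assuming a hypothetical vendor account ID, role name, external ID, and managed policy:

import json
import boto3

iam = boto3.client("iam")

VENDOR_ACCOUNT_ID = "111122223333"            # hypothetical vendor account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # An external ID is recommended when granting third-party access.
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-1234"}},
    }],
}

iam.create_role(
    RoleName="VendorToolAccessRole",          # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="VendorToolAccessRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example managed policy
)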
A company has deployed a Java Spring Boot application as a pod that runs on Amazon Elastic Kubernetes Service (Amazon EKS) in private
subnets. The application needs to write data to an Amazon DynamoDB table. A solutions architect must ensure that the application can interact
with the DynamoDB table without exposing traffic to the internet.
Which combination of steps should the solutions architect take to accomplish this goal? (Choose two.)
A. Attach an IAM role that has sufficient privileges to the EKS pod.
B. Attach an IAM user that has sufficient privileges to the EKS pod.
C. Allow outbound connectivity to the DynamoDB table through the private subnets’ network ACLs.
Correct Answer: AD
D. Creating a VPC endpoint for DynamoDB allows the EKS pod to access DynamoDB privately within the VPC, without the need for internet
connectivity. The VPC endpoint provides a direct and secure connection to DynamoDB, eliminating the need for traffic to flow over the
internet.
B is incorrect because attaching an IAM user to the pod is not a recommended approach. IAM users are meant for accessing AWS services
through the AWS Management Console or API.
C is incorrect because configuring outbound connectivity through network ACLs would not provide a secure and direct connection to
DynamoDB.
E is incorrect because embedding access keys in the code is not a recommended security practice. It can lead to potential security
vulnerabilities. It is better to use IAM roles or other secure mechanisms for providing access to AWS services.
upvoted 2 times
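A minimal boto3 sketch of the gateway VPC endpoint described in option D; the VPC ID, route table ID, and Region are hypothetical:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# DynamoDB uses a gateway endpoint: traffic is routed privately via the
# route tables of the private subnets, never over the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],       # private subnets' route table
)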
A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The
company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances
randomly.
Which combination of steps should the company take to meet these requirements? (Choose two.)
D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.
E. Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
Correct Answer: CE
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify. You can use weighted routing to create
records in a private hosted zone.
upvoted 1 times
E. By launching EC2 instances in different AZs, you achieve high availability and fault tolerance. Launching four instances (two in each AZ)
ensures that there are enough resources to handle the traffic load and maintain the desired level of availability.
A. Failover routing is designed to direct traffic to a backup resource or secondary location only when the primary resource or location is
unavailable.
B. Although a weighted routing policy allows you to distribute traffic across multiple EC2 instances, it does not ensure random
distribution.
D. While launching instances in multiple AZs is important for fault tolerance, an uneven split of three instances (two in one AZ, one in the
other) means traffic may not be evenly distributed, potentially leading to imbalanced resource utilization.
upvoted 4 times
Option E is the correct choice. By launching instances in different Availability Zones, the company ensures that there are redundant copies
of the application running in separate physical locations, providing fault tolerance. With two instances in one Availability Zone and two
instances in another, traffic can be distributed randomly among them, improving availability and load balancing.
upvoted 1 times
"Active-active failover:
Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes
unavailable, Route 53 can detect that it's unhealthy and stop including it when responding to queries.
In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as
weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy
record".
upvoted 1 times
A media company collects and analyzes user activity data on premises. The company wants to migrate this capability to AWS. The user activity
data store will continue to grow and will be petabytes in size. The company needs to build a highly available data ingestion solution that facilitates
on-demand analytics of existing data and new data with SQL.
Which solution will meet these requirements with the LEAST operational overhead?
A. Send activity data to an Amazon Kinesis data stream. Configure the stream to deliver the data to an Amazon S3 bucket.
B. Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift
cluster.
C. Place activity data in an Amazon S3 bucket. Configure Amazon S3 to run an AWS Lambda function on the data as the data arrives in the S3
bucket.
D. Create an ingestion service on Amazon EC2 instances that are spread across multiple Availability Zones. Configure the service to forward
data to an Amazon RDS Multi-AZ database.
Correct Answer: A
A. While Kinesis can handle streaming data, it requires additional processing to load the data into an analytics solution.
C. Although S3 and Lambda can handle the storage and processing of data, this requires more manual configuration and management
compared to the fully managed solution offered by Kinesis Data Firehose and Redshift.
D. This option involves more operational overhead, as it requires managing and scaling the EC2 instances and RDS database infrastructure
manually.
Therefore, option B with KDF delivering the data to Redshift cluster offers the most streamlined and operationally efficient solution for
ingesting and analyzing the user activity data in the given scenario.
upvoted 1 times
pisica134 3 months, 1 week ago
petabytes in size => redshift
upvoted 2 times
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift Serverless lets you access and
analyze data without all of the configurations of a provisioned data warehouse.
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
upvoted 2 times
Why not A: It is a viable solution, but storing the data in S3 would require you to set up additional services like Amazon Redshift or
Amazon Athena to perform the analytics.
upvoted 2 times
Kinesis Data Streams is a low-latency streaming service in AWS Kinesis with the ability to ingest at scale. Kinesis Data Firehose, on the other
hand, is intended as a data delivery service. The primary purpose of Kinesis Data Firehose is loading streaming data into Amazon S3, Splunk,
Elasticsearch, and Redshift.
upvoted 3 times
Aninina 8 months, 2 weeks ago
Selected Answer: B
petabytes: redshift
upvoted 3 times
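As a rough sketch of option B, a Kinesis Data Firehose delivery stream that loads into Redshift could be created like this with boto3; all names, ARNs, the JDBC URL, and credentials are hypothetical placeholders (in practice the password would come from a secrets store rather than code):

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="activity-to-redshift",     # hypothetical stream name
    DeliveryStreamType="DirectPut",
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "ClusterJDBCURL": "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/analytics",
        "CopyCommand": {
            "DataTableName": "user_activity",      # hypothetical target table
            "CopyOptions": "json 'auto'",
        },
        "Username": "firehose_user",
        "Password": "REPLACE_ME",                  # placeholder only
        # Firehose stages data in S3 before issuing the Redshift COPY command.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::example-activity-staging",
        },
    },
)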
A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance.
The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices
will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
C. Add more EC2 instances to accommodate the increasing amount of incoming data.
D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.
E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data
stream as a source to deliver the data to Amazon S3.
Correct Answer: AE
E. API Gateway can be used to receive the raw data from the remote devices via RESTful web services. It provides a scalable and managed
infrastructure to handle the incoming requests. The data can then be sent to an Amazon Kinesis data stream, which is a highly scalable
and durable real-time data streaming service. From there, Amazon Kinesis Data Firehose can be configured to use the data stream as a
source and deliver the transformed data to Amazon S3. This combination of services allows for the seamless ingestion and processing of
data while minimizing operational overhead.
B. It does not directly address the need for scalable data processing and storage. It focuses on managing DNS and routing traffic to
different endpoints.
C. Adding more EC2 can lead to increased operational overhead in terms of managing and scaling the instances.
D. Using SQS and EC2 for processing data introduces more complexity and operational overhead.
upvoted 2 times
wRhlH 3 months, 2 weeks ago
Why not BC?
upvoted 1 times
A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS
Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in
place to delete current objects after 3 years.
After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number
of new CloudTrail logs that are delivered to the S3 bucket has remained consistent.
Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?
A. Configure the organization’s centralized CloudTrail trail to expire objects after 3 years.
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
C. Create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years.
D. Configure the parent account as the owner of all objects that are delivered to the S3 bucket.
Correct Answer: B
A. This option is not directly related to managing objects in the S3 bucket. It focuses on configuring the expiration of CloudTrail trails, which
does not address the need to delete objects from the S3 bucket.
C. While it is technically possible to create a Lambda to delete objects older than 3 years, this approach would introduce additional
complexity and operational overhead.
D. Changing the ownership of the objects in the S3 bucket does not directly address the need to delete objects older than 3 years.
Ownership does not affect the deletion behavior of the objects.
upvoted 1 times
To delete objects that are older than 3 years in the most cost-effective manner, the company should configure the S3 Lifecycle policy to
delete previous versions as well as current versions. This will ensure that all versions of the objects, including the previous versions, are
deleted after 3 years.
upvoted 1 times
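A minimal boto3 sketch of option B, expiring both current and noncurrent versions after 3 years (1,095 days); the bucket name is hypothetical:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-cloudtrail-logs",                       # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-3-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                       # apply to all objects
            # Adds a delete marker to current versions after 3 years.
            "Expiration": {"Days": 1095},
            # Permanently removes versions 3 years after they become noncurrent.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1095},
            # Also clean up incomplete multipart uploads.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }],
    },
)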
Question #228 Topic 1
A company has an API that receives real-time data from a fleet of monitoring devices. The API stores this data in an Amazon RDS DB instance for
later analysis. The amount of data that the monitoring devices send to the API fluctuates. During periods of heavy traffic, the API often returns
timeout errors.
After an inspection of the logs, the company determines that the database is not capable of processing the volume of write traffic that comes
from the API. A solutions architect must minimize the number of connections to the database and must ensure that data is not lost during periods
of heavy traffic.
Which solution meets these requirements?
A. Increase the size of the DB instance to an instance type that has more available memory.
B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that
Amazon SQS invokes to write data from the queue to the database.
D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that
Amazon SNS invokes to write data from the topic to the database.
Correct Answer: C
A. Increasing the size of the DB instance may provide more memory, but it does not address the issue of handling high write traffic
efficiently and minimizing connections to the database.
B. Modifying the DB instance to be a Multi-AZ instance and writing to all active instances can improve availability but does not address the
issue of efficiently handling high write traffic and minimizing connections to the database.
D. Using SNS and a Lambda function can provide decoupling and scalability, but it is not suitable for handling heavy write traffic efficiently and
minimizing connections to the database.
upvoted 2 times
To minimize the number of connections to the database and ensure that data is not lost during periods of heavy traffic, the company
should modify the API to write incoming data to an Amazon SQS queue. The queue acts as a buffer between the API and the
database, reducing the number of connections to the database. An AWS Lambda function invoked by SQS then provides a more
flexible way of handling the data: the function reads messages from the queue and inserts them into the database in a more
controlled way.
upvoted 2 times
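A rough sketch of the Lambda side of option C: a handler triggered by the SQS queue that writes each message to the database. It assumes a MySQL-compatible engine, the pymysql library packaged with the function, and hypothetical environment variables and table names:

import json
import os

import pymysql  # assumed to be bundled with the function or provided as a layer

# The connection is created outside the handler so it is reused across invocations,
# which keeps the number of database connections low.
conn = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    """Triggered by SQS; writes each queued reading into the database."""
    with conn.cursor() as cur:
        for record in event["Records"]:
            body = json.loads(record["body"])
            cur.execute(
                "INSERT INTO readings (device_id, payload) VALUES (%s, %s)",
                (body["device_id"], json.dumps(body)),
            )
    conn.commit()
    # Returning without raising lets Lambda delete the messages from the queue.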
A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as
demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or
from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.
C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
Correct Answer: A
B. Incorrect because it suggests migrating to a different database engine, which may introduce compatibility issues and require significant
code modifications.
C. Incorrect because consolidating into a larger MySQL database on larger EC2 instances does not provide the desired scalability and
automation.
D. Incorrect because using EC2 Auto Scaling groups for the database tier still requires manual management of replication and scaling.
upvoted 2 times
https://aws.amazon.com/rds/aurora/serverless/
upvoted 3 times
A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A
solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.
A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.
B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.
C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.
D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.
Correct Answer: C
Option A is incorrect because placing both NAT gateways in the same Availability Zone does not provide fault tolerance.
Option B is incorrect because using Auto Scaling groups with Network Load Balancers is not the recommended approach for NAT
instances.
Option D is incorrect because Spot Instances are not suitable for critical infrastructure components like NAT instances.
upvoted 1 times
I honestly think that C is not enough: each NAT gateway offers only limited scalability, and the bandwidth limit is clearly explained in the
documentation. Option C explicitly mentions "two NAT gateways", so the number of NAT gateways is fixed and will reach its limit soon.
upvoted 2 times
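A minimal boto3 sketch of option C, creating one NAT gateway per Availability Zone; the public subnet IDs are hypothetical, and each AZ's private route table would still need a 0.0.0.0/0 route pointing at the NAT gateway in that AZ:

import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per AZ, each in a public subnet and backed by its own Elastic IP.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:   # hypothetical public subnets
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=subnet_id,
        AllocationId=eip["AllocationId"],
    )
    print("created", nat["NatGateway"]["NatGatewayId"], "in", subnet_id)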
An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B.
Both VPCs are in the same AWS account.
A. Create a DB instance security group that allows all traffic from the public IP address of the application server in VPC A.
C. Make the DB instance publicly accessible. Assign a public IP address to the DB instance.
D. Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2 instance.
Correct Answer: B
"Jaybee" - Please dont ever say that traffic over the public internet is secure :D
upvoted 3 times
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 1 times
animefan1 3 months ago
Selected Answer: B
With peering, the EC2 instance can communicate with RDS. The RDS security group can allow inbound traffic from the EC2 instance's IP rather than the whole VPC CIDR for tighter security.
upvoted 1 times
Option A is not the best solution as it requires allowing all traffic from the public IP address of the application server, which can be less
secure.
Option C involves making the DB instance publicly accessible, which introduces security risks by exposing the database directly to the
internet.
Option D adds unnecessary complexity by launching an additional EC2 instance in VPC B and proxying all requests through it, which is not
the most efficient and secure approach in this scenario.
upvoted 3 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide
upvoted 1 times
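A rough boto3 sketch of the VPC peering approach (the presumed option B); the VPC IDs, route table ID, and CIDR are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Request the peering connection from VPC A to VPC B (same account in this scenario).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",                 # hypothetical VPC A
    PeerVpcId="vpc-bbbb2222",             # hypothetical VPC B
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (the owner of VPC B would do this in a cross-account setup).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route traffic for VPC B's CIDR through the peering connection;
# a mirror route is needed in VPC B's route table.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",          # hypothetical route table in VPC A
    DestinationCidrBlock="10.1.0.0/16",   # hypothetical VPC B CIDR
    VpcPeeringConnectionId=pcx_id,
)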
A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The
company’s operations team needs to be notified when RDP or SSH access to an environment has been established.
A. Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected.
B. Configure the EC2 instances with an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore policy attached.
C. Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters. Create an Amazon CloudWatch metric alarm with a
notification action for when the alarm is in the ALARM state.
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.
Correct Answer: C
Adding this to support that VPC flow logs can be used to capture accepted or rejected SSH and RDP traffic.
upvoted 2 times
Option A is incorrect because CloudWatch Application Insights is not designed for detecting RDP or SSH access.
Option B is also incorrect because configuring an IAM instance profile with the AmazonSSMManagedInstanceCore policy does not directly
address the requirement of notifying the operations team when RDP or SSH access occurs.
Option D is wrong because configuring an EventBridge rule to listen for EC2 Instance State-change Notification events and using an SNS
topic as a target will notify the operations team about changes in the instance state, such as starting or stopping instances. However, it
does not specifically detect or notify when RDP or SSH access is established, which is the requirement stated in the question.
upvoted 5 times
Flow logs can help you with a number of tasks; for an overview, see:
https://www.youtube.com/watch?v=KAe3Eju59OU
upvoted 1 times
C: The logs will need to be analyzed and metric filters applied to detect access, and then the alarm will trigger based on that analysis. This
method could have a delay in providing notifications. Thus, not the best solution if real-time notification is required.
Why not D: RDP or SSH access does not cause an EC2 instance to change state. The state-change events that Amazon EventBridge can
listen for include stopping, starting, and terminating instances, which do not apply to RDP or SSH access. An RDP or SSH connection to an
EC2 instance does, however, generate a record such as a log entry, which can be used to notify the operations team. Since it is a log, you
would need a service that monitors logs, such as CloudTrail, VPC Flow Logs, or AWS Systems Manager Session Manager.
upvoted 2 times
EC2 instances send events to EventBridge when a state change occurs. If a new RDP or SSH connection is established, you can use
EventBridge to configure a rule that listens for these events and triggers an action, like sending an email or SMS, when the connection is
detected. The operations team can be notified by subscribing to the Amazon Simple Notification Service (Amazon SNS) topic that is
configured as the target of the EventBridge rule.
upvoted 3 times
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 2 times
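A minimal boto3 sketch of option C for SSH traffic: a metric filter on the flow log group plus an alarm that notifies an SNS topic. The log group name, namespace, and SNS topic ARN are hypothetical, and a second filter with destport="3389" would cover RDP:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Match accepted inbound SSH traffic using the default VPC flow log field order.
logs.put_metric_filter(
    logGroupName="/vpc/flow-logs/demo-env",          # hypothetical log group
    filterName="accepted-ssh",
    filterPattern='[version, account, eni, source, destination, srcport, '
                  'destport="22", protocol="6", packets, bytes, start, end, '
                  'action="ACCEPT", status]',
    metricTransformations=[{
        "metricName": "AcceptedSSHConnections",
        "metricNamespace": "DemoEnvAccess",
        "metricValue": "1",
    }],
)

# Alarm whenever at least one accepted SSH connection is seen in a minute.
cloudwatch.put_metric_alarm(
    AlarmName="ssh-access-detected",
    Namespace="DemoEnvAccess",
    MetricName="AcceptedSSHConnections",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-team"],  # hypothetical topic
)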
A solutions architect has created a new AWS account and must secure AWS account root user access.
E. Apply the required permissions to the root user with an inline policy document.
Correct Answer: AB
B. Enabling MFA adds an extra layer of security by requiring an additional authentication factor, such as a code from a mobile app or a
hardware token, in addition to the password.
C. Root user access keys should be avoided whenever possible, and it is best to use IAM users with restricted permissions instead.
D. The root user already has unrestricted access to all resources and services in the account, so granting additional administrative
permissions could increase the risk of unauthorized actions.
E. Instead, it is recommended to create IAM users with appropriate permissions and use those users for day-to-day operations, while
keeping the root user secured and only using it for necessary administrative tasks.
upvoted 2 times
Option A: A strong password is always required for any AWS account you create, and it should not be shared or stored anywhere, as there is
always a risk.
Option B: This follows AWS best practice: enabling MFA on the root user provides another layer of security on the account, and unauthorized
access will be denied if a user does not have both the correct password and the MFA device.
upvoted 1 times
A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances
that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use
an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.
A. Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to
encrypt the EBS volumes and Aurora database storage at rest.
B. Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root
account, select the option to turn on encryption for all data at rest and in transit for the account.
C. Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate
Manager (ACM) certificate to the ALB to encrypt data in transit.
D. Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS) Attach the
KMS keys to the ALB to encrypt data in transit.
Correct Answer: C
To encrypt data at rest, AWS Key Management Service (AWS KMS) can be used to encrypt EBS volumes and Aurora database storage.
To encrypt data in transit, an AWS Certificate Manager (ACM) certificate can be attached to the Application Load Balancer (ALB) to enable
HTTPS and TLS encryption.
upvoted 1 times
A is incorrect because it suggests using ACM to encrypt the EBS volumes, which is not the correct service for encrypting EBS volumes.
B is incorrect because relying on the AWS root account and selecting an option in the AWS Management Console to enable encryption for
all data at rest and in transit is not a valid approach.
D is incorrect because BitLocker is not a suitable solution for encrypting data in AWS services. It is primarily used for encrypting data on
Windows-based operating systems. Additionally, importing TLS certificate keys to AWS KMS and attaching them to the ALB is not the
recommended approach for encrypting data in transit.
upvoted 4 times
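A minimal boto3 sketch of option C's two halves: an HTTPS listener on the ALB with an ACM certificate (in transit) and a KMS-encrypted EBS volume (at rest). All ARNs, the AZ, and the key alias are hypothetical:

import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/crm-alb/0123456789abcdef"
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/crm-tg/0123456789abcdef"
CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555"

# Encrypt data in transit: terminate TLS on the ALB with an ACM certificate.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)

# Encrypt data at rest: create an EBS volume encrypted with a KMS key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/crm-data",            # hypothetical KMS key alias
)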
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the
same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns
that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.
A. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC)
replication task and a table mapping to select all tables.
B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture
(CDC) replication task and a table mapping to select all tables.
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
D. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
Correct Answer: C
Options A & B are incorrect because using AWS DataSync alone is not sufficient for database migration and ongoing data synchronization.
Option D is incorrect because using a compute optimized replication instance is not the most suitable choice for handling the high
number of reads and writes.
upvoted 1 times
Option A is a better choice for migrations where the data is more complex and may require more memory.
Option C is a better choice for migrations that require more processing power.
It also depends on the size of the data, the complexity of the data, and the resources available in the target Aurora cluster.
upvoted 1 times
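A rough boto3 sketch of the full load plus CDC task in option C; the endpoint and replication instance ARNs are hypothetical, and the table mapping selects all tables as the option describes:

import json
import boto3

dms = boto3.client("dms")

# Selection rule: include every table in every schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",    # hypothetical name
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRCEXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGTEXAMPLE",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:MEMOPTEXAMPLE",
    MigrationType="full-load-and-cdc",          # full load, then ongoing replication
    TableMappings=json.dumps(table_mappings),
)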
A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2
instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly
available solution that requires the least amount of change to the application.
A. Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the application layer. Move the database to an Amazon
DynamoDB table. Use Amazon S3 to store and serve users’ images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS DB instance with multiple read replicas to serve users’ images.
C. Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the application layer. Move the
database to a memory optimized instance type to store and serve users’ images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.
Correct Answer: A
A is incorrect because using S3 for the front-end layer and Lambda for the application layer would require significant changes to the
application architecture. Moving the DB to DynamoDB would require rewriting the DB-related code.
B is incorrect because, although it uses load-balanced Multi-AZ AWS Elastic Beanstalk environments, it relies on an RDS DB instance with
read replicas to serve users' images. Amazon S3 handles the image-serving workload more efficiently than RDS read replicas.
C is incorrect because using S3 for the front-end layer and an ASG of EC2 for the application layer would require modifying the application
architecture. Storing and serving images from a memory-optimized EC2 type may not be the most efficient and scalable approach
compared to using S3.
upvoted 2 times
markw92 3 months, 2 weeks ago
"least amount of change to the application." - A has lots of changes, completely revamping the application and lots of new pieces. D is
closest with only addition of s3 to store images which is right move. You do not want images to store in any database anyway.
upvoted 2 times
An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both VPCs are in separate
AWS accounts. The network administrator needs to design a solution to configure secure access to EC2 instance in VPC-B from VPC-A. The
connectivity should not have a single point of failure or bandwidth concerns.
B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.
C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.
D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A.
Correct Answer: A
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 6 times
B is incorrect because VPC gateway endpoints are used for accessing S3 or DynamoDB from a VPC without going over the internet. They
are not designed for establishing connectivity between EC2 instances in different VPCs.
C is incorrect because it would require configuring a VPN connection between the VPCs. This would introduce additional complexity and
potential single points of failure.
D is incorrect because creating a private VIF and adding routes would be applicable for establishing a direct connection between on-
premises infrastructure and VPC-B using Direct Connect, but it is not suitable for the scenario of communication between EC2 instances in
separate VPCs within different AWS accounts.
upvoted 2 times
the following paragraph is taken from the AWS docs page linked below that backs this up
"AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does
not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck."
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 2 times
A company wants to experiment with individual AWS accounts for its engineer team. The company wants to be notified as soon as the Amazon
EC2 instance usage for a given month exceeds a specific threshold for each account.
A. Use Cost Explorer to create a daily report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an Amazon
Simple Email Service (Amazon SES) notification when a threshold is exceeded.
B. Use Cost Explorer to create a monthly report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an
Amazon Simple Email Service (Amazon SES) notification when a threshold is exceeded.
C. Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope to EC2 instances. Set an alert
threshold for the budget. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when a threshold is
exceeded.
D. Use AWS Cost and Usage Reports to create a report with hourly granularity. Integrate the report data with Amazon Athena. Use Amazon
EventBridge to schedule an Athena query. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when
a threshold is exceeded.
Correct Answer: B
Option D suggests using AWS Cost and Usage Reports integrated with Amazon Athena and Amazon EventBridge, which can be a more
complex and potentially costlier solution compared to AWS Budgets for this specific use case. It's also more suitable for fine-grained,
custom analytics rather than straightforward threshold-based alerts.
upvoted 1 times
A and B are not the most cost-effective solutions as they involve using Cost Explorer to create reports, which may not provide real-time
notifications when the threshold is exceeded. Additionally, A. suggests using a daily report, while B. suggests using a monthly report,
which may not provide the desired level of granularity for immediate notifications.
D involves using Cost and Usage Reports with Athena and EventBridge. While this solution provides more flexibility and data analysis
capabilities, it is more complex and may incur additional costs for using Athena and generating hourly reports.
upvoted 1 times
Samuel03 7 months, 1 week ago
Selected Answer: D
I go with D. It says "as soon as", "daily" reports seems to be a bit longer time frame to wait in my opinion.
upvoted 1 times
Why not B: B would be the most cost-effective if the requirements didn't ask for real-time notification. You would not incur additional costs
for the daily or monthly reports and the notifications. But doesn't provide real-time alerts.
upvoted 4 times
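A minimal boto3 sketch of option C, assuming a hypothetical account ID, budget amount, and SNS topic ARN:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                                  # hypothetical member account
    Budget={
        "BudgetName": "ec2-monthly-budget",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},       # hypothetical threshold
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Scope the budget to EC2 compute costs.
        "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,                                # percent of the budget
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts",  # hypothetical
        }],
    }],
)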
A solutions architect needs to design a new microservice for a company’s application. Clients must be able to call an HTTPS endpoint to reach the
microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write
the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.
Which solution will deploy the function in the MOST operationally efficient way?
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
B. Create a Lambda function URL for the function. Specify AWS_IAM as the authentication type.
C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the
Lambda@Edge function.
D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions. Specify AWS_IAM as the authentication type.
Correct Answer: A
From the Lambda console, under "Auth type": "Choose the auth type for your function URL. AWS_IAM - Only authenticated IAM users and
roles can make requests to your function URL."
upvoted 1 times
B suggests creating a Lambda URL and specifying AWS IAM as the authentication type. While this can provide IAM authentication, it lacks
the benefits of API Gateway, such as request validation, rate limiting, and easy management of API configurations.
C and D involve using CloudFront, Lambda@Edge, and CloudFront Functions. While these services offer flexibility and the ability to run
logic at the edge locations, they introduce additional complexity and may not be necessary for the given requirement.
upvoted 1 times
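A minimal boto3 sketch of option B; the function name is hypothetical:

import boto3

lambda_client = boto3.client("lambda")

# Create an HTTPS endpoint for the function; with AWS_IAM auth, only SigV4-signed
# requests from principals that are granted permission can invoke it.
response = lambda_client.create_function_url_config(
    FunctionName="my-go-microservice",        # hypothetical function name
    AuthType="AWS_IAM",
)
print(response["FunctionUrl"])

# Callers still need lambda:InvokeFunctionUrl permission, granted either in their
# IAM identity policies or via a resource-based policy on the function.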
A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office
users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each
webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.
Which solution provides the LOWEST data transfer egress cost for the company?
A. Host the visualization tool on premises and query the data warehouse directly over the internet.
B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.
C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same
AWS Region.
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in
the same Region.
Correct Answer: C
Using Direct Connect, both are charged at the Direct Connect data transfer tier, and that tier is not cheap,
so I go for B.
upvoted 1 times
A. Hosting the visualization tool on premises and querying the data warehouse over the internet incurs data transfer costs for every query
result, as well as potential latency and bandwidth limitations.
B. Hosting the visualization tool in the same AWS Region as the data warehouse but accessing it over the internet still incurs data transfer
costs for each query result.
C. Hosting the visualization tool on premises and querying the data warehouse over a Direct Connect connection within the same AWS
Region incurs data transfer costs for every query result and adds complexity by requiring on-premises infrastructure.
upvoted 1 times
dexpos 8 months ago
Selected Answer: D
D lets you reduce the data transfer costs to a minimum.
upvoted 1 times
Why it is not C: because the visualization tool is hosted on premises rather than in the same Region as the data warehouse, every 50 MB query
result must be transferred out of AWS to reach it, which incurs egress data transfer costs.
upvoted 4 times
An online learning company is migrating to the AWS Cloud. The company maintains its student records in a PostgreSQL database. The company
needs a solution in which its data is available and online across multiple AWS Regions at all times.
Which solution will meet these requirements with the LEAST amount of operational overhead?
B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ feature turned on.
C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region.
D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.
Correct Answer: C
Though C may sound good, it in fact requires manual management and monitoring of the replication process due to the fact that Amazon
RDS read replicas are asynchronous, meaning there is a delay between the primary and read replica. Therefore, there will be a need to
ensure that the read replica is constantly up-to-date and someone still has to fix any read replica errors during the replication process
which may cause data inconsistency. Lastly, you still have to configure additional steps to make it fail over to the read replica.
upvoted 13 times
Option B, is a valid solution for achieving high availability within a single AWS Region. However, it does not meet the requirement of having
the data available and online across multiple AWS Regions at all times, which is specified in the question. The Multi-AZ feature in RDS
provides automatic failover within the same Region, but it does not replicate the data to multiple Regions.
upvoted 3 times
The data will be available in multiple regions for both B and C but B is a better solution!
upvoted 1 times
https://aws.amazon.com/rds/features/multi-az/
upvoted 1 times
A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2
instances be returned in response to DNS queries.
Correct Answer: C
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
upvoted 8 times
Option A (Simple routing policy) would only return a single IP address in response to DNS queries and does not support returning multiple
addresses.
Option B (Latency routing policy) is used to route traffic based on the lowest latency to the resource and does not fulfill the requirement of
returning all healthy IP addresses.
Option D (Geolocation routing policy) is used to route traffic based on the geographic location of the user and does not fulfill the
requirement of returning all healthy IP addresses.
Therefore, the Multivalue routing policy is the most suitable option for returning the IP addresses of all healthy EC2 instances in response
to DNS queries.
upvoted 2 times
A medical research lab produces data that is related to a new study. The lab wants to make the data available with minimum latency to clinics
across the country for their on-premises, file-based applications. The data files are stored in an Amazon S3 bucket that has read-only permissions
for each clinic.
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
B. Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.
C. Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.
D. Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-premises servers.
Correct Answer: C
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and
secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. By deploying a file gateway
as a virtual machine on each clinic's premises, the medical research lab can provide low-latency access to the data stored in the S3 bucket
while maintaining read-only permissions for each clinic. This solution allows the clinics to access the data files directly from their on-
premises file-based applications without the need for data transfer or migration.
upvoted 12 times
B. It involves transferring the data files from the Amazon S3 bucket to each clinic's on-premises applications using AWS DataSync. While
this enables data migration, it may not provide real-time access and may introduce additional latency.
C. It is suitable for block-level access to data rather than file-level access. It may not be the most efficient solution for file-based
applications.
D. It involves using Amazon EFS, which is a scalable file storage service, to provide file-level access to the data. However, it may introduce
additional complexity and latency compared to using a file gateway solution.
upvoted 2 times
Volume Gateway provides block storage volumes over iSCSI, backed by Amazon S3, and provides point-in-time backups as Amazon EBS
snapshots. Volume Gateway integrates with AWS Backup, an automated and centralized backup service, to protect Storage Gateway
volumes.
So it's A
upvoted 3 times
A company is using a content management system that runs on a single Amazon EC2 instance. The EC2 instance contains both the web server
and the database software. The company must make its website platform highly available and must enable the website to scale to meet user
demand.
A. Move the database to Amazon RDS, and enable automatic backups. Manually launch another EC2 instance in the same Availability Zone.
Configure an Application Load Balancer in the Availability Zone, and set the two instances as targets.
B. Migrate the database to an Amazon Aurora instance with a read replica in the same Availability Zone as the existing EC2 instance. Manually
launch another EC2 instance in the same Availability Zone. Configure an Application Load Balancer, and set the two EC2 instances as targets.
C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.
D. Move the database to a separate EC2 instance, and schedule backups to Amazon S3. Create an Amazon Machine Image (AMI) from the
original EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI
across two Availability Zones.
Correct Answer: C
This approach will provide both high availability and scalability for the website platform. By moving the database to Amazon Aurora with a
read replica in another availability zone, it will provide a failover option for the database. The use of an Application Load Balancer and an
Auto Scaling group across two availability zones allows for automatic scaling of the website to meet increased user demand. Additionally,
creating an AMI from the original EC2 instance allows for easy replication of the instance in case of failure.
upvoted 9 times
Option B improves database performance and provides a level of fault tolerance, but it does not address the scalability aspect of the website
platform.
Option C provides both high availability and fault tolerance. Creating an AMI allows for easy replication of the EC2 instance across AZs.
Configuring an ALB in two AZs and attaching an ASG ensures scalability and load distribution across multiple instances.
Option D does not provide the high availability and scalability required by the company. Scheduled backups to S3 address data protection
but do not contribute to website availability or scalability.
upvoted 1 times
A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon
EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development
environment and a production environment. The production environment will have periods of high traffic.
A. Reconfigure the target group in the development environment to have only one EC2 instance as a target.
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group.
Correct Answer: A
This option will configure the development environment in the most cost-effective way as it reduces the number of instances running in
the development environment and therefore reduces the cost of running the application. The development environment typically requires
less resources than the production environment, and it is unlikely that the development environment will have periods of high traffic that
would require a large number of instances. By reducing the maximum number of instances in the development environment's Auto
Scaling group, the company can save on costs while still maintaining a functional development environment.
upvoted 10 times
So, simply reconfigure the target group in the development environment to have only one EC2 instance as a target as said in option A to
reduce cost.
upvoted 1 times
Option B does not directly address the cost-effectiveness of the development environment. It focuses on load balancing strategies rather
than cost optimization.
Option C may not be the most cost-effective solution unless the current instance sizes are over-provisioned or unnecessary for the
application's requirements.
Option D can help reduce costs, but it may impact the environment's ability to handle traffic and scale efficiently, especially during periods
of increased load.
Overall, option A provides a cost-effective approach by minimizing the resources allocated to the development environment while still
maintaining a functional setup.
upvoted 1 times
A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions
architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the
internet traffic is not reaching the EC2 instances.
How should the solutions architect reconfigure the architecture to resolve this issue?
A. Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow internet traffic.
B. Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.
C. Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route. Add a rule to the EC2
instances’ security groups to allow outbound traffic to 0.0.0.0/0.
D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets
with a route to the private subnets.
Correct Answer: C
I'm 110% sure the question or answers or both are wrong. Prove me wrong! :)
upvoted 11 times
Option B (move EC2 instances to public subnets and modify security group rules) involves placing instances in public subnets, which is
generally not recommended for security reasons. Additionally, it suggests modifying security group rules for outbound traffic, which
might not be the best practice to resolve the issue.
Option C (update route tables and security group rules) addresses the route table update, but it also suggests moving instances to public
subnets, which is not ideal from a security perspective.
upvoted 1 times
Option D is correct because it's the only option left, and updating the route tables for the public subnets with a route to the private
subnets ensures internet traffic can reach the EC2 instances in the private subnets.
upvoted 1 times
B. suggests exposing the EC2 instances to the public internet, which may pose security risks and does not address the issue of inbound
internet traffic reaching the instances.
C. suggests configuring the EC2 instances to have outbound internet access, but it does not solve the problem of inbound internet traffic
reaching the instances.
D. is the correct solution. By creating public subnets and associating them with the ALB, inbound internet traffic can reach the ALB. The
route tables for the public subnets are updated to include a route to the private subnets, allowing traffic to reach the EC2 instances in the
private subnets. This setup enables secure access to the application while allowing internet traffic to reach the EC2 instances through the
ALB.
upvoted 3 times
Option D is not necessary, as the internet-facing ALB is already specified and the EC2 instances are already part of the target group.
Option A is not a solution to the problem, as it does not address the underlying issue of the EC2 instances not being able to receive
internet traffic.
upvoted 1 times
Question #247 Topic 1
A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads
against the DB instance and recommends adding a read replica.
Which combination of actions should a solutions architect take before implementing this change? (Choose two.)
D. Create a global table and specify the AWS Regions where the table will be available.
E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Correct Answer: AC
When creating a read replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by
setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance
for another read replica.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 28 times
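For anyone who wants to see what this prerequisite looks like in practice, here is a minimal boto3 sketch; the instance identifiers and retention value are placeholders, not values from the question:

import boto3

rds = boto3.client("rds")

# Enable automated backups on the source instance first (retention must be > 0
# before a read replica can be created).
rds.modify_db_instance(
    DBInstanceIdentifier="source-mysql-db",          # placeholder name
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)

# Once the source reports backups as enabled, create the read replica.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="source-mysql-db-replica",  # placeholder name
    SourceDBInstanceIdentifier="source-mysql-db",
)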
B. Choosing a failover priority is related to Multi-AZ configurations and automatic failover, but it is not specifically required when adding a
read replica.
D. Creating a global table and specifying AWS Regions is related to Aurora Global Databases, which is not the same as creating a read
replica for a standard RDS instance.
upvoted 1 times
E. Automatic backups must be enabled on the source DB instance for read replicas to be created. This is done by setting the backup
retention period to a value other than 0.
upvoted 1 times
Option C is just a recommendation from the AWS official documentation; it is there to prevent a data mismatch between the primary and the secondaries
when long-running transactions have not yet completed.
upvoted 1 times
A1975 2 months ago
Selected Answer: CE
Before a MySQL DB instance can serve as a replication source, make sure to enable automatic backups on the source DB instance. To do
this, set the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance
for another read replica. Automatic backups are supported for read replicas running any version of MySQL. You can configure replication
based on binary log coordinates for a MySQL DB instance
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
upvoted 1 times
B. determines the order in which DB instances are promoted to the primary role during a failover scenario. It is not directly related to
adding a read replica to address slow reads.
C. ensures that any ongoing transactions on the source DB instance are allowed to finish before implementing the change. It helps
maintain data integrity and consistency during the transition to the read replica.
D. is a feature specific to DynamoDB. It allows for multi-region replication and high availability in DynamoDB, but it is not applicable in this
scenario.
E. ensures that regular backups are taken for the source DB instance. This is important for data protection and recovery purposes, as it
allows for point-in-time restoration in case of any issues during or after the addition of the read replica.
upvoted 1 times
After you enable automatic backups by modifying your read replica instance to have a backup retention period greater than 0 days, you’ll
find that the log_bin and binlog_format will align itself with the configuration specified in your parameter group dynamically and will not
require the RDS instance to be restarted. You will also be able to create a read replica from your read replica instance with no further
modification requirements.
https://blog.pythian.com/enabling-binary-logging-rds-read-replica/
upvoted 2 times
alexleely 8 months, 1 week ago
Selected Answer: AC
A,C
A: Before we can create the read replica, it is important to enable binary logging on the RDS primary node, which ensures the read replica has
up-to-date data.
C: To avoid inconsistencies between the primary and replica instances, allow long-running transactions to complete on the source DB
instance.
Though E is a good practice, it is not part of the steps you need to take before enabling a read replica.
upvoted 2 times
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been
uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have
a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue.
Correct Answer: D
By routing incoming requests to Amazon SQS, the company can decouple the job requests from the processing instances. This allows
them to scale the number of instances based on the size of the queue, providing more resources when needed. Additionally, using an
Auto Scaling group based on the queue size will automatically scale the number of instances up or down depending on the workload.
Updating the software to read from the queue will allow it to process the job requests in a more efficient manner, improving the
performance of the system.
upvoted 9 times
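A minimal boto3 sketch of scaling the worker group on queue depth; the group name, queue name, and target value are placeholders, and AWS documentation recommends a backlog-per-instance metric, so this simplified version only tracks visible messages:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking policy on the number of visible messages in the job queue.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-workers",        # placeholder
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "analytics-jobs"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # aim for roughly 100 queued jobs across the group
    },
)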
B. Creating an S3 VPC endpoint for S3 and updating the software to reference the endpoint improves network performance but does not
address the high CPU utilization or provide scalability based on user load.
C. Stopping the EC2 instances and modifying the instance type to one with a more powerful CPU and more memory may improve
performance, but it does not address scalability based on user load.
D. Routing incoming requests to SQS, configuring an EC2 ASG based on queue size, and updating the software to read from the queue
improves system performance and provides scalability based on user load.
Therefore, option D is the correct choice as it addresses the high CPU utilization, improves system performance, and enables scalability
based on user load.
upvoted 1 times
WherecanIstart 6 months, 3 weeks ago
Selected Answer: D
An Auto Scaling group and SQS together solve the problem.
SQS - Decouples the process
ASG - Autoscales the EC2 instances based on usage
upvoted 1 times
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to
use SMB clients to access data. The solution must be fully managed.
A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server
to the file share.
B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.
Correct Answer: D
B. involves using Storage Gateway with tape gateway configuration, which is primarily used for archiving data to S3. It does not provide
native support for SMB clients to access data.
C. involves manually setting up and configuring a Windows file share on an EC2 Windows instance. While it allows SMB clients to access
data, it is not a fully managed solution as it requires manual setup and maintenance.
D. involves creating an FSx for Windows File Server file system, which is a fully managed Windows file system that supports SMB clients. It
provides an easy-to-use shared storage solution with native SMB support.
Based on the requirements of using SMB clients and needing a fully managed solution, option D is the most suitable choice.
upvoted 2 times
A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then
accessed intermittently.
What should a solutions architect do to meet these requirements when configuring the logs?
A. Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days
B. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
C. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after
90 days.
Correct Answer: A
B. suggests using Kinesis as the target for VPC Flow Logs. While it can retain the logs for 90 days, it does not address the requirement for
intermittent access to the logs.
C. suggests using CloudTrail as the target for VPC Flow Logs. However, CloudTrail is designed for auditing and monitoring API activity, not
for capturing network traffic logs. It does not meet the requirement of capturing VPC Flow Logs.
D. suggests using S3 as the target for VPC Flow Logs and leveraging S3 Lifecycle policies to transition the logs to a cost-effective storage
class after 90 days. It meets the requirement of retaining the logs for 90 days and provides the flexibility for intermittent access while
optimizing storage costs.
upvoted 2 times
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3
upvoted 3 times
By using Amazon S3 as the target for the VPC Flow Logs, the logs can be easily stored and accessed by the security team. Enabling an S3
Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days will automatically move the logs to a
storage class that is optimized for infrequent access, reducing the storage costs for the company. The security team will still be able to
access the logs as needed, even after they have been transitioned to S3 Standard-IA, but the storage cost will be optimized.
upvoted 4 times
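If it helps to visualize option D, here is a minimal boto3 sketch of the lifecycle rule; the bucket name and rule ID are placeholders:

import boto3

s3 = boto3.client("s3")

# Transition flow-log objects to S3 Standard-IA after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-vpc-flow-logs",                       # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "flow-logs-to-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},            # apply to all objects
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)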
https://www.examtopics.com/discussions/amazon/view/59983-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #251 Topic 1
An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance
needs the ability to download monthly security updates from an outside vendor.
A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default
route.
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.
C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use
the NAT instance as the default route.
D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is
located. Configure the private subnet route table to use the internet gateway as the default route.
Correct Answer: B
This approach will allow the EC2 instance to access the internet and download the monthly security updates while still being located in a
private subnet. By creating a NAT gateway and placing it in a public subnet, it will allow the instances in the private subnet to access the
internet through the NAT gateway. And then, configure the private subnet route table to use the NAT gateway as the default route. This
will ensure that all outbound traffic is directed through the NAT gateway, allowing the EC2 instance to access the internet while still
maintaining the security of the private subnet.
upvoted 7 times
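A minimal boto3 sketch of option B; the subnet and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and needs an Elastic IP;
# the private route table then points 0.0.0.0/0 at it.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicsubnet",                 # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)
ec2.create_route(
    RouteTableId="rtb-0privatertb",                  # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)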
B. allows the EC2 instance in the private subnet to reach the internet through the NAT gateway, which performs the address translation on its behalf. It provides controlled
outbound internet access while maintaining the security of the private subnet.
C. is similar to using a NAT gateway, but it involves using a NAT instance. NAT instances require more manual configuration and
management compared to NAT gateways, making them a less preferred option.
D. combines the use of an internet gateway and a NAT instance, which is not necessary. It introduces unnecessary complexity and adds a
NAT instance that requires additional management.
Overall, option B is the most appropriate solution as it utilizes a NAT gateway placed in a public subnet to enable controlled outbound
internet access for the EC2 instance in the private subnet.
NAT Gateways are preferred over NAT Instances by AWS and in general.
upvoted 1 times
Bmarodi 4 months ago
Selected Answer: B
Option B meets the requirements, hence B is the right choice.
upvoted 1 times
I have yet to find a situation where a NAT instance would be more applicable than a NAT gateway, which is fully managed and is overall
an easier solution to implement - both in AWS exam questions and in the real world.
upvoted 2 times
A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files
will grow over time.
The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in
redundancy.
D. AWS Backup
Correct Answer: A
Option B, EBS, is a block-level storage service that is typically attached to individual EC2 instances and does not provide concurrent access
from multiple instances, making it unsuitable for this scenario.
Option C, S3 Glacier Deep Archive, is a long-term archival storage service and may not be suitable for active file access and simultaneous
access from multiple application servers.
Option D, AWS Backup, is a centralized backup management service and does not provide the required simultaneous file access and
redundancy features.
A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group.
A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?
B. Deleting directories
Correct Answer: C
The policy only grants get and list permissions on IAM users, so not A.
The deny on ds:Delete actions blocks delete-directory, so not B; see https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ds/index.html
The policy only grants get and describe permissions on logs, so not D.
upvoted 8 times
A company is reviewing a recent migration of a three-tier application to a VPC. The security team discovers that the principle of least privilege is
not being applied to Amazon EC2 security group ingress and egress rules between the application tiers.
A. Create security group rules using the instance ID as the source or destination.
B. Create security group rules using the security group ID as the source or destination.
C. Create security group rules using the VPC CIDR blocks as the source or destination.
D. Create security group rules using the subnet CIDR blocks as the source or destination.
Correct Answer: B
Option A is not the best choice because security group rules cannot use an instance ID as the source or destination, and managing rules per
instance would not scale as instances in each tier are replaced.
Option C is also not the best choice because using VPC CIDR blocks would allow traffic from any IP address within the VPC, which may not
be limited to the specific application tier.
Option D is not the best choice because using subnet CIDR blocks would allow traffic from any IP address within the subnet, which may
not be limited to the specific application tier.
upvoted 5 times
B. By using security group IDs in the rules, you can precisely control the traffic between application tiers, allowing only the necessary
communication and adhering to the principle of least privilege.
C. would apply broad rules based on the entire VPC CIDR blocks, which may not provide the necessary level of granularity required for
secure communication between specific application tiers.
D. would limit the traffic based on subnet CIDR blocks, which may not be sufficient for ensuring proper security between application tiers.
In summary, using security group IDs (Option B) is the recommended approach as it allows for precise control of traffic between
application tiers, aligning with the principle of least privilege.
upvoted 3 times
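A minimal boto3 sketch of a security-group-to-security-group rule; the group IDs and port are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allow the app tier to reach the database tier on port 3306 only,
# by referencing the app tier's security group ID as the source.
ec2.authorize_security_group_ingress(
    GroupId="sg-0databasetier",                      # placeholder DB tier SG
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0apptier"}],   # placeholder app tier SG
        }
    ],
)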
Bmarodi 4 months ago
Selected Answer: B
I vote for option B.
upvoted 1 times
A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are
experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same
desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?
A. Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message
from Kinesis Data Firehose and process the order.
B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the
database, call the payment service, and pass in the order information.
C. Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set
the payment service to poll Amazon SNS, retrieve the message, and process the order.
D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO
queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
Correct Answer: D
B. is not an appropriate solution for preventing the creation of multiple orders. CloudTrail is primarily used for logging and auditing API
activity, and invoking a Lambda based on the logged request does not ensure the correct order processing.
C. is not a suitable solution. SNS is a push-based publish-subscribe service that cannot be polled, so the payment service could not retrieve
messages this way, and it would not prevent duplicate order processing.
D. is the correct solution. Using an SQS FIFO ensures that the orders are processed in a sequential and reliable manner, preventing the
creation of multiple orders for the same transaction.
upvoted 4 times
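A minimal boto3 sketch of how the FIFO queue prevents duplicates; the queue URL, order data, and group name are placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo"  # placeholder

# The order number doubles as the deduplication ID, so resubmitting the checkout
# form for the same order does not enqueue a second payment request.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": "12345", "amount": 49.99}',
    MessageGroupId="orders",
    MessageDeduplicationId="12345",
)

# The payment service polls, processes, then deletes the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    # ... charge the payment here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])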
A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent
accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and
upload documents.
Which combination of actions should be taken to meet these requirements? (Choose two.)
Correct Answer: BD
D. adds an extra layer of protection against accidental deletion of objects in the bucket. With MFA Delete enabled, a user would need to
provide an additional authentication factor to successfully delete objects from the bucket. This helps prevent accidental or unauthorized
deletions and provides an extra level of security for critical documents.
A. would restrict users from modifying or uploading documents. It would not meet the requirement of allowing users to download,
modify, and upload documents.
C. can control access permissions to the bucket, it does not specifically address the requirement of preventing accidental deletion or
ensuring availability of all versions of the documents.
E. Encryption focuses on data protection rather than versioning and deletion prevention.
upvoted 2 times
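For reference, a minimal boto3 sketch of turning on versioning with MFA Delete; note that this call must be made with the root account's credentials and its MFA device, and the bucket name, MFA serial, and token below are placeholders:

import boto3

s3 = boto3.client("s3")

# MFA value format is "<mfa-device-serial> <current-token>".
s3.put_bucket_versioning(
    Bucket="document-review-bucket",                 # placeholder bucket
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)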
A company is building a solution that will report Amazon EC2 Auto Scaling events across all the applications in an AWS account. The company
needs to use a serverless solution to store the EC2 Auto Scaling status data in Amazon S3. The company then will use the data in Amazon S3 to
provide near-real-time updates in a dashboard. The solution must not affect the speed of EC2 instance launches.
How should the company move the data to Amazon S3 to meet these requirements?
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in
Amazon S3.
B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the
data in Amazon S3.
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto
Scaling status data directly to Amazon S3.
D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2
Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
Correct Answer: A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html#:~:text=CloudWatch%20metric%20streams
upvoted 1 times
C. would introduce delays in data updates, as it is not triggered in real-time. Additionally, it adds unnecessary overhead and complexity
compared to using a direct data stream.
D. introduces additional dependencies and management overhead. It may also impact the speed of EC2 instance launches, which is a
requirement that needs to be avoided.
Overall, option A provides a streamlined and serverless solution by leveraging CloudWatch metric streams and Kinesis Data Firehose to
efficiently capture and store the EC2 Auto Scaling status data in S3 without affecting the speed of EC2 instance launches.
upvoted 2 times
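A minimal boto3 sketch of option A, assuming a Kinesis Data Firehose delivery stream that writes to S3 already exists; the stream, role ARNs, and names are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Stream only Auto Scaling metrics to the existing Firehose delivery stream.
cloudwatch.put_metric_stream(
    Name="autoscaling-status-stream",
    FirehoseArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/asg-to-s3",
    RoleArn="arn:aws:iam::111122223333:role/metric-stream-to-firehose",
    OutputFormat="json",
    IncludeFilters=[{"Namespace": "AWS/AutoScaling"}],
)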
Unless the Lambda function is scheduled to run every minute, which is not common for scheduled jobs, this approach is not considered near real-time.
upvoted 3 times
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
upvoted 2 times
A schedule-based approach with an EventBridge rule and Lambda function may not be able to deliver the data in near real-time, as the
EC2 Auto Scaling status data is generated dynamically and may not always align with the schedule set by the EventBridge rule.
Additionally, using a schedule-based approach with EventBridge and Lambda also has the potential to create latency, as there may be a
delay between the time the data is generated and the time it is sent to S3.
In this scenario, using Amazon CloudWatch and Kinesis Data Firehose as described in Option A, provides a more reliable and near real-
time solution.
upvoted 1 times
This approach will use a serverless solution (AWS Lambda) which will not affect the speed of EC2 instance launches. It will use the
EventBridge rule to invoke the Lambda function on schedule to send the data to S3. This will meet the requirement of near-real-time
updates in a dashboard as well. The Lambda function can be triggered by CloudWatch events that are emitted when Auto Scaling events
occur. The function can then collect the necessary data and store it in S3. This direct sending of data to S3 will reduce the number of steps
and hence it is more efficient and cost-effective.
upvoted 2 times
A company has an application that places hundreds of .csv files into an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file is
uploaded, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to download the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket.
Invoke the Lambda function for each S3 PUT event.
B. Create an Apache Spark job to read the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Create an
AWS Lambda function for each S3 PUT event to invoke the Spark job.
C. Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the .csv files. Schedule an AWS Lambda
function to periodically use Amazon Athena to query the AWS Glue table, convert the query results into Parquet format, and place the output
files into an S3 bucket.
D. Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv files to Parquet format and place the output files into an S3
bucket. Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.
Correct Answer: A
"LEAST operational overhead" => Should you fully manage service like Glue instead of manually like the answer A.
upvoted 11 times
https://aws.amazon.com/glue/#:~:text=to%20initiate%20your-,ETL,-jobs%20to%20run
B. adds unnecessary complexity and operational overhead. Managing the Spark job, handling scalability, and coordinating the Lambda
invocations for each file upload can be cumbersome.
C. introduces additional complexity and may not be the most efficient solution. It involves managing Glue resources, scheduling Lambda,
and querying data even when no new files are uploaded.
Option D leverages AWS Glue's ETL capabilities, allowing you to define and execute a data transformation job at scale. By invoking the ETL
job using an Lambda function for each S3 PUT event, you can ensure that files are efficiently converted to Parquet format without the
need for manual intervention. This approach minimizes operational overhead and provides a streamlined and scalable solution.
upvoted 3 times
F629 3 months, 2 weeks ago
Selected Answer: A
Both A and D can work, but A is simpler. It is closer to the "least operational effort".
upvoted 1 times
Also, Glue cannot convert the files on the fly automatically; you still need to write some transformation code there. If you write the same code in
Lambda, it will perform the same conversion and push the file to S3.
Lambda supports 128 MB to 10 GB of memory, so it can handle these files.
We also need to consider cost, and Glue costs more. Hope many on this forum realize these differences.
upvoted 4 times
A says Lambda will download the .csv files, but to where? That seems manual to me.
upvoted 1 times
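To make the answer-A approach concrete, here is a minimal sketch of such a Lambda handler. It assumes pandas and pyarrow are available through a layer (for example the AWS SDK for pandas layer), that the function's ephemeral /tmp storage is sized for the 1 GB files, and that the output bucket name is a placeholder:

import os
import boto3
import pandas as pd   # assumes a Lambda layer that bundles pandas/pyarrow

s3 = boto3.client("s3")
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "parquet-output-bucket")  # placeholder

def handler(event, context):
    # Triggered by an S3 PUT event; files are staged in /tmp (configurable up to 10 GB).
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        local_csv = "/tmp/input.csv"
        local_parquet = "/tmp/output.parquet"

        s3.download_file(bucket, key, local_csv)
        pd.read_csv(local_csv).to_parquet(local_parquet)   # requires pyarrow
        s3.upload_file(local_parquet, OUTPUT_BUCKET, key.replace(".csv", ".parquet"))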
A company is implementing new data retention policies for all databases that run on Amazon RDS DB instances. The company must retain daily
backups for a minimum period of 2 years. The backups must be consistent and restorable.
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2
years after creation. Assign the RDS DB instances to the backup plan.
B. Configure a backup window for the RDS DB instances for daily snapshots. Assign a snapshot retention policy of 2 years to each RDS DB
instance. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule snapshot deletions.
C. Configure database transaction logs to be automatically backed up to Amazon CloudWatch Logs with an expiration period of 2 years.
D. Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a replication instance, and configure a change data
capture (CDC) task to stream database changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots after 2
years.
Correct Answer: A
Backup Window: Configuring a backup window for daily snapshots ensures that consistent backups are taken at the specified time each
day. This helps maintain data integrity and consistency.
Snapshot Retention Policy: Assigning a snapshot retention policy of 2 years to each RDS DB instance ensures that the backups are
retained for the required duration.
Amazon Data Lifecycle Manager (Amazon DLM): Amazon DLM can be used to automate the management of EBS snapshots, including RDS
snapshots. You can configure Amazon DLM to schedule snapshot deletions, making it easier to manage the retention policy without
manual intervention.
Option A (AWS Backup) is primarily used for managing backups of resources that may not have built-in backup capabilities, but for
Amazon RDS, it's better to use the built-in snapshot capabilities and Amazon DLM for snapshot retention.
upvoted 1 times
B. it does not address the requirement for consistent and restorable backups. Snapshots are point-in-time backups and may not provide
the desired level of consistency.
C. it is not designed to provide the backup and restore functionality required for databases. It does not ensure the backups are consistent
or provide an easy restore mechanism.
D. it does not address the requirement for daily backups and retention of consistent backups. It focuses more on replication and change
data capture rather than backup and restore.
upvoted 2 times
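For the option-A approach, a minimal boto3 sketch of a daily AWS Backup plan with a 2-year retention; the vault name, schedule, role ARN, and DB ARN are placeholders:

import boto3

backup = boto3.client("backup")

# Daily backups at 05:00 UTC, retained for 2 years (730 days).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-daily-2yr",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "rds-backup-vault",        # placeholder vault
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 730},
            }
        ],
    }
)

# Assign the RDS instance to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "rds-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111122223333:db:prod-mysql"],
    },
)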
markw92 3 months, 2 weeks ago
Why not B?
upvoted 2 times
Creating tasks for ongoing replication using AWS DMS: You can create an AWS DMS task that captures ongoing changes from the source
data store. You can do this capture while you are migrating your data. You can also create a task that captures ongoing changes after you
complete your initial (full-load) migration to a supported target data store. This process is called ongoing replication or change data
capture (CDC). AWS DMS uses this process when replicating ongoing changes from a source data store.
upvoted 1 times
A company’s compliance team needs to move its file shares to AWS. The shares run on a Windows Server SMB file share. A self-managed on-
premises Active Directory controls access to the files and folders.
The company wants to use Amazon FSx for Windows File Server as part of the solution. The company must ensure that the on-premises Active
Directory groups restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS. The
company has created an FSx for Windows File Server file system.
A. Create an Active Directory Connector to connect to the Active Directory. Map the Active Directory groups to IAM groups to restrict access.
B. Assign a tag with a Restrict tag key and a Compliance tag value. Map the Active Directory groups to IAM groups to restrict access.
C. Create an IAM service-linked role that is linked directly to FSx for Windows File Server to restrict access.
Correct Answer: D
Joining the FSx for Windows File Server file system to the on-premises Active Directory will allow the company to use the existing Active
Directory groups to restrict access to the file shares, folders, and files after the move to AWS. This option allows the company to continue
using their existing access controls and management structure, making the transition to AWS more seamless.
upvoted 12 times
Joining FSx to the AD domain allows the native file system permissions, users, and groups to be applied from Active Directory. Access is
handled seamlessly via the trust relationship between FSx and AD.
The other options would not leverage the existing AD identities and groups.
upvoted 1 times
A) AD Connector and IAM groups would require re-mapping AD groups to IAM, adding complexity. Native AD integration is simpler.
C) Service-linked roles are not applicable for managing end user access.
So D is the correct option to meet the requirements using the native Active Directory integration built into FSx for Windows.
upvoted 1 times
Option A is incorrect because mapping the AD groups to IAM groups is not applicable in this scenario. IAM is primarily used for managing
access to AWS resources, while the requirement is to integrate with the on-premises AD for access control.
Option B is incorrect because assigning a tag with a Restrict tag key and a Compliance tag value does not provide the necessary
integration with the on-premises AD for access control. Tags are used for organizing and categorizing resources and do not provide
authentication or access control mechanisms.
Option C is incorrect because creating an IAM service-linked role linked directly to FSx for Windows File Server does not integrate with the
on-premises AD. IAM roles are used within AWS for managing permissions and do not provide the necessary integration with external AD
systems.
upvoted 3 times
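In practice, the self-managed Active Directory details are supplied as part of the file system's Windows configuration when it is created. A minimal boto3 sketch, where every value (domain, service account, DNS IPs, subnet, sizes) is a placeholder:

import boto3

fsx = boto3.client("fsx")

# FSx for Windows File Server joined to a self-managed on-premises AD.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",
            "UserName": "FsxServiceAccount",
            "Password": "REPLACE_ME",
            "DnsIps": ["10.0.0.10", "10.0.0.11"],
        },
    },
)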
Mia2009687 3 months ago
Selected Answer: D
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 1 times
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 3 times
Joining the FSx for Windows File Server file system to the on-premises Active Directory allows the company to use the existing Active
Directory groups to restrict access to the file shares, folders, and files after the move to AWS. By joining the file system to the Active
Directory, the company can maintain the same access control as before the move, ensuring that the compliance team can maintain
compliance with the relevant regulations and standards.
Options A and B involve creating an Active Directory Connector or assigning a tag to map the Active Directory groups to IAM groups, but
these options do not allow for the use of the existing Active Directory groups to restrict access to the file shares in AWS.
Option C involves creating an IAM service-linked role linked directly to FSx for Windows File Server to restrict access, but this option does
not take advantage of the existing on-premises Active Directory and its access control.
upvoted 3 times
The best way to restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS is to
join the file system to the on-premises Active Directory. This will allow the company to continue using the Active Directory groups to
restrict access to the files and folders, without the need to create additional IAM groups or roles.
By joining the file system to the Active Directory, the company can continue to use the same access control mechanisms it already has in
place and the security configuration will not change.
Option A and B are not applicable to FSx for Windows File Server because it doesn't support the use of IAM groups or tags to restrict
access.
Option C is not appropriate in this case because FSx for Windows File Server does not support using IAM service-linked roles to restrict
access.
upvoted 4 times
Question #261 Topic 1
A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances
behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones.
The company wants to provide its customers with different versions of content based on the devices that the customers use to access the
website.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
B. Configure a host header in a Network Load Balancer to forward traffic to different instances.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to
different EC2 instances.
E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to
different EC2 instances.
Correct Answer: AC
NLB listener rules only support protocol and port (not host-based routing like an ALB), so D and E are incorrect.
An NLB works at Layer 4 (TCP/UDP) rather than Layer 7 (HTTP), so B is incorrect.
C. By creating a Lambda@Edge function, you can inspect the User-Agent header of incoming requests and determine the type of device being
used. Based on this information, you can customize the response and send the appropriate version of the content to the user.
B. does not address the requirement of serving different content versions based on device types.
Therefore, options A and C are the correct combination of actions to meet the requirement of providing different versions of content
based on the devices that customers use to access the website.
upvoted 2 times
For C:
IMPROVED USER EXPERIENCE
Lambda@Edge can help improve your users' experience with your websites and web applications across the world, by letting you
personalize content for them without sacrificing performance.
https://aws.amazon.com/lambda/edge/
upvoted 2 times
Lambda@Edge allows you to run a Lambda function in response to specific CloudFront events, such as a viewer request, an origin request,
a response, or a viewer response.
upvoted 2 times
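A minimal sketch of a Lambda@Edge viewer-request handler in Python; the /mobile URI prefix and the simple "Mobile" check are illustrative assumptions, not from the question:

# Lambda@Edge viewer-request handler, associated with the CloudFront distribution.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"]

    # Rewrite the URI so mobile devices get the mobile version of the content.
    if "Mobile" in user_agent:
        request["uri"] = "/mobile" + request["uri"]

    return request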
A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache
cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.
The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster.
A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule
for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
B. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an
inbound rule for the ElastiCache cluster's security group to allow inbound connection from the application’s security group.
C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule
for the peering connection’s security group to allow inbound connection from the application’s security group.
D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an
inbound rule for the Transit VPC’s security group to allow inbound connection from the application’s security group.
Correct Answer: A
Creating a peering connection between the VPCs allows the application's EC2 instances to communicate with the ElastiCache cluster
directly and efficiently. This is the most cost-effective solution as it does not involve creating additional resources such as a Transit VPC,
and it does not incur additional costs for traffic passing through the Transit VPC. Additionally, it is also more secure as it allows you to
configure a more restrictive security group rule to allow inbound connection from only the application's security group.
upvoted 12 times
Option B suggests creating a Transit VPC, which adds unnecessary complexity and cost for this scenario.
Option C suggests configuring an inbound rule for the peering connection's security group, which is not necessary as the security group
for the ElastiCache cluster should be used to control inbound connections.
Option D suggests configuring an inbound rule for the Transit VPC's security group, which is not needed in this case and adds
unnecessary complexity.
Therefore, option A is the most cost-effective solution to provide the application's EC2 instances with access to the ElastiCache cluster.
upvoted 1 times
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html
upvoted 1 times
A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its
software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot
manage additional infrastructure.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.
C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of
greater than or equal to 2.
D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of
greater than or equal to 2.
E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or
more replicas for each microservice.
Correct Answer: AD
Option C suggests using the Amazon EC2 launch type for ECS, which still requires managing EC2 instances and is not as cost-effective and
scalable as using Fargate.
Therefore, the combination of deploying an Amazon ECS cluster and an ECS service with a Fargate launch type (options A and D) is the
most suitable for minimizing maintenance and scaling effort without managing additional infrastructure.
upvoted 3 times
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 3 times
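A minimal boto3 sketch of the A+D combination; the cluster, task definition, subnets, and security group are placeholders:

import boto3

ecs = boto3.client("ecs")

# ECS service on Fargate with at least two tasks for availability.
ecs.create_service(
    cluster="microservices-cluster",                 # placeholder cluster
    serviceName="orders-service",
    taskDefinition="orders-task:1",                  # placeholder task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa", "subnet-0bbb"],
            "securityGroups": ["sg-0service"],
            "assignPublicIp": "DISABLED",
        }
    },
)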
A company has a web application hosted over 10 Amazon EC2 instances with traffic directed by Amazon Route 53. The company occasionally
experiences a timeout error when attempting to browse the application. The networking team finds that some DNS queries return IP addresses of
unhealthy instances, resulting in the timeout error.
A. Create a Route 53 simple routing policy record for each EC2 instance. Associate a health check with each record.
B. Create a Route 53 failover routing policy record for each EC2 instance. Associate a health check with each record.
C. Create an Amazon CloudFront distribution with EC2 instances as its origin. Associate a health check with the EC2 instances.
D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the ALB from Route 53.
Correct Answer: D
Routing traffic to the ALB from Route 53 ensures that DNS queries return the IP address of the ALB instead of individual instances. This
allows the ALB to distribute traffic only to healthy instances, avoiding timeouts caused by unhealthy instances.
A & B: While associating health checks with each record can help identify unhealthy instances, it does not provide automatic load
balancing and distribution of traffic to healthy instances.
C: While CloudFront can improve performance and availability, it is primarily a CDN and may not directly address the issue of load
balancing and distributing traffic to healthy instances.
Therefore, option D is the most appropriate solution to overcome the timeout errors by implementing an ALB with health checks and
routing traffic through Route 53.
upvoted 3 times
An Application Load Balancer (ALB) allows you to distribute incoming traffic across multiple backend instances, and can automatically
route traffic to healthy instances while removing traffic from unhealthy instances. By using an ALB in front of the EC2 instances and
routing traffic to it from Route 53, the load balancer can perform health checks on the instances and only route traffic to healthy
instances, which should help to reduce or eliminate timeout errors caused by unhealthy instances.
upvoted 4 times
Question #265 Topic 1
A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery
should be as close to the edge as possible, with the least delivery time.
A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon
CloudFront to deliver HTTPS content using the public ALB as the origin.
B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon
CloudFront to deliver HTTPS content using the EC2 instances as the origin.
C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon
CloudFront to deliver HTTPS content using the public ALB as the origin.
D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon
CloudFront to deliver HTTPS content using the EC2 instances as the origin.
Correct Answer: C
This solution meets the requirements for a highly available application with web, application, and database tiers, as well as providing
edge-based content delivery. Additionally, it maximizes security by placing the EC2 instances in private subnets, which limits direct access to the
web servers, while still serving traffic over the internet via the public ALB. This ensures that the web servers are not
exposed to the public Internet, which reduces the attack surface and provides a secure way to access the application.
upvoted 12 times
B. lacks a load balancer in the public subnet, which is required for efficient load distribution and high availability.
D. provides load balancing and HTTPS content delivery, it exposes the EC2 instances directly to the public internet, which may pose
security risks.
C. provides high availability, secure access through private subnets, and optimized HTTPS content delivery using CloudFront with a public
ALB as the origin.
upvoted 3 times
A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user
experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances
that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism
to monitor the health of the application and redirect traffic to healthy endpoints.
A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional
endpoint in each Region. Add the ALB as the endpoint.
B. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache
headers. Use AWS Lambda functions to optimize the traffic.
C. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache
headers. Use AWS Lambda functions to optimize the traffic.
D. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to
act as the in-memory cache for DynamoDB hosting the application data.
Correct Answer: A
AWS Global Accelerator directs traffic to the optimal healthy endpoint based on health checks, it can also route traffic to the closest
healthy endpoint based on geographic location of the client. By configuring an accelerator and attaching it to a Regional endpoint in each
Region, and adding the ALB as the endpoint, the solution will redirect traffic to healthy endpoints, improving the user experience by
reducing latency and ensuring that the application is running optimally. This solution will ensure that traffic is directed to the closest
healthy endpoint and will help to improve the overall user experience.
upvoted 14 times
C. This configuration is suitable for static content delivery but does not address the health monitoring and traffic redirection requirements
of the application.
D. While this can enhance performance, it does not monitor the health of the application or redirect traffic based on health checks.
Therefore, option A is the most suitable solution as it leverages AWS Global Accelerator to monitor application health, route traffic to
healthy endpoints, and optimize the user experience while addressing latency concerns.
upvoted 1 times
A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must
encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the
data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.
B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS
Lambda function to send the data to the EMR cluster.
C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the
data.
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data.
Correct Answer: D
This solution will meet the requirements with the least operational overhead as it uses Amazon Kinesis Data Firehose, which is a fully
managed service that can automatically handle the data collection, data transformation, encryption, and data storage in near-real time.
Kinesis Data Firehose can automatically store the data in Amazon S3 in Apache Parquet format for further processing. Additionally, it
allows you to create an Amazon Kinesis Data Analytics application to analyze the data in near real-time, with no need to manage any
infrastructure or invoke any Lambda function. This way you can process a large amount of data with the least operational overhead.
upvoted 28 times
C. introduces unnecessary complexity by involving EMR for data analysis when Kinesis Data Analytics can perform the analysis in a more
streamlined and automated manner.
Therefore, option D is the most suitable solution as it leverages Kinesis Data Firehose for data ingestion, stores the data in S3, and utilizes
Kinesis Data Analytics for near-real-time analysis, providing a low operational overhead solution for data usage analysis and encryption.
upvoted 2 times
AHUI 8 months, 2 weeks ago
D:
https://www.examtopics.com/discussions/amazon/view/82022-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Amazon Kinesis Data Firehose can automatically encrypt and store the data in Amazon S3 in Apache Parquet format for further
processing, which reduces the operational overhead. It also allows for near-real-time data analysis using Kinesis Data Analytics, which is a
fully managed service that makes it easy to analyze streaming data using SQL. This solution eliminates the need for setting up and
maintaining an EMR cluster, which would require more operational overhead.
upvoted 2 times
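A minimal boto3 sketch of a Firehose delivery stream with record format conversion to Parquet; it assumes a Glue table already defines the schema, and all names and ARNs are placeholders:

import boto3

firehose = boto3.client("firehose")

# JSON records in, Parquet objects out to S3.
firehose.create_delivery_stream(
    DeliveryStreamName="app-usage-to-s3",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::app-usage-data",
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "DatabaseName": "app_usage",          # placeholder Glue database
                "TableName": "events",                # placeholder Glue table
                "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            },
        },
    },
)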
Question #268 Topic 1
A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load
Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that
are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s
architecture.
D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.
Correct Answer: A
C. While Lambda can offer benefits such as scalability and reduced operational overhead, it may not directly address the database read
performance issues. Migrating to Lambda would require significant changes to the application's architecture and codebase.
D. While DynamoDB is a scalable and high-performance NoSQL database, migrating from a relational database like MySQL to DynamoDB
would require significant changes to the application's data model and query patterns.
Therefore, option B is the most appropriate solution as it leverages RDS Proxy to optimize database connections and improve read
performance, minimizing changes to the application's architecture and providing a scalable and efficient solution for addressing the
database read performance issues.
upvoted 3 times
An ecommerce company has noticed performance degradation of its Amazon RDS based web application. The performance degradation is
attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem
with minimal changes to the existing web application.
A. Export the data to Amazon DynamoDB and have the business analysts run their queries.
B. Load the data into Amazon ElastiCache and have the business analysts run their queries.
C. Create a read replica of the primary database and have the business analysts run their queries.
D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.
Correct Answer: C
B. ElastiCache is an in-memory data store that can improve query performance, but it is primarily used for caching rather than running
complex queries.
D. Redshift is a powerful data warehousing solution, but migrating the data and adapting the queries to Redshift's columnar architecture
would require significant changes to the application and query logic.
Therefore, option C is the most appropriate recommendation as it leverages read replicas in RDS to offload read-only query traffic from
the primary database, allowing the business analysts to run their queries without impacting the performance of the web application. It
provides a scalable and efficient solution with minimal changes to the existing web application.
upvoted 1 times
Creating a read replica of the primary RDS database will offload the read-only SQL queries from the primary database, which will help to
improve the performance of the web application. Read replicas are exact copies of the primary database that can be used to handle read-
only traffic, which will reduce the load on the primary database and improve the performance of the web application. This solution can be
implemented with minimal changes to the existing web application, as the business analysts can continue to run their queries on the read
replica without modifying the code.
upvoted 4 times
A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the
data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.
C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.
D. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
Correct Answer: A
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is
reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective
solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are
complete.
D. Change the scaling policy to add more EC2 instances during each scaling operation.
Correct Answer: C
The Auto Scaling group can then scale down based on metrics after the batch jobs complete.
upvoted 3 times
By configuring scheduled scaling, the solutions architect can set the Auto Scaling group to automatically scale up to the desired compute
level at a specific time (1AM) when the batch job starts and then automatically scale down after the job is complete. This will allow the
desired EC2 capacity to be reached quickly and also help in reducing the cost.
upvoted 4 times
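A minimal boto3 sketch of scheduled scaling around the 1 AM window; the group name, capacities, and times are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the batch window and back in afterwards.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-batch-asg",        # placeholder group
    ScheduledActionName="scale-out-for-batch",
    Recurrence="45 0 * * *",                         # 00:45 UTC every day
    MinSize=10, MaxSize=10, DesiredCapacity=10,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="nightly-batch-asg",
    ScheduledActionName="scale-in-after-batch",
    Recurrence="0 4 * * *",                          # 04:00 UTC every day
    MinSize=1, MaxSize=10, DesiredCapacity=1,
)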
A company serves a dynamic website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The website needs to
support multiple languages to serve customers around the world. The website’s architecture is running in the us-west-1 Region and is exhibiting
high request latency for users that are located in other parts of the world.
The website needs to serve requests quickly and efficiently regardless of a user’s location. However, the company does not want to recreate the
existing architecture across multiple Regions.
A. Replace the existing architecture with a website that is served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution
with the S3 bucket as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to cache based on the Accept-
Language request header.
C. Create an Amazon API Gateway API that is integrated with the ALB. Configure the API to use the HTTP integration type. Set up an API
Gateway stage to enable the API cache based on the Accept-Language request header.
D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the EC2 instances
and the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
Correct Answer: B
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery
(DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible
latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.
Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
Correct Answer: B
Option B is a better choice than Option A as it provides a warm standby deployment, which is an automated and more scalable setup than
pilot light deployment. In this setup, the database is replicated to the DR Region, and the standby instance can be brought up quickly in
case of a disaster.
Option C is incorrect because Multi-AZ DB instances provide high availability, not disaster recovery.
Option D is a good choice for high availability, but it does not meet the requirement for DR in a different region with the least possible
latency.
upvoted 16 times
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 14 times
A company runs an application on Amazon EC2 instances. The company needs to implement a disaster recovery (DR) solution for the application.
The DR solution needs to have a recovery time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible AWS
resources during normal operations.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS Lambda and custom scripts.
B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS CloudFormation.
C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary Region active at all times.
D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary Availability Zone active at all times.
Correct Answer: D
This option allows for the creation of Amazon Machine Images (AMIs) to back up the EC2 instances, which can then be copied to a
secondary AWS region to provide disaster recovery capabilities. The infrastructure deployment in the secondary region can be automated
using AWS CloudFormation, which can help to reduce the amount of time and resources needed for deployment and management.
upvoted 7 times
Option B: Creating AMIs for backup and using AWS CloudFormation for infrastructure deployment in the secondary Region is a more
streamlined and automated approach. CloudFormation allows you to define and provision resources in a declarative manner, making it
easier to maintain and update your infrastructure. This solution is more operationally efficient compared to Option A.
Option C: could be expensive and not fully aligned with the requirement of using the fewest possible AWS resources during normal
operations.
Option D: might not be sufficient for meeting the DR requirements, as Availability Zones are still within the same AWS Region and might
be subject to the same regional-level failures.
upvoted 1 times
NBone 2 months, 1 week ago
I would really appreciate clarification on this question. The community has voted 100% that the right answer is B. However, option
D is shown as the correct answer. So, who sets the correct answer? Which one should newcomers like myself believe: the community's
answer or the other one (which I am guessing is set by the moderators)? Please help.
upvoted 1 times
By creating Amazon Machine Images (AMIs) to back up the EC2 instances and copying them to a secondary AWS Region, the company can
ensure that they have a reliable backup in the event of a disaster. By using AWS CloudFormation to automate infrastructure deployment in
the secondary Region, the company can minimize the amount of time and effort required to set up the DR solution.
upvoted 4 times
https://docs.aws.amazon.com/zh_cn/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-
cloud.html#backup-and-restore
upvoted 3 times
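As a rough illustration of the backup-and-restore pattern in option B, the snippet below creates an AMI and copies it to a secondary Region with boto3; the instance ID, Regions, and image names are placeholders.

```python
import boto3

SOURCE_REGION = "us-east-1"   # assumed primary Region
DR_REGION = "us-west-2"       # assumed secondary (DR) Region

ec2_source = boto3.client("ec2", region_name=SOURCE_REGION)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

# Create an AMI of the running instance (NoReboot avoids downtime at the cost of
# filesystem-level consistency guarantees).
image = ec2_source.create_image(
    InstanceId="i-0123456789abcdef0",     # hypothetical instance ID
    Name="app-backup-nightly",
    NoReboot=True,
)

# Copy the AMI into the DR Region, where a CloudFormation stack can launch from it.
ec2_dr.copy_image(
    SourceRegion=SOURCE_REGION,
    SourceImageId=image["ImageId"],
    Name="app-backup-nightly-dr",
)
```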
Question #275 Topic 1
A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The
instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during
work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs
well by mid-morning.
How should the scaling be changed to address the staff complaints and keep costs to a minimum?
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
Correct Answer: A
It seems that there is no information in the question about CPU or memory usage.
So the answer is A. Why? Because what we need is the required (desired) number of instances. The group already has scheduled scaling that
works well in this scenario: scale down after working hours and scale up during working hours. It just needs to adjust the desired capacity
to start at 20 instances.
Scaling in: As the load on your application decreases in the afternoon or at night, Auto Scaling continuously monitors the health and load
of your instances. If instances are underutilized and can be terminated without affecting your application's performance, Auto Scaling
automatically scales in by terminating the excess instances.
Why not D? If you set the minimum capacity, AWS will always keep that minimum number of instances (20 in this case) running.
upvoted 2 times
Desired capacity: Represents the initial capacity of the Auto Scaling group at the time of creation. An Auto Scaling group attempts to
maintain the desired capacity. It starts by launching the number of instances that are specified for the desired capacity, and maintains this
number of instances as long as there are no scaling policies or scheduled actions attached to the Auto Scaling group.
upvoted 2 times
On the other hand, we know that the "Auto Scaling group scales up to 20 instances during work hours." A seems to be the only option that
satisfies the requirements.
upvoted 1 times
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance
is the application’s data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing
the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics
and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable
rate before leveling off.
What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Choose two.)
C. Configure an alarm on the RDS for Oracle instance for low free storage space.
D. Configure the Auto Scaling group to use the average CPU as the scaling metric.
E. Configure the Auto Scaling group to use the average free memory as the scaling metric.
Correct Answer: AC
C) Configure an alarm on the RDS for Oracle instance for low free storage space.
= You could do this but what does it fix? Nothing. The CW notification isn't going to trigger anything.
D) Configure the Auto Scaling group to use the average CPU as the scaling metric.
= Makes sense. The CPU utilization is the precursor to the storage outage. When the ec2 instances are overloaded, the RDS instance
storage hits its limits, too.
upvoted 10 times
Option C (Configure an alarm on the RDS for Oracle instance for low free storage space) is useful for monitoring, but it doesn't proactively
address the storage issue by automatically expanding storage as needed.
Option E (Configure the Auto Scaling group to use the average free memory as the scaling metric) is less common as a scaling metric for
EC2 instances compared to CPU utilization. While memory can be an important factor for application performance, CPU utilization is
typically a more commonly used metric for scaling decisions. It also doesn't directly address the RDS storage issue.
upvoted 1 times
D. By configuring the Auto Scaling group to use the average CPU utilization as the scaling metric, it can automatically add more EC2
instances to the Auto Scaling group when the CPU utilization exceeds a certain threshold. This will help handle the increased traffic and
workload on the EC2 instances in the multi-tier application.
upvoted 1 times
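For the storage side of the question (option C), a low-free-storage alarm might look like the boto3 sketch below; the DB identifier, threshold, and SNS topic ARN are assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when free storage on the RDS for Oracle instance drops below roughly 10 GiB.
cloudwatch.put_metric_alarm(
    AlarmName="rds-oracle-low-free-storage",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",          # reported in bytes
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "oracle-prod"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=10 * 1024 ** 3,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],         # hypothetical topic
)
```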
A company provides an online service for posting video content and transcoding it for use by any mobile platform. The application architecture
uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that multiple Amazon EC2 Linux instances can access
the video content for processing. As the popularity of the service has grown over time, the storage costs have become too expensive.
A. Use AWS Storage Gateway for files to store and process the video content.
B. Use AWS Storage Gateway for volumes to store and process the video content.
C. Use Amazon EFS for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon
EBS).
D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume
attached to the server for processing.
Correct Answer: A
For processing, the video files can be temporarily copied from S3 to an EBS volume attached to the EC2 instance. EBS provides low latency
block storage for high performance video processing.
AWS Storage Gateway has four types: S3 File Gateway, FSx File Gateway, Tape Gateway, and Volume Gateway.
If no specific type is referenced, the default assumption should be the S3 File Gateway, which sends files to Amazon S3, the most cost-effective storage in AWS.
Why not D? The reason is in the last sentence: there are multiple EC2 servers processing the video, and an EBS volume can only be attached to one EC2 instance
at a time, so using EBS would require one volume per EC2 instance. This rules out D.
upvoted 1 times
With this approach, you would use an AWS Storage Gateway file gateway to access the video content stored in Amazon S3. The file
gateway presents a file interface to the EC2 instances, allowing them to access the video content as if it were stored on a local file system.
The video processing tasks can be performed on the EC2 instances, and the processed files can be stored back in S3.
This approach is cost-effective because it leverages the lower cost of Amazon S3 for storage while still allowing for easy access to the video
content from the EC2 instances using a file interface. Additionally, Storage Gateway provides caching capabilities that can further improve
performance by reducing the need to access S3 directly.
upvoted 1 times
Amazon S3 File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS
and SMB file protocols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those files as objects
directly in Amazon S3. POSIX-style metadata, including ownership, permissions, and timestamps are durably stored in Amazon S3 in the
user-metadata of the object associated with the file. Once objects are transferred to S3, they can be managed as native S3 objects, and
bucket features such as lifecycle management and Cross-Region Replication (CRR) can be applied directly to objects stored in your
bucket. Amazon S3 File Gateway also publishes audit logs for SMB file share user operations to Amazon CloudWatch.
Customers can use Amazon S3 File Gateway to back up on-premises file data as objects in Amazon S3 (including Microsoft SQL Server and
Oracle databases and logs), and for hybrid cloud workflows using data generated by on-premises applications for processing by AWS
services such as machine learning or big data analytics.
upvoted 1 times
A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency
response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email
messages if any financial information is present in the employee data.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards
and share the dashboards with users.
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon
Simple Notification Service (Amazon SNS) subscription.
Correct Answer: CD
B meets the need to store hierarchical employee data in DynamoDB for low latency queries at high traffic. DynamoDB can handle the
access patterns for hierarchical data. Exporting to S3 monthly provides an audit trail.
E sets up Macie to analyze sensitive data and integrate with EventBridge to trigger monthly SNS notifications when financial data is
present.
upvoted 1 times
E. Amazon Macie is a service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Integrating
Macie with Amazon EventBridge allows you to receive events whenever any financial information is identified in the employee data. By
using Amazon SNS, you can receive these notifications via email.
upvoted 1 times
https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html
upvoted 2 times
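A sketch of the Macie-to-SNS wiring described in option E, using an EventBridge rule; the rule name, topic ARN, and account details are assumptions.

```python
import json
import boto3

events = boto3.client("events")

# Route Macie findings to an SNS topic whose subscribers receive email notifications.
events.put_rule(
    Name="macie-findings-to-sns",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="macie-findings-to-sns",
    Targets=[{
        "Id": "sns-email",
        "Arn": "arn:aws:sns:us-east-1:123456789012:macie-findings",   # hypothetical topic
    }],
)
```

A monthly cadence could be approximated by running Macie sensitive data discovery jobs on a monthly schedule, so that findings (and therefore notifications) are produced once per month.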
A company has an application that is backed by an Amazon DynamoDB table. The company’s compliance requirements specify that database
backups must be taken every month, must be available for 6 months, and must be retained for 7 years.
A. Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the
backup to cold storage after 6 months. Set the retention period for each backup to 7 years.
B. Create a DynamoDB on-demand backup of the DynamoDB table on the first day of each month. Transition the backup to Amazon S3 Glacier
Flexible Retrieval after 6 months. Create an S3 Lifecycle policy to delete backups that are older than 7 years.
C. Use the AWS SDK to develop a script that creates an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that
runs the script on the first day of each month. Create a second script that will run on the second day of each month to transition DynamoDB
backups that are older than 6 months to cold storage and to delete backups that are older than 7 years.
D. Use the AWS CLI to create an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the command on the
first day of each month with a cron expression. Specify in the command to transition the backups to cold storage after 6 months and to delete
the backups after 7 years.
Correct Answer: B
AWS Backup can be used to create backup schedules and retention policies for DynamoDB tables.
upvoted 2 times
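A minimal sketch of option A with boto3, assuming a backup vault named Default already exists; the schedule and retention numbers mirror the stated requirements.

```python
import boto3

backup = boto3.client("backup")

# Monthly backups, transitioned to cold storage after ~6 months and kept for ~7 years.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-monthly",
        "Rules": [{
            "RuleName": "monthly",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 1 * ? *)",   # first day of each month
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 180,
                "DeleteAfterDays": 2555,
            },
        }],
    }
)
```

A backup selection would still be needed to assign the DynamoDB table's ARN to this plan.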
A company is using Amazon CloudFront with its website. The company has enabled logging on the CloudFront distribution, and logs are saved in
one of the company’s Amazon S3 buckets. The company needs to perform advanced analyses on the logs and build visualizations.
A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.
C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.
D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon
QuickSight.
Correct Answer: A
https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
upvoted 5 times
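Assuming a table named cloudfront_logs has already been defined over the log bucket in the Glue Data Catalog, an ad-hoc Athena query could be started like this; the database, table, and results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Count requests per HTTP status code across the CloudFront access logs.
athena.start_query_execution(
    QueryString=(
        "SELECT status, COUNT(*) AS requests "
        "FROM cloudfront_logs "
        "GROUP BY status "
        "ORDER BY requests DESC"
    ),
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # hypothetical bucket
)
```

The query results land in the output location and can then be visualized in Amazon QuickSight.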
A company runs a fleet of web servers using an Amazon RDS for PostgreSQL DB instance. After a routine compliance check, the company sets a
standard that requires a recovery point objective (RPO) of less than 1 second for all its production databases.
C. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone.
D. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC)
tasks.
Correct Answer: D
Multi-AZ:
Multi-AZ maintains a synchronous standby replica of the primary instance in a different Availability Zone within the same region.
Multi-AZ deployments provide high availability and automatic failover.
A company runs a web application that is deployed on Amazon EC2 instances in the private subnet of a VPC. An Application Load Balancer (ALB)
that extends across the public subnets directs web traffic to the EC2 instances. The company wants to implement new security measures to
restrict inbound traffic from the ALB to the EC2 instances while preventing access from any other source inside or outside the private subnet of
the EC2 instances.
A. Configure a route in a route table to direct traffic from the internet to the private IP addresses of the EC2 instances.
B. Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB.
C. Move the EC2 instances into the public subnet. Give the EC2 instances a set of Elastic IP addresses.
D. Configure the security group for the ALB to allow any TCP traffic on any port.
Correct Answer: C
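Option B's security-group chaining could be expressed with boto3 roughly as follows; both security group IDs and the port are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP to the instances only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb2222c",          # hypothetical instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-0ddd3333eeee4444f"}],  # hypothetical ALB security group
    }],
)
```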
A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application
runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that
displays the simulation output and requires an SMB file system.
The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company
needs to migrate the applications to AWS without making code changes to either application.
A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances.
Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances.
Configure Amazon FSx for NetApp ONTAP for storage.
Correct Answer: D
As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A
solutions architect needs to determine the most efficient way to obtain this report information.
C. Access the bill details from the billing dashboard and download the bill.
D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
Correct Answer: B
A company hosts its static website by using Amazon S3. The company wants to add a contact form to its webpage. The contact form will have
dynamic server-side components for users to input their name, email address, phone number, and user message. The company anticipates that
there will be fewer than 100 site visits each month.
A. Host a dynamic contact form page in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES)
to connect to any third-party email provider.
B. Create an Amazon API Gateway endpoint with an AWS Lambda backend that makes a call to Amazon Simple Email Service (Amazon SES).
C. Convert the static webpage to dynamic by deploying Amazon Lightsail. Use client-side scripting to build the contact form. Integrate the
form with Amazon WorkMail.
D. Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-
side scripting to build the contact form. Integrate the form with Amazon WorkMail.
Correct Answer: B
With only 100 site visits per month, you are comparing an API Gateway endpoint invoked 100 times a month with a constantly running EC2 instance...
upvoted 1 times
https://aws.amazon.com/es/api-gateway/pricing/
https://aws.amazon.com/es/lambda/pricing/
upvoted 1 times
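A rough sketch of the Lambda backend in option B; the sender and recipient addresses are placeholders and would need to be verified identities in SES.

```python
import json
import boto3

ses = boto3.client("ses")

def handler(event, context):
    """API Gateway proxy handler that forwards the contact form to email via SES."""
    form = json.loads(event["body"])
    ses.send_email(
        Source="no-reply@example.com",                        # hypothetical verified sender
        Destination={"ToAddresses": ["contact@example.com"]},  # hypothetical recipient
        Message={
            "Subject": {"Data": f"Contact form: {form['name']}"},
            "Body": {"Text": {"Data": (
                f"Email: {form['email']}\n"
                f"Phone: {form['phone']}\n\n"
                f"{form['message']}"
            )}},
        },
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```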
A company has a static website that is hosted on Amazon CloudFront in front of Amazon S3. The static website uses a database backend. The
company notices that the website does not reflect updates that have been made in the website’s Git repository. The company checks the
continuous integration and continuous delivery (CI/CD) pipeline between the Git repository and Amazon S3. The company verifies that the
webhooks are configured properly and that the CI/CD pipeline is sending messages that indicate successful deployments.
A solutions architect needs to implement a solution that displays the updates on the website.
B. Add Amazon ElastiCache for Redis or Memcached to the database layer of the web application.
D. Use AWS Certificate Manager (ACM) to validate the website’s SSL certificate.
Correct Answer: B
Explanation:
Invalidate the CloudFront cache to ensure that the latest updates from the Git repository are reflected on the static website. When
updates are made to the website's Git repository and deployed to Amazon S3, the CloudFront cache may still be serving the old cached
content to users. By invalidating the CloudFront cache, you're instructing CloudFront to fetch fresh content from the origin (Amazon S3)
and serve it to users.
upvoted 3 times
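A cache invalidation like the one described above could be issued from the CI/CD pipeline with a call along these lines; the distribution ID is a placeholder, and "/*" invalidates every path.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Force CloudFront to fetch fresh content from the S3 origin after a deployment.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEF234567",                  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),          # must be unique per request
    },
)
```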
A company wants to migrate a Windows-based application from on premises to the AWS Cloud. The application has three tiers: an application tier,
a business tier, and a database tier with Microsoft SQL Server. The company wants to use specific features of SQL Server such as native backups
and Data Quality Services. The company also needs to share files for processing between the tiers.
How should a solutions architect design the architecture to meet these requirements?
A. Host all three tiers on Amazon EC2 instances. Use Amazon FSx File Gateway for file sharing between the tiers.
B. Host all three tiers on Amazon EC2 instances. Use Amazon FSx for Windows File Server for file sharing between the tiers.
C. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use Amazon Elastic File
System (Amazon EFS) for file sharing between the tiers.
D. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use a Provisioned IOPS
SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume for file sharing between the tiers.
Correct Answer: B
B: Correct> This solution will allow the company to host all three tiers on Amazon EC2 instances while using Amazon FSx for Windows File
Server to provide Windows-based file sharing between the tiers. This will allow the company to use specific features of SQL Server, such as
native backups and Data Quality Services, while sharing files for processing between the tiers.
C: Incorrect> Amazon EFS currently supports the NFSv4.1 protocol and does not natively support the SMB protocol, so it cannot be mounted
on Windows instances.
D: Incorrect> Amazon EBS is a block-level storage solution that is typically used to store data at the operating system level, rather than for
file sharing between servers.
upvoted 8 times
This solution allows the company to use specific features of SQL Server such as native backups and Data Quality Services, by hosting the
database tier on Amazon RDS. It also enables file sharing between the tiers using Amazon EFS, which is a fully managed, highly available,
and scalable file system. Amazon EFS provides shared access to files across multiple instances, which is important for processing files
between the tiers. Additionally, hosting the application and business tiers on Amazon EC2 instances provides the company with the
flexibility to configure and manage the environment according to their requirements.
upvoted 2 times
A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. The
company must not make any changes to the application.
C. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.
D. Configure a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume to all web servers.
Correct Answer: A
Amazon S3 is not ideal for this scenario since it is an object storage service and not a file system, and it requires additional tools or
libraries to mount the S3 bucket as a file system.
Amazon CloudFront can be used to improve content delivery performance but is not necessary for this requirement.
Additionally, Amazon EBS volumes can only be mounted to one instance at a time, so it is not suitable for sharing files across multiple
instances.
upvoted 2 times
A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account.
Which solution will meet these requirements in the MOST secure manner?
B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.
C. Embed an access key and a secret key in the Lambda function’s code to grant the required IAM permissions for read access to the S3
bucket.
D. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.
Correct Answer: D
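A least-privilege inline policy for option B might look like the following sketch; the role name and bucket name are assumptions.

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access scoped to a single bucket, attached to the Lambda execution role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",        # hypothetical bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="lambda-execution-role",             # hypothetical role
    PolicyName="s3-read-example-bucket",
    PolicyDocument=json.dumps(policy),
)
```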
A company hosts a web application on multiple Amazon EC2 instances. The EC2 instances are in an Auto Scaling group that scales in response to
user demand. The company wants to optimize cost savings without making a long-term commitment.
Which EC2 instance purchasing option should a solutions architect recommend to meet these requirements?
Correct Answer: B
However, if the company wants the most predictable pricing and does not want to risk instance interruption, then using only On-Demand
Instances is a good choice. It ultimately depends on the company's priorities and risk tolerance.
upvoted 3 times
A media company uses Amazon CloudFront for its publicly available streaming video content. The company wants to secure the video content
that is hosted in Amazon S3 by controlling who has access. Some of the company’s users are using a custom HTTP client that does not support
cookies. Some of the company’s users are unable to change the hardcoded URLs that they are using for access.
Which services or methods will meet these requirements with the LEAST impact to the users? (Choose two.)
A. Signed cookies
B. Signed URLs
C. AWS AppSync
Correct Answer: CE
D. JSON Web Token (JWT) - This method allows the media company to control who can access the video content by creating a secure token
that contains user authentication and authorization information. This token can be distributed to the users who are using a custom HTTP
client that does not support cookies. The users can include this token in their requests to access the content without needing to support
cookies.
Option A (Signed cookies) would not work for users who are using a custom HTTP client that does not support cookies. Option C (AWS
AppSync) is not relevant to the requirement of securing video content. Option E (AWS Secrets Manager) is a service used for storing and
retrieving secrets, which is not relevant to the requirement of securing video content.
upvoted 12 times
Signed URLs allow access to individual objects in Amazon S3 for a specified time period without requiring cookies. This allows the custom
HTTP client users to access content.
JSON Web Tokens (JWT) allow users to get temporary access tokens that can be passed in requests. This allows users with hardcoded URLs
to access content without updating URLs.
upvoted 1 times
AWS AppSync and Secrets Manager do not help address the specific access requirements.
So Signed URLs and JWTs allow securing access to S3 content with minimal impact to users, meeting the requirements.
upvoted 1 times
riccardoto 1 month, 3 weeks ago
Selected Answer: BD
I understand why many users here are voting AB, but in my opinion BD is more correct.
Using JWT or signed URLs will work for both users who cannot use cookies and users who cannot change the URL.
upvoted 1 times
B. Signed URLs: This method allows the media company to control who can access the video content by creating a time-limited URL with a
cryptographic signature.
upvoted 1 times
Some of the company’s users are unable to change the hardcoded URLs that they are using for access, which points to signed cookies.
upvoted 5 times
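For reference, a CloudFront signed URL can be generated with botocore's CloudFrontSigner; this sketch assumes the third-party rsa package, a CloudFront key pair ID, and a private key file, all of which are placeholders.

```python
from datetime import datetime, timedelta

import rsa                                    # third-party package, assumed available
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign with the private key that matches the public key registered with CloudFront.
    with open("private_key.pem", "rb") as f:           # hypothetical key file
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)   # hypothetical key pair ID

# URL that grants access to one object for one hour.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/clip.mp4",   # hypothetical distribution/path
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)
```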
A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the
data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data.
A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data
Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the
data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to
Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.
D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the
data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the
data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
Correct Answer: AB
"You can create streaming extract, transform, and load (ETL) jobs that run continuously, consume data from streaming sources like
Amazon Kinesis Data Streams, Apache Kafka, and Amazon Managed Streaming for Apache Kafka (Amazon MSK). The jobs cleanse and
transform the data, and then load the results into Amazon S3 data lakes or JDBC data stores."
upvoted 5 times
A uses Kinesis Data Streams for streaming, Kinesis Data Analytics for transformation, Kinesis Data Firehose for writing to S3, and Athena
for SQL queries on S3 data.
B uses Amazon MSK for streaming, AWS Glue for transformation and writing to S3, and Athena for SQL queries on S3 data.
upvoted 1 times
Option A is correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, Amazon Kinesis Data Analytics to
transform the data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the
transformed data in Amazon S3.
Option E is also correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, AWS Glue to transform the
data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the transformed data in
Amazon S3.
upvoted 3 times
Option C: This option is not ideal for streaming real-time data as AWS DMS is not optimized for real-time data ingestion.
Options D & E: These options are not recommended because the Amazon RDS query editor is not designed for querying data in S3, and it is
not efficient for running complex queries.
upvoted 3 times
A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup
solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on
AWS is automatically and securely transferred.
A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball
S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-
premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and
configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the
gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.
Correct Answer: D
https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html
upvoted 3 times
An application that is hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Traffic must not traverse the internet.
C. Configure the EC2 instances to use a NAT gateway to access the S3 bucket.
D. Establish an AWS Site-to-Site VPN connection between the VPC and the S3 bucket.
Correct Answer: B
A gateway VPC endpoint is a private way for Amazon EC2 instances in a VPC to access AWS services, such as Amazon S3, without having to
go through the internet. This can help to improve security and performance.
upvoted 1 times
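A gateway endpoint for S3 can be created with a single call; the VPC and route table IDs are placeholders, and the service name varies by Region.

```python
import boto3

ec2 = boto3.client("ec2")

# Keep EC2-to-S3 traffic on the AWS network by adding a gateway endpoint to the
# private subnets' route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",      # Region-specific service name
    RouteTableIds=["rtb-0123456789abcdef0"],       # hypothetical private route table
)
```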
An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The
company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the
other two applications process the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application
requests.
B. Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the
requesting application.
C. Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom
dataset. Point each application to its respective S3 bucket.
D. Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom
dataset. Point each application to its respective DynamoDB table.
Correct Answer: B
https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-
s3/
upvoted 9 times
Is it plausible that S3 Object Lambda can process terabytes of data in 60 seconds? The same link you shared states that the maximum
duration for a Lambda function used by S3 Object Lambda is 60 seconds.
Answer is A.
upvoted 2 times
Isn't just 60 seconds the maximum duration for a Lambda function used by S3 Object Lambda? How can it process terabytes of data in
60 seconds?
You are correct that the maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds.
Given the time constraint, it is not feasible to process terabytes of data within a single Lambda function execution.
S3 Object Lambda is designed for lightweight and real-time transformations rather than extensive processing of large datasets.
To handle terabytes of data, you would typically need to implement a distributed processing solution using services like Amazon EMR,
AWS Glue, or AWS Batch. These services are specifically designed to handle big data workloads and provide scalability and distributed
processing capabilities.
So, while S3 Object Lambda can be useful for lightweight processing tasks, it is not the appropriate tool for processing terabytes of
data within the execution time limits of a Lambda function.
upvoted 1 times
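For context on how option B works mechanically, an S3 Object Lambda function typically fetches the original object and writes a transformed response back. The sketch below removes a hypothetical "ssn" field from a small JSON object; it is an illustration only and is not intended for terabyte-scale payloads.

```python
import json

import boto3
import urllib3

s3 = boto3.client("s3")
http = urllib3.PoolManager()

def handler(event, context):
    """S3 Object Lambda handler that strips a hypothetical PII field before returning data."""
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL that S3 supplies.
    original = http.request("GET", ctx["inputS3Url"]).data
    record = json.loads(original)
    record.pop("ssn", None)                 # drop the PII field for non-privileged callers

    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=json.dumps(record).encode(),
    )
    return {"status_code": 200}
```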
A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect
needs to create a new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development
VPC is 192.168.0.0/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering
connection to the development VPC.
A. 10.0.1.0/32
B. 192.168.0.0/24
C. 192.168.1.0/32
D. 10.0.1.0/24
Correct Answer: B
A company deploys an application on five Amazon EC2 instances. An Application Load Balancer (ALB) distributes traffic to the instances by using
a target group. The average CPU usage on each of the instances is below 10% most of the time, with occasional surges to 65%.
A solutions architect needs to implement a solution to automate the scalability of the application. The solution must optimize the cost of the
architecture and must ensure that the application has enough CPU resources when surges occur.
A. Create an Amazon CloudWatch alarm that enters the ALARM state when the CPUUtilization metric is less than 20%. Create an AWS Lambda
function that the CloudWatch alarm invokes to terminate one of the EC2 instances in the ALB target group.
B. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set a
target tracking scaling policy that is based on the ASGAverageCPUUtilization metric. Set the minimum instances to 2, the desired capacity to
3, the maximum instances to 6, and the target value to 50%. Add the EC2 instances to the Auto Scaling group.
C. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set the
minimum instances to 2, the desired capacity to 3, and the maximum instances to 6. Add the EC2 instances to the Auto Scaling group.
D. Create two Amazon CloudWatch alarms. Configure the first CloudWatch alarm to enter the ALARM state when the average CPUUtilization
metric is below 20%. Configure the second CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is above 50%.
Configure the alarms to publish to an Amazon Simple Notification Service (Amazon SNS) topic to send an email message. After receiving the
message, log in to decrease or increase the number of EC2 instances that are running.
Correct Answer: D
Option A suggests creating a CloudWatch alarm to terminate an EC2 instance when CPU utilization is less than 20%. However, this
approach does not ensure that the application will have enough CPU resources during surges, as it only terminates instances when CPU
utilization is low, which may not meet the requirements.
Option C suggests creating an Auto Scaling group without any specific scaling policies or configurations. This approach does not address
the need for automated scaling based on CPU utilization, making it insufficient for the given requirements.
Option D suggests using CloudWatch alarms to send notifications via Amazon SNS and manually adjusting the number of instances based
on the received messages. This approach lacks automation and requires manual intervention, which does not optimize cost or meet the
requirement of automated scalability.
Therefore, Option B is the most appropriate solution in this case.
upvoted 1 times
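Option B's target tracking policy could be attached with boto3 along these lines; the Auto Scaling group name is a placeholder.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50%; the group scales out during surges
# and back in when utilization drops.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",               # hypothetical Auto Scaling group
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```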
A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an
Auto Scaling group and access an Amazon RDS DB instance.
The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A
solutions architect must update the design to use a second Availability Zone.
A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability
Zones. Configure the DB instance with connections to each network.
B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across
both Availability Zones. Configure the DB instance with connections to each network.
C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability
Zones. Configure the DB instance for Multi-AZ deployment.
D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across
both Availability Zones. Configure the DB instance for Multi-AZ deployment.
Correct Answer: D
A research laboratory needs to process approximately 8 TB of data. The laboratory requires sub-millisecond latencies and a minimum throughput
of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data.
A. Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to ALL. Import the raw data into the file system.
Mount the file system on the EC2 instances.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select
the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
C. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select
the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.
D. Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to NONE. Import the raw data into the file system.
Mount the file system on the EC2 instances.
Correct Answer: D
References:
https://aws.amazon.com/fsx/when-to-choose-fsx/
upvoted 9 times
https://aws.amazon.com/fsx/lustre/faqs/?nc=sn&loc=5
https://aws.amazon.com/fsx/netapp-ontap/faqs/
upvoted 3 times
A company needs to migrate a legacy application from an on-premises data center to the AWS Cloud because of hardware capacity constraints.
The application runs 24 hours a day, 7 days a week. The application’s database storage continues to grow over time.
A. Migrate the application layer to Amazon EC2 Spot Instances. Migrate the data storage layer to Amazon S3.
B. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon RDS On-Demand Instances.
C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
D. Migrate the application layer to Amazon EC2 On-Demand Instances. Migrate the data storage layer to Amazon RDS Reserved Instances.
Correct Answer: C
Amazon Aurora is a highly scalable, cloud-native relational database service that is designed to be compatible with MySQL and
PostgreSQL. It can automatically scale up to meet growing storage requirements, so it can accommodate the application's database
storage needs over time. By using Reserved Instances for Aurora, the cost savings will be significant over the long term.
upvoted 9 times
In this case, it may be more cost-effective to use Amazon RDS On-Demand Instances for the data storage layer. With RDS On-Demand
Instances, you pay only for the capacity you use and you can easily scale up or down the storage as needed.
upvoted 4 times
C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
Explanation:
Migrating the application layer to Amazon EC2 Reserved Instances allows you to reserve EC2 capacity in advance, providing cost savings
compared to On-Demand Instances. This is especially beneficial if the application runs 24/7.
Migrating the data storage layer to Amazon Aurora Reserved Instances provides cost optimization for the growing database storage
needs. Amazon Aurora is a fully managed relational database service that offers high performance, scalability, and cost efficiency.
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-
works.scaling
upvoted 1 times
A university research laboratory needs to migrate 30 TB of data from an on-premises Windows file server to Amazon FSx for Windows File Server.
The laboratory has a 1 Gbps network link that many other departments in the university share.
The laboratory wants to implement a data migration service that will maximize the performance of the data transfer. However, the laboratory
needs to be able to control the amount of bandwidth that the service uses to minimize the impact on other departments. The data migration must
take place within the next 5 days.
A. AWS Snowcone
C. AWS DataSync
Correct Answer: C
You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx
for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
upvoted 5 times
The laboratory needs to migrate a large amount of data (30 TB) within a relatively short timeframe (5 days) and limit the impact on other
departments' network traffic. Therefore, AWS DataSync can meet these requirements by providing fast and efficient data transfer with
network throttling capability to control bandwidth usage.
upvoted 3 times
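DataSync's bandwidth cap is set per task; a sketch follows, with placeholder location ARNs and a limit of roughly 50 MB/s so other departments keep most of the shared 1 Gbps link.

```python
import boto3

datasync = boto3.client("datasync")

# Task from the on-premises SMB location to the FSx for Windows File Server location,
# throttled so the transfer does not saturate the shared link.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-onprem-smb",  # hypothetical
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-fsx",    # hypothetical
    Name="lab-30tb-migration",
    Options={"BytesPerSecond": 50 * 1024 * 1024},
)
```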
A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures
video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket.
However, the videos are large in their raw format.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the
performance and scalability of the app while minimizing operational overhead.
B. Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
D. Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching.
E. Deploy an Auto Scaling group of Amazon EC2 instances to convert the video files to more appropriate formats.
Correct Answer: A
C: Use Amazon Elastic Transcoder to convert the video files to more appropriate formats: Amazon Elastic Transcoder is a service that can
help optimize the video format for mobile devices, reducing the size of the video files, and improving the playback performance. Elastic
Transcoder can also convert videos into multiple formats to support different devices and platforms.
upvoted 2 times
A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate
launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its
launch. However, the company wants to reduce costs when utilization decreases.
A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
Correct Answer: D
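Option D corresponds to Application Auto Scaling target tracking for an ECS service; a minimal sketch follows, with the cluster, service, capacities, and target value as assumptions.

```python
import boto3

aas = boto3.client("application-autoscaling")

RESOURCE_ID = "service/my-cluster/my-service"      # hypothetical ECS cluster/service

# Register the service's desired count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking keeps average service CPU near 60%, scaling out for launch traffic
# and back in when utilization drops.
aas.put_scaling_policy(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyName="ecs-cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "TargetValue": 60.0,
    },
)
```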
A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and
forth between NFS file systems in the two Regions on a periodic basis.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: A
A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use
SMB clients to access data. The solution must be fully managed.
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the
application server.
Correct Answer: C
A company wants to run an in-memory database for a latency-sensitive application that runs on Amazon EC2 instances. The application
processes more than 100,000 transactions each minute and requires high network throughput. A solutions architect needs to provide a cost-
effective network design that minimizes data transfer charges.
A. Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when
launching EC2 instances.
B. Launch all EC2 instances in different Availability Zones within the same AWS Region. Specify a placement group with partition strategy
when launching EC2 instances.
C. Deploy an Auto Scaling group to launch EC2 instances in different Availability Zones based on a network utilization target.
D. Deploy an Auto Scaling group with a step scaling policy to launch EC2 instances in different Availability Zones.
Correct Answer: A
As all the autoscaling nodes will also be on the same availability zones, (as per Placement groups with Cluster mode), this would provide
the low-latency network performance
Reference is below:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 2 times
A company that primarily runs its application servers on premises has decided to migrate to AWS. The company wants to minimize its need to
scale its Internet Small Computer Systems Interface (iSCSI) storage on premises. The company wants only its recently accessed data to remain
stored locally.
Which AWS solution should the company use to meet these requirements?
Correct Answer: A
Since the company wants only its recently accessed data to remain stored locally, the cached volume configuration would be the most
appropriate. It allows the company to keep frequently accessed data on-premises and reduce the need for scaling its iSCSI storage while
still providing access to all data through the AWS cloud. This configuration also provides low-latency access to frequently accessed data
and cost-effective off-site backups for less frequently accessed data.
upvoted 18 times
A company has multiple AWS accounts that use consolidated billing. The company runs several active high performance Amazon RDS for Oracle
On-Demand DB instances for 90 days. The company’s finance team has access to AWS Trusted Advisor in the consolidated billing account and all
other AWS accounts.
The finance team needs to use the appropriate AWS account to access the Trusted Advisor check recommendations for RDS. The finance team
must review the appropriate Trusted Advisor check to reduce RDS costs.
Which combination of steps should the finance team take to meet these requirements? (Choose two.)
A. Use the Trusted Advisor recommendations from the account where the RDS instances are running.
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time.
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization.
D. Review the Trusted Advisor check for Amazon RDS Idle DB Instances.
E. Review the Trusted Advisor check for Amazon Redshift Reserved Node Optimization.
Correct Answer: AC
If a DB instance has not had a connection for a prolonged period of time, you can delete the instance to reduce costs. A DB instance is
considered idle if the instance hasn't had a connection in the past 7 days. If persistent storage is needed for data on the instance, you
can use lower-cost options such as taking and retaining a DB snapshot. Manually created DB snapshots are retained until you delete
them.
https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-rds-idle-dbs-instances
upvoted 1 times
Steve_4542636 7 months ago
Selected Answer: BD
I went with B and D
upvoted 2 times
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization. This check can help identify cost savings
opportunities for RDS by identifying instances that can be covered by Reserved Instances. This can result in significant savings on RDS
costs.
upvoted 1 times
Option D is not the best choice because it only addresses the issue of idle instances and may not provide the most effective
recommendations to reduce RDS costs.
Option E is not relevant to this scenario since it is related to Amazon Redshift, not RDS.
upvoted 1 times
A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being
accessed or are rarely accessed.
Which solution will accomplish this goal with the LEAST operational overhead?
A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.
B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.
C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with
Amazon Athena.
D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon
CloudWatch Logs.
Correct Answer: D
I could not find any S3 Storage Lens examples online that show how to use Storage Lens to identify idle S3 buckets. Instead, I found
examples that use S3 server access logging. Hmm.
upvoted 1 times
A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted
files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase
access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a
purchase is made, customers receive an S3 signed URL that allows access to the files.
The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers
and wants to maintain or improve performance.
A. Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue
to use S3 signed URLs for access control.
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch
to CloudFront signed URLs for access control.
C. Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to
the closest Region. Continue to use S3 signed URLs for access control.
D. Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the
existing S3 bucket. Implement access control directly in the application.
Correct Answer: B
Deploying a CloudFront distribution with the existing S3 bucket as the origin will allow the company to serve the data to customers from
edge locations that are closer to them, reducing data transfer costs and improving performance.
Directing customer requests to the CloudFront URL and switching to CloudFront signed URLs for access control will enable customers to
access the data securely and efficiently.
upvoted 7 times
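For illustration, generating a CloudFront signed URL from the application might look roughly like this sketch (botocore's CloudFrontSigner plus the cryptography package); the key pair ID, private key path, and distribution domain are placeholders.

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"                   # placeholder CloudFront public key ID
PRIVATE_KEY_PATH = "cloudfront_private_key.pem"  # placeholder path

def rsa_signer(message):
    # CloudFront signed URLs are signed with SHA-1 / PKCS1v15 per the AWS docs.
    with open(PRIVATE_KEY_PATH, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/datasets/sample.parquet",  # placeholder
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)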
A - focuses on accelerating uploads to S3, which may not necessarily improve the performance needed for serving datasets to customers
C - helps with redundancy and data availability but does not necessarily offer cost savings for data transfer.
D - complex to implement, does not address data transfer cost
upvoted 1 times
https://www.examtopics.com/discussions/amazon/view/68990-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #311 Topic 1
A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes
must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency
and must minimize maintenance.
A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data
stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to pool messages from its own data
stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the
Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues
to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each
backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon OpenSearch
Service cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application
servers to search for the messages from OpenSearch Service and process them accordingly.
Correct Answer: C
A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon
EBS) data volumes attached to it. The application’s EC2 instance configuration and data need to be backed up nightly. The application also needs
to be recoverable in a different AWS Region.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different
Region.
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EC2
instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EBS
volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different
Availability Zone.
Correct Answer: C
https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
upvoted 2 times
A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform
so that authorized users can watch the company’s content on their mobile devices.
A. Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.
B. Set up IPsec VPN between the mobile app and the AWS environment to stream content.
D. Set up AWS Client VPN between the mobile app and the AWS environment to stream content.
Correct Answer: C
A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the
database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type
in anticipation of more users in the future.
Correct Answer: B
With Amazon Aurora Serverless for MySQL, the sales team can enjoy minimal downtime since the database is designed to automatically
scale to accommodate the increased traffic. Additionally, the service allows the customer to pay only for the capacity used, making it cost-
effective for infrequent access patterns.
Amazon RDS for MySQL could also be an option, but it requires the customer to select an instance type, and the database administrator
would need to monitor and adjust the instance size manually to accommodate the increasing traffic.
upvoted 2 times
A company experienced a breach that affected several applications in its on-premises data center. The attacker took advantage of vulnerabilities
in the custom applications that were running on the servers. The company is now migrating its applications to run on Amazon EC2 instances. The
company wants to implement a solution that actively scans for vulnerabilities on the EC2 instances and sends a report that details the findings.
A. Deploy AWS Shield to scan the EC2 instances for vulnerabilities. Create an AWS Lambda function to log any findings to AWS CloudTrail.
B. Deploy Amazon Macie and AWS Lambda functions to scan the EC2 instances for vulnerabilities. Log any findings to AWS CloudTrail.
C. Turn on Amazon GuardDuty. Deploy the GuardDuty agents to the EC2 instances. Configure an AWS Lambda function to automate the
generation and distribution of reports that detail the findings.
D. Turn on Amazon Inspector. Deploy the Amazon Inspector agent to the EC2 instances. Configure an AWS Lambda function to automate the
generation and distribution of reports that detail the findings.
Correct Answer: D
To use Amazon Inspector, the Amazon Inspector agent must be installed on the EC2 instances that need to be assessed. The agent collects
data about the instances and sends it to Amazon Inspector for analysis. Amazon Inspector then generates a report that details any
security vulnerabilities that were found and provides guidance on how to remediate them.
By configuring an AWS Lambda function, the company can automate the generation and distribution of reports that detail the findings.
This means that reports can be generated and distributed as soon as vulnerabilities are detected, allowing the company to take action
quickly.
upvoted 1 times
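A minimal sketch of the reporting Lambda described above, assuming Amazon Inspector (inspector2) is already enabled and an SNS topic exists for distribution; the topic ARN and the severity filter are assumptions.

import boto3
import json

inspector = boto3.client("inspector2")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:vuln-reports"  # assumed topic

def handler(event, context):
    findings = inspector.list_findings(
        filterCriteria={"severity": [{"comparison": "EQUALS", "value": "CRITICAL"}]},
        maxResults=50,
    )["findings"]
    summary = [
        {"title": f["title"], "resource": f["resources"][0]["id"]}
        for f in findings
    ]
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Amazon Inspector critical findings",
        Message=json.dumps(summary, indent=2),
    )
    return {"critical_findings": len(summary)}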
A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS)
queue. The company wants to reduce operational costs while maintaining its ability to process a growing number of messages that are added to
the queue.
B. Use Amazon EventBridge to turn off the EC2 instance when the instance is underutilized.
C. Migrate the script on the EC2 instance to an AWS Lambda function with the appropriate runtime.
D. Use AWS Systems Manager Run Command to run the script on demand.
Correct Answer: C
A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is
deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon
Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces.
The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the
COTS application can use the data that the legacy application produces.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store
the processed data in Amazon Redshift.
B. Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron
schedule to store the output files in Amazon S3.
C. Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda
function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
D. Use Amazon EventBridge to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract,
transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.
Correct Answer: A
A company recently migrated its entire IT environment to the AWS Cloud. The company discovers that users are provisioning oversized Amazon
EC2 instances and modifying security group rules without using the appropriate change control process. A solutions architect must devise a
strategy to track and audit these inventory and configuration changes.
Which actions should the solutions architect take to meet these requirements? (Choose two.)
D. Enable AWS Config and create rules for auditing and compliance purposes.
Correct Answer: AD
D. Enable AWS Config and create rules for auditing and compliance purposes. AWS Config provides a detailed inventory of the AWS
resources in your account, and continuously records changes to the configurations of those resources. By creating rules in AWS Config,
the company can automate the evaluation of resource configurations against desired state, and receive alerts when configurations drift
from compliance.
Options B, C, and E are not directly relevant to the requirement of tracking and auditing inventory and configuration changes.
upvoted 7 times
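For illustration, enabling AWS Config rules for the two behaviours in the scenario (oversized instances and loosened security groups) could look roughly like this sketch; the rule names and the allowed instance types are assumptions, and the managed-rule identifiers should be double-checked against the AWS Config managed rules list.

import boto3
import json

config = boto3.client("config")

# Managed rule that flags EC2 instances that are not of an approved type
# (the instance types below are just an example allow-list).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-instance-types",
        "Source": {"Owner": "AWS", "SourceIdentifier": "DESIRED_INSTANCE_TYPE"},
        "InputParameters": json.dumps({"instanceType": "t3.micro,t3.small"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Managed rule that flags security groups allowing unrestricted inbound SSH.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)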
A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage
the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a
solution that provides secure access to the EC2 instances.
Which solution will meet this requirement with the LEAST amount of administrative overhead?
A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.
Correct Answer: A
STS can generate short-lived credentials that provide temporary access to the EC2 instances for administering them.
The credentials can be generated on-demand each time access is needed, eliminating the risks of using permanent shared SSH keys.
No infrastructure like bastion hosts needs to be maintained.
The on-premises administrators can use the familiar SSH tools with the temporary keys.
upvoted 1 times
Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on
managed nodes, or allow connections to managed nodes that don't have a public IP address.
Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux,
macOS, and Windows Server managed nodes.
Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.
upvoted 1 times
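A small sketch of what Session Manager access looks like from boto3; the instance ID is a placeholder, and interactive shells normally go through the AWS CLI with the Session Manager plugin rather than this raw API call.

import boto3

ssm = boto3.client("ssm")

# Start a Session Manager session against an instance: no SSH key and no open
# inbound port needed, only the SSM agent plus an instance profile with the
# AmazonSSMManagedInstanceCore policy. This call returns a stream URL and token;
# interactive use is normally "aws ssm start-session" with the plugin.
session = ssm.start_session(Target="i-0123456789abcdef0")  # placeholder instance ID
print(session["SessionId"])

# Sessions can also be audited centrally:
for s in ssm.describe_sessions(State="Active")["Sessions"]:
    print(s["SessionId"], s["Target"], s["Owner"])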
Stanislav4907 6 months, 3 weeks ago
Selected Answer: C
You guys seriously don't want to go through SSM Session Manager for every single EC2 instance. You have to build a solution, not use services meant for one-time access.
A bastion will give you the option to manage thousands of EC2 machines from one. Plus you can use Ansible from it.
upvoted 2 times
The most secure way is definitely session manager therefore answer A is correct imho.
upvoted 2 times
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion
rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query
ingested data in near-real time.
Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams, Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use
Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to
the Redis channel to query the data.
Correct Answer: A
The reason Kruasan gave was that "Redshift would lack real-time capabilities." This is not true; Redshift can support near-real-time analytics. Evidence:
https://aws.amazon.com/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 1 times
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
Correct Answer: D
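A bucket policy along the lines of option D might look like the sketch below (boto3; the bucket name is a placeholder). Note that S3 now applies default encryption to new objects, so this pattern mainly matters when a specific encryption header or key must be enforced.

import boto3
import json

BUCKET = "example-protected-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedPuts",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Deny any upload that does not carry the encryption header.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))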
A solutions architect is designing a multi-tier application for a company. The application's users upload images from a mobile device. The
application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully.
The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the
original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application
tiers.
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to
invoke the Lambda function.
B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the
user when thumbnail generation is complete.
C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for
thumbnail generation. Alert the user through an application message that the image was received.
D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application
to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push
notification after thumbnail generation is complete.
Correct Answer: C
C proposes to use an Amazon Simple Queue Service (Amazon SQS) message queue to process image uploads and generate thumbnails.
SQS can help decouple the image upload process from the thumbnail generation process, which is helpful for asynchronous processing.
However, it may not be the most suitable option for quickly alerting the user that the image was received, as the user may have to wait
until the thumbnail is generated before receiving a notification.
upvoted 2 times
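A rough sketch of the upload path in option C: acknowledge the user immediately and queue the slow thumbnail work for the backend tier. The queue URL and field names are placeholders.

import boto3
import json

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/thumbnail-jobs"  # placeholder

def handle_upload(bucket, key, user_id):
    # Queue the slow thumbnail work for a separate tier to pick up later...
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": bucket, "key": key, "user_id": user_id}),
    )
    # ...and confirm receipt to the user right away, without waiting the
    # up-to-60-seconds that thumbnail generation can take.
    return {"status": "received", "key": key}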
A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over
HTTPS to indicate who attempted to access that particular entrance.
A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results
must be made available for the company’s security team to analyze.
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the
results to an Amazon S3 bucket.
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the
messages and save the results to an Amazon DynamoDB table.
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the
messages and save the results to an Amazon DynamoDB table.
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor
data can be written directly to an S3 bucket by way of the VPC endpoint.
Correct Answer: B
API Gateway is a highly scalable and available service that can be used to create and expose RESTful APIs.
Lambda is a serverless compute service that can be used to process events and data.
DynamoDB is a NoSQL database that can be used to store data in a scalable and highly available way.
upvoted 2 times
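For illustration, the Lambda function behind the API Gateway endpoint could be as small as the sketch below; the DynamoDB table name and the payload fields are assumptions.

import boto3
import json
import time

table = boto3.resource("dynamodb").Table("BadgeScans")  # assumed table name

def handler(event, context):
    # API Gateway (proxy integration) delivers the sensor payload in the body.
    scan = json.loads(event["body"])
    table.put_item(
        Item={
            "badge_id": scan["badge_id"],          # assumed field names
            "scanned_at": int(time.time() * 1000),
            "entrance": scan.get("entrance", "unknown"),
            "granted": scan.get("granted", False),
        }
    )
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}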
A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from
an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB)
of data.
The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency.
Which solution will meet these requirements with the LEAST amount of change to the company's existing infrastructure?
A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing
applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3
bucket that contains the files.
B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure
the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and
restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.
C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached
volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage
volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to
an Amazon EC2 instance.
D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume.
Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure
scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS)
volume and attach the EBS volume to an Amazon EC2 instance.
Correct Answer: C
Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored
volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
upvoted 1 times
Option D is not the best solution because a Volume Gateway stored volume does not provide immediate access to all file types and would
require additional steps to retrieve data from Amazon S3, which can result in latency for end-users.
upvoted 2 times
Option D states: "Provision an AWS Storage Gateway Volume Gateway stored *volume* with the same amount of disk space as the
existing file storage volume.".
Notice that it states volume and not volumes, which would be the only way to match the information that the question provides.
The initial question states that the on-premises volume is hundreds of TB in size.
Therefore, the only logical and viable answer is C.
Feel free to prove me wrong
upvoted 3 times
A company is hosting a web application from an Amazon S3 bucket. The application uses Amazon Cognito as an identity provider to authenticate
users and return a JSON Web Token (JWT) that provides access to protected resources that are stored in another S3 bucket.
Upon deployment of the application, users report errors and are unable to access the protected content. A solutions architect must resolve this
issue by providing proper permissions so that users can access the protected content.
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
B. Update the S3 ACL to allow the application to access the protected content.
C. Redeploy the application to Amazon S3 to prevent eventually consistent reads in the S3 bucket from affecting the ability of users to access
the protected content.
D. Update the Amazon Cognito pool to use custom attribute mappings within the identity pool and grant users the proper permissions to
access the protected content.
Correct Answer: A
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
Explanation:
Amazon Cognito provides authentication and user management services for web and mobile applications.
In this scenario, the application is using Amazon Cognito as an identity provider to authenticate users and obtain JSON Web Tokens (JWTs).
The JWTs are used to access protected resources stored in another S3 bucket.
To grant users access to the protected content, the proper IAM role needs to be assumed by the identity pool in Amazon Cognito.
By updating the Amazon Cognito identity pool with the appropriate IAM role, users will be authorized to access the protected content in
the S3 bucket.
upvoted 3 times
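A minimal sketch of the fix in option A, assigning an IAM role to authenticated identities in the identity pool; the pool ID and role ARNs are placeholders.

import boto3

cognito = boto3.client("cognito-identity")

# Attach an IAM role that grants read access to the protected-content bucket
# to authenticated identities in the pool.
cognito.set_identity_pool_roles(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
    Roles={
        "authenticated": "arn:aws:iam::111122223333:role/ProtectedContentReadRole",
        "unauthenticated": "arn:aws:iam::111122223333:role/PublicOnlyRole",
    },
)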
Option C is incorrect because redeploying the application to Amazon S3 will not resolve the issue related to user access permissions.
Option D is incorrect because updating custom attribute mappings in Amazon Cognito will not directly grant users the proper
permissions to access the protected content.
upvoted 2 times
D is the right answer, using custom attributes that are added to the JWT and used to grant permissions in S3. See
https://docs.aws.amazon.com/cognito/latest/developerguide/using-attributes-for-access-control-policy-example.html for an example.
upvoted 2 times
A suggests updating the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content. This is a valid
solution, as it would grant authenticated users the necessary permissions to access the protected content.
upvoted 4 times
An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3
APIs and overwrites if the same object is uploaded again. For the first 30 days after upload, the objects will be accessed frequently. The objects
will be used less frequently after 30 days, but the access patterns for each object will be inconsistent. The company must optimize its S3 storage
costs while maintaining high availability and resiliency of stored assets.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
E. Move assets to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Correct Answer: AB
Explanation:
A. Moving assets to S3 Intelligent-Tiering after 30 days: This storage class automatically analyzes the access patterns of objects and moves
them between frequent access and infrequent access tiers. Since the objects will be accessed frequently for the first 30 days, storing them
in the frequent access tier during that period optimizes performance. After 30 days, when the access patterns become inconsistent, S3
Intelligent-Tiering will automatically move the objects to the infrequent access tier, reducing storage costs.
B. Configuring an S3 Lifecycle policy to clean up incomplete multipart uploads: Multipart uploads are used for large objects, and
incomplete multipart uploads can consume storage space if not cleaned up. By configuring an S3 Lifecycle policy to clean up incomplete
multipart uploads, unnecessary storage costs can be avoided.
upvoted 1 times
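A combined lifecycle configuration covering both actions (A and B) might look like this sketch; the bucket name and the 7-day abort window are assumptions.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-image-assets"  # placeholder

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # A: let Intelligent-Tiering handle the unpredictable access
                # patterns after the first 30 days of frequent access.
                "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
            },
            {
                "ID": "abort-incomplete-multipart",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # B: stop paying for parts of multipart uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
        ]
    },
)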
antropaws 4 months, 1 week ago
Selected Answer: AD
AD.
B makes no sense because multipart uploads overwrite objects that are already uploaded. The question never says this is a problem.
upvoted 1 times
A. Move assets to S3 Intelligent-Tiering after 30 days. This will automatically move objects between two access tiers based on changing
access patterns and save costs by reducing the number of objects stored in the expensive tier.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads. This will help to reduce storage costs by removing incomplete
multipart uploads that are no longer needed.
upvoted 2 times
and D: as "For the first 30 days after upload, the objects will be accessed frequently"
Intelligent checks and if file haven't been access for 30 consecutive days and send infrequent access.So if somebody accessed the file 20
days after the upload with the intelligent process, file will be moved to Infrequent Access tier after 50 days. Which will reflect against the
COST.
"S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent
Access tier and after 90 days of no access to the Archive Instant Access tier. For data that does not require immediate retrieval, you can set
up S3 Intelligent-Tiering to monitor and automatically move objects that aren’t accessed for 180 days or more to the Deep Archive Access
tier to realize up to 95% in storage cost savings."
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 4 times
"S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed." and for the first 30 days data is
frequently accessed lol.
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 1 times
Question #327 Topic 1
A solutions architect must secure a VPC network that hosts Amazon EC2 instances. The EC2 instances contain highly sensitive data and run in a
private subnet. According to company policy, the EC2 instances that run in the VPC can access only approved third-party software repositories on
the internet for software product updates that use the third party’s URL. Other internet traffic must be blocked.
A. Update the route table for the private subnet to route the outbound traffic to an AWS Network Firewall firewall. Configure domain list rule
groups.
B. Set up an AWS WAF web ACL. Create a custom set of rules that filter traffic requests based on source and destination IP address range
sets.
C. Implement strict inbound security group rules. Configure an outbound rule that allows traffic only to the authorized software repositories on
the internet by specifying the URLs.
D. Configure an Application Load Balancer (ALB) in front of the EC2 instances. Direct all outbound traffic to the ALB. Use a URL-based rule
listener in the ALB’s target group for outbound access to the internet.
Correct Answer: A
https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering
upvoted 9 times
Highly sensitive EC2 instances in private subnet that can access only approved URLs
Other internet access must be blocked
Security groups act as a firewall at the instance level and can control both inbound and outbound traffic.
upvoted 1 times
Option A is not the best solution as it involves the use of AWS Network Firewall, which may introduce additional operational overhead.
While domain list rule groups can be used to block all internet traffic except for the approved third-party software repositories, this
solution is more complex than necessary for this scenario.
upvoted 2 times
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the
website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer
(ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.
What should a solutions architect recommend to ensure that all the requests are processed successfully?
A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances
based on network traffic.
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic
for the API to handle.
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive
requests from the website for later processing by the EC2 instances.
Correct Answer: D
In contrast, if you think the answer is B, the issue is the sudden spike. Maybe the auto scaling is not acting fast enough and some orders
are lost. So, B is not correct.
upvoted 2 times
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run
regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a
report of each instance’s patch status.
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance
on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the
EC2 instances on a regular schedule.
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Correct Answer: D
AWS Systems Manager Patch Manager is a service that helps you automate the process of patching Windows and Linux instances. It
provides a simple, automated way to patch your instances with the latest security patches and updates. Patch Manager helps you
maintain compliance with security policies and regulations by providing detailed reports on the patch status of your instances.
upvoted 1 times
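For illustration, kicking off Patch Manager against a tagged fleet and reading back compliance could look roughly like this; the tag value and instance ID are placeholders, and in practice the patch run is usually attached to a maintenance window rather than invoked ad hoc.

import boto3

ssm = boto3.client("ssm")

# Run the standard patch document against instances tagged with a patch group.
ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],  # assumed tag value
)

# Patch compliance can then be reported per instance.
states = ssm.describe_instance_patch_states(InstanceIds=["i-0123456789abcdef0"])
for s in states["InstancePatchStates"]:
    print(s["InstanceId"], s["MissingCount"], s["FailedCount"])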
http://webcache.googleusercontent.com/search?q=cache:FbFTc6XKycwJ:https://medium.com/aws-architech/use-case-aws-inspector-vs-
guardduty-3662bf80767a&hl=vi&gl=kr&strip=1&vwsrc=0
upvoted 2 times
A company is planning to store data on Amazon RDS DB instances. The company must encrypt the data at rest.
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
B. Create an encryption key. Store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.
C. Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.
D. Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.
Correct Answer: A
Secrets Manager stores actual secrets like passwords, passphrases, and anything else you want encrypted. Secrets Manager uses KMS to encrypt its
secrets; it would be circular to get an encryption key from KMS and then use Secrets Manager to encrypt that encryption key.
upvoted 1 times
To encrypt data at rest in Amazon RDS, you can use the encryption feature of Amazon RDS, which uses AWS Key Management Service
(AWS KMS). With this feature, Amazon RDS encrypts each database instance with a unique key. This key is stored securely by AWS KMS. You
can manage your own keys or use the default AWS-managed keys. When you enable encryption for a DB instance, Amazon RDS encrypts
the underlying storage, including the automated backups, read replicas, and snapshots.
upvoted 2 times
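A minimal sketch of option A with boto3; identifiers, sizes, and the password are placeholders. Note that encryption must be enabled when the instance is created; an existing unencrypted instance has to be migrated via an encrypted snapshot copy.

import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Create (or reuse) a KMS key, then enable encryption at rest at creation time.
key_id = kms.create_key(Description="RDS encryption key")["KeyMetadata"]["KeyId"]

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # placeholder
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",        # placeholder; use Secrets Manager in practice
    StorageEncrypted=True,                  # encryption at rest
    KmsKeyId=key_id,
)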
Amazon RDS provides multiple options for encrypting data at rest. AWS Key Management Service (KMS) is used to manage the keys used
to encrypt and decrypt the data. Therefore, a solution architect should create a key in AWS KMS and enable encryption for the DB
instances to encrypt the data at rest.
upvoted 1 times
https://www.examtopics.com/discussions/amazon/view/80753-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #331 Topic 1
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15
Mbps and cannot exceed 70% utilization.
Correct Answer: A
At roughly 70% utilization of the 15 Mbps link (about 10.5 Mbps), the company can transfer only about 3.4 TB in 30 days.
That is far short of the 20 TB that must be migrated, so transferring over the network is not feasible and an offline device such as AWS Snowball is required.
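The arithmetic behind that conclusion, as a quick sanity check in Python:

# Back-of-the-envelope check of what 70% of a 15 Mbps link can move in 30 days.
link_mbps = 15 * 0.70                       # ~10.5 megabits per second usable
bytes_per_day = link_mbps / 8 * 1e6 * 86400
tb_in_30_days = bytes_per_day * 30 / 1e12
print(round(tb_in_30_days, 1))              # ~3.4 TB, far short of 20 TB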
A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can
be accessed only by authorized users. The files must be downloaded securely to the employees’ devices.
The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity.
Which solution will meet these requirements?
A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees’
IP addresses.
B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active
Directory. Configure AWS Client VPN.
C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS IAM Identity Center (AWS Single
Sign-On).
Correct Answer: B
A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto
Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the
month-end financial calculation batch runs. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the
application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
Correct Answer: C
Configuring a simple scaling policy based on CPU utilization or adding Amazon CloudFront distribution or Amazon ElastiCache will not
directly address the issue of handling the monthly peak workload.
upvoted 1 times
The most appropriate solution to handle the increased workload during the monthly batch run and avoid downtime would be to configure
an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
upvoted 2 times
To set up a scheduled scaling policy in EC2 Auto Scaling, you need to specify the following:
Start time and date: The date and time when the scaling event should begin.
Desired capacity: The number of instances that you want to have running after the scaling event.
Recurrence: The frequency with which the scaling event should occur. This can be a one-time event or a recurring event, such as daily or
weekly.
upvoted 1 times
A company wants to give a customer the ability to use on-premises Microsoft Active Directory to download files that are stored in Amazon S3. The
customer’s application uses an SFTP client to download the files.
Which solution will meet these requirements with the LEAST operational overhead and no changes to the customer’s application?
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.
B. Set up AWS Database Migration Service (AWS DMS) to synchronize the on-premises client with Amazon S3. Configure integrated Active
Directory authentication.
C. Set up AWS DataSync to synchronize between the on-premises location and the S3 location by using AWS IAM Identity Center (AWS Single
Sign-On).
D. Set up a Windows Amazon EC2 instance with SFTP to connect the on-premises client with Amazon S3. Integrate AWS Identity and Access
Management (IAM).
Correct Answer: A
https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html
upvoted 3 times
Question #335 Topic 1
A company is experiencing sudden increases in demand. The company needs to provision large Amazon EC2 instances from an Amazon Machine
Image (AMI). The instances will run in an Auto Scaling group. The company needs a solution that provides minimum initialization latency to meet
the demand.
A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto
Scaling group.
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace
the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that
modifies the AMI in the Auto Scaling group.
D. Use Amazon EventBridge to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an
event source in EventBridge.
Correct Answer: B
- Need to launch large EC2 instances quickly from an AMI in an Auto Scaling group
- Looking to minimize instance initialization latency
upvoted 1 times
Provisioning an AMI by using the fast snapshot restore feature is a fast and efficient way to create an AMI. Once the AMI is created, it can
be replaced in the Auto Scaling group without any downtime or disruption to running instances.
upvoted 1 times
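For illustration, enabling fast snapshot restore on the AMI's backing snapshot is a single API call; the snapshot ID and Availability Zones are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Enable fast snapshot restore so volumes created from this snapshot are fully
# initialized at launch, removing the first-read latency penalty.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],        # placeholders
    SourceSnapshotIds=["snap-0123456789abcdef0"],          # placeholder
)

# Registering an AMI from that snapshot and swapping it into the Auto Scaling
# group's launch template happens outside this sketch.
print(ec2.describe_fast_snapshot_restores()["FastSnapshotRestores"])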
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
Amazon Data Lifecycle Manager helps automate snapshot and AMI management
upvoted 2 times
A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon
EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.
What should a solutions architect do to meet this requirement with the LEAST operational effort?
A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the
KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.
B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the
SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these
parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.
C. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon
EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict the access to the file on the file system so that
the application can read the file and that only super users can modify the file. Implement an AWS Lambda function that rotates the key in
Aurora every 14 days and writes new credentials into the file.
D. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application
uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS
Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.
Correct Answer: A
A company has deployed a web application on AWS. The company hosts the backend database on Amazon RDS for MySQL with a primary DB
instance and five read replicas to support scaling needs. The read replicas must lag no more than 1 second behind the primary DB instance. The
database routinely runs scheduled stored procedures.
As traffic on the website increases, the replicas experience additional lag during periods of peak load. A solutions architect must reduce the
replication lag as much as possible. The solutions architect must minimize changes to the application code and must minimize ongoing
operational overhead.
A. Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace
the stored procedures with Aurora MySQL native functions.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the application to check the cache before the application
queries the database. Replace the stored procedures with AWS Lambda functions.
C. Migrate the database to a MySQL database that runs on Amazon EC2 instances. Choose large, compute optimized EC2 instances for all
replica nodes. Maintain the stored procedures on the EC2 instances.
D. Migrate the database to Amazon DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput,
and configure on-demand capacity scaling. Replace the stored procedures with DynamoDB streams.
Correct Answer: A
Option B is not the best solution since adding an ElastiCache for Redis cluster does not address the replication lag issue, and the cache
may not have the most up-to-date information. Additionally, replacing the stored procedures with AWS Lambda functions adds additional
complexity and may not improve performance.
upvoted 3 times
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 2 times
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
A solutions architect must create a disaster recovery (DR) plan for a high-volume software as a service (SaaS) platform. All data for the platform
is stored in an Amazon Aurora MySQL DB cluster.
A. Use MySQL binary log replication to an Aurora cluster in the secondary Region. Provision one DB instance for the Aurora cluster in the
secondary Region.
B. Set up an Aurora global database for the DB cluster. When setup is complete, remove the DB instance from the secondary Region.
C. Use AWS Database Migration Service (AWS DMS) to continuously replicate data to an Aurora cluster in the secondary Region. Remove the
DB instance from the secondary Region.
D. Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region.
Correct Answer: D
In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL:
You can replicate data across multiple Regions by using an Aurora global database. For details, see High availability across AWS Regions
with Aurora global databases
You can create an Aurora read replica of an Aurora MySQL DB cluster in a different AWS Region, by using MySQL binary log (binlog)
replication. Each cluster can have up to five read replicas created this way, each in a different Region.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times
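A rough sketch of option D with boto3, promoting the existing cluster to a global database and adding a secondary cluster with one DB instance in the DR Region; all identifiers, Regions, and the instance class are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing cluster into a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="saas-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:saas-primary",
)

# Add a secondary cluster in the DR Region with at least one DB instance.
# (Engine version should match the primary; omitted here for brevity.)
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_cluster(
    DBClusterIdentifier="saas-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="saas-global",
)
rds_dr.create_db_instance(
    DBInstanceIdentifier="saas-secondary-1",
    DBClusterIdentifier="saas-secondary",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)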
https://aws.amazon.com/rds/aurora/pricing/
upvoted 1 times
luisgu Highly Voted 4 months, 1 week ago
Selected Answer: B
MOST cost-effective --> B
See section "Creating a headless Aurora DB cluster in a secondary Region" on the link
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
"Although an Aurora global database requires at least one secondary Aurora DB cluster in a different AWS Region than the primary, you
can use a headless configuration for the secondary cluster. A headless secondary Aurora DB cluster is one without a DB instance. This type
of configuration can lower expenses for an Aurora global database. In an Aurora DB cluster, compute and storage are decoupled. Without
the DB instance, you're not charged for compute, only for storage. If it's set up correctly, a headless secondary's storage volume is kept in-
sync with the primary Aurora DB cluster."
upvoted 5 times
Not A because it achieves the same, would be equally costly and adds overhead.
upvoted 2 times
A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance.
Management says the application must be made more secure with the least amount of programming effort.
A. Use AWS Key Management Service (AWS KMS) to create keys. Configure the application to load the database credentials from AWS KMS.
Enable automatic key rotation.
B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secret
Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS
for MySQL database using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter
Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the
application user in the RDS for MySQL database using Parameter Store.
Correct Answer: C
https://www.examtopics.com/discussions/amazon/view/46483-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 8 times
A media company hosts its website on AWS. The website application’s architecture includes a fleet of Amazon EC2 instances behind an
Application Load Balancer (ALB) and a database that is hosted on Amazon Aurora. The company’s cybersecurity team reports that the application
is vulnerable to SQL injection.
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
B. Create an ALB listener rule to reply to SQL injections with a fixed response.
C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.
Correct Answer: A
By using AWS WAF in front of the ALB and associating the appropriate web ACLs with AWS WAF, the company can protect its website
application from SQL injection attacks. AWS WAF will inspect incoming traffic to the website application and block requests that match the
defined SQL injection patterns in the web ACLs. This will help to prevent SQL injection attacks from reaching the application, thereby
improving the overall security posture of the application.
upvoted 2 times
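For illustration, attaching the AWS managed SQL injection rule set to the ALB could look roughly like this sketch; the web ACL name, metric names, and ALB ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL with the AWS managed SQL injection rule group.
acl = wafv2.create_web_acl(
    Name="app-sqli-protection",
    Scope="REGIONAL",                      # REGIONAL scope is used for an ALB
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-sqli-ruleset",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "app-sqli-protection",
    },
)["Summary"]

# Associate the web ACL with the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)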
A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon
QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to
enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce
column-level access control. Use Amazon S3 as the data source in QuickSight.
C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-
level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level
access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
Correct Answer: D
https://www.examtopics.com/discussions/amazon/view/80865-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #342 Topic 1
A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling
group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to
provision the capacity 30 minutes before the jobs run.
Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to
analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group’s
desired capacity.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target
value for the metric to 60%.
B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum
capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
C. Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU
utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.
D. Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group
reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.
Correct Answer: C
Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times
Option C, creating a predictive scaling policy for the Auto Scaling group, is not necessary in this scenario since the company does not have
the resources to analyze the required capacity trends for the Auto Scaling group counts. This would require analyzing the required
capacity trends for the Auto Scaling group counts to determine the appropriate scaling policy.
upvoted 3 times
The job runs weekly, so the easiest way to achieve this with the LEAST operational overhead seems to be scheduled scaling.
Both solutions achieve the goal; B imho does it better, considering the limitations.
Predictive Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
Scheduled Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 2 times
samcloudaws 6 months, 4 weeks ago
Selected Answer: B
Scheduled scaling seems mostly simplest way to solve this
upvoted 3 times
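A sketch of option B with boto3, scaling out 30 minutes before the weekly run and back in afterwards; the Auto Scaling group name, cron expressions (UTC), and capacities are assumptions for this scenario.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out 30 minutes before the weekly batch window.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",            # placeholder
    ScheduledActionName="pre-batch-scale-out",
    Recurrence="30 23 * * 6",   # e.g. Saturday 23:30 UTC if jobs start Sunday 00:00
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in once the jobs have finished.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="post-batch-scale-in",
    Recurrence="0 6 * * 0",     # Sunday 06:00 UTC, assumed end of the batch window
    MinSize=1,
    MaxSize=12,
    DesiredCapacity=1,
)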
A solutions architect is designing a company’s disaster recovery (DR) architecture. The company has a MySQL database that runs on an Amazon
EC2 instance in a private subnet with scheduled backup. The DR design needs to include multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the MySQL database to multiple EC2 instances. Configure a standby EC2 instance in the DR Region. Turn on replication.
B. Migrate the MySQL database to Amazon RDS. Use a Multi-AZ deployment. Turn on read replication for the primary DB instance in the
different Availability Zones.
C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster in the primary Region. Host the secondary
DB cluster in the DR Region.
D. Store the scheduled backup of the MySQL database in an Amazon S3 bucket that is configured for S3 Cross-Region Replication (CRR). Use
the data backup to restore the database in the DR Region.
Correct Answer: C
A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse
messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as
large as 50 MB.
Which solution will meet these requirements with the FEWEST changes to the code?
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
B. Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.
C. Change the limit in Amazon SQS to handle messages that are larger than 256 KB.
D. Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location
in the messages.
Correct Answer: A
Amazon SQS has a limit of 256 KB for the size of messages. To handle messages larger than 256 KB, the Amazon SQS Extended Client
Library for Java can be used. This library allows messages larger than 256 KB to be stored in Amazon S3 and provides a way to retrieve and
process them. Using this solution, the application code can remain largely unchanged while still being able to process messages up to 50
MB in size.
upvoted 7 times
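The Extended Client Library itself is Java, which matches the application here; the Python sketch below only illustrates the claim-check pattern that the library automates (payloads parked in S3, a small pointer sent through SQS), not the library's actual API. The bucket and queue values are placeholders.

import boto3
import json
import uuid

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
BUCKET = "example-large-payloads"                                               # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/parser-queue"     # placeholder

def send_large_message(payload: bytes):
    # Park the oversized payload in S3 and send only a small pointer through SQS,
    # which is what the SQS Extended Client Library automates for Java apps.
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )

def receive_large_messages():
    msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", [])
    for m in msgs:
        pointer = json.loads(m["Body"])
        body = s3.get_object(Bucket=pointer["s3_bucket"], Key=pointer["s3_key"])["Body"].read()
        yield body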
To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used.
upvoted 1 times
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html
upvoted 1 times
A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization
techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users.
The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company's user
base grows while providing the lowest login latency possible.
A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application
globally.
B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load
Balancer to serve the web application globally.
C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web
application globally.
D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic
Beanstalk to serve the web application globally.
Correct Answer: A
Lambda@Edge is a serverless compute service that runs code at edge locations of the AWS network. It is a good choice for this
scenario because it can perform authorization checks at the edge, which reduces login latency.
Amazon CloudFront is a content delivery network (CDN) that can be used to serve web content globally. It is a good choice for this
scenario because it can cache web content closer to users, which can improve the performance of the web application.
upvoted 1 times
Lambda@Edge is a service that lets you run AWS Lambda functions globally closer to users, providing lower latency and faster response
times. It can also handle authorization logic at the edge to secure content in CloudFront. For this scenario, Lambda@Edge can provide
authorization for the web application while leveraging the low-latency benefit of running at the edge.
upvoted 2 times
bdp123 7 months, 1 week ago
Selected Answer: A
CloudFront to serve globally
upvoted 1 times
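As a rough sketch, here is a Lambda@Edge viewer-request handler in Python that rejects requests lacking a session cookie at the edge; the cookie name and the check itself are placeholders, since a real deployment would validate a Cognito-issued token instead.

```python
# Minimal Lambda@Edge viewer-request sketch: allow the request through only if
# an auth cookie is present; otherwise return a 401 directly from the edge.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    cookies = ""
    if "cookie" in headers:
        cookies = ";".join(h["value"] for h in headers["cookie"])

    # Placeholder check: a real function would verify a Cognito JWT here.
    if "session-token=" in cookies:
        return request  # continue to the cache / origin

    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "headers": {
            "content-type": [{"key": "Content-Type", "value": "text/plain"}],
        },
        "body": "Authentication required",
    }
```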
A company has an aging network-attached storage (NAS) array in its data center. The NAS array presents SMB shares and NFS shares to client
workstations. The company does not want to purchase a new NAS array. The company also does not want to incur the cost of renewing the NAS
array’s support contract. Some of the data is accessed frequently, but much of the data is inactive.
A solutions architect needs to implement a solution that migrates the data to Amazon S3, uses S3 Lifecycle policies, and maintains the same look
and feel for the client workstations. The solutions architect has identified AWS Storage Gateway as part of the solution.
Which type of storage gateway should the solutions architect provision to meet these requirements?
A. Volume Gateway
B. Tape Gateway
C. Amazon FSx File Gateway
D. Amazon S3 File Gateway
Correct Answer: D
In this case, the company's aging NAS array can be replaced with an Amazon S3 File Gateway that presents the same NFS and SMB shares
to the client workstations. The data can then be migrated to Amazon S3 and managed using S3 Lifecycle policies
upvoted 5 times
- Why not choose C? Because the solution must store the data in Amazon S3, which Amazon FSx File Gateway does not do. (Answer D is the correct answer.)
https://aws.amazon.com/storagegateway/file/s3/
upvoted 1 times
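For the S3 Lifecycle part of the solution, a boto3 sketch of a rule that tiers inactive objects down to colder storage classes; the bucket name, prefix, and transition days are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="nas-migration-bucket",                      # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-inactive-data",
                "Filter": {"Prefix": ""},               # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```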
A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular
instance family and various instance sizes based on the current needs of the company.
The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance
family and sizes in the next 6 months based on application popularity and usage.
Correct Answer: D
EC2 Instance Savings Plans provide savings up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family
in a chosen AWS Region (for example, M5 in Virginia). These plans automatically apply to usage regardless of size (for example, m5.xlarge,
m5.2xlarge, etc.), OS (for example, Windows, Linux, etc.), and tenancy (Host, Dedicated, Default) within the specified family in a Region.
upvoted 12 times
A company collects data from a large number of participants who use wearable devices. The company stores the data in an Amazon DynamoDB
table and uses applications to analyze the data. The data workload is constant and predictable. The company wants to stay at or below its
forecasted budget for DynamoDB.
A. Use provisioned mode and DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). Reserve capacity for the forecasted workload.
B. Use provisioned mode. Specify the read capacity units (RCUs) and write capacity units (WCUs).
C. Use on-demand mode. Set the read capacity units (RCUs) and write capacity units (WCUs) high enough to accommodate changes in the
workload.
D. Use on-demand mode. Specify the read capacity units (RCUs) and write capacity units (WCUs) with reserved capacity.
Correct Answer: A
Option D does not actually allow reserving capacity with on-demand mode.
So option A leverages provisioned mode, Standard-IA, and reserved capacity to meet the requirements in a cost-optimal way.
upvoted 1 times
"With provisioned capacity you pay for the provision of read and write capacity units for your DynamoDB tables. Whereas with DynamoDB
on-demand you pay per request for the data reads and writes that your application performs on your tables."
upvoted 1 times
Charly0710 7 months ago
Selected Answer: B
The data workload is constant and predictable, so on-demand mode is not the right fit.
DynamoDB Standard-IA is not necessary in this context.
upvoted 1 times
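For context, a boto3 sketch of a table created in provisioned mode with explicit RCUs and WCUs sized to a forecast; the table name, key schema, and throughput numbers are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="wearable-readings",                      # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "device_id", "AttributeType": "S"},
        {"AttributeName": "timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "device_id", "KeyType": "HASH"},
        {"AttributeName": "timestamp", "KeyType": "RANGE"},
    ],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,                       # sized to the predictable workload
        "WriteCapacityUnits": 500,
    },
)
```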
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an
AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the
database with the acquiring company’s AWS account in ap-southeast-3.
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company’s
AWS account.
B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring
company’s AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company’s AWS account to the KMS key alias.
Share the snapshot with the acquiring company's AWS account.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3
bucket policy to allow access from the acquiring company’s AWS account.
Correct Answer: B
C - Wouldn't recommend this option, because using a different AWS managed KMS key will not allow the acquiring company's AWS
account to access the encrypted data.
D. - Don't risk it for a biscuit and get fired!!!! - by downloading the database snapshot and uploading it to an Amazon S3 bucket. This will
increase the risk of data leakage or loss of confidentiality during the transfer process.
B - CORRECT
upvoted 3 times
Option A, creating an unencrypted snapshot, is not recommended as it will compromise the confidentiality of the data. Option C, creating
a snapshot that uses a different AWS managed KMS key, does not provide any additional security and will unnecessarily complicate the
solution. Option D, downloading the database snapshot and uploading it to an S3 bucket, is not secure as it can expose the data during
transit.
Therefore, the correct option is B: Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the
snapshot with the acquiring company's AWS account.
upvoted 1 times
Then:
Copy and share the DB cluster snapshot
upvoted 2 times
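Put together, a boto3 sketch of answer B's flow: allow the acquiring account to use the customer managed key, then share the encrypted cluster snapshot. The key ARN, snapshot identifier, and account IDs are hypothetical; answer B edits the key policy itself, and the KMS grant below is shown only as a programmatic equivalent.

```python
import boto3

kms = boto3.client("kms", region_name="ap-southeast-3")
rds = boto3.client("rds", region_name="ap-southeast-3")

ACQUIRER_ACCOUNT = "999999999999"                       # hypothetical acquiring account

# Allow the acquiring account to use the customer managed key for decryption.
kms.create_grant(
    KeyId="arn:aws:kms:ap-southeast-3:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal=f"arn:aws:iam::{ACQUIRER_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)

# Share the encrypted Aurora cluster snapshot with the acquiring account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="confidential-db-snapshot",   # hypothetical snapshot
    AttributeName="restore",
    ValuesToAdd=[ACQUIRER_ACCOUNT],
)
```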
A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance in the us-east-1 Region to store customer transactions.
The company needs high availability and automatic recovery for the DB instance.
The company must also run reports on the RDS database several times a year. The report process causes transactions to take longer than usual to
post to the customers’ accounts. The company needs a solution that will improve the performance of the report process.
A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment.
B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment in another Availability Zone.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica.
Correct Answer: AC
A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment. This will provide high availability and automatic
recovery for the DB instance. If the primary DB instance fails, the standby DB instance will automatically become the primary DB instance.
This will ensure that the database is always available.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica. This will
improve the performance of the report process by offloading report read traffic from the primary DB instance to the read replica. The read
replica stays up to date through asynchronous replication, so the reports can run without slowing down transaction processing on the primary.
upvoted 1 times
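A boto3 sketch combining both recommended steps, with a hypothetical DB instance identifier; the reporting jobs would then be pointed at the read replica's endpoint.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A: convert the Single-AZ instance to a Multi-AZ deployment for HA and automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="customer-transactions-db",     # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=False,                               # apply during the maintenance window
)

# C: add a read replica in another AZ and direct reporting queries at it.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="customer-transactions-reports",
    SourceDBInstanceIdentifier="customer-transactions-db",
    AvailabilityZone="us-east-1b",                        # hypothetical AZ
)
```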