ExamTopics SAA-C03


Topic 1 - Exam A

Question #1 Topic 1

A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data
that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must
minimize operational complexity.
Which solution meets these requirements?

A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3
bucket.

B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3
bucket. Then remove the data from the origin S3 bucket.

C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-
Region Replication to copy objects to the destination S3 bucket.

D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon
EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS
volume in that Region.

Correct Answer: A

Community vote distribution


A (94%) 6%

  D2w Highly Voted  11 months, 3 weeks ago


Selected Answer: A
S3 Transfer Acceleration is the best solution because it is the fastest option for high-speed uploads: Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets.
upvoted 43 times

  Blest012 2 months, 2 weeks ago


Correct, the S3 Transfer Acceleration service is the best fit for this scenario.
upvoted 1 times

  BoboChow 11 months, 3 weeks ago


I thought S3 Transfer Acceleration was based on Cross-Region Replication; I was mistaken.
upvoted 1 times
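For readers who want to see what answer A looks like in practice, here is a minimal boto3 sketch of enabling Transfer Acceleration on the destination bucket and uploading through the accelerated endpoint with multipart uploads. The bucket name, file name, and part sizes are placeholder assumptions, not values from the question.

```python
# Sketch of answer A (assumed names): enable S3 Transfer Acceleration on the
# destination bucket, then upload site data through the accelerated endpoint
# with multipart uploads.
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "example-global-telemetry"  # hypothetical destination bucket

# One-time setup: turn on Transfer Acceleration for the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site uploads via the accelerated endpoint; uploads above the threshold
# are automatically split into parallel multipart uploads.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3_accel.upload_file(
    "site-data.tar.gz", BUCKET, "site-42/site-data.tar.gz", Config=transfer_config
)
```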

  dilopezat Highly Voted  5 months, 4 weeks ago


Thank you ExamTopics!!! I am so happy, today 06/04/2023 I passed the exam with 793.
upvoted 14 times

  Nezar_Mohsen 5 months, 4 weeks ago


is it enough to study the first 20 pages which are free?
upvoted 3 times

  ashu089 5 months, 3 weeks ago


NOPE NOPE
upvoted 1 times

  security20 Most Recent  15 hours, 9 minutes ago


help pzl - pdf [email protected]
upvoted 1 times

  avrk 2 days, 21 hours ago


In case you have pdf version, can you pls send the same to [email protected]
Appreciate your support
upvoted 1 times

  venkatesh3 3 days, 19 hours ago


Hey guys, can anyone share the full pdf version with me at [email protected]? My exam is within 3 weeks.
upvoted 1 times
  KasiVenkat 3 days, 19 hours ago
Hi Could anyone pls send me the pdf of all questions to [email protected]
I have only 1 week to pass the exam
Thanks in advance!
upvoted 1 times

  Vishalr348 4 days, 23 hours ago


Can anyone share the full pdf version with me at [email protected]? I am taking this exam in 2 weeks. Thanks in advance.
upvoted 1 times

  shavit 5 days, 16 hours ago


If anyone has access to the full pdf file and can send it to me, I'll be grateful: [email protected].
I really need it.
upvoted 1 times

  Devalakshmi 1 week ago


Hi Could anyone pls send me the pdf of all questions to [email protected]. I have only 1 week to pass the exam
Thanks.
upvoted 1 times

  Babbrsher 1 week, 3 days ago


Please send the PDF if anyone has it. Thank you so much.
[email protected]
upvoted 1 times

  amannnn1 1 week, 4 days ago


Can someone give me general advice: do I go with the answer ExamTopics provides or the community majority vote?
upvoted 1 times

  Sugarbear_01 1 week, 6 days ago


S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically
shortens the distance to S3 for remote applications. S3TA improves transfer performance by routing traffic through Amazon CloudFront’s
globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations
upvoted 1 times

  Vmaha 2 weeks, 2 days ago


Can anyone please send me pdf of this whole questions. It would be helpful for my exam. Thanks in advance. email id:
[email protected]
upvoted 2 times

  vam2691 1 week, 4 days ago


Hello Vmaha, did you get the full pdf? Can you share with me? My email [email protected]
upvoted 1 times

  Traytray 1 week, 5 days ago


Please if you have access to the full pdf, can you share with me? [email protected]
upvoted 1 times

  dhiraj_126 1 week, 5 days ago


Did you get the email
upvoted 1 times

  Traytray 1 week, 4 days ago


No I did not
upvoted 1 times

  vam2691 1 week, 4 days ago


Hello dhiraj_126, can you share it with me? [email protected]
upvoted 1 times

  soumyaranjan7 2 weeks, 2 days ago


Can anyone please send me the pdf of all these questions? I have only 2 weeks to pass it. Thanks in advance, it would be a great help.
email- [email protected]
upvoted 1 times

  Briana021 2 weeks, 2 days ago


Can anyone please send me the pdf of all these questions? Thanks in advance, it would be a great help. email-
[email protected]
upvoted 1 times

  Njeks 2 weeks, 3 days ago


Please, can anyone send me a PDF version? I'm writing tomorrow. [email protected]
upvoted 1 times
  parthdesai 2 weeks, 3 days ago
Can anyone please send me the pdf of all these questions? I have only 2 weeks to pass it. Thanks in advance, it would be a great help.
email- [email protected]
upvoted 1 times
Question #2 Topic 1

A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket.
Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing
architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.

B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.

C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.

D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Correct Answer: C

Community vote distribution


C (100%)

  airraid2010 Highly Voted  11 months, 3 weeks ago


Answer: C
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3)
using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and
begin using standard SQL to run ad-hoc queries and get results in seconds.
upvoted 42 times

  BoboChow 11 months, 3 weeks ago


I agree C is the answer
upvoted 2 times

  tt79 11 months, 3 weeks ago


C is right.
upvoted 1 times

  PhucVuu Highly Voted  6 months ago


Selected Answer: C
Keyword:
- Queries will be simple and will run on-demand.
- Minimal changes to the existing architecture.
A: Incorrect - This takes two steps: load all content into Redshift and then run the SQL queries. (These are simple queries, so we can use Athena; for complex queries we would use Redshift.)
B: Incorrect - Our queries will run on-demand, so we don't need CloudWatch Logs to store the logs.
C: Correct - These are simple queries, so we can use Athena directly on S3.
D: Incorrect - This takes two steps: use AWS Glue to catalog the logs and use Spark to run the SQL queries.
upvoted 21 times
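As a concrete illustration of answer C, below is a minimal boto3 sketch that runs an ad-hoc SQL query against the JSON logs with Athena and prints the result rows. It assumes a Glue/Athena database and table (logs_db.app_logs) already describe the logs, and the bucket names are placeholders.

```python
# Sketch of answer C (assumed database/table/bucket names): run an ad-hoc
# Athena query over the JSON logs in S3 and print the results.
import time
import boto3

athena = boto3.client("athena")

execution_id = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then read the result set.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```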

  Abitek007 Most Recent  1 day, 21 hours ago


Serverless, so operations stay simple.
upvoted 1 times

  thanhnv 1 month, 1 week ago


Selected Answer: C
Keyword:
- needs to perform the analysis with minimal changes to the existing architecture.
- LEAST amount of operational overhead.
C
upvoted 1 times

  Theocode 1 month, 3 weeks ago


Selected Answer: C
A no-brainer, Athena can be used to query data directly from s3
upvoted 1 times

  sandhyaejji 1 month, 3 weeks ago


Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3)
using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and
begin using standard SQL to run ad-hoc queries and get results in seconds.
upvoted 1 times
  TariqKipkemei 2 months, 1 week ago
Selected Answer: C
Amazon Athena is a serverless, interactive analytics service used to query data in relational, nonrelational, object, and custom data
sources running on S3.
upvoted 1 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: C
Explanation:
Option C is the most suitable choice for this scenario. Amazon Athena is a serverless query service that allows you to analyze data directly
from Amazon S3 using standard SQL queries. Since the log files are already stored in JSON format in an S3 bucket, there is no need for
data transformation or loading into another service. Athena can directly query the JSON logs without the need for any additional
infrastructure.
upvoted 3 times

  james2033 2 months, 2 weeks ago


Selected Answer: C
I remember question and answer in an easy way with case study https://www.youtube.com/watch?v=Dmw7HOOmiJQ
upvoted 1 times

  miki111 2 months, 3 weeks ago


Best option is CCCC
upvoted 1 times

  animefan1 3 months ago


Selected Answer: C
athena is great for querying data in s3
upvoted 1 times

  oaidv 3 months, 1 week ago


Selected Answer: C
C is right
upvoted 1 times

  KAMERO 3 months, 2 weeks ago


Selected Answer: C
I agree C
upvoted 1 times

  Gullashekar 3 months, 2 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
To meet the requirements of analyzing log files stored in JSON format in an Amazon S3 bucket with minimal changes to the existing
architecture and minimal operational overhead, the most suitable option would be Option C: Use Amazon Athena directly with Amazon S3
to run the queries as needed.

Amazon Athena is a serverless interactive query service that allows you to analyze data directly from Amazon S3 using standard SQL
queries. It eliminates the need for infrastructure provisioning or data loading, making it a low-overhead solution.

Overall, Amazon Athena offers a straightforward and efficient solution for analyzing log files stored in JSON format, ensuring minimal
operational overhead and compatibility with simple on-demand queries.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: C
C is answer.
upvoted 1 times

  cheese929 4 months, 2 weeks ago


C is correct
upvoted 1 times
Question #3 Topic 1

A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3
bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in
AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.

B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.

C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization
events. Update the S3 bucket policy accordingly.

D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Correct Answer: A

Community vote distribution


A (95%) 5%

  ude Highly Voted  11 months, 3 weeks ago


Selected Answer: A
aws:PrincipalOrgID Validates if the principal accessing the resource belongs to an account in your organization.
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
upvoted 40 times

  BoboChow 11 months, 3 weeks ago


the condition key aws:PrincipalOrgID can prevent the members who don't belong to your organization to access the resource
upvoted 9 times
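To make answer A concrete, here is a minimal boto3 sketch of a bucket policy that uses the aws:PrincipalOrgID condition key; the bucket name and organization ID are hypothetical placeholders.

```python
# Sketch of answer A (assumed bucket name and org ID): restrict the bucket to
# principals whose account belongs to the AWS Organization.
import json
import boto3

BUCKET = "example-project-reports"
ORG_ID = "o-exampleorgid"  # hypothetical organization ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Only principals from accounts inside this organization match.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```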

  Naneyerocky Highly Voted  11 months ago


Selected Answer: A
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_overview.html
Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions.
The following condition keys are especially useful with AWS Organizations:

aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to
listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an
organization, you can specify the organization ID in the Condition element.

aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The
aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified
organization path. A path is a text representation of the structure of an AWS Organizations entity.
upvoted 13 times

  Sleepy_Lazy_Coder 1 month, 3 weeks ago


Are we not choosing the OU option because the "least overhead" requirement was used? Option B also seems correct.
upvoted 2 times

  BlackMamba_4 1 month, 1 week ago


Exactly
upvoted 1 times

  Guru4Cloud Most Recent  2 months, 2 weeks ago


Selected Answer: A
This is the least operationally overhead solution because it does not require any additional infrastructure or configuration. AWS
Organizations already tracks the organization ID of each account, so you can simply add the aws:PrincipalOrgID condition key to the S3
bucket policy and reference the organization ID. This will ensure that only users of accounts within the organization can access the S3
bucket
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
See video "Ensure identities and networks can only be used to access trusted resources" at https://youtu.be/cWVW0xAiWwc?t=677 at 11:17
use "aws:PrincipalOrgId": "o-fr75jjs531" .
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option A MET THE REQUIREMENT
upvoted 1 times
  cookieMr 3 months, 2 weeks ago
Selected Answer: A
Option A, which suggests adding the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket
policy, is a valid solution to limit access to the S3 bucket to users within the organization in AWS Organizations. It can effectively achieve
the desired access control.

It restricts access to the S3 bucket based on the organization ID, ensuring that only users within the organization can access the bucket.
This method is suitable if you want to restrict access at the organization level rather than individual departments or organizational units.

The operational overhead for Option A is also relatively low since it involves adding a global condition key to the S3 bucket policy. However,
it is important to note that the organization ID must be accurately configured in the bucket policy to ensure the desired access control is
enforced.

In summary, Option A is a valid solution with minimal operational overhead that can limit access to the S3 bucket to users within the
organization using the aws PrincipalOrgID global condition key.
upvoted 1 times

  karloscetina007 3 months, 2 weeks ago


A is the correct answer.
upvoted 1 times

  Musti35 5 months, 3 weeks ago


You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from
accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID,
read the IAM documentation.
upvoted 1 times

  PhucVuu 6 months ago


Selected Answer: A
Keywords:
- Company uses AWS Organizations
- Limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations
- LEAST amount of operational overhead
A: Correct - We just add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B: Incorrect - We can limit access this way, but it requires more operational overhead.
C: Incorrect - AWS CloudTrail only logs API events; it cannot prevent user access to the S3 bucket. To make this work you would have to manually update the S3 bucket policy for each account, which does not cover the case where a new account is added to the organization.
D: Incorrect - We can limit access this way, but it requires the most operational overhead.
upvoted 8 times

  linux_admin 6 months ago


Selected Answer: A
Option A proposes adding the aws PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. This
would limit access to the S3 bucket to only users of accounts within the organization in AWS Organizations, as the aws PrincipalOrgID
condition key can check if the request is coming from within the organization.
upvoted 2 times

  martin451 6 months, 1 week ago


B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
This solution allows the S3 bucket to be accessed only by users within the organization in AWS Organizations while minimizing operational overhead by organizing users into OUs and using a single global condition key in the bucket policy. Option A, adding the aws:PrincipalOrgID global condition key, would require frequent updates to the policy as new users are added or removed from the organization. Option C, using CloudTrail to monitor events, would require manual updating of the policy based on the events. Option D, tagging each user, would also require manual tagging updates and may not be scalable for larger organizations with many users.
upvoted 1 times

  iamRohanKaushik 6 months, 2 weeks ago


Selected Answer: A
Answer is A.
upvoted 1 times

  buiducvu 7 months, 3 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: A
This is the least operationally overhead solution because it requires only a single configuration change to the S3 bucket policy, which will
allow access to the bucket for all users within the organization. The other options require ongoing management and maintenance. Option
B requires the creation and maintenance of organizational units for each department. Option C requires monitoring of specific CloudTrail
events and updates to the S3 bucket policy based on those events. Option D requires the creation and maintenance of tags for each user
that needs access to the bucket.
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Selected Answer: A
Answered by ChatGPT with an explanation.

The correct solution that meets these requirements with the least amount of operational overhead is Option A: Add the aws
PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.

Option A involves adding the aws:PrincipalOrgID global condition key to the S3 bucket policy, which allows you to specify the organization
ID of the accounts that you want to grant access to the bucket. By adding this condition to the policy, you can limit access to the bucket to
only users of accounts within the organization.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option B involves creating organizational units (OUs) for each department and adding the aws:PrincipalOrgPaths global condition key
to the S3 bucket policy. This option would require more operational overhead, as it involves creating and managing OUs for each
department.

Option C involves using AWS CloudTrail to monitor certain events and updating the S3 bucket policy accordingly. While this option
could potentially work, it would require ongoing monitoring and updates to the policy, which could increase operational overhead.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D involves tagging each user that needs access to the S3 bucket and adding the aws:PrincipalTag global condition key to the
S3 bucket policy. This option would require you to tag each user, which could be time-consuming and could increase operational
overhead.

Overall, Option A is the most straightforward and least operationally complex solution for limiting access to the S3 bucket to only
users of accounts within the organization.
upvoted 1 times

  psr83 9 months, 2 weeks ago


Selected Answer: A
use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account
(including the master account) in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict
access to only principals from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID
condition and set the value to your organization ID in the bucket policy. Your organization ID is what sets the access control on the S3
bucket. Additionally, when you use this condition, policy permissions apply when you add new accounts to this organization without
requiring an update to the policy.
upvoted 2 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: A
aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to
listing all the account IDs for all AWS accounts in an organization.
upvoted 1 times
Question #4 Topic 1

An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2
instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?

A. Create a gateway VPC endpoint to the S3 bucket.

B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.

C. Create an instance profile on Amazon EC2 to allow S3 access.

D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Correct Answer: A

Community vote distribution


A (100%)

  D2w Highly Voted  11 months, 3 weeks ago


Selected Answer: A
VPC endpoint allows you to connect to AWS services using a private network instead of using the public Internet
upvoted 26 times

  PhucVuu Highly Voted  6 months ago


Selected Answer: A
Keywords:
- EC2 in VPC
- EC2 instance needs to access the S3 bucket without connectivity to the internet

A: Correct - A gateway VPC endpoint can connect to the S3 bucket privately at no additional cost.
B: Incorrect - You can set up an interface VPC endpoint for CloudWatch Logs to get a private network path from EC2 to CloudWatch, but from CloudWatch to the S3 bucket log data can take up to 12 hours to become available for export, and the requirement only needs EC2-to-S3 access.
C: Incorrect - An instance profile just grants access permissions; it does not give EC2 a private connection to S3.
D: Incorrect - API Gateway acts like a proxy that receives requests from outside and forwards them to AWS Lambda, Amazon EC2, Elastic Load Balancing products such as Application Load Balancers or Classic Load Balancers, Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS-based endpoint, but not to S3.
upvoted 23 times
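Here is a minimal boto3 sketch of answer A for reference; the Region, VPC ID, and route table ID are placeholder assumptions. The instance still needs S3 permissions (for example via an instance profile), but the endpoint is what provides the private network path.

```python
# Sketch of answer A (assumed IDs/Region): create a gateway VPC endpoint for S3
# so the EC2 instance reaches the bucket over the AWS network, not the internet.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service in the same Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # route table used by the EC2 subnet
)
```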

  RNess Most Recent  3 weeks, 6 days ago


Selected Answer: A
VPC endpoint is the best way to connect in private
upvoted 1 times

  Bmarodi 1 month, 1 week ago


Selected Answer: A
With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC,
and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS
Regions, or through a transit gateway.
Ref. https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html
upvoted 1 times

  TariqKipkemei 2 months, 1 week ago


Selected Answer: A
A VPC endpoint enables customers to privately connect to supported AWS services and VPC endpoint services powered by AWS
PrivateLink.

https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html#:~:text=A-,VPC%20endpoint,-
enables%20customers%20to
upvoted 2 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: A
The answer is A. Create a gateway VPC endpoint to the S3 bucket.

A gateway VPC endpoint is a private way to connect to AWS services without using the internet. This is the best solution for the given
scenario because it will allow the EC2 instance to access the S3 bucket without any internet connectivity
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Keyword (1) EC2 in a VPC. (2)EC2 instance need access S3 bucket WITHOUT internet. Therefore, A is correct answer: Create a gateway VPC
endpoint to S3 bucket.
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option A MET THE REQUIREMENT
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
Here's why Option A is the correct choice:

Gateway VPC Endpoint: A gateway VPC endpoint allows you to privately connect your VPC to supported AWS services. By creating a
gateway VPC endpoint for S3, you can establish a private connection between your VPC and the S3 service without requiring internet
connectivity.

Private network connectivity: The gateway VPC endpoint for S3 enables your EC2 instance within the VPC to access the S3 bucket over the
private network, ensuring secure and direct communication between the EC2 instance and S3.

No internet connectivity required: Since the requirement is to access the S3 bucket without internet connectivity, the gateway VPC
endpoint provides a private and direct connection to S3 without needing to route traffic through the internet.

Minimal operational complexity: Setting up a gateway VPC endpoint is a straightforward process. It involves creating the endpoint and
configuring the appropriate routing in the VPC. This solution minimizes operational complexity while providing the required private
network connectivity.
upvoted 2 times

  Bmarodi 4 months ago


Selected Answer: A
A is right answer.
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  channn 6 months ago


Selected Answer: A
Option B) does not provide private network connectivity to S3.
Option C) does not provide private network connectivity to S3.
Option D) API Gateway with a private link provides private network connectivity between a VPC and an HTTP(S) endpoint, not S3.
upvoted 2 times

  linux_admin 6 months ago


Selected Answer: A
Option A proposes creating a VPC endpoint for Amazon S3. A VPC endpoint enables private connectivity between the VPC and S3 without
using an internet gateway or NAT device. This would provide the EC2 instance with private network connectivity to the S3 bucket.
upvoted 2 times

  dee_pandey 6 months ago


Could someone send me a pdf of this dump please? Thank you so much in advance!
upvoted 1 times

  Subhajeetpal 6 months, 1 week ago


Can anyone please send me the pdf of this whole dump... i can be very grateful. thanks in advance.
email- [email protected]
upvoted 2 times

  tienhoboss 6 months, 1 week ago


Selected Answer: A
It's A, my friend :)
upvoted 1 times

  iamRohanKaushik 6 months, 2 weeks ago


Selected Answer: A
Answer is A, but I was confused with C; an instance role alone would still route traffic through the internet.
upvoted 1 times
Question #5 Topic 1

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS
volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in
another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they
refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?

A. Copy the data so both EBS volumes contain all the documents

B. Configure the Application Load Balancer to direct a user to the server with the documents

C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS

D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server

Correct Answer: C

Community vote distribution


C (100%)

  D2w Highly Voted  11 months, 3 weeks ago


Selected Answer: C
"Concurrent" or "at the same time" are the keywords pointing to EFS.
upvoted 28 times

  mikey2000 Highly Voted  10 months, 2 weeks ago


EBS doesn't support cross-AZ access; a volume resides in only one AZ, but EFS does. That's why it's C.
upvoted 19 times

  pbpally 4 months, 4 weeks ago


And just for clarification to others, you can have COPIES of the same EBS volume in one AZ and in another via EBS Snapshots, but don't
confuse that with the idea of having some sort of global capability that has concurrent copying mechanisms.
upvoted 4 times
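For reference, a minimal boto3 sketch of answer C is shown below: one EFS file system with a mount target in each Availability Zone's subnet so that both EC2 instances share the same documents. The subnet and security group IDs are placeholder assumptions.

```python
# Sketch of answer C (assumed subnet/SG IDs): one shared EFS file system with a
# mount target per AZ, so both instances see the same documents.
import boto3

efs = boto3.client("efs")

fs_id = efs.create_file_system(
    CreationToken="shared-documents",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)["FileSystemId"]

# In practice, wait until the file system is "available" before adding targets.
for subnet_id in ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )

# Each instance then mounts the file system (e.g. with the amazon-efs-utils
# mount helper) and the application writes new documents to that shared path.
```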

  RNess Most Recent  3 weeks, 6 days ago


Selected Answer: C
EFS spans multiple AZs.
upvoted 1 times

  TariqKipkemei 2 months, 1 week ago


Selected Answer: C
Shared file storage = EFS
upvoted 1 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: C
The answer is C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.

The current architecture is using two separate EBS volumes, one for each EC2 instance. This means that each instance only has a subset of
the documents. When a user refreshes the website, the Application Load Balancer will randomly direct them to one of the two instances. If
the user's documents are not on the instance that they are directed to, they will not be able to see them.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: C
Keyword "stores user-uploaded documents". Two EC2 instances behind Application Load Balancer. See
https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html#efs-regional-ec2 . In the diagram, Per Amazon EC2 in a different
Availability zone, and Amazon Elastic File System support this case.

Solution: Amazon Elastic File System, see https://aws.amazon.com/efs/ . "Amazon EFS file system creation, mounting, and settings"
https://www.youtube.com/watch?v=Aux37Nwe5nc . "Amazon EFS overview" https://www.youtube.com/watch?v=vAV4ASDnbN0 .
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option C MET THE REQUIREMENT
upvoted 1 times

  FroZor 2 months, 3 weeks ago


They could use sticky sessions with EBS if they don't want to use EFS.
upvoted 1 times

  datmd77 3 weeks, 3 days ago


No, users need to see all of the documents, so it must be EFS, not sticky sessions.
upvoted 1 times

  capino 3 months ago


EFS reduces latency, and all users can see all files at once.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
To ensure users can see all their documents at once in the duplicated architecture with multiple EC2 instances and EBS volumes behind an
Application Load Balancer, the most appropriate solution is Option C: Copy the data from both EBS volumes to Amazon EFS (Elastic File
System) and modify the application to save new documents to Amazon EFS.

In summary, Option C, which involves copying the data to Amazon EFS and modifying the application to use Amazon EFS for document
storage, is the most appropriate solution to ensure users can see all their documents at once in the duplicated architecture. Amazon EFS
provides scalability, availability, and shared access, allowing both EC2 instances to access and synchronize the documents seamlessly.
upvoted 3 times


  Bmarodi 4 months ago


Selected Answer: C
C is right answer.
upvoted 1 times

  Praveen_Ch 4 months, 1 week ago


Selected Answer: C
C because the other options don't put all the data in one place.
upvoted 1 times

  smash_aws 4 months, 2 weeks ago


Option C is the best answer, option D is pretty vague. All other options are obviously wrong.
upvoted 1 times

  abhishek2021 4 months, 2 weeks ago


The answer is B as it is aligned to min. b/w usage and also the time taken is 6-7 days which is about the same as transferring over the
internet of 1G as per option C.
upvoted 2 times

  cheese929 4 months, 2 weeks ago


Selected Answer: C
C is correct
upvoted 1 times

  [Removed] 5 months, 1 week ago


If there is anyone who is willing to share his/her contributor access, then please write to [email protected]
upvoted 2 times
Question #6 Topic 1

A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The
total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the
video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?

A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3
bucket.

B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the
device. Return the device so that AWS can import the data into Amazon S3.

C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a
new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the
S3 File Gateway.

D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a
public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point
the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Correct Answer: C

Community vote distribution


B (83%) Other

  Gatt Highly Voted  11 months, 3 weeks ago


Selected Answer: B
Let's analyse this:

B. On a Snowball Edge device you can copy files with a speed of up to 100Gbps. 70TB will take around 5600 seconds, so very quickly, less
than 2 hours. The downside is that it'll take between 4-6 working days to receive the device and then another 2-3 working days to send it
back and for AWS to move the data onto S3 once it reaches them. Total time: 6-9 working days. Bandwidth used: 0.

C. File Gateway uses the Internet, so maximum speed will be at most 1Gbps, so it'll take a minimum of 6.5 days and you use 70TB of
Internet bandwidth.

D. You can achieve speeds of up to 10 Gbps with Direct Connect. Total time is 15.5 hours and you will use 70 TB of bandwidth. However, what's interesting is that the question does not specify what type of bandwidth. Direct Connect does not use your Internet bandwidth, as you will have dedicated peer-to-peer connectivity between your on-premises network and the AWS Cloud, so technically you're not using your "public" bandwidth.

The requirements are a bit too vague but I think that B is the most appropriate answer, although D might also be correct if the bandwidth
usage refers strictly to your public connectivity.
upvoted 55 times

  LuckyAro 8 months, 3 weeks ago


But it said "as soon as possible" It takes about 4-6 weeks to provision a direct connect.
upvoted 12 times

  Uncolored8034 5 months, 4 weeks ago


This calculation is out of scope.
C is right because the company wants to use the LEAST POSSIBLE NETWORK BANDWIDTH. Therefore they don't want, or can't, use the Snowball capability of such a fast connection because it draws too much bandwidth within their company.
upvoted 7 times

  [Removed] 4 months, 1 week ago


NFS is using bandwidth within their company, so that logic does not apply.
upvoted 2 times

  tribagus6 4 months ago


Yeah, first the company uses NFS to store the data, then the company wants to move to S3. With an endpoint we don't need public connectivity.
upvoted 1 times

  darn 5 months, 1 week ago


you are out of scope
upvoted 6 times
  Help2023 7 months, 2 weeks ago
D is a viable solution, but setting up D can take weeks or months, and the question does say "as soon as possible."
upvoted 4 times

  ShlomiM 7 months, 2 weeks ago


Time calculation clarification:

Data: 70 TB = 70 TB x 8 b/B = 560 Tb = 560 Tb x 1000 Gb/Tb = 560,000 Gb
Speed: 100 Gb/s

Time = Data / Speed = 560,000 Gb / 100 Gb/s = 5,600 s
Time = 5,600 s / 3,600 s/hour ≈ 1.5 hours (assuming the maximum speed is always sustained)
upvoted 2 times
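The back-of-the-envelope numbers in this thread can be reproduced with a few lines of Python; the link speeds are the rough assumptions used in the comments above, not measured values.

```python
# Rough transfer-time comparison from the discussion above (assumed link speeds).
DATA_TB = 70
DATA_GBITS = DATA_TB * 1000 * 8  # 70 TB = 560,000 gigabits

def hours(link_gbps: float) -> float:
    """Ideal transfer time in hours at a sustained link speed."""
    return DATA_GBITS / link_gbps / 3600

print(f"Snowball Edge local copy @ 100 Gb/s: {hours(100):6.1f} h (+ ~6-9 days shipping)")
print(f"File Gateway over 1 Gb/s internet  : {hours(1):6.1f} h (~6.5 days)")
print(f"Direct Connect @ 10 Gb/s           : {hours(10):6.1f} h (+ weeks to provision)")
```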

  tuloveu Highly Voted  11 months, 3 weeks ago


Selected Answer: B
As using the least possible network bandwidth.
upvoted 29 times

  awashenko Most Recent  12 hours, 16 minutes ago


Selected Answer: B
B is the easiest and least resource intensive
upvoted 1 times

  debolek 3 days, 18 hours ago


Selected Answer: B
option B is the most efficient and least resource-intensive solution for migrating large video files to Amazon S3 in a timely manner
upvoted 1 times

  Yonimoni 4 weeks, 1 day ago


Selected Answer: C
"while using the least possible network bandwidth" suggests they want to do it over the internet.
upvoted 3 times

  kwang312 1 month, 1 week ago


Selected Answer: B
I think B is better answer
upvoted 1 times

  triz 2 months ago


Selected Answer: C
Correct answer is C. The key words are NFS, S3 and low bandwidth. S3 File Gateway retains access through NFS protocol so that existing
users don't have to change anything while being able to use S3 to store the files. The question also gives a clue that we only need to
upload existing 70TB and no new data is added. This gives a clue that users only need to read data via NFS and don't need to write to S3
bucket. The existing data can be quickly uploaded (via endpoint) to S3 bucket by admin.
upvoted 2 times

  abthakur 2 months ago


C is the correct answer; the reason is that the S3 File Gateway will consume the least amount of bandwidth and can transfer data as fast as possible.
upvoted 2 times

  Kits 2 months ago


Selected Answer: C
To me, it appears C is correct, as the requirement is to use the least network bandwidth, not no network bandwidth, so B doesn't come into the picture.
upvoted 2 times


  TariqKipkemei 2 months, 1 week ago


Selected Answer: C
To me this statement <'The company must migrate the video files as soon as possible while using the least possible network bandwidth.'>
means they want to make the transfer over the internet.

Hence C makes more sense as an option.


upvoted 2 times

  martinfrad 2 months, 1 week ago


Selected Answer: B
Keys:
1. The total storage is 70 TB and is no longer growing
2. Must migrate the video files as soon as possible
3. Using the least possible network bandwidth

AWS Snowball can transfer up to 80TB per device without use network bandwidth and transfer data at up to 100 Gbps.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: B
Option B is the correct solution to meet the requirements.

The reasons are:

AWS Snowball Edge provides a physical storage device to transfer large local datasets to Amazon S3. This avoids using network bandwidth
for the data transfer.
Snowball Edge can transfer up to 80TB per device, so a single Snowball Edge job can handle the 70TB dataset.
The client transfers data to the Snowball Edge device on-premises, avoiding the need to copy the data over the network.
When AWS receives the device back, the data is imported to the S3 bucket.
This achieves fast data migration without using network bandwidth.

Option A would consume large amounts of network bandwidth for the data transfer.

Options C and D use S3 File Gateway, which still requires the data to be sent over the network to S3. This does not meet the goal of
minimizing network bandwidth.

So Option B with Snowball Edge is the right approach to migrate the large dataset quickly while using minimal network bandwidth.
upvoted 2 times

  Nazmul123 2 months, 1 week ago


Snowball Edge is more suitable because with the storage capacity of 80TB, it is more suited for this kind of large scale transfer whereas
AWS S3 file gateway is more suitable for smaller, ongoing transfer of data where we need a hybrid cloud storage environment
upvoted 2 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: B
The answer is B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer
data to the device. Return the device so that AWS can import the data into Amazon S3.

This solution is the most efficient way to migrate the video files to Amazon S3. The Snowball Edge device can transfer data at up to 100
Gbps, which is much faster than the company's current network bandwidth. The Snowball Edge device is also a secure way to transfer
data, as it is encrypted at rest and in transit.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: B
Keyword "70 TB and is no longer growing", choose AWS Snowball ( https://aws.amazon.com/snowball/ )
upvoted 3 times

  Ranfer 2 months, 2 weeks ago


Selected Answer: B
B. No bandwidth, fast enough and has the least operational overhead
upvoted 1 times
Question #7 Topic 1

A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these
messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to
decouple the solution and increase scalability.
Which solution meets these requirements?

A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.

B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU
metrics.

C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store
them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.

D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon
SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Correct Answer: A

Community vote distribution


D (80%) A (17%)

  rein_chau Highly Voted  11 months, 4 weeks ago


Selected Answer: D
D makes more sense to me.
upvoted 39 times

  SilentMilli 8 months, 4 weeks ago


By default, an SQS queue can handle a maximum of 3,000 messages per second. However, you can request higher throughput by
contacting AWS Support. AWS can increase the message throughput for your queue beyond the default limits in increments of 300
messages per second, up to a maximum of 10,000 messages per second.

It's important to note that the maximum number of messages per second that a queue can handle is not the same as the maximum
number of requests per second that the SQS API can handle. The SQS API is designed to handle a high volume of requests per second,
so it can be used to send messages to your queue at a rate that exceeds the maximum message throughput of the queue.
upvoted 7 times

  Abdel42 8 months, 3 weeks ago


The limit that you're mentioning apply to FIFO queues. Standard queues are unlimited in throughput
(https://aws.amazon.com/sqs/features/). Do you think that the use case require FIFO queue ?
upvoted 12 times

  daizy 8 months ago


D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service
(Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

This solution uses Amazon SNS and SQS to publish and subscribe to messages respectively, which decouples the system and enables
scalability by allowing multiple consumer applications to process the messages in parallel. Additionally, using Amazon SQS with
multiple subscriptions can provide increased resiliency by allowing multiple copies of the same message to be processed in parallel.
upvoted 5 times

  9014 10 months ago


of course, the answer is D
upvoted 3 times

  Bevemo Highly Voted  10 months, 4 weeks ago


D. SNS fan-out pattern: https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html (A is wrong: Kinesis Data Analytics does not 'persist' data by itself.)
upvoted 16 times
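A minimal boto3 sketch of the fan-out pattern behind option D (the community-preferred answer) is shown below; the topic and queue names are placeholders, and the SQS queue policy that allows SNS to deliver to each queue is omitted for brevity.

```python
# Sketch of option D (assumed names): SNS topic fanned out to one SQS queue per
# consumer. The per-queue access policy that authorizes SNS delivery is omitted.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="ingest-messages")["TopicArn"]

for consumer in ["billing", "analytics", "search-indexer"]:
    queue_url = sqs.create_queue(QueueName=f"{consumer}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Fan-out: every message published to the topic is delivered to every queue.
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )

# The ingestion application only publishes to the topic; each consumer polls its queue.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"sensor": "t-17", "value": 21.4}))
```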

  hrushikeshrelekar Most Recent  2 days, 3 hours ago


Option D

Amazon SNS allows you to publish messages to a topic, which can then fan out those messages to multiple subscribers.
By using Amazon SQS as a subscriber to the SNS topic, you can handle the message load in a decoupled and scalable way. SQS can store
messages until the consuming application is ready to process them, helping to smooth out the variance in message load.
This approach allows the company to effectively decouple the message producing applications from the consuming applications, and it
can easily scale to handle the high load of messages.
The number of messages (100,000 each second) might require careful configuration and sharding of SQS queues or use of FIFO queues to
ensure that they can handle the load.
Options A, B, and C have their own limitations:
upvoted 1 times
  M0SHE 5 days, 14 hours ago
Selected Answer: D
D is the right answer
upvoted 1 times

  AshokBabu 2 weeks, 1 day ago


My choice is D
The only reason A would be chosen is the incoming message rate, which is 100,000 per second. I was referring to the documentation: standard topics and standard queues support unlimited throughput, so the incoming message rate can't be the criterion for choosing option A.
upvoted 1 times

  Chiquitabandita 1 month ago


the wording in the question leads me to think it is D.
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: D
"(Amazon SOS)" should be "(Amazon SQS)" it blow my mind :/
upvoted 2 times

  BillyBlunts 1 month, 2 weeks ago


How are we to go into the test with such confusing answers....I agree as well that it should be D. However, I want to select the answer they
say is right so I can pass the test. Everywhere I have looked including Quizlet who I think is pretty reliable, the answer is A. Would you guys
say it is better to go with the answer they select, even though we majority agree it is a different answer?
upvoted 1 times

  Stevey 2 months ago


D. is the answer.
The question states that there are dozens of other applications and microservices that consume these messages and that the volume of
messages can vary drastically and increase suddenly. Therefore, you need a solution that can handle a high volume of messages,
distribute them to multiple consumers, and scale quickly. SNS with SQS provides these capabilities.

Publishing messages to an SNS topic with multiple SQS subscriptions is a common AWS pattern for achieving both decoupling and
scalability in message-driven systems. SNS allows messages to be fanned out to multiple subscribers, which in this case would be SQS
queues. Each consumer application could then process messages from its SQS queue at its own pace, providing scalability and ensuring
that all messages are processed by all consumer applications.

A. Amazon Kinesis Data Analytics is primarily used for real-time analysis of streaming data. It's not designed to distribute messages to
multiple consumers.
upvoted 2 times

  sabs1981 2 months ago


Selected Answer: D
Decoupling ingestion from subscription of messages. Should be done via queues and ideally SQS should be used in this case.
upvoted 1 times

  Shanky9916 2 months, 1 week ago


The answer is D, because de-coupling is used only in SNS and SQS.
upvoted 2 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: D
The answer is D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue
Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
This solution is the most scalable and decoupled solution for the given scenario. Amazon SNS is a pub/sub messaging service that can be
used to decouple applications. Amazon SQS is a fully managed message queuing service that can be used to store and process messages.
The solution would work as follows:
The ingestion application would publish the messages to an Amazon SNS topic.
The Amazon SNS topic would have multiple Amazon SQS subscriptions.
The consumer applications would subscribe to the Amazon SQS queues.
The consumer applications would process the messages from the Amazon SQS queues.
upvoted 2 times

  Ranfer 2 months, 2 weeks ago


Selected Answer: D
Producer and consumer architecture works best with SQS (SNS) services
upvoted 1 times

  Kaab_B 2 months, 3 weeks ago


Selected Answer: D
The solution that meets the requirements of decoupling the solution and increasing scalability, considering the varying number of
messages and sudden increases, is option D: Publish the messages to an Amazon Simple Notification Service (SNS) topic with multiple
Amazon Simple Queue Service (SQS) subscription
upvoted 1 times
  miki111 2 months, 3 weeks ago
Option D MET THE REQUIREMENT
upvoted 1 times

  RupeC 2 months, 3 weeks ago


Selected Answer: D
SNS + SQS Fan-out architecture. This is where messages from an SNS topic are fanned out to multiple SQS queues that subscribe to a
topic. Thus the consumers can take their feeds from different queues.
upvoted 1 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: D
SQS has unlimited throughput and makes it easy to decouple the applications.
D makes more sense.
upvoted 1 times
Question #8 Topic 1

A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary
server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes
resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?

A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.

B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.

C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.

D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the
compute nodes.

Correct Answer: C

Community vote distribution


B (94%) 3%

  rein_chau Highly Voted  11 months, 4 weeks ago


Selected Answer: B
A - incorrect: Schedule scaling policy doesn't make sense.
C, D - incorrect: Primary server should not be in same Auto Scaling group with compute nodes.
B is correct.
upvoted 58 times

  Wilson_S 10 months, 3 weeks ago


https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
upvoted 4 times
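One way to wire up option B (the community-preferred answer) is sketched below with boto3: a target-tracking policy that scales the worker Auto Scaling group on the queue's ApproximateNumberOfMessagesVisible metric. The group name, queue name, and target value are placeholder assumptions; the AWS docs linked above recommend a derived "backlog per instance" metric for production use.

```python
# Sketch of option B (assumed names/values): scale the compute-node Auto Scaling
# group based on SQS queue depth via a target-tracking policy.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-workers-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "job-queue"}],
            "Statistic": "Average",
        },
        # Keep roughly 100 visible messages per scaling unit; tune per workload.
        "TargetValue": 100.0,
    },
)
```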

  Sinaneos Highly Voted  11 months, 4 weeks ago


Selected Answer: B
The answer seems to be B for me:
A: doesn't make sense to schedule auto-scaling
C: Not sure how CloudTrail would be helpful in this case, at all.
D: EventBridge is not really used for this purpose, wouldn't be very reliable
upvoted 15 times

  debolek Most Recent  3 days, 15 hours ago


Option B provides the required scalability, resiliency, and dynamic workload handling that the company needs for its distributed
application while maximizing efficiency and
minimizing operational overhead
upvoted 1 times

  M0SHE 5 days, 14 hours ago


Selected Answer: B
Option B: This option provides a decoupled architecture where the jobs are sent to an SQS queue. The compute nodes (EC2 instances in an
Auto Scaling group) can then process these jobs. Scaling based on the size of the SQS queue (the number of messages) allows the
architecture to adapt to variable workloads, scaling out when the queue depth increases and scaling in when the depth decreases.
upvoted 1 times

  joyce66 3 weeks, 1 day ago


A and B are not correct. The question is about a multiple-node / distributed system. A and B use SQS, which is a message-driven solution, and you don't know from the question whether the system is message driven or not. The answer is either C or D. I selected D, but after reading the CloudTrail docs, C is correct. CloudTrail monitors actions, and CloudWatch monitors resources. The system is composed of multiple nodes which perform actions; from the actions monitored/recorded, CloudTrail can trigger/notify the next workload to act...
upvoted 2 times

  krozmok 4 weeks, 1 day ago


I think that B is correct, but considering that the question mentions they are only migrating and not redesigning the architecture, that could be a keyword to go for C as correct. It does not scale like SQS, but it fulfills the requirements. And it is well known that no one should modify legacy code :v
upvoted 1 times
  BillyBlunts 1 month, 1 week ago
Can anyone tell me what answers we are to pick on the test. This site is driving me crazy with not having matching answers, especially
with ones like this where the answer they have really doesn't seem like the right one. I am hesitant to pick the actual correct answer
because the dumps have wrong answers. Any help is appreciated.
upvoted 7 times

  DavidArmas 1 month, 2 weeks ago


Using CloudTrail as a "target for jobs" doesn't make sense, as CloudTrail is designed to audit and log API events in an AWS environment,
not to manage jobs in a distributed application.
upvoted 5 times

  Teruteru 1 month, 3 weeks ago


<Correct Answer> says option C is the correct answer, but in the <Community vote distribution> option B (97%) is the most voted. So do I still need to consider C the correct answer, or should I consider the most voted one correct?
Sorry, I'm just confused.
upvoted 2 times

  pKap1812 1 month, 1 week ago


I've gone through the first 290 questions, which are accessible for free, and I'm just doing a revision run. Trust me when I say that
about 98% of the time, I have found no dissonance between the most voted answer and what answer I could derive from a few simple
google searches, and the remaining 2% was because of ambiguity. So stick to the most voted ones, there's a much higher relative
probability of them being accurate.
upvoted 5 times

  niltriv98 1 month, 2 weeks ago


I always consider the most voted answer to be correct and most of the time they explain as well why that option is correct.
upvoted 2 times

  Jenny9063 1 month, 3 weeks ago


I am new to this and I am confused. Why is the voted answer different from the correct answer? What is the accuracy of the correct
answer?
upvoted 1 times


  hakim1977 2 months, 1 week ago


Selected Answer: B
why "C" is the answer ? CloudTrail has nothing to do here, that's the b the right answer
upvoted 2 times

  troubledev 2 months, 1 week ago


why "C" is the answer ? what cloud trial has to do here ?
upvoted 1 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: B
Based on the requirements stated, Option B is the most appropriate solution. It utilizes Amazon SQS for job destination and EC2 Auto
Scaling based on the size of the queue to handle variable workloads while maximizing resiliency and scalability.
upvoted 1 times

  BHU_9ijn 2 months, 2 weeks ago


Selected Answer: B
CloudTrail is used for auditing; it should not be part of the workflow.
Does anyone know why the marked answer is C?
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option B MET THE REQUIREMENT
upvoted 1 times
Question #9 Topic 1

A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after
the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available
storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle
management to avoid future storage issues.
Which solution will meet these requirements?

A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.

B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier
Deep Archive after 7 days.

C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.

D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible
Retrieval after 7 days.

Correct Answer: D

Community vote distribution


B (83%) Other

  Sinaneos Highly Voted  11 months, 4 weeks ago


Answer directly points towards file gateway with lifecycles,
https://docs.aws.amazon.com/filegateway/latest/files3/CreatingAnSMBFileShare.html

D is wrong because the "utility" is vague and there is no need for Flexible Retrieval.
upvoted 40 times

  Udoyen 10 months ago


Yes, it might be vague, but how do we keep the low-latency access that only Flexible Retrieval can offer?
upvoted 2 times

  SuperDuperPooperScooper 1 month, 2 weeks ago


Low-latency access is only required for the first 7 days, B maintains that fast access for 7 days and only then are the files sent to
Glacier Archive
upvoted 1 times

  Nava702 1 month ago


It says low-latency is required for the most recently accessed files, not new ones. So if an older file is retrieved from deep archive,
it should then readily be accessible, according to the question, which points toward Flexible retrieval. However the utility portion
in the answer D is vague.
upvoted 1 times

  javitech83 Highly Voted  10 months ago


Selected Answer: B
Answer B is correct. Low latency is only needed for newer files. Additionally, File Gateway provides low-latency access by caching frequently
accessed files locally, so the answer is B.
upvoted 21 times

  lowkey07 Most Recent  4 days ago


Installing a utility on each user computer is a MANUAL process. AWS will always favor an AUTOMATED process over a manual process. the
correct answer is B.
upvoted 1 times

  M0SHE 5 days, 14 hours ago


Selected Answer: B
Option B: Amazon S3 File Gateway provides a hybrid cloud storage solution, integrating on-premises environments with cloud storage. Files
written to the file share are automatically saved as S3 objects. With S3 Lifecycle policies, you can transition objects between storage
classes. Transitioning to Glacier Deep Archive is suitable for rarely accessed files. This solution addresses both the storage capacity and
lifecycle management requirements.
upvoted 1 times
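
As a rough sketch of the lifecycle half of option B, this is what a 7-day transition to S3 Glacier Deep Archive could look like on the bucket behind the S3 File Gateway. The bucket name is a placeholder, and applying the rule to every object (empty prefix) is an assumption.

import boto3

s3 = boto3.client("s3")

# Move every object to S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="file-gateway-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

The File Gateway keeps a local cache of recently used files, which is what preserves low-latency access to the newest data while older objects age out to Deep Archive.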

  pakut2 1 week, 2 days ago


Selected Answer: A
A is the only answer meeting the requirements. B and D are incorrect, since minimum storage duration needed for an object to be moved
into Glacier is 90 days, not 7 https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-compare. C does not
provide lifecycle management, which is a part of the requirements
upvoted 2 times

  pakut2 1 week, 2 days ago


I was wrong. Turns out you are able to transition from standard to Glacier after just 7 days. I can't find it explicitly stated anywhere, but
it works in the console. My confusion is from the `minimum storage duration` metric. As I understand it now, you are able to
delete/move objects before minimum storage duration is exceeded, but you pay for the entire period nonetheless. So it's doable, but
not cost effective. Anyway, it does not apply here since S3 Standard storage class does not have a minimum storage duration
constraint. So, I would go with B. Both B and D achieve the same thing, but B is way simpler to setup and maintain
upvoted 2 times

  LR2023 1 week, 4 days ago


Selected Answer: A
https://aws.amazon.com/about-aws/whats-new/2019/08/aws-datasync-can-now-transfer-data-to-and-from-smb-files-shares/

we cannot move objects to S3 Glacier within 7 days


upvoted 2 times

  sanjay_cloud_guy 2 weeks ago


D is the correct answer.
Keywords: low-latency access is required for the most recently accessed files, which would be retrieved from Glacier with low latency.
The utility would run on the multiple systems connected to the SMB server to do the transfer.
upvoted 1 times

  dbs6339 3 weeks ago


Selected Answer: D
Exactly. What we need to focus on is increasing the company's available storage space, not the total storage space, without losing
low-latency access.

Answer D is the more exact one.

Both B and D free up storage space by sending files to S3 Glacier after 7 days, but B loses low-latency access to the most
recently accessed files.
upvoted 1 times

  reema908516 3 weeks, 1 day ago


Selected Answer: B
Answer B is correct.
upvoted 1 times

  BillyBlunts 1 month, 2 weeks ago


Can anyone tell me what the test is going to use as the correct answer? I don't want to go into the test and just answer the ones that
the majority voted for when the dump doesn't list them as the right answer. BTW, the discussions are awesome and help you learn a lot from other
people's insights.
upvoted 3 times

  Fresbie99 1 month, 2 weeks ago


According to the question we need a lifecycle policy and a file gateway. Also, S3 Glacier Flexible Retrieval is not required here; Deep Archive is the
solution.
Hence B is correct.
upvoted 1 times

  McLobster 2 months ago


Selected Answer: A
according to documentation the minimum storage timeframe for an object inside S3 before being able to transition using lifecycle policy
is 30 days , so those 7 days policies kinda seem wrong to me

Transition actions – These actions define when objects transition to another storage class. For example, you might choose to transition
objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier Flexible Retrieval storage class
one year after creating them. For more information, see Using Amazon S3 storage classes.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

I was thinking of option A using DataSync as a scheduled task? am i wrong here?


https://aws.amazon.com/datasync/
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: B
Increase the company's available storage space without losing low-latency access to the most recently accessed files = Amazon S3 File
Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching.
Provide file lifecycle management = S3 Lifecycle policy.
upvoted 1 times

  Zoro_ 2 months, 1 week ago


It says rarely accessed, so Flexible Retrieval could be a better option; please clarify this.
Deep Archive retrieval takes about 12 hours (maybe less).
upvoted 1 times

  hakim1977 2 months, 1 week ago


Selected Answer: B
Answer B is correct.
upvoted 1 times

  Guru4Cloud 2 months, 2 weeks ago


Selected Answer: B
The most appropriate solution for the given requirements would be:

B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3
Glacier Deep Archive after 7 days.
upvoted 1 times

  premnick 2 months, 2 weeks ago


Selected Answer: D
The question says that after 7 days the files are "rarely accessed", which means there is still a possibility that these files would be needed
within a short period of time. Glacier Flexible Retrieval would fit this requirement.
upvoted 3 times
Question #10 Topic 1

A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway
REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?

A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application
receives an order. Subscribe an AWS Lambda function to the topic to perform processing.

B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application
receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.

C. Use an API Gateway authorizer to block any requests while the application processes an order.

D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the
application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

Correct Answer: A

Community vote distribution


B (98%)

  Sinaneos Highly Voted  11 months, 4 weeks ago


Selected Answer: B
B because FIFO is made for that specific purpose
upvoted 49 times

  rein_chau Highly Voted  11 months, 4 weeks ago


Selected Answer: B
Should be B because SQS FIFO queue guarantees message order.
upvoted 23 times

  AWSGuru123 Most Recent  1 week, 1 day ago


Selected Answer: B
FIFO queue suits best
upvoted 1 times

  praveenkumar2 1 week, 4 days ago


I vote for Option B. When the question talks about processing order, go straight for SQS FIFO.
upvoted 1 times

  dbs6339 3 weeks ago


Selected Answer: A
Answer is A. This is an order process: when the ecommerce application creates an order, an order topic event occurs and many consumer APIs
subscribe to that topic, so using SQS is not appropriate; using SNS is best.
upvoted 2 times

  dbs6339 3 days, 17 hours ago


Additionally, to add more detail about the REST API from the question: SQS FIFO is not the proper option because SQS is designed for a single
consumer, while the REST API handles many requests, so SNS fits this case. SNS publishes a topic that fans out to many
subscribers through various delivery methods. It could be SQS if it sits behind SNS after the API Gateway.
upvoted 1 times

  dbs6339 3 days, 16 hours ago


I was wrong; answer B is correct.
upvoted 1 times

  paobalinas 3 weeks, 5 days ago


How come the answers do not match? I also think B is the correct answer because of the FIFO mention.
upvoted 1 times

  MakaylaLearns 4 weeks ago


I made two short videos explaining why the answer is B.
Also, we should note that SNS can sometimes lose a message, whereas SQS will hold onto the message until it can be delivered.

These videos are super quick!

https://youtube.com/shorts/Je_Zc_qWoYE?feature=share
What is API gateway?
https://youtube.com/shorts/1IGqAHgpqEo?feature=share
upvoted 2 times
  benacert 4 weeks ago
B it is..
upvoted 1 times

  chrisda 1 month ago


option B
upvoted 1 times

  PLN6302 1 month, 1 week ago


option B
upvoted 1 times

  zjcorpuz 2 months, 1 week ago


B is the correct answer; SQS configured as FIFO satisfies the requirement.
upvoted 1 times

  prabhjot 2 months, 1 week ago


Ans is B(Amazon SQS FIFO queue.) - check the video here - https://aws.amazon.com/sqs/
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: B
Explanation:
- Amazon API Gateway will be used to receive the orders from the web application.
- Instead of directly processing the orders, the API Gateway will integrate with an Amazon SQS FIFO queue.
- FIFO (First-In-First-Out) queues in Amazon SQS ensure that messages are processed in the order they are received.
- By using a FIFO queue, the order processing is guaranteed to be sequential, ensuring that the first order received is processed before
the next one.
- An AWS Lambda function can be configured to be triggered by the SQS FIFO queue, processing the orders as they arrive
upvoted 5 times
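
A minimal sketch of the FIFO piece of option B is shown below. The queue name and message group ID are illustrative, and in the proposed architecture the send would come from an API Gateway service integration rather than from application code calling boto3 directly.

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo". Content-based deduplication lets
# SQS derive the deduplication ID from a hash of the message body.
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Messages that share a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "12345", "status": "NEW"}',
    MessageGroupId="orders",
)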

  james2033 2 months, 2 weeks ago


Selected Answer: B
Keyword "orders are processed in the order that they are received", choose what has word "SQS", there are B and D. B is better than D,
with keyword "FIFO" is exist.
upvoted 1 times

  ShreyasAlavala 2 months, 3 weeks ago


Can someone share the PDF to [email protected]
upvoted 1 times

  Multiverse 2 months, 3 weeks ago


Selected Answer: B
B. Process in the order received. SNS does not provide FIFO capabilities
upvoted 1 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: B
B because FIFO is made for that specific purpose
upvoted 1 times
Question #11 Topic 1

A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the
database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of
credential management.
What should a solutions architect do to accomplish this goal?

A. Use AWS Secrets Manager. Turn on automatic rotation.

B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.

C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate
the credential file to the S3 bucket. Point the application to the S3 bucket.

D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2
instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Correct Answer: B

Community vote distribution


A (96%)

  Sinaneos Highly Voted  11 months, 4 weeks ago


Selected Answer: A
B is wrong because parameter store does not support auto rotation, unless the customer writes it themselves, A is the answer.
upvoted 69 times

  17Master 11 months, 1 week ago


READ!!! AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT
resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their
lifecycle.
https://aws.amazon.com/cn/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/ y
https://aws.amazon.com/secrets-manager/?nc1=h_ls
upvoted 17 times

  HarishArul 4 months, 1 week ago


Read this: https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_parameterstore.html
It says SSM Parameter Store can't rotate secrets automatically.
upvoted 3 times

  kewl 10 months ago


correct. see link https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/ for differences between SSM
Parameter Store and AWS Secrets Manager
upvoted 13 times

  mrbottomwood 9 months, 3 weeks ago


That was a fantastic link. This part of their site "comparison of AWS services" is superb. Thanks.
upvoted 5 times

  iCcma 11 months, 2 weeks ago


ty bro, I was confused about that and you just mentioned the "key" phrase, B doesn't support autorotation
upvoted 2 times

  leeyoung Highly Voted  9 months ago


Admin is trying to fail everybody in the exam.
upvoted 49 times

  perception 5 months, 1 week ago


He wants you to read discussion part as well for better understanding
upvoted 2 times

  acuaws 6 months ago


RIGHT? I found a bunch of "correct" answers on here are not really correct, but they're not corrected? hhmmmmm
upvoted 2 times

  santbot Most Recent  12 hours ago


Selected Answer: A
A - Secrets Manager
upvoted 1 times
  Mandar15 14 hours, 44 minutes ago
Selected Answer: A
Aurora automatically stores and manages database credentials in AWS Secrets Manager. Aurora rotates database credentials regularly,
without requiring application changes. Secrets Manager secures database credentials from human access and plain text view.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-secrets-manager.html
upvoted 1 times
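
For anyone who wants to see option A in code, here is a minimal boto3 sketch. The secret name, credentials, and rotation Lambda ARN are placeholders; the rotation function would typically be created from one of the AWS-provided RDS rotation templates.

import boto3
import json

secrets = boto3.client("secretsmanager")

# Store the database credentials as a secret instead of a local file.
secret = secrets.create_secret(
    Name="prod/aurora/app-user",  # placeholder secret name
    SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
)

# Turn on automatic rotation every 30 days via a rotation Lambda function.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotation",  # placeholder ARN
    RotationRules={"AutomaticallyAfterDays": 30},
)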

  NaaVeeN 1 day, 21 hours ago


If the most-voted answers are chosen by us, then who is marking the answers as correct?
upvoted 1 times

  novice16 5 days ago


Selected Answer: A
Secret manager and auto rotation does the job
upvoted 1 times

  Shaansd 1 week, 3 days ago


can anyone please share the pdf to my email [email protected]
upvoted 1 times

  praveenkumar2 1 week, 4 days ago


I vote for A. If it is something to do with credentials and rotation, only Secrets Manager does the job.
upvoted 1 times

  sanjay_cloud_guy 2 weeks ago


A is the correct answer. User names and passwords are secrets, so they should be stored in Secrets Manager.
upvoted 1 times

  Blackberry 2 weeks, 4 days ago


Can the admin update the marked answers? It's confusing whether to prefer the most-voted answer or the listed correct answer.
upvoted 1 times

  cyber_bedouin 2 weeks, 3 days ago


most voted
upvoted 1 times

  akshunn 3 weeks, 3 days ago


Selected Answer: A
A it is
upvoted 1 times

  MakaylaLearns 4 weeks ago


I meant to say WHY B at the end!
So I think A is the answer, here is a little video I made
https://youtube.com/shorts/njodSslsqOs?feature=share
upvoted 1 times

  Hassaoo 1 month ago


A is right because Secrets Manager is meant for RDS integration.
upvoted 1 times

  PLN6302 1 month, 1 week ago


Option A
upvoted 1 times

  kapalulz 1 month, 3 weeks ago


Selected Answer: A
parameter store does not support auto rotation and AWS Secrets Manager does.
upvoted 2 times

  TariqKipkemei 2 months ago


Selected Answer: A
Minimize the operational overhead of credential management = AWS Secrets Manager
upvoted 1 times

  hakim1977 2 months, 1 week ago


Selected Answer: A
The answer is A without hesitation. In the AWS console: Secrets Manager => Store a new secret => choose the "Credentials for Amazon RDS
database" option.

https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html
upvoted 1 times
Question #12 Topic 1

A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static
data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce
latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the
CloudFront distribution.

B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint Configure Route 53 to route traffic to the CloudFront distribution.

C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that
has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the
custom domain name as an endpoint for the web application.

D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the
other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.

Correct Answer: C

Community vote distribution


A (77%) C (23%)

  Kartikey140 Highly Voted  10 months, 2 weeks ago


Answer is A
Explanation - AWS Global Accelerator vs CloudFront
• They both use the AWS global network and its edge locations around the world
• Both services integrate with AWS Shield for DDoS protection.
• CloudFront
• Improves performance for both cacheable content (such as images and videos)
• Dynamic content (such as API acceleration and dynamic site delivery)
• Content is served at the edge
• Global Accelerator
• Improves performance for a wide range of applications over TCP or UDP
• Proxying packets at the edge to applications running in one or more AWS Regions.
• Good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP
• Good for HTTP use cases that require static IP addresses
• Good for HTTP use cases that required deterministic, fast regional failover
upvoted 67 times

  daizy 8 months ago


By creating a CloudFront distribution that has both the S3 bucket and the ALB as origins, the company can reduce latency for both the
static and dynamic data. The CloudFront distribution acts as a content delivery network (CDN), caching the data closer to the users and
reducing the latency. The company can then configure Route 53 to route traffic to the CloudFront distribution, providing improved
performance for the web application.
upvoted 5 times

  kanweng Highly Voted  10 months, 3 weeks ago


Selected Answer: A
Q: How is AWS Global Accelerator different from Amazon CloudFront?

A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around
the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API
acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by
proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases,
such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or
deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
upvoted 21 times

  rainiverse Most Recent  4 days, 4 hours ago


Selected Answer: A
I'm wavering between A and C.
With dynamic content, CloudFront caching is not ideal.
But with answer C, Global Accelerator doesn't support a CloudFront endpoint:
"Endpoints for standard accelerators in AWS Global Accelerator can be Network Load Balancers, Application Load Balancers, Amazon EC2
instances, or Elastic IP addresses."
So I choose A.
upvoted 1 times
  aropl 4 days, 16 hours ago
A is correct; the other answers have the wrong origin or endpoint types.
CloudFront supports multiple origins on the same distribution (ALB and S3 in our case).
B incorrect - a Global Accelerator standard accelerator doesn't support S3 endpoints.
C incorrect - a Global Accelerator standard accelerator doesn't support a CloudFront distribution as an endpoint.
D incorrect - a Global Accelerator standard accelerator doesn't support S3 endpoints.
upvoted 1 times

  M0SHE 5 days, 9 hours ago


Selected Answer: A
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the
CloudFront distribution.

Here's the reasoning:

CloudFront with Multiple Origins: CloudFront allows you to set up multiple origins for your distribution, so you can use both the ALB (for
dynamic content) and the S3 bucket (for static content) as origins. This means that both your dynamic and static content can be served
through CloudFront, which will cache content at edge locations to reduce latency.
Route 53 Integration with CloudFront: Amazon Route 53 can be easily configured to route traffic for your domain to a CloudFront
distribution. Users will access your domain, and Route 53 will direct them to the nearest CloudFront edge location.
upvoted 1 times
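
To illustrate the Route 53 side of option A, here is a sketch of an alias record that points the company's domain at the CloudFront distribution. The hosted zone ID, domain name, and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that AWS documents for CloudFront alias targets.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890EXAMPLE",  # the company's hosted zone (placeholder)
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",           # placeholder domain
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront alias zone ID
                        "DNSName": "d111111abcdef8.cloudfront.net",  # the distribution
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)

The distribution itself would be created with two origins (the S3 bucket for static content and the ALB for dynamic content) and a cache behavior per path pattern.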

  David_Ang 6 days, 17 hours ago


I did some research and "A" is correct, but "C" would also work. The thing is that "A" is simpler than C, and the fact that it does not use Global
Accelerator makes it cheaper, so it is the better answer.
upvoted 1 times

  gsax 3 weeks, 2 days ago


Selected Answer: A
A - Simple solution. CloudFront by itself is enough to reduce latency and improve performance, and it can use both S3 and the ALB as origins.

https://repost.aws/knowledge-center/cloudfront-distribution-serve-content
upvoted 1 times

  Santku 3 weeks, 3 days ago


The answer should be C, considering that dynamic content (even when hosted by heterogeneous providers) is better accelerated through AWS Global
Accelerator than through CloudFront.
Please refer to the following link to compare AWS Global Accelerator vs CloudFront:
https://www.techtarget.com/searchcloudcomputing/tip/Compare-AWS-Global-Accelerator-vs-Amazon-CloudFront
upvoted 1 times

  Santku 3 weeks, 3 days ago


Also, caching is not recommended for dynamic content, and caching is CloudFront's behavior.
upvoted 1 times

  georgitoan 2 months ago


What is the correct answer for the exam?
upvoted 1 times

  BillyBlunts 1 month, 2 weeks ago


I have been asking this on many questions because it is confusing that majority of people are disagreeing with what this dump and the
3 other dumps have as the answer. Yet the arguments for why they are wrong are very good and some prove right...but f'in aye...what
is the answer we need to pick for the test.
upvoted 1 times

  BillyBlunts 1 month, 2 weeks ago


I think I am just going with the answers provided by exam topics....that is probably the safer bet.
upvoted 1 times

  TundeO 1 week, 6 days ago


Answer to this is A
https://aws.amazon.com/blogs/networking-and-content-delivery/deliver-your-apps-dynamic-content-using-amazon-cloudfront-
getting-started-template/
upvoted 1 times

  Lorenzo1 2 months ago


Selected Answer: A
The answer A fulfills the requirements, so I would choose A.
The answer C may also seem to make sense though.
upvoted 1 times
  gurmit 2 months ago
A.
https://stackoverflow.com/questions/71064028/aws-cloudfront-in-front-of-s3-and-alb
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: A
Improve performance and reduce latency for the static data and dynamic data = Amazon CloudFront
upvoted 1 times

  Lx016 1 month, 2 weeks ago


And? All four answers involve CloudFront; to find the correct one you need to identify the correct origin(s) to distribute, which are S3 and the ALB.
upvoted 1 times

  hakim1977 2 months, 1 week ago


Selected Answer: A
Answer is A.

Global Accelerator is a good fit for non-HTTP use cases.


upvoted 1 times

  frkael 2 months, 2 weeks ago


Keywords: static and dynamic data / S3 and ALB.
So CloudFront is for S3 and Global Accelerator is for the ALB.
upvoted 3 times

  miki111 2 months, 3 weeks ago


Option A MET THE REQUIREMENT
upvoted 1 times

  Jayendra0609 2 months, 3 weeks ago


Selected Answer: C
The data in S3 is static while the other data is dynamic. Caching dynamic data doesn't make sense since it changes every time, so rather than
caching we can use Global Accelerator's edge locations to reduce latency.
upvoted 5 times

  Clouddon 1 month, 3 weeks ago


I will support option C because the combination of CloudFront and Global Accelerator as described in answer C is better. • Note:
CloudFront uses Edge Locations to cache content (improve performance) while Global Accelerator uses Edge Locations to find an
optimal pathway to the nearest regional endpoint (reduce latency for the static data and dynamic data).
upvoted 1 times

  Clouddon 1 month, 3 weeks ago


The above explanation is correct; however, the answer cannot be option C (my bad). The correct answer is A. AWS says that Global
Accelerator is acceleration at the network level:
"AWS Global Accelerator is a networking service that improves the performance, reliability and security of your online applications
using AWS Global Infrastructure. AWS Global Accelerator can be deployed in front of your Network Load Balancers, Application Load
Balancers, AWS EC2 instances, and Elastic IPs, any of which could serve as Regional endpoints for your application."
This implies that CloudFront is not one of the endpoint types that Global Accelerator supports, unless someone can prove otherwise.
upvoted 1 times

  premnick 2 months, 3 weeks ago


How do we pass the exam if there are lots of questions where two options seem to be correct? I don't think Amazon should provide
options where candidates are only left to make a guess.
upvoted 3 times

  Jeffab 1 month ago


You pass the exam by knowing the Subject matter. If you just want answers, go somewhere else! Most don't care what admin publishes
as correct. More important to understand rationale and the discussion with links to support the arguments. If not that, then the
consensus.
upvoted 2 times
Question #13 Topic 1

A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the
credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?

A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets
Manager to rotate the secrets on a schedule.

B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the
required Regions. Configure Systems Manager to rotate the secrets on a schedule.

C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon
CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.

D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the
secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate
the secrets.

Correct Answer: A

Community vote distribution


A (100%)

  rein_chau Highly Voted  11 months, 4 weeks ago


Selected Answer: A
A is correct.
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
upvoted 19 times

  PhucVuu Highly Voted  5 months, 4 weeks ago


Selected Answer: A
Keywords:
- rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions
- LEAST operational overhead

A: Correct - AWS Secrets Manager supports:
- Encrypting credentials for RDS, DocumentDB, Redshift, other databases, and key/value secrets.
- Multi-Region replication.
- Rotation on a schedule.
B: Incorrect - Secure string parameters only apply to Parameter Store; all the data in AWS Secrets Manager is already encrypted.
C: Incorrect - doesn't mention replicating the S3 data across Regions.
D: Incorrect - so many steps compared to answer A =))
upvoted 6 times
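
A rough sketch of the multi-Region and scheduling pieces of option A follows. The secret name, replica Regions, rotation Lambda ARN, and cron expression are assumptions chosen to line up with a monthly maintenance window.

import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Replicate the primary secret into the other Regions that host the databases.
secrets.replicate_secret_to_regions(
    SecretId="prod/mysql/admin",  # placeholder secret name
    AddReplicaRegions=[{"Region": "eu-west-1"}, {"Region": "ap-southeast-1"}],
)

# Rotate on a monthly schedule (04:00 UTC on the 1st of each month).
secrets.rotate_secret(
    SecretId="prod/mysql/admin",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:mysql-rotation",  # placeholder ARN
    RotationRules={"ScheduleExpression": "cron(0 4 1 * ? *)"},
)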

  MakaylaLearns Most Recent  4 weeks ago


So this is what I thought
https://youtube.com/shorts/6YSBv95V2cs?feature=share

What is a secure string parameter?


https://youtube.com/shorts/-6wJOqZ93co?feature=share
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: A
'The company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions' = AWS Secrets
Manager
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option A MET THE REQUIREMENT
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
Option A: Storing the credentials as secrets in AWS Secrets Manager provides a dedicated service for secure and centralized management
of secrets. By using multi-Region secret replication, the company ensures that the secrets are available in the required Regions for
rotation. Secrets Manager also provides built-in functionality to rotate secrets automatically on a defined schedule, reducing operational
overhead. This automation simplifies the process of rotating credentials for the Amazon RDS for MySQL databases during monthly
maintenance activities.
upvoted 5 times
  Bmarodi 4 months ago
Selected Answer: A
A is correct answer.
upvoted 1 times

  Musti35 5 months, 3 weeks ago


Selected Answer: A
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other
secrets. When you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets
to a Region is a security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require
replication of secrets across Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more
Regions to support these scenarios.
upvoted 3 times

  linux_admin 6 months ago


Selected Answer: A
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets
Manager to rotate the secrets on a schedule.

This solution is the best option for meeting the requirements with the least operational overhead. AWS Secrets Manager is designed
specifically for managing and rotating secrets like database credentials. Using multi-Region secret replication, you can easily replicate the
secrets across the required AWS Regions. Additionally, Secrets Manager allows you to configure automatic secret rotation on a schedule,
further reducing the operational overhead.
upvoted 1 times

  cheese929 7 months, 2 weeks ago


Selected Answer: A
A is correct.
upvoted 1 times

  BlueVolcano1 8 months, 2 weeks ago


Selected Answer: A
It's A, as Secrets Manager does support replicating secrets into multiple AWS Regions:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html
upvoted 3 times

  Abdel42 8 months, 3 weeks ago


Selected Answer: A
it's A, here the question specify that we want the LEAST overhead
upvoted 2 times

  MichaelCarrasco 7 months, 3 weeks ago


https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: A
AWS Secrets Manager is a secrets management service that enables you to store, manage, and rotate secrets such as database
credentials, API keys, and SSH keys. Secrets Manager can help you minimize the operational overhead of rotating credentials for your
Amazon RDS for MySQL databases across multiple Regions. With Secrets Manager, you can store the credentials as secrets and use multi-
Region secret replication to replicate the secrets to the required Regions. You can then configure Secrets Manager to rotate the secrets on
a schedule so that the credentials are rotated automatically without the need for manual intervention. This can help reduce the risk of
secrets being compromised and minimize the operational overhead of credential management.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
Option A, storing the credentials as secrets in AWS Secrets Manager and using multi-Region secret replication for the required Regions,
and configuring Secrets Manager to rotate the secrets on a schedule, would meet the requirements with the least operational overhead.

AWS Secrets Manager allows you to store, manage, and rotate secrets, such as database credentials, across multiple AWS Regions. By
enabling multi-Region secret replication, you can replicate the secrets across the required Regions to allow for seamless rotation of the
credentials during maintenance activities. Additionally, Secrets Manager provides automatic rotation of secrets on a schedule, which
would minimize the operational overhead of rotating the credentials on a monthly basis.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option B, storing the credentials as secrets in AWS Systems Manager and using multi-Region secret replication, would not provide
automatic rotation of secrets on a schedule.
Option C, storing the credentials in an S3 bucket with SSE enabled and using EventBridge to invoke an AWS Lambda function to rotate
the credentials, would not provide automatic rotation of secrets on a schedule.

Option D, encrypting the credentials as secrets using KMS multi-Region customer managed keys and storing the secrets in a
DynamoDB global table, would not provide automatic rotation of secrets on a schedule and would require additional operational
overhead to retrieve the secrets from DynamoDB and use the RDS API to rotate the secrets.
upvoted 2 times
  Zerotn3 9 months, 1 week ago
vote A !
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: A
AWS Secret Manager
upvoted 1 times

  ngochieu276 9 months, 3 weeks ago


A is correct
upvoted 1 times
Question #14 Topic 1

A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2
Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce
application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions.
The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining
high availability.
Which solution will meet these requirements?

A. Use Amazon Redshift with a single node for leader and compute functionality.

B. Use Amazon RDS with a Single-AZ deployment Configure Amazon RDS to add reader instances in a different Availability Zone.

C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.

D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

Correct Answer: C

Community vote distribution


C (100%)

  D2w Highly Voted  11 months, 3 weeks ago


Selected Answer: C
C. Aurora offers up to 5x the performance of MySQL on RDS, and the application handles more read requests than writes; maintaining high availability
= Multi-AZ deployment.
upvoted 27 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: C
Option C, using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas, would be the best
solution to meet the requirements.

Aurora is a fully managed, MySQL-compatible relational database that is designed for high performance and high availability. Aurora
Multi-AZ deployments automatically maintain a synchronous standby replica in a different Availability Zone to provide high availability.
Additionally, Aurora Auto Scaling allows you to automatically scale the number of Aurora Replicas in response to read workloads, allowing
you to meet the demand of unpredictable read workloads while maintaining high availability. This would provide an automated solution
for scaling the database to meet the demand of the application while maintaining high availability.
upvoted 9 times
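
For reference, Aurora Replica Auto Scaling in option C is configured through Application Auto Scaling. A minimal sketch follows; the cluster identifier, capacity bounds, and CPU target are assumptions.

import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",  # placeholder cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Add or remove Aurora Replicas to keep average reader CPU near 70%.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)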

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, using Amazon Redshift with a single node for leader and compute functionality, would not provide high availability.

Option B, using Amazon RDS with a Single-AZ deployment and configuring RDS to add reader instances in a different Availability Zone,
would not provide high availability and would not automatically scale the number of reader instances in response to read workloads.

Option D, using Amazon ElastiCache for Memcached with EC2 Spot Instances, would not provide a database solution and would not
meet the requirements.
upvoted 2 times

  AWSGuru123 Most Recent  1 day, 13 hours ago


Selected Answer: C
Aurora
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: C
C fit perfectly
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: C
Unpredictable read workloads while maintaining high availability = Amazon Aurora with a Multi-AZ deployment, Auto Scaling with Aurora
read replicas.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: C
As the application handles more read requests than write transactions, using read replicas with Aurora is an ideal choice as it allows read
scaling without sacrificing write performance on the primary instance.
upvoted 1 times
  miki111 2 months, 3 weeks ago
Option C MET THE REQUIREMENT
upvoted 1 times

  hiepdz98 3 months ago


Selected Answer: C
Option C
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
Option C: Using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas is the most
appropriate solution. Aurora is a MySQL-compatible relational database engine that provides high performance and scalability. With Multi-
AZ deployment, the database is automatically replicated across multiple Availability Zones for high availability. Aurora Auto Scaling allows
the database to automatically add or remove Aurora Replicas based on the workload, ensuring that read requests can be distributed
effectively and the database can scale to meet demand. This provides both high availability and automatic scaling to handle unpredictable
read workloads.
upvoted 2 times

  Bmarodi 4 months ago


Selected Answer: C
C meets the requirements.
upvoted 1 times

  Mehkay 4 months ago


C Aurora with read replicas
upvoted 1 times

  big0007 4 months, 2 weeks ago


Key words:
- Must support MySQL
- High Availability (must be mulit-az)
- Auto Scaling
upvoted 4 times

  cheese929 4 months, 2 weeks ago


Selected Answer: C
C is correct since cost is not a concern.
upvoted 1 times

  Abrar2022 4 months, 3 weeks ago


It's Aurora with Multi-AZ deployment - Keywords > "unpredictable read workloads while maintaining high availability"
upvoted 2 times

  Abrar2022 4 months, 3 weeks ago


To automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability, you can use
Amazon Aurora with a Multi-AZ deployment. Aurora is a fully managed, MySQL-compatible database service that can automatically scale
up or down based on workload demands. With a Multi-AZ deployment, Aurora maintains a synchronous standby replica in a different
Availability Zone (AZ) to provide high availability in the event of an outage.
upvoted 2 times

  PhucVuu 5 months, 4 weeks ago


Selected Answer: C
Keywords:
- The database's performance degrades quickly as application load increases.
- The application handles more read requests than write transactions.
- Automatically scale the database to meet the demand of unpredictable read workloads
- Maintaining high availability.

A: Incorrect - Amazon Redshift uses columnar block storage, which is useful for data analytics and warehousing.
There are also issues when migrating from MySQL to Redshift (stored procedures, triggers, ...), and a single node for the leader does not maintain high
availability.
B: Incorrect - The requirement says "automatically scale the database to meet the demand of unpredictable read workloads" ->
this option is missing auto scaling.
C: Correct - it resolves both the high availability and the auto scaling requirements.
D: Incorrect - Spot Instances can be interrupted, so they do not maintain high availability.
upvoted 6 times

  gx2222 6 months ago


Selected Answer: C
Amazon Aurora is a relational database engine that is compatible with MySQL and PostgreSQL. It is designed for high performance,
scalability, and availability. With a Multi-AZ deployment, Aurora automatically replicates the database to a standby instance in a different
Availability Zone. This provides high availability and fast failover in case of a primary instance failure.

Aurora Auto Scaling allows you to add or remove Aurora Replicas based on CPU utilization, connections, or custom metrics. This enables
you to automatically scale the read capacity of the database in response to application load. Aurora Replicas are read-only instances that
can offload read traffic from the primary instance. They are kept in sync with the primary instance using Aurora's distributed storage
architecture, which enables low-latency updates across the replicas.
upvoted 1 times
Question #15 Topic 1

A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The
company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow
inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?

A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.

B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.

C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.

D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

Correct Answer: C

Community vote distribution


C (91%) 9%

  SilentMilli Highly Voted  8 months, 4 weeks ago


Selected Answer: C
I would recommend option C: Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the
production VPC.

AWS Network Firewall is a managed firewall service that provides filtering for both inbound and outbound network traffic. It allows you to
create rules for traffic inspection and filtering, which can help protect your production VPC.

Option A: Amazon GuardDuty is a threat detection service, not a traffic inspection or filtering service.

Option B: Traffic Mirroring is a feature that allows you to replicate and send a copy of network traffic from a VPC to another VPC or on-
premises location. It is not a service that performs traffic inspection or filtering.

Option D: AWS Firewall Manager is a security management service that helps you to centrally configure and manage firewalls across your
accounts. It is not a service that performs traffic inspection or filtering.
upvoted 48 times
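
To make option C a little more concrete, here is a sketch of a stateful rule group using Suricata-compatible rules. The rule group name, capacity, CIDR range, and the example rule itself are assumptions; the rule group would then be referenced from a firewall policy attached to a Network Firewall endpoint in the production VPC.

import boto3

nfw = boto3.client("network-firewall")

# A tiny Suricata-style rule set: drop outbound Telnet from the VPC CIDR.
suricata_rules = (
    'drop tcp 10.0.0.0/16 any -> any 23 '
    '(msg:"Block outbound Telnet"; sid:1000001; rev:1;)'
)

nfw.create_rule_group(
    RuleGroupName="production-vpc-inspection",  # placeholder name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={"RulesSource": {"RulesString": suricata_rules}},
    Description="Traffic inspection and filtering rules for the production VPC",
)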

  Clouddon 1 month, 3 weeks ago


Thank you for this reply
upvoted 2 times

  BoboChow Highly Voted  11 months, 3 weeks ago


Selected Answer: C
I agree with C.
**AWS Network Firewall** is a stateful, managed network firewall and intrusion detection and prevention service for your virtual private
cloud (VPC) that you created in Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of
your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect.
upvoted 23 times

  BoboChow 11 months, 3 weeks ago


And I'm not sure Traffic Mirroring can be for filtering
upvoted 3 times

  reema908516 Most Recent  3 weeks, 1 day ago


Selected Answer: C
AWS Network Firewall is a managed firewall service that provides filtering for both inbound and outbound network traffic. It allows you to
create rules for traffic inspection and filtering, which can help protect your production VPC.
upvoted 1 times

  nmywrld 1 month, 2 weeks ago


Why isn't D viable? Firewall Manager will help provision Network Firewall as required if you define it in Firewall Manager, and it's fully
managed, not requiring you to do any configuration or setup.
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: C
C with no doubt
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: C
- AWS Network Firewall is a managed network security service that provides stateful inspection of traffic and allows you to define firewall
rules to control the traffic flow in and out of your VPC.
- With AWS Network Firewall, you can create custom rule groups to define specific operations for traffic inspection and filtering.
- It can perform deep packet inspection and filtering at the network level to enforce security policies, block malicious traffic, and allow or
deny traffic based on defined rules.
- By integrating AWS Network Firewall with the production VPC, you can achieve similar functionalities as the on-premises inspection
server, performing traffic flow inspection and filtering.
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option C MET THE REQUIREMENT
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
AWS Network Firewall is a managed network firewall service that allows you to define firewall rules to filter and inspect network traffic. You
can create rules to define the traffic that should be allowed or blocked based on various criteria such as source/destination IP addresses,
protocols, ports, and more. With AWS Network Firewall, you can implement traffic inspection and filtering capabilities within the
production VPC, helping to protect the network traffic.

In the context of the given scenario, AWS Network Firewall can be a suitable choice if the company wants to implement traffic inspection
and filtering directly within the VPC without the need for traffic mirroring. It provides an additional layer of security by enforcing specific
rules for traffic filtering, which can help protect the production environment.
upvoted 2 times

  Danni 3 months, 2 weeks ago


Anyone with the contributor access, kindly help me. I'm in need of the last set of questions as a means of retake preparations.
upvoted 1 times

  AJAYSINGH0807 3 months, 4 weeks ago


B is correct answer
upvoted 1 times

  mbuck2023 4 months ago


Selected Answer: B
option B with Traffic Mirroring is the most suitable solution for mirroring the traffic from the production VPC to an inspection instance or
tool, allowing you to perform traffic inspection and filtering as required.
upvoted 3 times

  abhishek2021 4 months, 2 weeks ago


Selected Answer: C
C is correct, as the option uses AWS services to fully meet the requirement.
Had the question not been asking for this "in the AWS Cloud", option B could have been a correct option too, but a costlier one, as the user would
pay for network data for every bit of traffic replicated between the AWS Cloud and the on-premises location.
upvoted 1 times

  sbnpj 4 months, 2 weeks ago


Selected Answer: B
Traffic Mirroring will allow you to inspect and filter traffic using a server (note the company had an on-premises server for traffic filtering).
upvoted 2 times

  siyokko 4 months, 2 weeks ago


Selected Answer: B
Option B, using Traffic Mirroring, is the most appropriate solution. Traffic Mirroring allows you to capture and forward network traffic from
an Amazon VPC to an inspection instance or service for analysis and filtering. By mirroring the traffic from the production VPC, you can
send it to an inspection server or a dedicated service that performs the required traffic flow inspection and filtering, replicating the
functionalities of the on-premises inspection server.
upvoted 3 times

  mbuck2023 4 months ago


Yes, so says chatgpt
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: C
C is correct
upvoted 1 times

  Abrar2022 4 months, 3 weeks ago


Network Firewall is for inspection and traffic filtering.
upvoted 1 times

  leonardh 4 months, 3 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-filters.html
A traffic mirror filter is a set of inbound and outbound rules that determines which traffic is copied from the traffic mirror source and sent
to the traffic mirror target. You can also choose to mirror certain network services traffic, including Amazon DNS. When you add network
services traffic, all traffic (inbound and outbound) related to that network service is mirrored.
upvoted 1 times
Question #16 Topic 1

A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a
reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team
should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?

A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate IAM roles.

B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate users and groups.

C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce
reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.

D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS
for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the
reports.

Correct Answer: D

Community vote distribution


B (80%) 11% 8%

  rodriiviru Highly Voted  11 months, 3 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html
upvoted 53 times

  mattlai 11 months, 3 weeks ago


https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html
^ more percise link
upvoted 10 times

  BoboChow 11 months, 3 weeks ago


Agree with you
upvoted 2 times

  PhucVuu Highly Voted  5 months, 4 weeks ago


Selected Answer: B
Keywords:
- Data lake on AWS.
- Consists of data in Amazon S3 and Amazon RDS for PostgreSQL.
- The company needs a reporting solution that provides data VISUALIZATION and includes ALL the data sources within the data lake.

A - Incorrect: Amazon QuickSight only supports users (Standard edition) and groups (Enterprise edition). These users and groups exist only
within QuickSight; QuickSight doesn't share dashboards with IAM roles. We use QuickSight users and groups to view the dashboards.
B - Correct: as explained under A, and QuickSight can create dashboards from S3, RDS, Redshift, Aurora, Athena, OpenSearch, and
Timestream.
C - Incorrect: this approach doesn't provide visualization and doesn't mention how to process the RDS data.
D - Incorrect: this approach doesn't provide visualization and doesn't mention how to combine the RDS and S3 data.
upvoted 24 times
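
To illustrate the sharing model that option B relies on, here is a minimal boto3 sketch, assuming a placeholder account ID, dashboard ID, and a hypothetical QuickSight group named management-team, that grants the group viewer access to a published dashboard:

import boto3

qs = boto3.client("quicksight", region_name="us-east-1")  # assumed region

account_id = "123456789012"        # placeholder account ID
dashboard_id = "sales-dashboard"   # placeholder dashboard ID
group_arn = (                      # placeholder QuickSight group ARN
    f"arn:aws:quicksight:us-east-1:{account_id}:group/default/management-team"
)

# Grant the group read-only access to the published dashboard
qs.update_dashboard_permissions(
    AwsAccountId=account_id,
    DashboardId=dashboard_id,
    GrantPermissions=[{
        "Principal": group_arn,
        "Actions": [
            "quicksight:DescribeDashboard",
            "quicksight:ListDashboardVersions",
            "quicksight:QueryDashboard",
        ],
    }],
)

The same call with a broader action list can be used for the management team's full-access group, while everyone else only receives the viewer actions shown here.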

  oddnoises Most Recent  1 week, 3 days ago


For anyone wondering how to know what answers to pick when the voted answer and "official" answer are different:
Ask ChatGPT the question without giving it the answer choices. This will give you an idea of what the best answer is and a thorough
explanation to help your learning
upvoted 1 times

  MakaylaLearns 4 weeks ago


Hey, I made a video to quickly teach you what AWS Glue is
https://youtube.com/shorts/ECynBsEaWKo?feature=share
upvoted 1 times

  Meytiam 1 month ago


Selected Answer: B
Option D does involve useful components like AWS Glue and Amazon Athena, which can be great for data processing and querying.
However, given the emphasis on data visualization, limited access, and user-friendliness, option B (Amazon QuickSight) still seems more
suitable for this particular scenario.
upvoted 2 times
  hsinchang 2 months ago
Selected Answer: B
An IAM role is associated with AWS resources instead of a specific person or group, so not A.
In C and D no visualization.
So B.
upvoted 2 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: B
Explanation:

Option B involves using Amazon QuickSight, which is a business intelligence tool provided by AWS for data visualization and reporting.
With this option, you can connect all the data sources within the data lake, including Amazon S3 and Amazon RDS for PostgreSQL. You can
create datasets within QuickSight that pull data from these sources.

The solution allows you to publish dashboards in Amazon QuickSight, which will provide the required data visualization capabilities. To
control access, you can use appropriate IAM (Identity and Access Management) roles, assigning full access only to the company's
management team and limiting access for the rest of the company. You can share the dashboards selectively with the users and groups
that need access.
upvoted 1 times

  james2033 2 months, 1 week ago


Selected Answer: B
Question keyword "data visualization". "company's management team have full access to all the visualizations, the rest should have only
limited access."

Answer keyword "Amazon QuickSight", "share the dashboards with the appropriate users and groups". Choose B.
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option B MET THE REQUIREMENT
upvoted 1 times

  RupeC 2 months, 3 weeks ago


Selected Answer: B
C and D are out as there is no inbuilt visualisation function in Glue. Thus A or B. As QuickSight shares with users and groups, the answer is B.

https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html
upvoted 1 times

  Mia2009687 3 months, 1 week ago


Answer is B
Dashboard cannot be shared with roles.
https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: B
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the
data. Share the dashboards with the appropriate users and groups.

Amazon QuickSight is a business intelligence (BI) tool provided by AWS that allows you to create interactive dashboards and reports. It
supports a variety of data sources, including Amazon S3 and Amazon RDS for PostgreSQL, which are the data sources in the company's
data lake.

Option A (Create an analysis in Amazon QuickSight and share with IAM roles) is incorrect because it suggests sharing with IAM roles,
which are more suitable for managing access to AWS resources rather than granting access to specific users or groups within QuickSight.
upvoted 3 times

  jensmitie 3 months, 3 weeks ago


Selected Answer: C
B: is wrong because Quicksight has to load all datasets from S3 into SPICE which is expensive and impossible for a whole data lake.
Question says report contains all data from the lake.
D: Is wrong, because Athena does not allow generating reports except as file, which does not have visualizations
C: Glue can access all available sources (RDS and S3), perform aggregation, and use a Python job driver to generate visualizations and store them as PDFs on S3
upvoted 2 times

  diabloexodia 2 months, 2 weeks ago


Yeah, but option C doesn't mention accessing data from RDS; it only mentions S3.
upvoted 1 times
  hypnozz 3 months, 4 weeks ago
Selected Answer: D
There is something weird here. QuickSight is a good option; however, the question says that the management group is the only one with full access, and for that you need IAM roles, because groups only apply within QuickSight. Also, the groups that can see the dashboard have the ability to see the underlying data. Important: as far as I know, you don't have to create a new dataset to show a dashboard; also, QuickSight cannot do that, only Glue can.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: B
I vote for option B.
upvoted 1 times

  dszes 4 months, 1 week ago


tricky question, Users, groups and roles can have access.
Viewing who has access to a dashboard
Use the following procedure to see which users or groups have access to the dashboard.

Open the published dashboard and choose Share at upper right. Then choose Share dashboard.

In the Share dashboard page that opens, under Manage permissions, review the users and groups, and their roles and settings.

You can search to locate a specific user or group by entering their name or any part of their name in the search box at upper right.
Searching is case-sensitive, and wildcards aren't supported. Delete the search term to return the view to all users.
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: B
Amazon QuickSight with users and groups. B is correct.
upvoted 2 times
Question #17 Topic 1

A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for
document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?

A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.

B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.

C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.

D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.

Correct Answer: A

Community vote distribution


A (99%)

  sba21 Highly Voted  11 months, 3 weeks ago


Selected Answer: A
Always remember that you should associate IAM roles to EC2 instances
upvoted 63 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
The correct option to meet this requirement is A: Create an IAM role that grants access to the S3 bucket and attach the role to the EC2
instances.

An IAM role is an AWS resource that allows you to delegate access to AWS resources and services. You can create an IAM role that grants
access to the S3 bucket and then attach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the
documents stored within it.

Option B is incorrect because an IAM policy is used to define permissions for an IAM user or group, not for an EC2 instance.

Option C is incorrect because an IAM group is used to group together IAM users and policies, not to grant access to resources.

Option D is incorrect because an IAM user is used to represent a person or service that interacts with AWS resources, not to grant access
to resources.
upvoted 39 times
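
As a concrete illustration of option A, here is a minimal boto3 sketch (the role, policy, bucket, and instance identifiers are placeholder assumptions) that creates a role EC2 can assume, scopes its permissions to one bucket, and attaches it to an instance through an instance profile:

import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

role_name = "app-doc-bucket-role"        # placeholder names and IDs
bucket = "example-document-bucket"
instance_id = "i-0123456789abcdef0"

# Trust policy that lets EC2 assume the role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName=role_name, AssumeRolePolicyDocument=json.dumps(trust))

# Inline permissions policy scoped to the document bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}
iam.put_role_policy(RoleName=role_name, PolicyName="doc-bucket-access",
                    PolicyDocument=json.dumps(policy))

# EC2 attaches roles through an instance profile
iam.create_instance_profile(InstanceProfileName=role_name)
iam.add_role_to_instance_profile(InstanceProfileName=role_name, RoleName=role_name)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": role_name},
    InstanceId=instance_id,
)

Note that IAM is eventually consistent, so in practice the association call may need a short wait or retry after the instance profile is created.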

  Abdou1604 Most Recent  1 month, 3 weeks ago


Option B may work, but it suggests creating an IAM policy directly and attaching it to the EC2 instances. While this might work, it's not the recommended approach; using an IAM role is more secure and manageable.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: A
Always remember that you should associate IAM roles to EC2 instances.
An IAM role is an AWS resource that allows you to delegate access to AWS resources and services. You can create an IAM role that grants
access to the S3 bucket and then attach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the
documents stored within it.
upvoted 1 times

  Rexino 2 months, 2 weeks ago


Selected Answer: A
IAM roles should be associated to EC2 instance
upvoted 2 times

  miki111 2 months, 3 weeks ago


Option A MET THE REQUIREMENT
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
Option A is the correct approach because IAM roles are designed to provide temporary credentials to AWS resources such as EC2
instances. By creating an IAM role, you can define the necessary permissions and policies that allow the EC2 instances to access the S3
bucket securely. Attaching the IAM role to the EC2 instances will automatically provide the necessary credentials to access the S3 bucket
without the need for explicit access keys or secrets.
Option B is not recommended in this case because IAM policies alone cannot be directly attached to EC2 instances. Policies are usually
attached to IAM users, groups, or roles.

Option C is not the most appropriate choice because IAM groups are used to manage collections of IAM users and their permissions,
rather than granting access to specific resources like S3 buckets.

Option D is not the optimal solution because IAM users are intended for individual user accounts and are not the recommended approach
for granting access to resources within EC2 instances.
upvoted 3 times
  big0007 4 months, 2 weeks ago
IAM Roles manage who/what has access to your AWS resources, whereas IAM policies control their permissions.

Therefore, a Policy alone is useless without an active IAM Role or IAM User.
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  zoblazo 5 months, 2 weeks ago


Selected Answer: A
always role for ec2 instance
upvoted 1 times

  PhucVuu 5 months, 4 weeks ago


Keywords: EC2 instances can access the S3 bucket.

A: Correct - an IAM role is used to grant access to AWS services such as EC2, Lambda, etc.
B: Incorrect - an IAM policy applies to IAM identities; you cannot attach it directly to an EC2 instance (an AWS service).
C: Incorrect - an IAM group is used to group permissions and attach them to a list of users.
D: Incorrect - to use an IAM user from EC2 you would need its access key and secret access key, not the user account itself. Even then it is not recommended, because anyone who can access the EC2 instance could read the access key and secret access key and gain all of the owner's permissions. The secure way is to use an IAM role that grants the EC2 instance only the permissions it needs.
upvoted 4 times

  thanhvx1 6 months ago


A is correct
upvoted 1 times

  r1skkam 6 months, 1 week ago


Selected Answer: A
https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/
upvoted 1 times

  gold4otas 6 months, 2 weeks ago


Selected Answer: A
IAM Role is the correct anwser.
upvoted 1 times

  bilel500 7 months, 1 week ago


Selected Answer: A
IAM Role is the correct anwser.
upvoted 1 times

  buiducvu 7 months, 2 weeks ago


Selected Answer: A
IAM Role
upvoted 1 times

  Pankul 8 months ago


Selected Answer: A
Associate IAM roles to EC2 instances
upvoted 1 times
Question #18 Topic 1

An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads
an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an
AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an
image is uploaded to the S3 bucket.

B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS
message is successfully processed, delete the message in the queue.

C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text
file in memory and use the text file to keep track of the images that were processed.

D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue,
log the file name in a text file on the EC2 instance and invoke the Lambda function.

E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert
to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.

Correct Answer: AB

Community vote distribution


AB (99%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: AB
To design a solution that uses durable, stateless components to process images automatically, a solutions architect could consider the
following actions:

Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded.
This allows the application to decouple the image upload process from the image processing process and ensures that the image
processing process is triggered automatically when a new image is uploaded.

Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully
processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the
image is not processed multiple times.
upvoted 17 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option C is incorrect because it involves storing state (the file name) in memory, which is not a durable or scalable solution.

Option D is incorrect because it involves launching an EC2 instance to monitor the SQS queue, which is not a stateless solution.

Option E is incorrect because it involves using Amazon EventBridge (formerly Amazon CloudWatch Events) to send an alert to an
Amazon Simple Notification Service (Amazon SNS) topic, which is not related to the image processing process.
upvoted 12 times

  hsinchang 2 months ago


So storing state violates the stateless principle, nice understanding!
upvoted 2 times

  sba21 Highly Voted  11 months, 3 weeks ago


Selected Answer: AB
It looks like A-B
upvoted 15 times

  Nava702 Most Recent  3 weeks, 6 days ago


Anybody that would like to share their contributor access with me ? My email is [email protected]
Any help would be appreciated.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: AB
Explanation:
Option A: By creating an Amazon SQS queue and configuring the S3 bucket to send a notification to the SQS queue when an image is
uploaded, the system establishes a durable and scalable way to handle incoming image processing tasks.

Option B: Configuring the Lambda function to use the SQS queue as the invocation source allows it to retrieve messages from the queue
and process them in a stateless manner. After successfully processing the image, the Lambda function can delete the message from the
queue to avoid duplicate processing.
upvoted 1 times
  miki111 2 months, 3 weeks ago
Option AB MET THE REQUIREMENT
upvoted 1 times

  RupeC 2 months, 3 weeks ago


Selected Answer: AB
D and E are distractions. C seems a valid solution. However, as you have to select two, A and B are the only two that work in conjunction
with each other.
upvoted 2 times

  tester0071 2 months, 3 weeks ago


Selected Answer: AB
A and B are optimal solutions
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: AB
Option A is a correct because it allows for decoupling between the image upload process and image processing. By configuring S3 to send
a notification to SQS, image upload event is recorded and can be processed independently by microservice.

Option B is also a correct because it ensures that Lambda is triggered by messages in SQS. Lambda can retrieve image information from
SQS, process and compress image, and store compressed image in a different S3. Once processing is successful, Lambda can delete
processed message from SQS, indicating that image has been processed.

Option C is not recommended because it introduces a stateful approach by using a text file to keep track of processed images.

Option D is not optimal solution as it introduces unnecessary complexity by involving an EC2 to monitor SQS and maintain a text file.

Option E is not directly related to requirement of processing images automatically. Although EventBridge and SNS can be useful for event
notifications and further processing, they don't provide the same level of durability and scalability as SQS.
upvoted 3 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: AB
Option A nad B
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: AB
A and B
upvoted 1 times

  PhucVuu 5 months, 4 weeks ago


Selected Answer: AB
Keywords:
- Store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function.
- Durable, stateless components to process the images automatically

A, B: Correct - SQS retains messages (4 days by default, configurable up to 14 days), so you can re-run the Lambda function if any errors occur while processing the images.
C: Incorrect - a Lambda function just runs the request and then stops; the maximum timeout is 15 minutes, so we cannot keep state in the Lambda function's memory.
D: Incorrect - we can trigger Lambda directly from SQS; no EC2 instance is needed in this case.
E: Incorrect - this introduces a manual step: the owner has to read the email and then process it.
upvoted 3 times
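
For reference, here is a minimal boto3 sketch of the A + B wiring (the bucket name, queue ARN, and function name are placeholder assumptions); with an SQS event source mapping, Lambda deletes successfully processed messages from the queue automatically:

import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

bucket = "uploaded-images-bucket"                            # placeholder names
queue_arn = "arn:aws:sqs:us-east-1:123456789012:image-jobs"
function = "compress-image"

# 1. S3 publishes an event to SQS on every new object
#    (the queue policy must already allow s3.amazonaws.com to SendMessage)
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# 2. Lambda polls the queue; messages from successfully processed batches
#    are removed from the queue by the event source mapping
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName=function,
    BatchSize=10,
)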

  linux_admin 6 months ago


Selected Answer: AB
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when
an image is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS
message is successfully processed, delete the message in the queue.
upvoted 2 times

  cheese929 6 months, 3 weeks ago


Selected Answer: AB
Agree with the general answer. its A+B.
upvoted 1 times
  Nikhilcy 7 months ago
Why B?
Message gets automatically deleted from queue once it goes out of it. FIFO
upvoted 1 times

  camelstrike 6 months, 3 weeks ago


Not deleted but hidden while being processed
upvoted 1 times

  bilel500 7 months ago


Selected Answer: AB
AB definitely Okay
upvoted 1 times

  buiducvu 7 months, 2 weeks ago


Selected Answer: AB
AB definitely Okay
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: AB
AB definitely Okay
upvoted 1 times
Question #19 Topic 1

A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application
servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance
from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the
web server.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.

B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.

C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.

D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and
forward the packets to the appliance.

Correct Answer: B

Community vote distribution


D (81%) Other

  CloudGuru99 Highly Voted  11 months, 4 weeks ago


Answer is D . Use Gateway Load balancer
REF: https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-
balancer/
upvoted 34 times

  pm2229 Highly Voted  10 months, 4 weeks ago


It's D, Coz.. Gateway Load Balancer is a new type of load balancer that operates at layer 3 of the OSI model and is built on Hyperplane,
which is capable of handling several thousands of connections per second. Gateway Load Balancer endpoints are configured in spoke
VPCs originating or receiving traffic from the Internet. This architecture allows you to perform inline inspection of traffic from multiple
spoke VPCs in a simplified and scalable fashion while still centralizing your virtual appliances.
upvoted 32 times
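
For readers who want to see what option D looks like in practice, here is a simplified boto3 sketch (all IDs, names, and the appliance IP are placeholder assumptions); route tables still have to be updated separately so that ingress traffic flows through the endpoint:

import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Placeholder IDs for the inspection VPC/subnet, the appliance IP,
# and the application VPC/subnet that will host the endpoint
inspection_subnet = "subnet-0inspection000000"
inspection_vpc = "vpc-0inspection0000000"
appliance_ip = "10.1.1.10"
app_vpc = "vpc-0application000000"
app_subnet = "subnet-0application00000"

# Gateway Load Balancer in the inspection VPC
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb", Type="gateway", Subnets=[inspection_subnet]
)["LoadBalancers"][0]

# GWLB target groups use the GENEVE protocol on port 6081
tg = elbv2.create_target_group(
    Name="appliance-targets", Protocol="GENEVE", Port=6081,
    VpcId=inspection_vpc, TargetType="ip",
)["TargetGroups"][0]
elbv2.register_targets(TargetGroupArn=tg["TargetGroupArn"],
                       Targets=[{"Id": appliance_ip}])
elbv2.create_listener(LoadBalancerArn=gwlb["LoadBalancerArn"],
                      DefaultActions=[{"Type": "forward",
                                       "TargetGroupArn": tg["TargetGroupArn"]}])

# Expose the GWLB as an endpoint service, then consume it from the app VPC
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]], AcceptanceRequired=False
)["ServiceConfiguration"]
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=svc["ServiceName"],
    VpcId=app_vpc,
    SubnetIds=[app_subnet],
)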

  David_Ang Most Recent  6 days, 15 hours ago


Selected Answer: A
the key part is the LEAST overhead, and answer "D" adds more complexity and cost, "A" is the most correct answer
upvoted 1 times

  jonsnow1210 2 weeks ago


Selected Answer: D
Answer is D . Use Gateway Load balancer
upvoted 1 times

  Meytiam 1 month ago


Selected Answer: A
Given the straightforward nature of the requirement—inspecting all traffic before it reaches the web servers—the more suitable and
operationally efficient solution would be to use a Network Load Balancer (Option A). NLB operates at the transport layer (Layer 4) and can
route packets to the third-party firewall appliance with minimal complexity and overhead.
so it's not B.
And regarding D, the requirement mentioned the least operational overhead.
upvoted 2 times

  slackbot 1 month, 2 weeks ago


Selected Answer: A
Option D is irrelevant - you are adding complexity when it is not needed. 3rd party appliances do not require GWLB when there is a SINGLE
appliance. GWLB is used when there are multiple appliances (which was not mentioned).
NLB (working on layer 4) will forward the TCP traffic to the target (firewall in inspection VPC) which will route the traffic to the web tier.
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: D
A Gateway Load Balancer provides layer 3 IP traffic distribution across EC2 instances and IP addresses. It is used to route traffic to third-party virtual appliances (for example, firewalls and packet inspection systems) before the traffic is routed to the destination applications on AWS.
upvoted 1 times

  Bogs123456711 2 months ago


Selected Answer: D
Key word "Third party FW"

Gateway Load Balancers make it easy to deploy, scale, and manage third-party virtual appliances, such as security appliances.

https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html
upvoted 2 times

  hsinchang 2 months ago


So basically Gateway Load Balancer is the only LB that comes with inspection?
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: D
The correct answer is D.
Here is the explanation:
Option D is correct because a Gateway Load Balancer (GWLB) is a global service, and it can be deployed in any VPC. This means that the
GWLB can reach the appliance. Additionally, the GWLB can be configured to forward packets to the appliance for packet inspection.

Option A is incorrect because a Network Load Balancer (NLB) is a regional service, and the appliance is deployed in an inspection VPC. This
means that the NLB would not be able to reach the appliance.
Option B is incorrect because an Application Load Balancer (ALB) is a regional service, and the appliance is deployed in an inspection VPC.
This means that the ALB would not be able to reach the appliance.
Option C is incorrect because a transit gateway is a global service, and the appliance is deployed in an inspection VPC. This means that the
transit gateway would not be able to reach the appliance.
upvoted 5 times

  Undisputed 2 months, 2 weeks ago


Selected Answer: D
The key word is inspection; a Gateway Load Balancer is a layer 3 load balancer used for inspection purposes.
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option D is the ideal answer.
upvoted 1 times

  animefan1 2 months, 4 weeks ago


Selected Answer: D
Gateway LB is used for Firewalls, IDPS & 3rd party tools for inspection
upvoted 3 times

  pepepotamopepe 3 months ago


Selected Answer: D
D bcoz ChatGPT says that is D
upvoted 2 times

  Mia2009687 3 months, 1 week ago


Answer D -
Gateway Load Balancer ( GWLB )
Primarily used for deploying, scaling, and running third-party virtual appliances.
The virtual appliances can be your custom firewalls, deep packet inspection systems, or intrusion detection and prevention systems in
AWS

In this case, the appliance is used as a security system before the web tier.
upvoted 2 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.

By creating a Network Load Balancer (NLB) in the public subnet, you can configure it to forward incoming traffic to the virtual firewall
appliance for inspection. The NLB operates at the transport layer (Layer 4) and can distribute traffic across multiple instances, including
the firewall appliance. This allows you to scale the inspection capacity if needed. The NLB can be associated with a target group that
includes the IP address of the firewall appliance, directing traffic to it before reaching the web servers.

Option B (Application Load Balancer) is not suitable for this scenario as it operates at the application layer (Layer 7) and does not provide
direct access to the IP packets for inspection.

Option C (Transit Gateway) and option D (Gateway Load Balancer) introduce additional complexity and overhead compared to using an
NLB. They are not necessary for achieving the requirement of inspecting traffic to the web application before reaching the web servers.
upvoted 7 times

  vipyodha 3 months, 1 week ago


best answer.well explained
upvoted 1 times
  Globus777 3 months, 3 weeks ago
Selected Answer: D
Answer is D . Use Gateway Load balancer
upvoted 3 times
Question #20 Topic 1

A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is
stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the
production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?

A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.

B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the
production EBS volumes to the EC2 instances in the test environment.

C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances
in the test environment before restoring the volumes from the production EBS snapshots.

D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the
snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.

Correct Answer: D

Community vote distribution


D (91%) 8%

  UWSFish Highly Voted  11 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html

Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates
the latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore
instantly deliver all of their provisioned performance.
upvoted 25 times
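
Here is a minimal boto3 sketch of option D (the snapshot ID, instance ID, region, and Availability Zone are placeholder assumptions): enable fast snapshot restore, then restore and attach a new volume in the test environment:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region/AZ
snapshot_id = "snap-0123456789abcdef0"               # placeholder IDs
test_instance = "i-0123456789abcdef0"
az = "us-east-1a"

# Enable fast snapshot restore so new volumes are fully initialized at creation
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=[az],
    SourceSnapshotIds=[snapshot_id],
)

# Restore the snapshot into a fresh volume and attach it to the test instance
vol = ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone=az,
                        VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=test_instance,
                  Device="/dev/sdf")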

  PhucVuu Highly Voted  5 months, 3 weeks ago


Selected Answer: D
Keywords:
- Modifications to the cloned data must not affect the production environment.
- Minimize the time that is required to clone the production data into the test environment.

A: Incorrect - we could do this, but it does not minimize the time as required.
B: Incorrect - this approach uses the same EBS volumes for production and test, so modifying the test data would affect the production environment.
C: Incorrect - restoring an EBS snapshot always creates a new EBS volume; a snapshot cannot be restored into an existing volume.
D: Correct - Turn on the EBS fast snapshot restore feature on the EBS snapshots -> no latency on first use
upvoted 11 times

  ukivanlamlpi Most Recent  1 month, 2 weeks ago


Selected Answer: A
why not A? high I/O, no need durability
upvoted 1 times

  JackLo 3 weeks, 3 days ago


Although it is a test environment, its data should be durable
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: D
Needs to minimize the time that is required to clone the production data into the test environment = EBS fast snapshot restore feature
upvoted 1 times

  Anil_Awasthi 2 months ago


Selected Answer: C
Option C provides an effective solution for cloning large amounts of production data into a test environment with minimized time, high
I/O performance, and without affecting the production environment.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: D
The correct answer is D.
Here is a step-by-step explanation of how to clone production data into a test environment using EBS snapshots:
Take EBS snapshots of the production EBS volumes.
Turn on the EBS fast snapshot restore feature on the EBS snapshots.
Restore the snapshots into new EBS volumes.
Attach the new EBS volumes to EC2 instances in the test environment.
The EBS fast snapshot restore feature allows you to restore snapshots more quickly than the default method. This is because the feature
uses a process called parallel restore, which allows multiple EBS volumes to be restored at the same time.
The EBS fast snapshot restore feature is only available for EBS snapshots that are created in the same AWS Region as the EC2 instances
that you are using to restore the snapshots.
upvoted 3 times
  Thornessen 2 months, 3 weeks ago
For consistently high IO, option A is the solution. Instance store has the highest IO
upvoted 1 times

  idanr391 2 months, 2 weeks ago


Its not, D its the solution. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 1 times

  miki111 2 months, 3 weeks ago


Option D is the ideal answer.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: D
Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the
snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.

Enabling the EBS fast snapshot restore feature allows you to restore EBS snapshots into new EBS volumes almost instantly, without
needing to wait for the data to be fully copied from the snapshot. This significantly reduces the time required to clone the production
data.

By taking EBS snapshots of the production EBS volumes and restoring them into new EBS volumes in the test environment, you can
ensure that the cloned data is separate and does not affect the production environment. Attaching the new EBS volumes to the EC2
instances in the test environment allows you to access the cloned data.
upvoted 2 times

  TienHuynh 3 months, 2 weeks ago


Selected Answer: D
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates
the latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore
instantly deliver all of their provisioned performance.
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  Abrar2022 4 months, 3 weeks ago


You can use EBS Fast Snapshot restore feature to restore EBS snapshots to a new EBS volume with minimal downtime.
upvoted 1 times

  EA100 4 months, 3 weeks ago


ANSWER - C
upvoted 1 times

  channn 6 months ago


Selected Answer: D
Key words: minimize the time
upvoted 1 times

  bilel500 7 months ago


Selected Answer: D
The EBS fast snapshot restore feature allows you to restore EBS snapshots to new EBS volumes with minimal downtime. This is particularly
useful when you need to restore large volumes or when you need to restore a volume to an EC2 instance in a different Availability Zone.
When you enable the fast snapshot restore feature, the EBS volume is restored from the snapshot in the shortest amount of time possible,
typically within a few minutes.
upvoted 1 times

  Bofi 7 months, 3 weeks ago


Selected Answer: A
Option A is correct because the question states that the software accessing the test environment needs high I/O performance, which is the core feature of instance store. The only risk with an instance store is that it is lost when the EC2 instance it is attached to is terminated; however, this is a test environment, so long-term durability may not be required. Option C is not correct because it mentions creating a new EBS volume and then restoring the snapshot; the snapshot can be restored without pre-creating a new EBS volume, so it does not satisfy the minimum-overhead requirement.
upvoted 5 times
  Ello2023 7 months, 3 weeks ago
Selected Answer: D
D. They are all viable solutions, however EBS fast snapshot will increase the speed as the question does ask for minimal time and not
about cost, automation, minimum overheads etc.
upvoted 1 times
Question #21 Topic 1

An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24
hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the
distributions. Store the order data in Amazon S3.

B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application
Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.

C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the
Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for
MySQL.

D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin.
Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

Correct Answer: D

Community vote distribution


D (100%)

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: D
D because all of the components are infinitely scalable
dynamoDB, API Gateway, Lambda, and of course s3+cloudfront
upvoted 30 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: D
The solution that will meet these requirements with the least operational overhead is D: Use an Amazon S3 bucket to host the website's
static content, deploy an Amazon CloudFront distribution, set the S3 bucket as the origin, and use Amazon API Gateway and AWS Lambda
functions for the backend APIs. Store the data in Amazon DynamoDB.

Using Amazon S3 to host static content and Amazon CloudFront to distribute the content can provide high performance and scale for
websites with millions of requests each hour. Amazon API Gateway and AWS Lambda can be used to build scalable and highly available
backend APIs to support the website, and Amazon DynamoDB can be used to store the data. This solution requires minimal operational
overhead as it leverages fully managed services that automatically scale to meet demand.
upvoted 13 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A is incorrect because using multiple S3 buckets to host the full website would not provide the required performance and scale
for millions of requests each hour with millisecond latency.

Option B is incorrect because deploying the full website on EC2 instances and using an Application Load Balancer (ALB) and an RDS
database would require more operational overhead to maintain and scale the infrastructure.

Option C is incorrect because while deploying the application in containers and hosting them on Amazon Elastic Kubernetes Service
(EKS) can provide high performance and scale, it would require more operational overhead to maintain and scale the infrastructure
compared to using fully managed services like S3 and CloudFront.
upvoted 7 times
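
As a rough illustration of the serverless backend in option D, here is a minimal Lambda handler sketch behind API Gateway that reads the day's deal from a hypothetical DynamoDB table keyed on deal_date (the table name, key schema, and environment variable are assumptions, not part of the question):

import json
import os

import boto3

# Hypothetical table name supplied through an environment variable
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("DEALS_TABLE", "daily-deals"))


def handler(event, context):
    # API Gateway proxy integration: look up the featured deal by date
    params = event.get("pathParameters") or {}
    deal_date = params.get("date", "2023-01-01")
    resp = table.get_item(Key={"deal_date": deal_date})
    item = resp.get("Item")
    if item is None:
        return {"statusCode": 404,
                "body": json.dumps({"message": "no deal for this date"})}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),
    }

Every component in this path (API Gateway, Lambda, DynamoDB) scales automatically, which is what keeps the operational overhead low during the daily traffic spike.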

  TariqKipkemei Most Recent  2 months ago


Selected Answer: D
Autoscale with least Ops = AWS managed services: Dynamo DB, API Gateway, Lambda, S3, CF.
upvoted 2 times

  hsinchang 2 months ago


So services fully managed by AWS usually deliver less operational overhead?
upvoted 2 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: D
Option D leverages various serverless and managed services, minimizing the operational overhead compared to other options. The auto-
scaling capabilities of Lambda, API Gateway, and DynamoDB ensure the system can handle the required peak traffic without requiring
manual intervention in scaling infrastructure
upvoted 2 times
  james2033 2 months, 1 week ago
Selected Answer: D
Answer A "host the full website in different S3 buckets", remove A.

Answer B "Deploy full website on EC2", remove B.

Answer C: using Kubernetes adds quite a lot of overhead, and Amazon DynamoDB is faster than Amazon RDS for MySQL.

Answer D is a suitable technical architecture, with Amazon S3, Amazon CloudFront, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB, for the LEAST operational overhead (this means operational overhead, not migration/re-architecture overhead). Choose D.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer for this.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: D
Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin.
Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

This solution leverages the scalability, low latency, and operational ease provided by AWS services.

This solution minimizes operational overhead because it leverages managed services, eliminating the need for manual scaling or
management of infrastructure. It also provides the required scalability and low-latency response times to handle peak-hour traffic
effectively.

Options A, B, and C involve more operational overhead and management responsibilities, such as managing EC2 instances, Auto Scaling
groups, RDS for MySQL, containers, and Kubernetes clusters. These options require more manual configuration and maintenance
compared to the serverless and managed services approach provided by option D.
upvoted 3 times

  Globus777 3 months, 3 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  cheese929 4 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  MiteshB 5 months, 1 week ago


Selected Answer: D
ans: D
keywords: only one product on sale -- means static content
millions of requests each hour with millisecond latency -- dynamoDB
LEAST operational overhead -- choose serverless architecture -- lambda/ API Gateway that handle millions of request in one go with cost
effective manner
upvoted 5 times

  PhucVuu 5 months, 3 weeks ago


Selected Answer: D
Keywords:
- Each day will feature exactly one product on sale for a period of 24 hours
- Handle millions of requests each hour with millisecond latency during peak hours.
- LEAST operational overhead

A: Incorrect - we cannot store all the data in S3 because our data is dynamic (each day features exactly one product on sale for a period of 24 hours).
B: Incorrect - we have no cache to improve performance (one product on sale for a period of 24 hours). Auto Scaling groups and RDS for MySQL need time to scale and cannot scale immediately.
C: Incorrect - we have no cache to improve performance (one product on sale for a period of 24 hours). The Kubernetes Cluster Autoscaler can scale better than Auto Scaling groups, but it also needs time to scale.
D: Correct - DynamoDB, S3, CloudFront, and API Gateway are managed services and they are highly scalable. CloudFront can cache static and dynamic data.
upvoted 8 times

  gx2222 6 months ago


Selected Answer: D
Option D uses Amazon S3 to host the website's static content, which requires no servers to be provisioned or managed. Additionally,
Amazon CloudFront can be used to improve the latency and scalability of the website. The backend APIs can be built using Amazon API
Gateway and AWS Lambda, which can handle millions of requests with low operational overhead. Amazon DynamoDB can be used to
store order data, which can scale to handle high request volumes with low latency.
upvoted 1 times
  apchandana 6 months, 1 week ago
Selected Answer: D
The most important keyword is millisecond latency; only DynamoDB can provide that at this scale.

Obviously, S3, Lambda, CloudFront, etc. have built-in scaling.


upvoted 2 times

  cheese929 6 months, 2 weeks ago


Selected Answer: D
Answer is D. All services proposed are managed services and auto scalable.
upvoted 2 times

  pazabal 9 months, 2 weeks ago


Selected Answer: D
high I/O = DynamoDB
upvoted 2 times

  psr83 9 months, 2 weeks ago


Selected Answer: D
millisecond latency --> DynamoDB
upvoted 2 times
Question #22 Topic 1

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to
the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions
architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?

A. S3 Standard

B. S3 Intelligent-Tiering

C. S3 Standard-Infrequent Access (S3 Standard-IA)

D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Correct Answer: B

Community vote distribution


B (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
"unpredictable pattern" - always go for Intelligent Tiering of S3
It also meets the resiliency requirement: "S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier
Flexible Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones
in an AWS Region" https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html
upvoted 30 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
The storage option that meets these requirements is B: S3 Intelligent-Tiering.

Amazon S3 Intelligent Tiering is a storage class that automatically moves data to the most cost-effective storage tier based on access
patterns. It can store objects in two access tiers: the frequent access tier and the infrequent access tier. The frequent access tier is
optimized for frequently accessed objects and is charged at the same rate as S3 Standard. The infrequent access tier is optimized for
objects that are not accessed frequently and are charged at a lower rate than S3 Standard.

S3 Intelligent Tiering is a good choice for storing media files that are accessed frequently and infrequently in an unpredictable pattern
because it automatically moves data to the most cost-effective storage tier based on access patterns, minimizing storage and retrieval
costs. It is also resilient to the loss of an Availability Zone because it stores objects in multiple Availability Zones within a region.
upvoted 8 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, S3 Standard, is not a good choice because it does not offer the cost optimization of S3 Intelligent-Tiering.

Option C, S3 Standard-Infrequent Access (S3 Standard-IA), is not a good choice because it is optimized for infrequently accessed objects
and does not offer the cost optimization of S3 Intelligent-Tiering.

Option D, S3 One Zone-Infrequent Access (S3 One Zone-IA), is not a good choice because it is not resilient to the loss of an Availability
Zone. It stores objects in a single Availability Zone, making it less durable than other storage classes.
upvoted 5 times
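
Objects can be placed in the Intelligent-Tiering class at upload time; here is a minimal boto3 sketch (the bucket, local file, and key names are placeholder assumptions):

import boto3

s3 = boto3.client("s3")
bucket = "media-files-bucket"   # placeholder bucket name

# Store a media file directly in the Intelligent-Tiering storage class;
# S3 then moves it between access tiers based on observed access patterns
s3.upload_file(
    "episode-001.mp4", bucket, "media/episode-001.mp4",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)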

  reema908516 Most Recent  3 weeks, 1 day ago


Selected Answer: B
Amazon S3 Intelligent Tiering is a storage class that automatically moves data to the most cost-effective storage tier based on access
patterns.
upvoted 1 times

  benacert 4 weeks ago


Unpredictable pattern, intelligent tiering will handle that.
B - is the answer..
upvoted 1 times

  TariqKipkemei 2 months ago


Files are accessed in an unpredictable pattern, must minimize the costs of storing and retrieving the media files = S3 Intelligent-Tiering.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: B
S3 Intelligent-Tiering: This storage class is designed to optimize costs by automatically moving objects between two access tiers based on
their usage patterns. It uses frequent access and infrequent access tiers. The frequently accessed objects stay in the frequent access tier,
while the objects that are not accessed frequently are moved to the infrequent access tier. Intelligent-Tiering maintains high availability
across AZs, just like S3 Standard, but it also helps reduce costs by moving data to the lower-cost tier when appropriate.
upvoted 1 times
  miki111 2 months, 2 weeks ago
Option B is the right answer for this.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: B
"S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive
are all designed to sustain data in the event of the loss of an entire Amazon S3 Availability Zone." source:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html
upvoted 1 times

  james2033 2 months, 1 week ago


S3 Intelligent-Tiering is designed for data with changing or unknown access patterns, while S3 Standard-IA is designed for long-lived,
infrequently accessed data [1]. S3 Intelligent-Tiering automatically reduces your storage costs on a granular object level by
automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees,
or operational overhead [2]. However, it's important to note that by using S3 Intelligent-Tiering, you need to pay for a small object
monitoring fee to keep track of access patterns to your data [3].
upvoted 1 times

  james2033 2 months, 1 week ago


[1] S3 Intelligent Tiering: How it Helps to Optimize Storage Costs? https://www.stormit.cloud/blog/s3-intelligent-tiering-storage-
class/
[2] Object Storage Classes – Amazon S3. https://aws.amazon.com/s3/storage-classes/
[3] S3 Standard vs Intelligent Tiering – What’s the difference? https://www.beabetterdev.com/2021/10/16/s3-standard-vs-intelligent-
tiering/
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: B
S3 Intelligent-Tiering is designed to optimize costs by automatically moving objects between two access tiers: frequent access and
infrequent access. It uses machine learning algorithms to analyze access patterns and determine the most appropriate tier for each
object.

In the given scenario, where some media files are accessed frequently while others are rarely accessed in an unpredictable pattern, S3
Intelligent-Tiering can be a suitable choice. It automatically adjusts the storage tier based on the access patterns, ensuring that frequently
accessed files remain in the frequent access tier for fast retrieval, while rarely accessed files are moved to the infrequent access tier for
cost savings.

Compared to S3 Standard-IA, S3 Intelligent-Tiering provides more granular cost optimization and may be more suitable if the access
patterns of the media files fluctuate over time. However, it's worth noting that S3 Intelligent-Tiering may have slightly higher storage costs
compared to S3 Standard-IA due to the added flexibility and automation it offers.
upvoted 3 times

  Abrar2022 4 months, 3 weeks ago


B - for unpredictable patterns use intelligent tiering
upvoted 1 times

  Rahulbit34 5 months ago


B - "UNPREDICTABLE pattern" is the key
upvoted 1 times

  PhucVuu 5 months, 3 weeks ago


Selected Answer: B
Keywords:
- Must be resilient to the loss of an Availability Zone.
- files are accessed FREQUENTLY while other files are RARELY accessed in an UNPREDICTABLE pattern.

A - Incorrect: S3 Standard is not cost-effective for rarely accessed files.
B - Correct: S3 Intelligent-Tiering is good for files that are accessed frequently or rarely in an unpredictable pattern. Intelligent-Tiering analyzes the access pattern and moves rarely accessed files to a lower-cost tier.
C - Incorrect: Standard-Infrequent Access is not cost-effective for frequently accessed files.
D - Incorrect: One Zone-Infrequent Access is not resilient to the loss of an Availability Zone
upvoted 3 times

  channn 6 months ago


Selected Answer: B
Key words: in an unpredictable pattern.
upvoted 1 times

  cheese929 6 months, 2 weeks ago


Selected Answer: B
S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object
size or retention period
upvoted 1 times
  bilel500 7 months ago
Selected Answer: B
S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive
are all designed to sustain data in the event of the loss of an entire Amazon S3 Availability Zone.
upvoted 1 times

  Rishi1 8 months, 1 week ago


Selected Answer: B
B is correct
upvoted 1 times

  jannymacna 8 months, 3 weeks ago


C. S3 Standard-Infrequent Access (S3 Standard-IA)

S3 Standard-IA is designed for infrequently accessed data, which is a good fit for the media files that are rarely accessed in an
unpredictable pattern. S3 Standard-IA is also cross-Region replicated, providing resilience to the loss of an Availability Zone. Additionally,
S3 Standard-IA has a lower storage and retrieval cost compared to S3 Standard and S3 Intelligent-Tiering, which makes it a cost-effective
option for storing infrequently accessed data.
upvoted 1 times
Question #23 Topic 1

A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not
accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?

A. Configure S3 Intelligent-Tiering to automatically migrate objects.

B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.

C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1
month.

D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1
month.

Correct Answer: B

Community vote distribution


B (97%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
The storage solution that will meet these requirements most cost-effectively is B: Create an S3 Lifecycle configuration to transition objects
from S3 Standard to S3 Glacier Deep Archive after 1 month.

Amazon S3 Glacier Deep Archive is a secure, durable, and extremely low-cost Amazon S3 storage class for long-term retention of data that
is rarely accessed and for which retrieval times of several hours are acceptable. It is the lowest-cost storage option in Amazon S3, making
it a cost-effective choice for storing backup files that are not accessed after 1 month.

You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
This will minimize the storage costs for the backup files that are not accessed frequently.
upvoted 8 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, configuring S3 Intelligent-Tiering to automatically migrate objects, is not a good choice because it is not designed for long-
term storage and does not offer the cost benefits of S3 Glacier Deep Archive.

Option C, transitioning objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month, is not a good choice
because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.

Option D, transitioning objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month, is not a good
choice because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
upvoted 3 times

  vgchan 8 months, 3 weeks ago


Also S3 Standard-IA & One Zone-IA stores the data for max of 30 days and not indefinitely.
upvoted 3 times

  ninjawrz Highly Voted  11 months, 3 weeks ago


B: Transition to Glacier deep archive for cost efficiency
upvoted 7 times

  AhmedAbdelhedi Most Recent  18 hours, 48 minutes ago


Selected Answer: B
Answer is B
upvoted 1 times

  sujanakakarla 1 month ago


Selected Answer: B
B as these files will be stored indefinitely after 1 month
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: B
Files are accessed frequently for 1 month = S3 Standard. Files are not accessed after 1 month and must be kept indefinitely at low costs =
S3 Glacier Deep Archive.
No requirement for low Ops but S3 Lifecycle to the rescue...whoooosh!
upvoted 1 times
  Guru4Cloud 2 months, 1 week ago
Selected Answer: B
Option B (Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month) is the most
cost-effective storage solution for this specific scenario. It allows you to maintain accessibility for the initial 1 month while achieving
significant cost savings in the long term.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: B
Correct answer is B
upvoted 1 times

  Debmalya_aws 2 months, 3 weeks ago


It will be C. Can not move to Glacier directly from standard using Lifecycle
upvoted 1 times

  bingusbongus 2 months, 3 weeks ago


You absolutely can.
upvoted 2 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: B
S3 Glacier Deep Archive is designed for long-term archival storage with very low storage costs. It offers the lowest storage prices among
the storage classes in Amazon S3. However, it's important to note that accessing data from S3 Glacier Deep Archive has a significant
retrieval time, ranging from several minutes to hours, which may not be suitable if you require immediate access to the backup files.

If the files need to be accessed frequently within the first month but not after that, transitioning them to S3 Glacier Deep Archive using an
S3 Lifecycle configuration can provide cost savings. However, keep in mind that retrieving the files from S3 Glacier Deep Archive will have a
significant time delay.
upvoted 3 times

  MostafaWardany 4 months, 1 week ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: B
B is correct answer
upvoted 1 times

  Rahulbit34 5 months ago


Transition to Glacier storage for cost efficiency; the data can be retrieved in 5-7 hours.
upvoted 1 times

  PhucVuu 5 months, 3 weeks ago


Selected Answer: B
Keywords:
- The files are accessed frequently for 1 month.
- Files are NOT accessed after 1 month.

A: Incorrect - We know the pattern (accessed frequently for 1 month, NOT accessed after 1 month) so we can configure it manually to
make the cost reduce as much as possible.
B: Correct - Glacier Deep Archive is the most cost-effective for file which rarely use
C: Incorrect - Standard-Infrequent Access good for in Infrequent Access but not good for rarely(never) use.
D: Incorrect - One Zone-Infrequent Access can reduce more cost compare to Standard-Infrequent Access but it is not the best way
compare to Glacier Deep Archive.
upvoted 3 times

  enc_0343 7 months ago


The answer is B. "S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital
preservation for data that may be accessed once or twice in a year." See here: https://aws.amazon.com/s3/storage-classes/
upvoted 1 times

  KittieHearts 7 months, 1 week ago


Selected Answer: B
Files typically only need to be kept for up to 7 years for business purposes, so Deep Archive is the most cost-optimal and useful option in this scenario.
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: B
Glacier deep archive = lowest cost (accessed once or twice a year)
upvoted 2 times
Question #24 Topic 1

A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types
for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth
analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?

A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.

B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.

C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.

D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a
source to generate an interactive graph based on instance types.

Correct Answer: C

Community vote distribution


B (64%) C (25%) 11%

  sba21 Highly Voted  11 months, 3 weeks ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/68306-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 29 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
The requested result is a graph, so...
A - can't be as the result is a report
B - can't be as it is limited to 14 days visibility and the graph has to cover 2 months
C - seems to provide graphs and the best option available, as...
D - could provide graphs, BUT involves operational overhead, which has been requested to be minimised.
upvoted 17 times

  Udoyen 10 months ago


With Cost Explorer, AWS prepares the data about your costs for the current month and the last 12 months: https://aws.amazon.com/aws-
cost-management/aws-cost-explorer/
upvoted 14 times

  lofzee 7 months, 3 weeks ago


14 days? Fam, you ever logged into the console?
upvoted 9 times

  Ello2023 7 months, 4 weeks ago


B. This is correct because there is no limit of 14 days. Quoted from Amazon "AWS prepares the data about your costs for the current
month and the last 12 months, and then calculates the forecast for the next 12 months." (https://aws.amazon.com/aws-cost-
management/aws-cost-explorer/).
upvoted 6 times

  goku58 11 months, 2 weeks ago


12 months data visible on Cost Explorer.
upvoted 9 times

  David_Ang Most Recent  5 days, 15 hours ago


Selected Answer: C
Yeah, this is really tricky, because "B" and "C" can both do the job. However, "B" with granular filtering costs money (even if it is just $0.01 per 1,000 records), which is more than answer "C", which costs nothing at all because you are just analyzing the dashboard graphs.
upvoted 1 times

  MakaylaLearns 3 weeks, 5 days ago


The answer is cost explorer

Billing and Cost management → An OVERALL look at all of the costs within your AWS organization or billing account

For central management.

In the Billing and Cost Management console you can do things such as add your credit card, add or remove Regions, and change your default currency. Cost Explorer is a bit different: it is mainly for finding out what is being charged the most. You can review costs in both services, but remember that the Billing and Cost Management console is mainly for configuration.
Cost Explorer can be used to filter and find the ROOT of problems

It can make visualizations, graphs

You can find unusual spending patterns

You can use cost allocation tags

Review your costs by day, week or month & custom timeframes

I hope this helps


upvoted 1 times
  midriss 4 weeks ago
AWS Cost and Usage Reports provide detailed billing data in a structured format, including instance types and costs, which makes it
suitable for in-depth analysis. Answer is C
upvoted 1 times

  Hassaoo 1 month ago


Answer is B because the Monthly costs by service report shows your costs for the last six months, grouped by service.
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-default-reports.html
upvoted 1 times

  BrijMohan08 1 month ago


Selected Answer: D
Not B - Hourly granularity will reduce your dataset to your past 14 days of usage only.
upvoted 2 times

  bahaa_shaker 1 month, 1 week ago


Selected Answer: B
B is the right answer
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: B
I just logged into console: AWS Cost Management>AWS Cost Explorer>View in Cost Explorer>Filter Graph by 'DateRange', 'Service EC2',
'Instance Type'
upvoted 2 times
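
For reference, roughly the same filtering can be done against the Cost Explorer API; a minimal boto3 sketch, assuming a placeholder two-month date range (the Cost Explorer endpoint lives in us-east-1):

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API

# Placeholder two-month window; adjust to the billing period being analyzed.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-07-01", "End": "2023-09-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
    GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
)

# Print cost per month per instance type.
for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        print(result["TimePeriod"]["Start"],
              group["Keys"][0],
              group["Metrics"]["UnblendedCost"]["Amount"])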

  hsinchang 2 months ago


Budgets is used to set goals, not for analysis.
The Billing and Cost Management dashboard is a dashboard, no in-depth analysis is provided.
Option D introduces S3 into the solution, adds operational overhead.
So B.
upvoted 2 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: D
AWS Cost and Usage Reports: By setting up AWS Cost and Usage Reports, you can collect detailed cost and usage data for your AWS
resources, including EC2 instances, and store it in an S3 bucket. This provides the data required for an in-depth analysis.

Amazon S3 Bucket: Storing the cost and usage data in an S3 bucket allows you to have a centralized and secure location for your data,
making it easily accessible for further analysis.

Amazon QuickSight: With Amazon QuickSight as a data visualization tool, you can easily connect to the data stored in the S3 bucket and
create interactive graphs and visualizations. QuickSight offers various chart types and filtering options to perform an in-depth analysis
based on instance types, cost trends, and usage patterns over the last 2 months.
upvoted 2 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: C
This way the job can be done with minimal effort.
upvoted 1 times

  ibu007 2 months, 3 weeks ago


Selected Answer: B
You can enable Cost Explorer for your account using this procedure on the Billing and Cost Management console. You can't enable Cost
Explorer using the API. After you enable Cost Explorer, AWS prepares the data about your costs for the current month and the last 12
months, and then calculates the forecast for the next 12 months. The current month's data is available for viewing in about 24 hours. The
rest of your data takes a few days longer. Cost Explorer updates your cost data at least once every 24 hours.
upvoted 1 times
  RupeC 2 months, 3 weeks ago
Selected Answer: B
A and C are distractions. B and D have the granularity required, but the overhead of B is less. Thus B. One fellow argued that it cannot be B
as there is a 14 day visibility limit that pertains and 2 months data is needed. However, the documentation says that there is 12 months of
historic data. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
upvoted 2 times

  Mia2009687 3 months, 1 week ago


Selected Answer: B
Answer - B
https://tutorialsdojo.com/aws-billing-and-cost-management/
Other default reports available are:
The EC2 Monthly Cost and Usage report lets you view all of your AWS costs over the past two months, as well as your current month-to-
date costs.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: D
By configuring AWS Cost and Usage Reports, the architect can generate detailed reports containing cost and usage information for
various AWS services, including EC2. The reports can be automatically delivered to an Amazon S3 bucket, providing a centralized location
for storing cost data.

To visualize and analyze the EC2 costs based on instance types, the architect can use Amazon QuickSight, a business intelligence tool
offered by AWS. QuickSight can directly access data stored in Amazon S3 and generate interactive graphs, charts, and dashboards for
detailed analysis. By connecting QuickSight to the S3 bucket containing the cost reports, the architect can easily create a graph comparing
the EC2 costs over the last 2 months based on instance types.

This approach minimizes operational overhead by leveraging AWS services (Cost and Usage Reports, Amazon S3, and QuickSight) to
automate data retrieval, storage, and visualization, allowing for efficient analysis of EC2 costs without the need for manual data gathering
and processing.
upvoted 2 times
Question #25 Topic 1

A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to
store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the
company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the
configuration effort.
Which solution will meet these requirements?

A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native
Java Database Connectivity (JDBC) drivers.

B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point
the existing DynamoDB API calls at the DAX cluster.

C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).

D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

Correct Answer: D

Community vote distribution


D (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
A - refactoring can be a solution, BUT it requires a LOT of effort - not the answer
B - DynamoDB is NoSQL and Aurora is SQL, so it requires a DB migration... again a LOT of effort, so not the answer
C and D are similar in structure, but...
C uses SNS, which would notify the 2nd Lambda function immediately... provoking the same bottleneck... not the solution
D uses SQS, so the 2nd Lambda function can pull from the queue at its own pace and keep up with the DB load process.
Usually app decoupling helps improve performance by distributing load. In this case, the bottleneck is solved by using
queues... so D is the answer.
upvoted 60 times
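
A rough sketch of the decoupling that option D describes, with two Lambda handlers; the queue URL and the database-write step are placeholders rather than anything given in the question:

import json
import os
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL; in practice this would come from the function's environment.
QUEUE_URL = os.environ.get("QUEUE_URL", "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue")

def receive_handler(event, context):
    """Invoked by API Gateway; pushes the payload onto SQS instead of writing to the DB directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event.get("body", "{}"))
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}

def load_handler(event, context):
    """Invoked by the SQS event source mapping; drains messages and loads them into the database."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # Placeholder: insert `payload` into the Aurora PostgreSQL table here,
        # e.g. via a pooled connection or the RDS Data API.
        print("would insert:", payload)

The SQS event source mapping lets the loading function control its own concurrency, so a burst of API traffic only lengthens the queue instead of overwhelming the database.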

  PhucVuu Highly Voted  5 months, 3 weeks ago


Selected Answer: D
Keywords:
- Company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the
database.
- Improve scalability and minimize the configuration effort.

A: Incorrect - Lambda is serverless and scales automatically; with EC2 instances we would have to create a load balancer, an Auto Scaling group, and a lot of other things. Using native Java Database Connectivity (JDBC) drivers does not improve performance.
B: Incorrect - a lot of things to change, and DynamoDB Accelerator is used for caching (reads), not for writes.
C: Incorrect - SNS is used for sending notifications (e-mail, SMS).
D: Correct - with SQS we can scale the application well by queuing the data.
upvoted 12 times

  MakaylaLearns Most Recent  3 weeks, 5 days ago


Lambda Functions: A review
Run your code in response to events

You can build chatbots using Lambda functions to process user input, execute business logic, and generate responses.
Scales automatically
They can be triggered in response to API events
Lambda functions can process files as they are uploaded to S3 buckets. This is often used for tasks like image resizing, data extraction, or
file validation.
upvoted 1 times

  learndigitalcloud 3 weeks, 6 days ago


AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the
main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 12 months,
forecast how much you're likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase.
Ans: B is correct
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
upvoted 1 times
  doujones 1 month, 3 weeks ago
Do you all have to take the whole practice exam on here, in order to pass AWS SAA C03
upvoted 1 times

  TariqKipkemei 2 months ago


Increase Lambda quotas = Set up two Lambda functions. Improve scalability = Amazon Simple Queue Service.
upvoted 1 times

  TariqKipkemei 2 months ago


Selected answer D
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer for this.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: D
Lambda and SQS are serverless. No involvement will be required in execution.
upvoted 1 times

  Thornessen 2 months, 3 weeks ago


This threw me off - because ideally, I see no need for two lambdas. It can be done with one: APIGW -> SQS -> Lambda.
upvoted 1 times

  ichwilldoit 2 months, 2 weeks ago


By, @cookieMr [https://www.examtopics.com/user/cookieMr/]
"By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the
database, you can independently scale and optimize each function based on their specific requirements. This approach allows for more
efficient resource allocation and reduces the potential impact of high volumes of data on the overall system."
upvoted 2 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: D
Option D, setting up two Lambda functions and integrating them using an SQS, would be the most suitable solution to improve scalability
and minimize configuration effort in this scenario.

By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database,
you can independently scale and optimize each function based on their specific requirements. This approach allows for more efficient
resource allocation and reduces the potential impact of high volumes of data on the overall system.

Integrating the Lambda functions using an SQS adds another layer of scalability and reliability. The receiving function can push the
information to the SQS, and the loading function can retrieve messages from the queue and process them independently. This
asynchronous decoupling ensures that the receiving function can handle high volumes of incoming requests without overwhelming the
loading function. Additionally, SQS provides built-in retries and guarantees message durability, ensuring that no data is lost during
processing.
upvoted 5 times

  TienHuynh 3 months, 2 weeks ago


Selected Answer: D
D is correct, SQS can queue data
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: D
To improve scalability and minimize the configuration effort, the solutions architect can choose option D.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


To improve scalability and minimize configuration efforts you can set up 2 lambda functions, one to receive the other to load. Then
integrate the lambda functions using SQS.
upvoted 1 times

  kakka22 5 months, 2 weeks ago


Is the question wrong? Amazon Aurora uses its own engine rather than plain PostgreSQL; you would need to provision an RDS instance for that.

  Freddie26 5 months, 2 weeks ago


The question asks you to improve scalability and minimize the configuration effort. While SNS is a fair answer, SQS is better. "SQS scales
elastically, and there is no limit to the number of messages per queue." See https://aws.amazon.com/blogs/compute/choosing-between-
messaging-services-for-serverless-applications/.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: D
To improve scalability and minimize configuration effort, the recommended solution is to use an event-driven architecture with AWS
Lambda functions. This will allow the company to handle high volumes of data without worrying about scaling the infrastructure.

Option C and D both propose an event-driven architecture using Lambda functions, but option D is better suited for this use case because
it uses an Amazon SQS queue to decouple the receiving and loading of information into the database. This will provide better fault
tolerance and scalability, as messages can be stored in the queue until they are processed by the second Lambda function. In contrast,
using SNS for this use case might cause some events to be missed, as it only guarantees the delivery of messages to subscribers, not to
the Lambda function.
upvoted 3 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: D
By using two Lambda functions, you can separate the tasks of receiving the information and loading the information into the database.
This will allow you to scale each function independently, improving scalability.
upvoted 1 times
Question #26 Topic 1

A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?

A. Turn on AWS Config with the appropriate rules.

B. Turn on AWS Trusted Advisor with the appropriate checks.

C. Turn on Amazon Inspector with the appropriate assessment template.

D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).

Correct Answer: A

Community vote distribution


A (97%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
The solution that will accomplish this goal is A: Turn on AWS Config with the appropriate rules.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config
to monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate
rules, you can ensure that your S3 buckets do not have unauthorized configuration changes.
upvoted 29 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not
monitor or record changes to the configuration of your S3 buckets.

Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to
assess the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.

Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes
to your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
upvoted 21 times
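
As one concrete example of "the appropriate rules", an AWS managed Config rule can watch S3 bucket settings; a minimal boto3 sketch, assuming the configuration recorder is already enabled and using a placeholder rule name:

import boto3

config = boto3.client("config")

# Deploy an AWS managed rule that flags S3 buckets allowing public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-public-read-prohibited",  # placeholder name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

This managed rule is just one of several S3-related rules; others cover encryption, versioning, and bucket policy changes.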

  gokalpkocer3 Highly Voted  11 months ago


Configuration changes= AWS Config
upvoted 20 times

  TariqKipkemei Most Recent  2 months ago


Selected Answer: A
AWS Config continually assesses, audits, and evaluates the configurations and relationships of your resources on AWS, on premises, and
on other clouds. It normalizes changes into a consistent format and checks resource compliance with custom and managed rules before
and after provisioning.

https://aws.amazon.com/config/#:~:text=How%20it%20works-,AWS%20Config,-continually%20assesses%2C%20audits
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: A
AWS Config provides a detailed inventory of the company's AWS resources and configuration history, and can be configured with rules to
evaluate resource configurations for compliance with policies and best practices.

The solutions architect can enable AWS Config and configure rules specifically checking for S3 bucket settings like public access blocking,
encryption settings, access control lists, etc. AWS Config will record configuration changes to S3 buckets over time, allowing the company
to review changes and be alerted about any unauthorized modifications.
By. Claude.ai
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
AWS Config is a service that provides a detailed view of the configuration of AWS resources in your account. By enabling AWS Config, you
can capture configuration changes and maintain a record of resource configurations over time. It allows you to define rules that check for
compliance with desired configurations and can generate alerts or automated actions when unauthorized changes occur.
To accomplish the goal of preventing unauthorized configuration changes in Amazon S3 buckets, you can configure AWS Config rules
specifically for S3 bucket configurations. These rules can check for a variety of conditions, such as ensuring that encryption is enabled,
access control policies are correctly configured, and public access is restricted.

While options B, C, and D offer valuable services for various aspects of AWS deployment, they are not specifically focused on preventing
unauthorized configuration changes in Amazon S3 buckets as effectively as enabling AWS Config.
upvoted 2 times
  Abrar2022 4 months, 2 weeks ago
Don't be mistaken in thinking that it's Server access logs because that's for detailed records for requests made to S3. It's AWS Config
because it records configuration changes.
upvoted 1 times

  Rahulbit34 5 months ago


AWS Trusted Advisor is for providing recommendations only.
For anything configuration-related, use AWS Config.
Inspector is for scanning for software vulnerabilities and unintended network exposure.
upvoted 1 times

  PhucVuu 5 months, 1 week ago


Selected Answer: A
To accomplish the goal of ensuring that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should
turn on AWS Config with the appropriate rules. AWS Config enables continuous monitoring and recording of AWS resource configurations,
including S3 buckets. By turning on AWS Config with the appropriate rules, the solutions architect can be notified of any unauthorized
changes made to the S3 bucket configurations, allowing for prompt corrective action. Options B, C, and D are not directly related to
monitoring and preventing unauthorized configuration changes to Amazon S3 buckets.
upvoted 1 times

  channn 6 months ago


Selected Answer: A
Key words:configuration changes
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
Option A is the correct solution. AWS Config is a service that allows you to monitor and record changes to your AWS resources over time.
You can use AWS Config to track changes to Amazon S3 buckets and their configuration settings, and set up rules to identify any
unauthorized configuration changes. AWS Config can also send notifications through Amazon SNS to alert you when these changes occur.
upvoted 1 times

  al64 7 months, 3 weeks ago


Selected Answer: A
aws: A - aws config
upvoted 1 times

  Khushna 8 months ago


A, definitely A.
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: A
To ensure that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should turn on AWS Config with
the appropriate rules.

AWS Config is a service that provides you with a detailed view of the configuration of your AWS resources. It continuously records
configuration changes to your resources and allows you to review, audit, and compare these changes over time. By turning on AWS Config
and enabling the appropriate rules, you can monitor the configuration changes to your Amazon S3 buckets and receive notifications when
unauthorized changes are made.
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: A
unauthorized config changes = aws config
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
The solution that will accomplish this goal is A: Turn on AWS Config with the appropriate rules.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config
to monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate
rules, you can ensure that your S3 buckets do not have unauthorized configuration changes.
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not
monitor or record changes to the configuration of your S3 buckets.

Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to
assess the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.

Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes
to your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
upvoted 1 times

  memiy12 9 months, 4 weeks ago


Selected Answer: A
AWS Config
upvoted 2 times
Question #27 Topic 1

A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product
manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide
access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?

A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a
shareable link for the dashboard to the product manager.

B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share
the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.

C. Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login
credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in
the Dashboards section.

D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP
credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have
appropriate permissions to view the dashboard.

Correct Answer: B

Community vote distribution


A (80%) B (18%)

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: A
Answer A: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html

Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates
their own password that they must enter to view the dashboard.
upvoted 62 times

  123jhl0 11 months, 2 weeks ago


Thanks for the link! No doubt A is the answer.
upvoted 6 times

  omoakin 4 months, 1 week ago


Nope! The principle of least privilege contradicts that; B is the correct answer, and even ChatGPT says it's B.
upvoted 3 times

  MarkyMarcFromTheCloud Highly Voted  2 months ago


New to the forum... just a question: has anyone gotten this exact question in the actual exam, and was the most-voted answer the correct one?
upvoted 6 times

  ABS_AWS Most Recent  2 days, 15 hours ago


Answer is A
refer AWS doc ...

"To help manage this information access, Amazon CloudWatch has introduced CloudWatch dashboard sharing. This allows customers to
easily and securely share their CloudWatch dashboards with people outside of their organization, in another business unit, or with those
with no access AWS console access. This blog will demonstrate how a dashboard can be shared across the enterprise via a SAML provider
in order to broker this secure access."
upvoted 1 times

  David_Ang 5 days, 14 hours ago


Selected Answer: B
"B" is the only correct answer because you always have to think which one is the more secure option, with "A" you are exposing the
dashboard and everybody with the link can see it. is more secure and simple to give him and aws account with read only access to the
dashboard.
upvoted 1 times

  Examprep202324 4 weeks, 1 day ago


When you share dashboards, you can designate who can view the dashboard in three ways:
One of which is the following:
1. Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users
creates their own password that they must enter to view the dashboard.
upvoted 1 times
  bojila 2 months ago
Selected Answer: A
Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates
their own password that they must enter to view the dashboard.
upvoted 1 times

  bojila 2 months ago


Selected Answer: B
You can create a sharing link for a user's email address, but to access the dashboard the user will need to enter a username/password... and "...The product
manager does not have an AWS account..."
upvoted 1 times

  bojila 2 months ago


Share a single dashboard publicly, so that anyone who has the link can view the dashboard.
So, A
upvoted 1 times

  NaaVeeN 1 day, 18 hours ago


its not private then.
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: B
Option B provides the product manager with specific access to the CloudWatch dashboard using an IAM user with the
CloudWatchReadOnlyAccess policy attached. The IAM user has only read-only access to the required resources, which follows the principle
of least privilege.
upvoted 6 times
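
For what option B would look like in practice, a minimal boto3 sketch; the user name and temporary password are placeholders, while CloudWatchReadOnlyAccess is the AWS managed policy named in the option:

import boto3

iam = boto3.client("iam")

USER_NAME = "product-manager"              # placeholder user name
TEMP_PASSWORD = "ChangeMe-Immediately1!"   # placeholder; rotated on first sign-in

# Create the IAM user, attach the read-only CloudWatch policy, and enable console login.
iam.create_user(UserName=USER_NAME)
iam.attach_user_policy(
    UserName=USER_NAME,
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)
iam.create_login_profile(
    UserName=USER_NAME,
    Password=TEMP_PASSWORD,
    PasswordResetRequired=True,
)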

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  never_give_up 2 months, 3 weeks ago


Selected Answer: B
Answer B
Because A is not the best choice because it requires sharing a link that potentially could be accessed by unauthorized users, which does
not follow the principle of least privilege.
upvoted 4 times

  diabloexodia 2 months, 2 weeks ago


But we are also sharing a link to the dashboard in option B.
upvoted 1 times

  oeufmeister 2 months, 1 week ago


But you still have to log in even after clicking on the link if you had chosen B, so it should not have such vulnerabilities, no?
upvoted 2 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: A
With CloudWatch you can share the dashboard by entering the specific employee's/user's email address; hence A is the answer.
upvoted 1 times

  nuray 2 months, 4 weeks ago


In the question, it says the product manager does not have an AWS account. So the answer should be A.
I found this information on AWS's website. When you share dashboards, you can designate who can view the dashboard in three ways:

Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates
their own password that they must enter to view the dashboard.

Share a single dashboard publicly, so that anyone who has the link can view the dashboard.

Share all the CloudWatch dashboards in your account and specify a third-party single sign-on (SSO) provider for dashboard access. All
users who are members of this SSO provider's list can access all the dashboards in the account. To enable this, you integrate the SSO
provider with Amazon Cognito. The SSO provider must support Security Assertion Markup Language (SAML
upvoted 5 times

  nuray 2 months, 4 weeks ago


Selected Answer: B
When you enable SSO, users registered with the selected SSO provider will be granted permissions to access all dashboards in this
account.
When you disable the SSO provider, all dashboards will be automatically unshared.
The question asks for the principle of least privilege for the product manager. That's why A grants more privileges than needed; B is the right answer.
upvoted 1 times
  cookieMr 3 months, 2 weeks ago
Selected Answer: A
This solution allows the product manager to access the CloudWatch dashboard without requiring an AWS account or IAM user credentials.
By sharing the dashboard through the CloudWatch console, you can provide direct access to the specific dashboard without granting
unnecessary permissions.

With this approach, the product manager can access the dashboard periodically by simply clicking on the provided link. They will be able
to view the application metrics without the need for an AWS account or IAM user credentials. This ensures that the product manager has
the necessary access while adhering to the principle of least privilege by not granting unnecessary permissions or creating additional IAM
users.
upvoted 3 times

  teja54 4 months ago


Selected Answer: A
...................................
upvoted 1 times

  smash_aws 4 months, 1 week ago


A is my answer here is why. " To help manage this information access, Amazon CloudWatch has introduced CloudWatch dashboard
sharing. This allows customers to easily and securely share their CloudWatch dashboards with people outside of their organization, in
another business unit, or with those with no access AWS console access"

https://aws.amazon.com/blogs/mt/share-your-amazon-cloudwatch-dashboards-with-anyone-using-aws-single-sign-on/
upvoted 1 times

  omoakin 4 months, 1 week ago


answer is B
upvoted 1 times
Question #28 Topic 1

A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally
by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company
must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?

A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the
company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.

B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed
Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.

C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.

D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.

Correct Answer: A

Community vote distribution


B (77%) A (18%) 2%

  17Master Highly Voted  11 months ago


Selected Answer: B
Tricky question!!! forget one-way or two-way. In this scenario, AWS applications (Amazon Chime, Amazon Connect, Amazon QuickSight,
AWS Single Sign-On, Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, AWS Client VPN, AWS Management Console, and AWS
Transfer Family) need to be able to look up objects from the on-premises domain in order for them to function. This tells you that
authentication needs to flow both ways. This scenario requires a two-way trust between the on-premises and AWS Managed Microsoft AD
domains.
It is a requirement of the application
Scenario 2: https://aws.amazon.com/es/blogs/security/everything-you-wanted-to-know-about-trusts-with-aws-managed-microsoft-ad/
upvoted 50 times

  pbpally 4 months, 4 weeks ago


What I did find though was documentation that explicitly states that IAM Identity Center (successor to AWS SSO) requires a two-way
trust:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 6 times

  pbpally 4 months, 4 weeks ago


The problem with this is that nowhere in the question is it saying that the application needs to be able to flow back so two-way is not
needed.
upvoted 2 times

  KADSM Highly Voted  10 months, 4 weeks ago


Answer B as we have AWS SSO which requires two way trust. As per documentation - A two-way trust is required for AWS Enterprise Apps
such as Amazon Chime, Amazon Connect, Amazon QuickSight, AWS IAM Identity Center (successor to AWS Single Sign-On), Amazon
WorkDocs, Amazon WorkMail, Amazon WorkSpaces, and the AWS Management Console. AWS Managed Microsoft AD must be able to
query the users and groups in your self-managed AD.

Amazon EC2, Amazon RDS, and Amazon FSx will work with either a one-way or two-way trust.
upvoted 11 times
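
For reference, the trust can be created on the AWS Managed Microsoft AD side through the Directory Service API; a minimal boto3 sketch in which the directory ID, domain name, trust password, and DNS addresses are all placeholders:

import boto3

ds = boto3.client("ds")

ds.create_trust(
    DirectoryId="d-1234567890",              # placeholder AWS Managed Microsoft AD ID
    RemoteDomainName="corp.example.com",     # placeholder on-premises AD domain
    TrustPassword="SharedTrustSecret123!",   # placeholder; must match the on-prem side
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],  # placeholder on-prem DNS servers
)

The same trust password and direction must be configured on the on-premises domain for the trust to verify.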

  pbpally 4 months, 4 weeks ago


I found the documentation that explicitly states that IAM Identity Center (successor to AWS SSO) requires a two-way trust:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 2 times

  Examprep202324 Most Recent  4 weeks ago


A two-way trust is required for AWS Enterprise Apps such as Amazon Chime, Amazon Connect, Amazon QuickSight, "AWS IAM Identity
Center (successor to AWS Single Sign-On)", Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, and the AWS Management
Console
upvoted 1 times

  Yonimoni 1 month, 2 weeks ago


Option B is the correct choice because it aligns with the AWS documentation, which states that a two-way trust relationship is needed
between AWS Managed Microsoft AD and a self-managed AD for users to sign in with their corporate credentials to AWS services. This
solution integrates AWS SSO, AWS Directory Service for Microsoft AD, and centralized account management through AWS Organizations.

Read until the end


"Create a two-way trust relationship – When two-way trust relationships are created between AWS Managed Microsoft AD and a self-
managed directory in AD, users in your self-managed directory in AD can sign in with their corporate credentials to various AWS services
and business applications. One-way trusts do not work with IAM Identity Center."
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  TheHadidi 3 months ago


Selected Answer: C
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
More information: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_use_cases.html
And yes, a two-way trust can be created between AWS DS for MS-AD and the self-managed on-premises AD
(https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_setup_trust_create.html)
upvoted 1 times

  bingusbongus 2 months, 3 weeks ago


This solution does not feature single-sign-on (SSO).
upvoted 2 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
The recommended solution is option A: Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console and create a one-way forest trust
or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO using AWS Directory Service
for Microsoft Active Directory.

By implementing this solution, the company can achieve a single sign-on experience for their AWS accounts while maintaining central
control over user and group management in their on-premises Active Directory. The one-way trust ensures that user and group
information flows securely from the on-premises directory to AWS SSO, allowing for centralized access management and control across all
AWS accounts.
upvoted 5 times

  DuboisNicolasDuclair 3 months, 3 weeks ago


Selected Answer: D
Can we have a moderator ?
upvoted 1 times

  omoakin 4 months, 1 week ago


A is correct
Option B comes with the security risk of a two-way trust
upvoted 1 times

  sbnpj 4 months, 2 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 3 times

  Abrar2022 4 months, 2 weeks ago


AWS IAM Identity Center (successor to AWS Single Sign-On) requires a two-way trust so that it has permissions to read user and group information.
upvoted 1 times

  AlaTaftaf 5 months ago


Selected Answer: A
This is the answer of chatGPT:
Option A is the best solution that meets the requirements of providing a single sign-on (SSO) solution across all the company's accounts
while continuing to manage users and groups in the on-premises self-managed Microsoft Active Directory.

Explanation:
Option A is the best solution as it enables AWS Single Sign-On (AWS SSO) from the AWS SSO console and creates a one-way forest trust or
a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service
for Microsoft Active Directory. This solution allows the company to manage users and groups in the on-premises Active Directory and
provides a single sign-on (SSO) experience across all the company's AWS accounts.
upvoted 2 times
  darkknight23 5 months, 1 week ago
I think its A. From ChatGpt:
=========
Should this be one way or two way trust?
To integrate AWS SSO with an on-premises Microsoft Active Directory, a one-way trust relationship should be established.

In a one-way trust relationship, the on-premises Microsoft Active Directory trusts the AWS SSO directory, but the AWS SSO directory does
not trust the on-premises Microsoft Active Directory. This means that users and groups in the on-premises Microsoft Active Directory can
be mapped to AWS SSO users and groups, but not vice versa.

This is the recommended approach for security reasons, as it ensures that the on-premises Microsoft Active Directory is not exposed to
external entities. The one-way trust relationship also simplifies administration and reduces the risk of errors in configuration.
upvoted 2 times

  Clouddon 1 month, 3 weeks ago


Before SSO could work, authentication to and fro must have been established.
upvoted 1 times

  ruqui 3 months, 1 week ago


that's wrong!!! think yourself instead of relying on software:
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 2 times

  skr05 6 months ago


Selected Answer: B
Answer is B
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: B
A two-way trust would enable AWS SSO to retrieve user and group information from the on-premises AD domain, and would also allow
changes made to users and groups in AWS SSO to be synchronized back to the on-premises AD.

Option A, which suggests creating a one-way trust relationship, would not enable synchronization of user and group information between
AWS SSO and the on-premises AD domain.
upvoted 1 times

  cheese929 6 months, 1 week ago


Selected Answer: B
It's B.
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 1 times
Question #29 Topic 1

A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that
run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?

A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
NLB as an AWS Global Accelerator endpoint in each Region.

B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
ALB as an AWS Global Accelerator endpoint in each Region.

C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an
Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as
an origin.

D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create
an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted
record as an origin.

Correct Answer: C

Community vote distribution


A (81%) Other

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: A
agree with A,
Global Accelerator has automatic failover and is perfect for this scenario with VoIP
https://aws.amazon.com/global-accelerator/faqs/
upvoted 41 times

  ElaineRan 2 months ago


Thank you, the link also helps me to know the differences between Global Acc and CloudFront.
upvoted 2 times

  BoboChow 11 months, 1 week ago


Thank you for your link; it helps me confirm A.
upvoted 6 times

  bullrem 8 months, 2 weeks ago


This option does not meet the requirements because AWS Global Accelerator is only used to route traffic to the optimal AWS Region,
it does not provide automatic failover between regions.
upvoted 2 times

  sachin 7 months ago


Instant regional failover: AWS Global Accelerator automatically checks the health of your applications and routes user traffic only
to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts
instantaneously to route your users to the next available endpoint.
upvoted 5 times

  mouhannadhaj Highly Voted  11 months ago


Selected Answer: A
CloudFront uses Edge Locations to cache content while Global Accelerator uses Edge Locations to find an optimal pathway to the nearest
regional endpoint. CloudFront is designed to handle the HTTP protocol, whereas Global Accelerator is best used for both HTTP and non-HTTP protocols such as TCP and UDP, so I think A is the better answer.
upvoted 25 times
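
A rough sketch of option A's wiring with boto3; the ports, names, and NLB ARNs are placeholders, and note that the Global Accelerator control-plane API is served from us-west-2 even though the accelerator itself is global:

import boto3

# The Global Accelerator control-plane API is only available in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="voip-accelerator", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 5060, "ToPort": 5060}],  # placeholder VoIP/SIP port
)
listener_arn = listener["Listener"]["ListenerArn"]

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
regional_nlbs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/voip/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/voip/def",
}
for region, nlb_arn in regional_nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )

Global Accelerator's health checks then steer traffic to the lowest-latency healthy endpoint group and fail over automatically when a Region becomes unhealthy.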

  rainiverse Most Recent  4 days, 3 hours ago


Selected Answer: C
To route users to the Region with the lowest latency and enable automated failover between Regions, the company should choose Option
C. This option involves deploying a Network Load Balancer (NLB) and an associated target group, associating the target group with the
Auto Scaling group, creating an Amazon Route 53 latency record that points to aliases for each NLB, and creating an Amazon CloudFront
distribution that uses the latency record as an origin.

Option A is not the best choice because using an NLB as an AWS Global Accelerator endpoint in each Region does not provide automated
failover between Regions.
Option B is also not ideal because using an Application Load Balancer (ALB) as an AWS Global Accelerator endpoint in each Region does
not provide automated failover between Regions.
upvoted 1 times
  midriss 4 weeks ago
Option A suggests using Network Load Balancers (NLB) and AWS Global Accelerator, which can provide lower-latency routing, but it does
not inherently support automated failover between Regions.
upvoted 1 times

  pavlinux 1 month, 2 weeks ago


Selected answer: A
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases
that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS
protection.
upvoted 1 times

  Raggz 1 month, 2 weeks ago


Selected Answer: C
Explanation:
To route users to the Region with the lowest latency, we can use Amazon Route 53 latency-based routing with health checks. We can
deploy a Network Load Balancer (NLB) associated with the Auto Scaling group and create an Amazon Route 53 latency record that points
to aliases for each NLB. To enable automated failover between Regions, we can configure Route 53 with failover routing policy. With
failover routing policy, active-active or active-passive configurations can be configured between the Regions. Lastly, we can create an
Amazon CloudFront distribution that uses the latency record as an origin which will improve the delivery performance of content to the
end-users.
upvoted 1 times

  nafeez7950 1 month, 3 weeks ago


Selected Answer: C
As much as I see A as a viable option, I would say C is the best option. Note that option A leverages Global Accelerator to improve "PERFORMANCE". I would argue that performance and latency may not be exactly the same thing. Furthermore, an NLB operates at a regional level, which makes it seem that option A has no load balancer operating globally. With Route 53 managing these latencies globally, plus CloudFront, I would definitely say that option C is the more suitable option.
upvoted 2 times

  TariqKipkemei 2 months ago


Selected Answer: A
TCP and UDP = Global accelerator and Network Load Balancer
upvoted 1 times

  Guru4Cloud 2 months, 1 week ago


Selected Answer: C
The correct answer is C.

Deploy a Network Load Balancer (NLB) and an associated target group

An NLB is a good choice for a VoIP service because it can route traffic to the Region with the lowest latency. An NLB also provides load
balancing and fault tolerance for your VoIP service.

Associate the target group with the Auto Scaling group

An Auto Scaling group can automatically scale your VoIP service up or down based on demand. This ensures that you have the right
number of EC2 instances running to handle the load.

Create an Amazon Route 53 latency record that points to aliases for each NLB

A latency record in Amazon Route 53 routes traffic to the NLB that has the lowest latency. This ensures that your VoIP calls are routed to
the Region with the lowest latency.

Create an Amazon CloudFront distribution that uses the latency record as an origin

Amazon CloudFront is a content delivery network (CDN) that can deliver your VoIP traffic closer to your users. This can improve the
performance of your VoIP service.
upvoted 3 times

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  karloscetina007 2 months, 4 weeks ago


Selected Answer: A
UDP protocol and integration with cloudfront? it is a kind of trap in this question.
my answer is A
upvoted 2 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
Option A, which suggests deploying a Network Load Balancer (NLB) and using it as an AWS Global Accelerator endpoint in each Region,
does provide automated failover between Regions.

When using AWS Global Accelerator, it automatically routes traffic to the closest AWS edge location based on latency and network
conditions. In case of a failure in one Region, AWS Global Accelerator will automatically reroute traffic to the healthy endpoints in another
Region, providing automated failover.

So, option A does meet the requirement for automated failover between Regions, in addition to routing users to the Region with the
lowest latency using AWS Global Accelerator.
upvoted 3 times

  danielklein09 4 months ago


Selected Answer: C
If the answer is A, how exactly can we accomplish this: "route users to the Region with the lowest latency"?
upvoted 1 times

  luisgu 4 months, 2 weeks ago


Selected Answer: A
UDP --> NLB --> A or C.
I believe C is not an option because you cannot set up a route 53 record as a cloudfront origin:
https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_Origin.html
upvoted 2 times

  Abrar2022 4 months, 2 weeks ago


Instant regional failover: AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to
healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts
instantaneously to route your users to the next available endpoint.
upvoted 1 times

  pbpally 4 months, 4 weeks ago


To keep it as simple as possible:
UDP? -> NLB
Failover? -> Global Accelerator. It has "Instant regional failover" which can be found explained here under "benefits"
https://aws.amazon.com/global-accelerator/faqs/
upvoted 2 times

  AlaTaftaf 5 months ago


Selected Answer: A
This is the answer of ChatGPT "Option A is the best solution that meets the requirements of routing users to the Region with the lowest
latency and providing automated failover between Regions for the company's Voice over Internet Protocol (VoIP) service that uses UDP
connections.

Explanation:
Option A is the best solution as it deploys a Network Load Balancer (NLB) and an associated target group, and associates the target group
with the Auto Scaling group. The NLB can be used as an AWS Global Accelerator endpoint in each Region, allowing users to be routed to
the Region with the lowest latency. Additionally, the NLB can automatically failover between Regions to ensure service availability.

Option B is not the best solution as an Application Load Balancer (ALB) is designed for HTTP/HTTPS traffic and may not be suitable for the
company's VoIP service that uses UDP connections."
upvoted 1 times
Question #30 Topic 1

A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights
enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of
running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?

A. Stop the DB instance when tests are completed. Restart the DB instance when required.

B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.

C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.

D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.

Correct Answer: C

Community vote distribution


C (87%) 11%

  hanhdroid Highly Voted  11 months, 3 weeks ago


Selected Answer: C
Answer C, you still pay for storage when an RDS database is stopped
upvoted 25 times

  KVK16 Highly Voted  11 months, 3 weeks ago


Selected Answer: C
C - Create a manual snapshot of the DB, keep it in S3 Standard, and restore from the manual snapshot when required.

Not A - By stopping the DB you avoid paying for DB instance hours, but you are still paying for provisioned IOPS, and the storage for a stopped DB costs more than a snapshot of the underlying EBS volume plus automated backups.
Not D - Possible, but not the MOST cost-effective; there is no need to run the RDS instance when it is not needed.
upvoted 9 times
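
As a rough illustration of the snapshot-and-restore pattern behind option C, here is a minimal boto3 sketch. The instance identifier, snapshot name, and instance class are hypothetical; scheduling (for example via EventBridge) and error handling are omitted.

    import boto3

    rds = boto3.client("rds")

    DB_ID = "monthly-test-mysql"          # hypothetical instance identifier
    SNAP_ID = "monthly-test-mysql-snap"   # hypothetical snapshot identifier

    def teardown_after_tests():
        # Snapshot the instance, wait until the snapshot is available, then delete the instance
        rds.create_db_snapshot(DBInstanceIdentifier=DB_ID, DBSnapshotIdentifier=SNAP_ID)
        rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=SNAP_ID)
        rds.delete_db_instance(DBInstanceIdentifier=DB_ID, SkipFinalSnapshot=True)

    def restore_before_tests():
        # Recreate the instance from the snapshot with the same compute/memory class
        rds.restore_db_instance_from_db_snapshot(
            DBInstanceIdentifier=DB_ID,
            DBSnapshotIdentifier=SNAP_ID,
            DBInstanceClass="db.m5.large",  # example class; keep it unchanged to meet the requirement
        )

Between test windows only the snapshot storage is billed, which is the cost advantage over simply stopping the instance.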

  hrushikeshrelekar Most Recent  2 days, 2 hours ago


Selected Answer: D
A. Stop the DB instance when tests are completed. Restart the DB instance when required.

Explanation:

Stopping and starting a DB instance is the most cost-effective solution for scenarios where the database is not in use all the time. Amazon
RDS allows you to stop and start the database instances, and you are not charged for the instance hours while the database is stopped.
upvoted 1 times

  Chiquitabandita 4 weeks, 1 day ago


ChatGPT says one option is to start/stop the DB instance, so choice A, even though the popular choice is C. Otherwise use Aurora, but that is not an option here, nor would it probably be the most cost-effective one.
upvoted 1 times

  Fresbie99 1 month, 2 weeks ago


Selected Answer: C
DB snapshots are cost-efficient.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the right answer for this.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
Option C can be a cost-effective solution for reducing the cost of running tests on the RDS instance.

By creating a snapshot and terminating the DB instance, you effectively stop incurring costs for the running instance. When you need to
run the tests again, you can restore the snapshot to create a new instance and resume testing. This approach allows you to save costs
during the periods when the tests are not running.

However, it's important to note that option C involves additional steps and may result in some downtime during the restoration process.
You need to consider the time required for snapshot creation, termination, and restoration when planning the testing schedule.
upvoted 3 times
  Abrar2022 3 months, 2 weeks ago
Selected Answer: C
Can't be A because you're still charged for provisioned storage even when it's stopped.
upvoted 1 times

  Peng001 4 months ago


Selected Answer: C
By only stopping an Amazon RDS DB instance, you stop billing for additional instance hours, but you will still incur storage costs. See:
https://aws.amazon.com/rds/pricing/
upvoted 1 times

  studynoplay 5 months ago


Selected Answer: C
Trick: in a stopped RDS database, you will still pay for storage. If you plan on
stopping it for a long time, you should snapshot & restore instead
upvoted 2 times

  channn 6 months ago


Selected Answer: C
Comparing A and C: for 48 hours of usage per month, C costs less.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
Option A, stopping the DB instance when tests are completed and restarting it when required, would be the most cost-effective solution to
reduce the cost of running the tests while maintaining the same compute and memory attributes of the DB instance.

By stopping the DB instance when the tests are completed, the company will only be charged for storage and not for compute resources
while the instance is stopped. This can result in significant cost savings as compared to running the instance continuously.

When the tests need to be run again, the company can simply start the DB instance, and it will be available for use. This solution is
straightforward and does not require any additional configuration or infrastructure.
upvoted 2 times

  ImKingRaje 5 months, 2 weeks ago


If you stop RDS, it automatically starts again after 7 days. Here the requirement is once a month, hence C.
upvoted 1 times

  cheese929 6 months, 1 week ago


Selected Answer: C
C is the most cost effective.
upvoted 1 times

  Tiba 8 months, 3 weeks ago


You can't stop an Amazon RDS for SQL Server DB instance in a Multi-AZ configuration.
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Amazon RDS for MySQL allows you to create a snapshot of your DB instance and store it in Amazon S3. You can then terminate the DB
instance and restore it from the snapshot when required. This will allow you to reduce the cost of running the resource-intensive tests
without reducing the compute and memory attributes of the DB instance.
upvoted 1 times

  techhb 9 months ago


Selected Answer: C
C is right choice here
upvoted 1 times

  HayLLlHuK 9 months, 1 week ago


Selected Answer: C
Explanation from the same question on UDEMY!
Taking a snapshot of the instance and storing the snapshot is the most cost-effective solution. When needed, a new database can be
created from the snapshot. Performance Insights can be enabled on the new instance if needed. Note that the previous data from
Performance Insights will not be associated with the new instance, however this was not a requirement.
CORRECT: "Create a snapshot of the database when the tests are completed. Terminate the DB instance. Create a new DB instance from
the snapshot when required” is the correct answer (as explained above.)
upvoted 5 times

  HayLLlHuK 9 months, 1 week ago


INCORRECT: "Stop the DB instance once all tests are completed. Start the DB instance again when required” is incorrect. You will be
charged when your instance is stopped. When an instance is stopped you are charged for provisioned storage, manual snapshots, and
automated backup storage within your specified retention window, but not for database instance hours. This is more costly compared
to using snapshots.
INCORRECT: "Create an Auto Scaling group for the DB instance and reduce the desired capacity to 0 once the tests are completed” is
incorrect. You cannot use Auto Scaling groups with Amazon RDS instances.
INCORRECT: "Modify the DB instance size to a smaller capacity instance when all the tests have been completed. Scale up again when
required” is incorrect. This will reduce compute and memory capacity and will be more costly than taking a snapshot and terminating
the DB.
upvoted 3 times
Question #31 Topic 1

A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift
clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?

A. Use AWS Config rules to define and detect resources that are not properly tagged.

B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.

C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.

D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to
periodically run the code.

Correct Answer: A

Community vote distribution


A (97%)

  kurinei021 Highly Voted  9 months, 1 week ago


Answer from ChatGPT:

Yes, you can use AWS Config to create tags for your resources. AWS Config is a service that enables you to assess, audit, and evaluate the
configurations of your AWS resources. You can use AWS Config to create rules that automatically tag resources when they are created or
when their configurations change.

To create tags for your resources using AWS Config, you will need to create an AWS Config rule that specifies the tag key and value you
want to use and the resources you want to apply the tag to. You can then enable the rule and AWS Config will automatically apply the tag
to the specified resources when they are created or when their configurations change.
upvoted 13 times

  aaroncelestin 1 month, 2 weeks ago


This the first answer that I've seen ChatGPT get correct here on ExamTopics. You should all know that using ChatGPT for this is bound
to give bad answers. It only parrots what it has seen written/copied/pasted by someone/something somewhere, picked up with
absolutely zero context. ChatGPT doesn't "know" anything about AWS services. So, beware the "answers" it gives.
upvoted 2 times

  KawtarZ Most Recent  1 month, 1 week ago


Selected Answer: A
A without a doubt
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: A
AWS Config continually assesses, audits, and evaluates the configurations and relationships of your resources on AWS, on premises, and
on other clouds.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
The question (as published) had typos; the intended sentence is "A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags." Keyword "are configured with tags", so choose (A) "AWS Config rules".
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
AWS Config provides a set of pre-built or customizable rules that can be used to check the configuration and compliance of AWS
resources. By creating a custom rule or using the built-in rule for tagging, you can define the required tags for EC2, RDS DB and Redshift
clusters. AWS Config continuously monitors the resources and generates configuration change events or evaluation results.

By leveraging AWS Config, the solution can automatically detect any resources that do not comply with the defined tagging requirements.
This approach eliminates the need for manual checks or periodic code execution, reducing operational overhead. Additionally, AWS Config
provides the ability to automatically remediate non-compliant resources by triggering Lambda or sending notifications, further
streamlining the configuration management process.
Option B (using Cost Explorer) primarily focuses on cost analysis and does not provide direct enforcement of proper tagging. Option C and
D (writing API calls and running them manually or through scheduled Lambda) require more manual effort and maintenance compared to
using AWS Config rules.
upvoted 3 times
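
For readers who want to see what the managed rule looks like in practice, here is a minimal boto3 sketch of the AWS-managed REQUIRED_TAGS rule scoped to the three resource types in the question. The rule name and tag keys are hypothetical examples, and an active AWS Config configuration recorder is assumed to already exist in the account.

    import json
    import boto3

    config = boto3.client("config")

    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "required-tags-check",  # hypothetical rule name
            "Scope": {
                "ComplianceResourceTypes": [
                    "AWS::EC2::Instance",
                    "AWS::RDS::DBInstance",
                    "AWS::Redshift::Cluster",
                ]
            },
            # AWS-managed rule that flags resources missing the listed tag keys
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "InputParameters": json.dumps({"tag1Key": "CostCenter", "tag2Key": "Owner"}),
        }
    )

Non-compliant resources then show up in the Config console (or via describe_compliance_by_config_rule) without any custom code to maintain.
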
  lelouchjedai 3 months, 2 weeks ago
Selected Answer: A
The answer is A
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: A
Option A will accomplish the requirements.
upvoted 1 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: A
AWS Config can track the configuration status of non-compliant resources :))
upvoted 1 times

  caffee 5 months, 3 weeks ago


Selected Answer: A
AWS Config can track the configuration status of non-compliant resources.
upvoted 2 times

  gx2222 6 months ago


Selected Answer: A
Option A is the most appropriate solution to accomplish the given requirement because AWS Config Rules provide a way to evaluate the
configuration of AWS resources against best practices and company policies. In this case, a custom AWS Config rule can be defined to
check for proper tag allocation on Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters. The rule can be
configured to run periodically and notify the responsible parties when a resource is not properly tagged.
upvoted 2 times

  channn 6 months ago


Selected Answer: A
Key words: configured with tags
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
AWS Config is a service that provides a detailed view of the configuration of AWS resources in an account. AWS Config rules can be used to
define and detect resources that are not properly tagged. These rules can be customized to match specific requirements and
automatically check all resources for proper tag allocation. When resources are found without the proper tags, AWS Config can trigger an
SNS notification or an AWS Lambda function to perform the required action.
upvoted 1 times

  bilel500 6 months, 4 weeks ago


Selected Answer: A
AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are
related to one another, and how the configurations and their relationships have changed over time.
upvoted 1 times

  Ello2023 7 months, 4 weeks ago


I found this question very vague.
upvoted 2 times

  jannymacna 8 months, 3 weeks ago


D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to
periodically run the code.

A solution architect can accomplish this by writing API calls to check all resources (EC2 instances, RDS DB instances, and Redshift clusters)
for proper tag allocation. Then, schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code. This way, the
check will be automated and it eliminates the need to manually check and configure the resources. The Lambda function can be triggered
periodically and will check all resources, this way it will minimize the effort of configuring and operating the check.
upvoted 2 times

  CaoMengde09 8 months ago


How about the key sentence "The company wants to minimize the effort of configuring and operating this check". Either A or B and i
vouch for A
upvoted 3 times

  pazabal 9 months, 2 weeks ago


Selected Answer: A
are configured with tags = AWS config
upvoted 3 times
Question #32 Topic 1

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side
JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?

A. Containerize the website and host it in AWS Fargate.

B. Create an Amazon S3 bucket and host the website there.

C. Deploy a web server on an Amazon EC2 instance to host the website.

D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

Correct Answer: B

Community vote distribution


B (100%)

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: B
Good answer is B: client-side JavaScript. the website is static, so it must be S3.
upvoted 20 times

  BoboChow Highly Voted  11 months, 3 weeks ago


Selected Answer: B
HTML, CSS, client-side JavaScript, and images are all static resources.
upvoted 7 times

  hungpm Most Recent  3 weeks, 4 days ago


Selected Answer: B
Static website should work fine with S3
upvoted 1 times

  KawtarZ 1 month, 1 week ago


Selected Answer: B
The website is static because the JavaScript runs on the client side.
upvoted 2 times

  evanhongo 1 month, 3 weeks ago


Selected Answer: B
all static resources.
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: B
static website, cost-effective = S3 web hosting
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: B
Just all static content HTML, CSS, client-side JavaScript, images. Amazon S3 is good enough.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: B
S3 is amongst the cheapest services offered by AWS.
upvoted 1 times

  karloscetina007 2 months, 4 weeks ago


Selected Answer: B
B is the correct answer.
upvoted 1 times
  cookieMr 3 months, 2 weeks ago
Selected Answer: B
By using Amazon S3 to host the website, you can take advantage of its durability, scalability, and low-cost pricing model. You only pay for
the storage and data transfer associated with your website, without the need for managing and maintaining web servers or containers.
This reduces the operational overhead and infrastructure costs.

Containerizing the website and hosting it in AWS Fargate (option A) would involve additional complexity and costs associated with
managing the container environment and scaling resources. Deploying a web server on an Amazon EC2 instance (option C) would require
provisioning and managing the EC2 instance, which may not be cost-effective for a static website. Configuring an Application Load
Balancer with an AWS Lambda target (option D) adds unnecessary complexity and may not be the most efficient solution for hosting a
static website.
upvoted 2 times
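
A minimal boto3 sketch of option B, assuming a hypothetical bucket name; the public-access and bucket-policy settings needed to actually serve the site to other teams are intentionally left out.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "dev-team-internal-site"  # hypothetical bucket name

    # Turn on static website hosting for the bucket
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Upload a page; ContentType lets browsers render it instead of downloading it
    s3.put_object(
        Bucket=BUCKET,
        Key="index.html",
        Body=b"<html><body><h1>Team site</h1></body></html>",
        ContentType="text/html",
    )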

  Bmarodi 4 months ago


Selected Answer: B
Option B is the MOST cost-effective for hosting the website.
upvoted 1 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: B
static website = B
upvoted 1 times

  Rahulbit34 5 months ago


Since all the content is static, S3 can be used to host it.
upvoted 1 times

  kamx44 5 months, 2 weeks ago


Selected Answer: B
static website B
upvoted 1 times

  DIptyParashar 6 months ago


Selected Answer: B
static website so B
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: B
With S3, the company can store and serve its website contents, such as HTML, CSS, client-side JavaScript, and images, as static content.
The cost of hosting a website on S3 is relatively low as compared to other options because S3 pricing is based on storage and data
transfer usage, which is generally less expensive than other hosting options like EC2 instances or containers. Additionally, there is no
charge for serving data from an S3 bucket, so there are no additional costs associated with traffic.
upvoted 2 times
Question #33 Topic 1

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The
company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications.
Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write.
Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda
integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every
transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis
data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before
updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume
transaction files stored in Amazon S3.

Correct Answer: C

Community vote distribution


C (79%) B (21%)

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: C
I would go for C. The tricky phrase is "near-real-time solution", pointing to Firehose, but it can't send data to DynamoDB, so it leaves us
with C as the best option.

Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, NewRelic,
Dynatrace, Sumologic, LogicMonitor, MongoDB, and HTTP End Point as destinations.

https://aws.amazon.com/kinesis/data-
firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.
upvoted 50 times

  SaraSundaram 6 months, 2 weeks ago


There are many questions having Firehose and Stream. Need to know them in detail to answer. Thanks for the explanation
upvoted 3 times

  diabloexodia 2 months, 2 weeks ago


Stream is used if you want real time results , but with firehose , you generally use the data at a later point of time by storing it
somewhere. Hence if you see "REAL TIME" the answer is most probably Kinesis Data Streams.
upvoted 6 times

  Lonojack 8 months, 1 week ago


This was a really tough one. But you have the best explanation on here with reference point. Thanks. I’m going with answer C!
upvoted 2 times

  lizzard812 8 months ago


Sorry but I still can't see how Kinesis Data Stream is 'scalable', since you have to provision the quantity of shards in advance?
upvoted 1 times

  habibi03336 7 months, 1 week ago


"easily stream data at any scale"
This is a description of Kinesis Data Stream. I think you can configure its quantity but still not provision and manage scalability by
yourself.
upvoted 1 times

  JesseeS Highly Voted  11 months, 2 weeks ago


The answer is C, because Firehose does not suppport DynamoDB and another key word is "data" Kinesis Data Streams is the correct
choice. Pay attention to key words. AWS likes to trick you up to make sure you know the services.
upvoted 25 times

  Ak9kumar Most Recent  1 week ago


I picked B. We need to understand how Kinesis Data Firehose works to answer this question correctly.
upvoted 1 times
  sohailn 1 month, 3 weeks ago
Kinesis Data Firehose optionally supports Lambda for transformation.
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: C
Scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications = Amazon
Kinesis Data Streams.
Remove sensitive data from transactions = AWS Lambda.
Store transaction data in a document database for low-latency retrieval = Amazon DynamoDB.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: C
To meet the requirements of sharing financial transaction details with several other internal applications, and processing and storing the
transactions data in a scalable and near-real-time manner, a solutions architect should recommend option C: Stream the transactions data
into Amazon Kinesis Data Streams, use AWS Lambda integration to remove sensitive data, and then store the transactions data in Amazon
DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

Option A (storing transactions data in DynamoDB and using DynamoDB Streams) may not provide the same level of scalability and real-
time data sharing as Kinesis Data Streams. Option B (using Kinesis Data Firehose to store data in DynamoDB and S3) adds unnecessary
complexity and additional storage costs. Option D (storing batched transactions data in S3 and processing with Lambda) may not provide
the required near-real-time data sharing and low-latency retrieval compared to the streaming-based solution.
upvoted 3 times
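
To illustrate the Lambda step in option C, here is a minimal sketch of a Python handler attached to the Kinesis stream as an event source. The table name and the set of sensitive fields are hypothetical, and DynamoDB numeric type conversion is omitted for brevity.

    import base64
    import json
    import boto3

    table = boto3.resource("dynamodb").Table("transactions")   # hypothetical table name
    SENSITIVE_FIELDS = {"card_number", "cvv"}                   # hypothetical field names

    def handler(event, context):
        # Invoked by the Kinesis Data Streams event source mapping
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

            # Drop sensitive attributes before persisting for low-latency retrieval
            cleaned = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
            table.put_item(Item=cleaned)

Other internal applications can register their own consumers on the same stream, which is the fan-out property the question relies on.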

  oiccic99 3 months, 2 weeks ago


Selected Answer: C
its c because yes
upvoted 1 times

  Chris22usa 3 months, 3 weeks ago


I think it is B. Kinesis Data Streams can import data from DynamoDB, but cannot export data to DynamoDB. Data Streams only supports exporting to Lambda, Kinesis Firehose, Kinesis Analytics, or AWS Glue. Exporting from a data stream to other destinations needs an ETL transform process, which is Firehose's function.
upvoted 1 times

  konieczny69 3 months, 3 weeks ago


Selected Answer: B
Near real time - Firehose.
Besides, DynamoDB is not the destination; Lambda is,
and Lambda can be used since you can expose it behind HTTP.
upvoted 1 times

  VIad 4 months, 2 weeks ago


Selected Answer: B
That is definitely B:

It is saying "near real time" that makes sense :

near real time : Kinesis Data Firehose


real time : Kinesis Data Stream

Also, Kinesis Data Firehose supports DynamoDB. The link is below :

https://dynobase.dev/dynamodb-faq/can-firehose-write-to-dynamodb/#:~:text=Answer,data%20to%20a%20DynamoDB%20table.
upvoted 1 times

  Clouddon 1 month, 3 weeks ago


I disagree with the statement about Firehose from this source, because AWS says "Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, NewRelic, Dynatrace, Sumo Logic, LogicMonitor, MongoDB, and HTTP End Point as destinations."
upvoted 1 times

  ruqui 4 months, 2 weeks ago


The problem says that Firehose will store data in Amazon DynamoDB and Amazon S3, I think it's not possible to have more than one
consumer, so B solution is impossible
upvoted 2 times

  plutonash 5 months ago


Selected Answer: B
For me the answer is B. Kinesis Data Firehose can transfer data to DynamoDB, and the key word in the question is "near real time".
Real Time = Kinesis Data Streams
Near Real Time = Kinesis Data Firehose
upvoted 5 times
  jcramos 5 months, 4 weeks ago
Selected Answer: B
Kinesis Data Firehose does have integration with Lambda. Kinesis Data Streams does not have that integration, so B is correct.
upvoted 2 times

  bakamon 6 months ago


Selected Answer: C
Near Real Time: Kinesis Data Streams & Kinesis Data Firehose
Kinesis Data Streams :: used for streaming live data
Kinesis Data Firehose :: used when you have to store the streaming data in S3, Redshift, etc.
upvoted 5 times

  linux_admin 6 months ago


Selected Answer: C
This solution meets the requirements for scalability, near-real-time processing, and sharing data with several internal applications. Kinesis
Data Streams is a fully managed service that can handle millions of transactions per second, making it a scalable solution. Using Lambda
to process the data and remove sensitive information provides a fast and efficient method to perform data transformation in near-real-
time. Storing the processed data in DynamoDB allows for low-latency retrieval, and the data can be shared with other applications using
the Kinesis data stream.
upvoted 2 times

  will2will 6 months ago


C: B is incorrect because Firehose can't work with Lambda.
upvoted 1 times

  Abhineet9148232 6 months ago


Selected Answer: C
Kinesis Data Firehose doesn't support DynamoDB as a destination.
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
upvoted 1 times

  bilel500 6 months, 4 weeks ago


Selected Answer: C
Kinesis Data Streams focuses on ingesting and storing data streams. Kinesis Data Firehose focuses on delivering data streams to select
destinations. Both can ingest data streams but the deciding factor in which to use depends on where your streamed data should go to.
upvoted 1 times
Question #34 Topic 1

A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration
changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?

A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.

B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.

C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.

D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.

Correct Answer: B

Community vote distribution


B (98%)

  airraid2010 Highly Voted  11 months, 1 week ago


Selected Answer: B
CloudTrail - Tracks user activity and API call history.
Config - Assesses, audits, and evaluates the configuration and relationships of your resources.

Therefore, the answer is B


upvoted 27 times

  TariqKipkemei Most Recent  2 months ago


Selected Answer: B
CloudWatch is a monitoring service for AWS resources and applications. CloudTrail is a web service that records API activity in your AWS
account.
upvoted 2 times

  Bogs123456711 2 months ago


Selected Answer: B
CONFIG - AWS CONFIG
RECORD API CALLS - CLOUDTRAIL
upvoted 1 times

  hsinchang 2 months ago


Selected Answer: B
CloudWatch is mainly uesd to monitor AWS services with metrics, not recoding actions inside the AWS environments. It can also monitor
CloudTrail logged events.
For recording API calls it requires CloudTrail.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: B
Keyword "Amazon CloudWatch" is not for this case, remove C and D.

Use AWS Config first to track configuration changes, Second is AWS CloudTrai to record API calls. (Answer B, and correct answer). Answer A
is reversed order of B, and not accepted.
upvoted 2 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  karloscetina007 2 months, 4 weeks ago


Selected Answer: B
B is the answer with no doubts
upvoted 1 times

  minhpn 3 months, 1 week ago


Selected Answer: B
config => AWS config
record API calls => AWS CloudTrail
upvoted 1 times
  cookieMr 3 months, 2 weeks ago
Selected Answer: B
To meet the requirement of tracking configuration changes on AWS resources and recording a history of API calls, a solutions architect
should recommend option B: Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.

Option A (using CloudTrail to track configuration changes and Config to record API calls) is incorrect because CloudTrail is specifically
designed to capture API call history, while Config is designed for tracking configuration changes.

Option C (using Config to track configuration changes and CloudWatch to record API calls) is not the recommended approach. While
CloudWatch can be used for monitoring and logging, it does not provide the same level of detail and compliance tracking as CloudTrail for
recording API calls.

Option D (using CloudTrail to track configuration changes and CloudWatch to record API calls) is not the optimal choice because CloudTrail
is the appropriate service for tracking configuration changes, while CloudWatch is not specifically designed for recording API call history.
upvoted 2 times
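
A minimal boto3 sketch of standing up both services, assuming hypothetical bucket names and an existing IAM role for Config; in practice the S3 bucket policies for CloudTrail and Config delivery also have to be in place.

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    config = boto3.client("config")

    # CloudTrail: record the history of API calls across all Regions
    cloudtrail.create_trail(
        Name="org-api-audit-trail",          # hypothetical trail name
        S3BucketName="my-cloudtrail-logs",   # hypothetical bucket
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="org-api-audit-trail")

    # AWS Config: track configuration changes for all supported resource types
    config.put_configuration_recorder(
        ConfigurationRecorder={
            "name": "default",
            "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # hypothetical role
            "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
        }
    )
    config.put_delivery_channel(
        DeliveryChannel={"name": "default", "s3BucketName": "my-config-history"}  # hypothetical bucket
    )
    config.start_configuration_recorder(ConfigurationRecorderName="default")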

  Bmarodi 4 months ago


Selected Answer: B
Option B meets the requirements.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: B
AWS Config is a fully managed service that allows the company to assess, audit, and evaluate the configurations of its AWS resources. It
provides a detailed inventory of the resources in use and tracks changes to resource configurations. AWS Config can detect configuration
changes and alert the company when changes occur. It also provides a historical view of changes, which is essential for compliance and
governance purposes.

AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company's AWS resources. It records all
API activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call.
This information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity that might
occur on its AWS resources.
upvoted 3 times

  bilel500 6 months, 4 weeks ago


Selected Answer: B
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It provides a history of
configuration changes made to your resources and can be used to track changes made to your resources over time.

AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your
resources, including the identity of the caller, the time of the call, the source of the call, and the response element returned by the service.
upvoted 1 times

  Mcmono 7 months, 2 weeks ago


Selected Answer: B
AWS Config is basically used to track config changes, while cloudtrail is to monitor API calls
upvoted 1 times

  bullrem 8 months, 1 week ago


Selected Answer: A
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls. This option is the best because it utilizes both
AWS CloudTrail and AWS Config, which are both designed for tracking and recording different types of information related to AWS
resources and API calls. AWS CloudTrail is used to track user activity and API call history, and AWS Config is used to assess, audit, and
evaluate the configuration and relationships of tag resources. Together, they provide a comprehensive and robust solution for compliance,
governance, auditing, and security.
upvoted 1 times

  bullrem 8 months, 1 week ago


why not the B?.
AWS Config is primarily used to assess, audit, and evaluate the configuration and relationships of resources in your AWS environment.
It does not record the history of API calls made to these resources. On the other hand, AWS CloudTrail is used to track user activity and
API call history. Together, AWS Config and CloudTrail provide a complete picture of the configuration and activity on your AWS
resources, which is necessary for compliance, governance, auditing, and security. Therefore, option A is the best choice.
upvoted 1 times

  BakedBacon 8 months, 2 weeks ago


Selected Answer: B
CloudTrail tracks user activity as well as any API calls (think of breadcrumbs leading to a culprit). Config is exactly what it sounds like: configuration. So think audits, config changes, etc.
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: B
auditing = cloudtrail
upvoted 1 times
Question #35 Topic 1

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a
VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a
solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?

A. Enable Amazon GuardDuty on the account.

B. Enable Amazon Inspector on the EC2 instances.

C. Enable AWS Shield and assign Amazon Route 53 to it.

D. Enable AWS Shield Advanced and assign the ELB to it.

Correct Answer: D

Community vote distribution


D (100%)

  ninjawrz Highly Voted  11 months, 3 weeks ago


Selected Answer: D
Answer is D
C is incorrect because question says Third party DNS and route 53 is AWS proprietary
upvoted 28 times

  BoboChow Highly Voted  11 months, 3 weeks ago


Selected Answer: D
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers,
CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
upvoted 24 times

  leonardh 4 months, 3 weeks ago


I'd agree, as Shield Advanced is the only tier that can protect EC2, which is not possible in Standard.
upvoted 4 times

  Ak9kumar Most Recent  1 week ago


Answer D. Read the AWS Shield Advanced section on aws.amazon.com to help you understand this. It helped me.
upvoted 1 times

  ishant101 1 month ago


answer is D
upvoted 1 times

  TariqKipkemei 2 months ago


Selected Answer: D
DDos = AWS Shield
upvoted 2 times

  hsinchang 2 months ago


Selected Answer: D
Large-scale DDoS points to AWS Shield Advanced instead of Standard.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: D
Keyword "large-scale DDoS attacks" , "Amazon EC2", "VPC", "ELB", "3rd service used for DNS".

Amazon GuardDuty https://aws.amazon.com/guardduty/ Intelligent threat detection.

AWS Shield https://aws.amazon.com/shield/ Automatically detect and mitigate sophisticated network-level DDoS.

AWS Shield Advanced with ELB: https://aws.amazon.com/about-aws/whats-new/2022/04/aws-shield-application-balancer-automatic-ddos-mitigation/ . Choose D.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer for this.
upvoted 1 times
  Kaab_B 2 months, 2 weeks ago
Selected Answer: D
Extended DDoS protection is AWS Shield Advanced, without a doubt.
upvoted 1 times

  karloscetina007 2 months, 4 weeks ago


A third-party service is used for the DNS, so Route 53 is out.
D is the answer, without a doubt.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: D
Option A is incorrect because Amazon GuardDuty is a threat detection service that focuses on identifying malicious activity and
unauthorized behavior within AWS accounts. While it is useful for detecting various security threats, it does not specifically address large-
scale DDoS attacks.

Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and
vulnerabilities within EC2. It does not directly protect against DDoS attacks.

Option C is not the optimal choice because AWS Shield provides basic DDoS protection for resources such as Elastic IP addresses,
CloudFront, and Route53 hosted zones. However, it does not provide the advanced capabilities and assistance offered by AWS Shield
Advanced, which is better suited for protecting against large-scale DDoS attacks.

Therefore, option D with AWS Shield Advanced and assigning the ELB to it is the recommended solution to detect and protect against
large-scale DDoS attacks in the architecture described.
upvoted 6 times
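
As a small illustration of option D, here is a boto3 sketch that subscribes the account to Shield Advanced once and then protects the load balancer. The ELB ARN is a placeholder, and the subscription call assumes the account is not yet subscribed.

    import boto3

    # Shield is a global service; its API endpoint lives in us-east-1
    shield = boto3.client("shield", region_name="us-east-1")

    # One-time, account-level subscription (fails if a subscription already exists)
    shield.create_subscription()

    # Protect the public-facing load balancer (hypothetical ARN)
    elb_arn = (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/public-web/1234567890abcdef"
    )
    shield.create_protection(Name="public-web-elb", ResourceArn=elb_arn)

    # Confirm what is protected
    for p in shield.list_protections()["Protections"]:
        print(p["Name"], p["ResourceArn"])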

  Bmarodi 4 months ago


Selected Answer: D
I voting for the option D.
upvoted 1 times

  channn 6 months ago


Selected Answer: D
Key words: DDos -> Shield
upvoted 2 times

  Daiking 7 months ago


Selected Answer: D
DDoS protection is a feature of AWS Shield, so I was torn between C and D. But detection is usually driven by health checks, and health checks run at the target-group level of the ELB. Finally, I would go with D.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: D
Details when to use the service,https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 3 times

  pazabal 9 months, 2 weeks ago


Selected Answer: D
A third-party service is used for the DNS. = Not Route 53 (AWS). The company's solutions architect must recommend a solution to detect
and protect against large-scale DDoS attacks = Shield
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: D
The correct answer is D: Enable AWS Shield Advanced and assign the ELB to it.

AWS Shield is a service that provides DDoS protection for your AWS resources. There are two tiers of AWS Shield: AWS Shield Standard and
AWS Shield Advanced. AWS Shield Standard is included with all AWS accounts at no additional cost and provides protection against most
common network and transport layer DDoS attacks. AWS Shield Advanced provides additional protection against more complex and larger
scale DDoS attacks, as well as access to a team of DDoS response experts.

To detect and protect against large-scale DDoS attacks on a public-facing web application hosted on Amazon EC2 instances behind an
Elastic Load Balancer (ELB), you should enable AWS Shield Advanced and assign the ELB to it. This will provide advanced protection against
DDoS attacks targeting the ELB and the EC2 instances behind it.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Amazon GuardDuty is a threat detection service that analyzes network traffic and other data sources to identify potential threats to
your AWS resources. It is not specifically designed for detecting and protecting against DDoS attacks.

Amazon Inspector is a security assessment service that analyzes the runtime behavior of your Amazon EC2 instances to identify
security vulnerabilities. It is not specifically designed for detecting and protecting against DDoS attacks.
Amazon Route 53 is a DNS service that routes traffic to your resources on the internet. It is not specifically designed for detecting and
protecting against DDoS attacks.
upvoted 3 times

  srijrao 6 months ago


hey buddy qq is this saa questions discussion enough to pass the exam?
upvoted 2 times
Question #36 Topic 1

A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company
must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in
both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys
(SSE-S3). Configure replication between the S3 buckets.

B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets.
Configure the application to use the KMS key with client-side encryption.

C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with
Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.

D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS
KMS keys (SSE-KMS). Configure replication between the S3 buckets.

Correct Answer: C

Community vote distribution


B (57%) D (42%)

  pooppants Highly Voted  11 months, 3 weeks ago


Selected Answer: B
KMS Multi-region keys are required https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 46 times

  hypnozz 3 months, 3 weeks ago


The answer is C, because "Server-side encryption with Amazon S3 managed keys (SSE-S3) is the base level of encryption configuration
for every bucket in Amazon S3. If you want to use a different type of default encryption, you can also specify server-side encryption
with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or customer-provided keys (SSE-C)"

By using SSE-KMS, you can encrypt the data stored in the S3 buckets with a customer managed KMS key. This ensures that the data is
protected and allows you to have control over the encryption key. By creating an S3 bucket in each Region and configuring replication
between them, you can have data and key redundancy in both Regions.
upvoted 3 times

  Clouddon 1 month, 3 weeks ago


Option B, AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably
– as though you had the same key in multiple Regions. Each set of related multi-Region keys has the same key material and key ID,
so you can encrypt data in one AWS Region and decrypt it in a different AWS Region without re-encrypting or making a cross-Region
call to AWS KMS.
You can use multi-Region keys with client-side encryption libraries, such as the AWS Encryption SDK, the DynamoDB Encryption
Client, and Amazon S3 client-side encryption. For an example of using multi-Region keys with Amazon DynamoDB global tables and
the DynamoDB Encryption Client, see Encrypt global data client-side with AWS KMS multi-Region keys in the AWS Security Blog.
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 2 times

  sohailn 1 month, 3 weeks ago


Absolutely, D is the right one, because an S3 KMS multi-Region key still acts as an individual key per Region, so you must first decrypt in the source bucket and then re-encrypt in the target bucket.
upvoted 1 times

  magazz 10 months, 2 weeks ago


Amazon S3 cross-Region replication decrypts and re-encrypts data under a KMS key in the destination Region, even when replicating objects protected by a multi-Region key. So stating that a multi-Region key is required is incorrect.
upvoted 3 times

  thanhvx1 6 months ago


Option B involves configuring the application to use client-side encryption, which can increase the operational overhead of
managing and securing the keys.
upvoted 2 times

  TuLe 10 months, 1 week ago


@magazz: it's not true then. Based on the AWS document
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-config-for-kms-objects.html , we will need to set up the replication rule with a destination KMS key. In order to have the key available in more than one Region, a multi-Region key should be required.
But I still don't favor option B - we can use server-side encryption, so why waste effort on client-side encryption?
upvoted 2 times

  TuLe 10 months, 1 week ago


I would say it's true... Not sure the previous one say "not true" :D.
upvoted 1 times

  JayBee65 10 months ago


It's not clear what you are saying. Are you saying that B is correct or D is correct?
upvoted 2 times

  karbob 9 months ago


:D => is smile i thought
upvoted 2 times

  KJa Highly Voted  11 months, 3 weeks ago


Selected Answer: D
Cannot be A - question says customer managed key
Cannot be B - client-side encryption is operational overhead
Cannot be C - as it says SSE-S3 instead of customer managed
So the answer is D, though it requires a one-time setup of keys.
upvoted 40 times

  Clouddon 1 month, 3 weeks ago


Kindly point out where server-side encryption supports multi-Region keys. The AWS blog only mentions that client-side encryption supports multi-Region keys.
upvoted 1 times

  BoboChow 11 months, 3 weeks ago


The data in both S3 buckets must be encrypted and decrypted with the same KMS key.
AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though
you had the same key in multiple Regions.
"as though" means it's different.
So I agree with B
upvoted 10 times

  BoboChow 11 months, 3 weeks ago


Keys change across Regions unless you use multi-Region keys.
upvoted 2 times

  th3cookie 10 months, 2 weeks ago


How does client side encryption increase OPERATIONAL overhead? Do you think every connected client is sitting there with gpg cli,
decrypting/encrypting every packet that comes in/out? No, it's done via SDK -> https://docs.aws.amazon.com/encryption-
sdk/latest/developer-guide/introduction.html

The correct answer is B because that's the only way to actually get the same key across multiple regions with minimal operational
overhead
upvoted 12 times

  kakka22 6 months ago


"The data in both S3 buckets must be encrypted and decrypted with the same KMS key"
Client-side encryption means that the key is generated by the client without storing it in KMS...
upvoted 2 times

  mattlai 11 months, 3 weeks ago


fun joke, if u dont do encryption on client side, where else could it be?
upvoted 1 times

  Newptone 10 months, 4 weeks ago


It could be server side. For client-side, the application needs to handle the encryption and decryption by itself. So S3 object encryption on the server side has less operational overhead.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html

But for option B, the major issue is if you create KMS keys in 2 regions, they can not be the same.
upvoted 3 times

  Newptone 10 months, 4 weeks ago


Sorry for the typo, I mean option D.
upvoted 2 times

  David_Ang Most Recent  4 days, 16 hours ago


Selected Answer: B
the reason why "B" is correct it's because they are asking for only one key, if you create a key per region you now have 2 Keys one for each
bucket and they need the same one to work in both of the buckets. C and D are incorrect
upvoted 1 times
  M0SHE 4 days, 19 hours ago
Selected Answer: B
B. This solution creates a customer managed multi-Region KMS key, which meets the requirement to use the same KMS key across two
regions. It uses client-side encryption with the KMS key, which means the application is responsible for encryption and decryption
processes. This satisfies all requirements.

D. While this solution does use customer managed KMS keys, it creates separate KMS keys in each region. Although it uses SSE-KMS,
which would be closer to the requirement, it doesn't meet the requirement of using the same KMS key across two regions.
upvoted 1 times
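
Whichever option one lands on, the piece most commenters agree on is the multi-Region customer managed key, so here is a minimal boto3 sketch of creating it, replicating it, and pointing each bucket's default encryption at it. The bucket names and Regions are illustrative assumptions, and both the cross-Region replication rule and the SSE-KMS versus client-side encryption debate above are left out of scope.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Primary multi-Region customer managed key (multi-Region key IDs start with "mrk-")
    primary = kms.create_key(
        Description="Shared S3 data key",
        MultiRegion=True,
    )
    key_id = primary["KeyMetadata"]["KeyId"]

    # Replicate the same key material and key ID into the second Region
    kms.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")

    # Default-encrypt each bucket with the key that lives in its own Region (bucket names are placeholders)
    for region, bucket in [("us-east-1", "app-data-use1"), ("eu-west-1", "app-data-euw1")]:
        boto3.client("s3", region_name=region).put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                "Rules": [{
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": key_id,
                    }
                }]
            },
        )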

  SymnuiSlon 5 days, 12 hours ago


Selected Answer: D
"The company must use an AWS Key Management Service (AWS KMS) customer managed key

(AWS-KMS) was mentioned only in a D option, then only D meets the requirements
upvoted 1 times

  dagr 1 week ago


Selected Answer: B
I think the key to this question is multi-Region keys.
upvoted 1 times

  hieulam 1 week, 6 days ago


Selected Answer: C
You cannot create multi-Region keys in a custom key store.
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 1 times

  gsax 2 weeks, 5 days ago


Selected Answer: B
B - It satisfies both conditions.
1. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. 2. The data and the key must be stored in each of
the two Regions.
With D, you will not have same keys.
upvoted 1 times

  KawtarZ 1 month, 1 week ago


Selected Answer: B
B because the question requires using the same KMS key for both buckets encryption
upvoted 1 times

  2284 1 month, 2 weeks ago


Selected Answer: D
The requirement is to use a customer managed key from AWS Key Management Service (AWS KMS) to encrypt all data stored in Amazon
S3 buckets across two AWS Regions.
Option D fulfills this requirement by using customer managed KMS keys (which can be multi-Region keys) for encryption. This ensures that
the same key is used for encrypting and decrypting data across both Regions, while also allowing for centralized key management.
The use of SSE-KMS provides strong encryption and enables the customer to control access to the encryption keys.
Replicating data between the S3 buckets in different Regions helps maintain data consistency and availability.
upvoted 2 times

  aaroncelestin 1 month, 2 weeks ago


Focus on the REQUIREMENTS and not parameters like "min operational overhead". The reqs are far more important than params.

That being said, C and D are automatically INCORRECT because they start off with this "Create a customer managed KMS key and an S3
bucket in //each// Region" Creating two keys immediately fails the reqs of having one key.

A is INCORRECT because it creates an "AWS Managed Key", so that fails the reqs of having a "Cust Managed Key".

That leaves B, which has a Multi-Region, Cust Managed Key. Which Meets the REQUIREMENTS and is within the (admittedly vague)
parameters.
upvoted 3 times

  slackbot 1 month, 2 weeks ago


interesting that you said it:
"Create a customer managed KMS key and an S3 bucket in //each// Region" - doesnt this mean - a key in each region? if yes - this would
indeed mean 2 keys, no? or, when you say - put a ball in each bucket, you mean put 1 ball in both buckets... i guess these are nested
buckets :D it does not seem to be working with regions, no?
upvoted 1 times

  aaroncelestin 1 month, 2 weeks ago


Yes, it means 1 key per each region, which immediately fails to meet the requirements.
upvoted 1 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: B
KMS Multi-region keys are required
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 1 times

  jksnu 1 month, 2 weeks ago


Selected Answer: D
Options A and C both use Amazon S3 managed encryption keys (SSE-S3), which do not meet the requirement of using a customer
managed KMS key for encryption.
Option B suggests using a customer managed multi-Region KMS key and client-side encryption, but this introduces more operational
complexity than necessary.
Therefore, Option D is the most suitable solution with the least operational overhead to meet the given requirement
upvoted 1 times

  bindagooner 1 month, 3 weeks ago


After going through the related documentation, D should be the correct answer.
upvoted 2 times

  nafeez7950 1 month, 3 weeks ago


Selected Answer: C
Perhaps we could take the question this way: just because it says the data has to be encrypted with the same KMS key doesn't mean the data is not encrypted yet. Also, starting in January 2023, S3 already encrypts objects with SSE-S3 by default. So my assumption is that the data could be encrypted again with our customer managed KMS key, because as we create our KMS key it can be reused in other Regions, thus using the same key to encrypt and decrypt.
upvoted 1 times

  Eobard 1 month, 3 weeks ago


Selected Answer: B
Keyword is "same KMS key"
upvoted 1 times

  Abdou1604 1 month, 3 weeks ago


Option D meets the requirement of using a customer managed KMS key for encryption while also enabling data replication between S3
buckets in different regions. AWS Key Management Service (AWS KMS) customer managed keys provide central management and control
over encryption keys, which ensures that the same key can be used for encryption and decryption across multiple regions. This minimizes
operational overhead and ensures consistency.
Option A suggests using Amazon S3 managed encryption keys (SSE-S3), which would require separate keys for each region and might lead
to inconsistency and operational complexity.
Option B suggests using a customer managed multi-Region KMS key, which could be used with client-side encryption, but it's more
complex than needed for this scenario.
Option C suggests using customer managed KMS keys with SSE-S3, which also introduces operational complexity and does not ensure
that the same key is used for encryption and decryption in both regions.
upvoted 1 times
Question #37 Topic 1

A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy
to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS
services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.

B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a
remote SSH session.

C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a
tunnel for administration of each instance.

D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the
instances by using SSH keys across the VPN tunnel.

Correct Answer: B

Community vote distribution


B (94%) 6%

  BoboChow Highly Voted  11 months, 1 week ago


Selected Answer: B
How can Session Manager benefit my organization?
Ans: No open inbound ports and no need to manage bastion hosts or SSH keys
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 16 times

  Nightducky 10 months, 2 weeks ago


Do you know from the question whether it is a Windows or Linux EC2 instance? I think not, so how would you do an SSH session on Windows?
Answer is C
upvoted 1 times

  sohailn 1 month, 3 weeks ago


Session Manager works with Linux, Windows, and macOS too.
upvoted 2 times

  TienHuynh 3 months, 1 week ago


"Cross-platform support for Windows, Linux, and macOS"
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 2 times

  JayBee65 10 months ago


Session Manager provides support for Windows, Linux, and macOS from a single tool
upvoted 5 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: B
This solution meets all of the requirements with the LEAST operational overhead. It is repeatable, uses native AWS services, and follows
the AWS Well-Architected Framework.

Repeatable: The process of attaching an IAM role to an EC2 instance and using Systems Manager Session Manager to establish a remote
SSH session is repeatable. This can be easily automated, so that new instances can be provisioned and administrators can connect to them
securely without any manual intervention.
upvoted 2 times

  TariqKipkemei 2 months ago


Selected Answer: B
With AWS Systems Manager Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge
devices, on-premises servers, and virtual machines (VMs). You can use either an interactive one-click browser-based shell or the AWS
Command Line Interface (AWS CLI). It provides secure and auditable node management without the need to open inbound ports,
maintain bastion hosts, or manage SSH keys.

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 1 times
  james2033 2 months, 2 weeks ago
Selected Answer: B
Keyword "access and administer the instances remotely and securely" See "AWS Systems Manager Session Manager at "
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html .
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer for this.
upvoted 1 times

  TienHuynh 3 months, 1 week ago


Selected Answer: B
+Centralized access control to managed nodes using IAM policies
+No open inbound ports and no need to manage bastion hosts or SSH keys
+Cross-platform support for Windows, Linux, and macOS
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: B
Option A provides direct access to the terminal interface of each instance, but it may not be practical for administration purposes and can
be cumbersome to manage, especially for multiple instances.

Option C adds operational overhead and introduces additional infrastructure that needs to be managed, monitored, and secured. It also
requires SSH key management and maintenance.

Option D is complex and may not be necessary for remote administration. It also requires administrators to connect from their local on-
premises machines, which adds complexity and potential security risks.

Therefore, option B is the recommended solution as it provides secure, auditable, and repeatable remote access using IAM roles and AWS
Systems Manager Session Manager, with minimal operational overhead.
upvoted 2 times

  Bmarodi 4 months ago


Selected Answer: B
The choice for me is the option B.
upvoted 1 times

  cheese929 5 months, 3 weeks ago


Selected Answer: B
B is correct and has the least overhead.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: B
AWS Systems Manager Session Manager is a fully managed service that provides secure and auditable instance management without the
need for bastion hosts, VPNs, or SSH keys. It provides secure and auditable access to EC2 instances and eliminates the need for managing
and securing SSH keys.
upvoted 1 times

  PaoloRoma 6 months, 1 week ago


Selected Answer: B
I selected B) because there is no need to "open inbound ports, maintain bastion hosts, or manage SSH keys"
(https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html). However, Session Manager comes with a fairly robust list of prerequisites to put in place
(SSM Agent and connectivity to SSM endpoints). On the other side, A) comes with basically no prerequisites, but it is only for Linux and we
do not have info about the OSs, so we should assume Windows as well.
upvoted 1 times

  nour 7 months ago


Selected Answer: B
The keyword that makes option B follows the AWS Well-Architected Framework is "IAM role." IAM roles provide fine-grained access control
and are a recommended best practice in the AWS Well-Architected Framework. By attaching the appropriate IAM role to each instance and
using AWS Systems Manager Session Manager to establish a remote SSH session, the solution is using IAM roles to control access and
follows a recommended best practice.
upvoted 2 times

  Shaw605 7 months, 3 weeks ago


Answer is B ~ Chat GPT
To meet the requirements with the least operational overhead, the company can use the AWS Systems Manager Session Manager. It is a
native AWS service that enables secure and auditable access to instances without the need for remote public IP addresses, inbound
security group rules, or Bastion hosts. With AWS Systems Manager Session Manager, the company can establish a secure and auditable
session to the EC2 instances and perform administrative tasks without the need for additional operational overhead.
upvoted 1 times

  Shaw605 7 months, 3 weeks ago


Answer is B ~ (Chat GPT)
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a
strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works
with native AWS services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
upvoted 1 times

  Pranav_523 8 months, 2 weeks ago


Selected Answer: B
correct answer is B
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: B
Option B. Attaching the appropriate IAM role to each existing instance and new instance and using AWS Systems Manager Session
Manager to establish a remote SSH session would meet the requirements with the least operational overhead. This approach allows for
secure remote access to the instances without the need to manage additional infrastructure or maintain a separate connection to the
instances. It also allows for the use of native AWS services and follows the AWS Well-Architected Framework.
upvoted 1 times

  techhb 9 months ago


Selected Answer: B
https://dev.to/aws-builders/aws-systems-manager-session-manager-implementation-f9a
upvoted 1 times
Question #38 Topic 1

A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from
around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?

A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.

B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point
to the IP addresses of the accelerators.

C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.

D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.

Correct Answer: C

Community vote distribution


C (100%)
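For reference, once a CloudFront distribution exists in front of the S3 website (created in the console or elsewhere), pointing Route 53 at it is a single alias-record change. The sketch below uses boto3 with hypothetical hosted zone, domain, and distribution names; Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 expects for CloudFront alias targets.

```python
import boto3

route53 = boto3.client("route53")

# Point the site's DNS name at an existing CloudFront distribution via an
# alias record. The hosted zone ID, domain, and distribution domain name
# below are hypothetical placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",
    ChangeBatch={
        "Comment": "Route traffic to CloudFront in front of the S3 website",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```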

  cookieMr Highly Voted  3 months, 2 weeks ago


Selected Answer: C
Option A (replicating the S3 bucket to all AWS Regions) can be costly and complex, requiring replication of data across multiple Regions
and managing synchronization. It may not provide a significant latency improvement compared to the CloudFront solution.

Option B (provisioning accelerators in AWS Global Accelerator) can be more expensive as it adds an extra layer of infrastructure
(accelerators) and requires associating IP addresses with the S3 bucket. CloudFront already includes global edge locations and provides
similar acceleration capabilities.

Option D (enabling S3 Transfer Acceleration) can help improve upload speed to the S3 bucket but may not have a significant impact on
reducing latency for website visitors.

Therefore, option C is the most cost-effective solution as it leverages CloudFront's caching and global distribution capabilities to decrease
latency and improve website performance.
upvoted 13 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) service that distributes content globally to reduce latency. By setting up a
CloudFront distribution in front of the S3 bucket hosting the static website, you can take advantage of its edge locations around the world
to deliver content from the nearest location to the users, reducing the latency they experience.

CloudFront automatically caches and replicates content to its edge locations, resulting in faster delivery and lower latency for users
worldwide. This solution is highly effective in optimizing performance while keeping costs under control because CloudFront charges are
based on actual data transfer and requests, and the pay-as-you-go pricing model ensures that you only pay for what you use.
upvoted 3 times

  TariqKipkemei 1 month, 4 weeks ago


Keywords:
Global, Reduce latency, S3, Static Website, Cost effective = Amazon CloudFront
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: C
Keyword "Amazon CloudFront" (C).
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the right answer for this.
upvoted 1 times

  TienHuynh 3 months, 1 week ago


Selected Answer: C
key words:
-around the world
-decrease latency
-most cost-effective
answer is C
upvoted 1 times
  cheese929 5 months, 3 weeks ago
Selected Answer: C
C is the most cost effective.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and
high transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket will cache the static
website's content at edge locations around the world, decreasing latency for users accessing the website.

This solution is also cost-effective as it only charges for the data transfer and requests made by users accessing the content from the
CloudFront edge locations. Additionally, this solution provides scalability and reliability benefits as CloudFront can automatically scale to
handle increased demand and provide high availability for the website.
upvoted 1 times

  test_devops_aws 6 months, 2 weeks ago


Selected Answer: C
Cloud front
upvoted 1 times

  bilel500 6 months, 4 weeks ago


Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML,
CSS, JavaScript, and images. It does this by placing cache servers in locations around the world, which store copies of the content and
serve it to users from the location that is nearest to them.
upvoted 1 times

  Bhawesh 7 months, 1 week ago


My vote is: option B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3.
Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in
Amazon S3.
This question has 2 requirements:
1. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other
internal applications.
2. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
upvoted 1 times

  Ello2023 7 months, 3 weeks ago


Selected Answer: C
C. S3 accelerator is best for uploads to S3, whereas Cloudfront is for content delivery. S3 static website can be the origin which is
distributed to Cloudfront and routed by Route 53.
upvoted 3 times

  AndyMartinez 8 months ago


Selected Answer: C
Option C.
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Option C. Adding an Amazon CloudFront distribution in front of the S3 bucket and editing the Route 53 entries to point to the CloudFront
distribution would meet the requirements most cost-effectively. CloudFront is a content delivery network (CDN) that speeds up the
delivery of static and dynamic web content by distributing it across a global network of edge locations. When a user accesses the website,
CloudFront will automatically route the request to the edge location that provides the lowest latency, reducing the time it takes for the
content to be delivered to the user. This solution also allows for easy integration with S3 and Route 53, and provides additional benefits
such as DDoS protection and support for custom SSL certificates.
upvoted 2 times

  pazabal 9 months, 2 weeks ago


Selected Answer: C
decrease latency and most cost-effective = cloudfront in front of S3 bucket (content can be served closer to the user, reducing latency).
Replicating S3 bucket and Global accelerator would also decrease latency but would be less cost-effective. Transfer accelerator wouldn't
decrease latency since it's not for delivering content, but for transfering it
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
The correct answer is C: Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the
CloudFront distribution.
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML,
CSS, JavaScript, and images. It does this by placing cache servers in locations around the world, which store copies of the content and
serve it to users from the location that is nearest to them.

To decrease latency for users who access the static website hosted on Amazon S3, you can add an Amazon CloudFront distribution in front
of the S3 bucket and edit the Route 53 entries to point to the CloudFront distribution. This will allow CloudFront to cache the content of
the website at locations around the world, which will reduce the time it takes for users to access the website by serving it from the location
that is nearest to them.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Answer A, (WRONG) - Replicating the S3 bucket that contains the website to all AWS Regions and adding Route 53 geolocation routing
entries would be more expensive than using CloudFront, as it would require you to pay for the additional storage and data transfer
costs associated with replicating the bucket to multiple Regions.

Answer B, (WRONG) - Provisioning accelerators in AWS Global Accelerator and associating the supplied IP addresses with the S3 bucket
would also be more expensive than using CloudFront, as it would require you to pay for the additional cost of the accelerators.

Answer D, (WRONG) - Enabling S3 Transfer Acceleration on the bucket and editing the Route 53 entries to point to the new endpoint
would not reduce latency for users who access the website from around the world, as it only speeds up the transfer of large files over
the public internet and does not have cache servers in multiple locations around the world.
upvoted 5 times
Question #39 Topic 1

A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that
contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every
day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage
performance is the problem.
Which solution addresses this performance issue?

A. Change the storage type to Provisioned IOPS SSD.

B. Change the DB instance to a memory optimized instance class.

C. Change the DB instance to a burstable performance instance class.

D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

Correct Answer: B

Community vote distribution


A (96%)
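The storage change favored by the community (option A) is a single ModifyDBInstance call. Below is a minimal boto3 sketch, assuming a hypothetical instance identifier and an IOPS figure that would need to be sized from the real write workload.

```python
import boto3

rds = boto3.client("rds")

# Switch the existing DB instance from General Purpose SSD to Provisioned
# IOPS SSD. The identifier and IOPS value are hypothetical; choose an IOPS
# figure appropriate for the observed insert/update throughput.
rds.modify_db_instance(
    DBInstanceIdentifier="items-mysql-prod",
    StorageType="io1",
    Iops=20000,
    ApplyImmediately=True,
)
```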

  pazabal Highly Voted  9 months, 2 weeks ago


Selected Answer: A
A: Made for high levels of I/O opps for consistent, predictable performance.
B: Can improve performance of insert opps, but it's a storage performance rather than processing power problem
C: for moderate CPU usage
D: for scale read-only replicas and doesn't improve performance of insert opps on the primary DB instance
upvoted 20 times

  cookieMr Highly Voted  3 months, 2 weeks ago


Selected Answer: A
Option B (changing the DB instance to a memory optimized instance class) focuses on improving memory capacity but may not directly
address the storage performance issue.

Option C (changing the DB instance to a burstable performance instance class) is suitable for workloads with varying usage patterns and
burstable performance needs, but it may not provide consistent and predictable performance for heavy write workloads.

Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is a solution for high availability and read
scaling but does not directly address the storage performance issue.

Therefore, option A is the most appropriate solution to address the performance issue by leveraging Provisioned IOPS SSD storage type,
which provides consistent and predictable I/O performance for the Amazon RDS for MySQL database.
upvoted 11 times

  David_Ang Most Recent  4 days, 16 hours ago


Selected Answer: A
yeah "A" is correct is the most suitable option for this scenario, because you need to improve the speed of the reading and writing of the
storage system.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: A
The best solution would be to change the storage type to Provisioned IOPS SSD. This allows you to specify a higher level of IOPS
provisioned for your workload's needs.Therefore, switching to Provisioned IOPS SSD storage is the most direct way to resolve the storage
performance bottleneck causing the slow insert times. The ability to provision high IOPS makes it the best solution for high throughput
transactional workloads like this one.
upvoted 3 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: A
Provisioned IOPS SSD it is.
upvoted 1 times

  Suvam90 2 months, 2 weeks ago


Option A is correct
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Keyword "Provisioned IOPS SSD" https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html
upvoted 2 times

  Monu11394 2 months, 2 weeks ago


who decides what the correct answer is?
the question clearly says "company determined storage issue"
upvoted 2 times

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  Jayendra0609 2 months, 3 weeks ago


Selected Answer: B
Memory-optimized instances are helpful for efficient performance in the case of workloads that handle huge data sets in memory.
https://www.projectpro.io/article/aws-rds-instance-types/749
upvoted 1 times

  omerap12 3 months, 1 week ago


Selected Answer: A
need I/O
upvoted 1 times

  Mehkay 3 months, 1 week ago


A makes sense
upvoted 1 times

  Sunil0320 4 months, 2 weeks ago


Selected Answer: A
A is correct answer
upvoted 1 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: A
General purpose SSD is not optimal for database that requires high performance. Answer A is correct
upvoted 1 times

  nyschoi 4 months, 2 weeks ago


A
Option B (changing the DB instance to a memory optimized instance class) focuses on increasing the available memory for the database,
but it may not directly address the storage performance issue.

Option C (changing the DB instance to a burstable performance instance class) is not the optimal choice since burstable performance
instances are designed for workloads with bursty traffic patterns, and they may not provide the sustained performance needed for heavy
update operations.

Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is primarily used for high availability and read
scaling rather than addressing storage performance issues.
upvoted 2 times

  Abrar2022 4 months, 2 weeks ago


They're using General purpose SSD?? Provisioned IOPS SSD willl fix the performance issue described.
upvoted 1 times

  acuaws 5 months, 2 weeks ago


Selected Answer: A
How is the answer B? A is blatantly the correct answer... Provisioned IOPS SSD is the obviously faster choice.
upvoted 2 times
Question #40 Topic 1

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A
solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional
infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a
script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to
Amazon S3 Glacier after 14 days.

C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon
Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.

D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14
days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is
14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Correct Answer: A

Community vote distribution


A (83%) D (17%)
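A minimal boto3 sketch of option A, with hypothetical stream, role, and bucket names: a Direct PUT Firehose delivery stream lands the alerts in S3, and a lifecycle rule moves objects to Glacier after 14 days. The IAM role is assumed to already exist with permission to write to the bucket.

```python
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Delivery stream that lands the edge-device alerts in S3. The stream name,
# role ARN, and bucket are hypothetical placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="edge-alerts",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::example-alerts-bucket",
    },
)

# Keep 14 days of alerts in S3 Standard, then archive to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-alerts-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-14-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```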

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: A
Definitely A, it's the most operationally efficient compared to D, which requires a lot of code and infrastructure to maintain. A is mostly
managed (firehose is fully managed and S3 lifecycles are also managed)
upvoted 32 times

  Kelvin_ke 9 months, 4 weeks ago


what about the 30 days minimum requirement to transition to S3 glacier?
upvoted 8 times

  Abrar2022 4 months, 2 weeks ago


GLACIER IS 7 DAYS REQUIREMENT NOT 30
upvoted 1 times

  caffee 5 months, 3 weeks ago


This constraint is related to moving from Standard to IA/IA-One Zone only. Nothing to do with Glacier
upvoted 1 times

  studis 9 months, 2 weeks ago


You can directly migrate from S3 standard to glacier without waiting
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 3 times

  ErnShm 4 months, 1 week ago


the current article doesn't enable the current option, minimum days are 30
upvoted 1 times

  Suvam90 1 month, 2 weeks ago


No, that's not correct. We can transition the storage class at day 0 as well using a lifecycle policy; I implemented this in my project. The 30 days is just an example.
upvoted 2 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Only A makes sense operationally.
If you think D, just consider what is needed to move the messages from SQS to S3... you are polling 14 TB daily to take out 1 TB... that's not
operationally efficient at all.
upvoted 12 times
  David_Ang Most Recent  4 days, 16 hours ago
Selected Answer: A
"A" is simply correct because kinesis firehouse is made for this, SQS standard is not going to support 500 million alerts 2KB each (1 TB) this
service is made for requests that are lighter.
upvoted 1 times

  Ak9kumar 6 days, 19 hours ago


I picked A. Appeared to be right answer.
upvoted 1 times

  chandu7024 2 weeks ago


Should be A
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: A
The MOST operationally efficient option is A.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Keyword "Amazon S3 Glacier" (A).
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option A is the right answer for this.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: A
B suggests launching EC2 instances to ingest and store the alerts, which introduces additional infrastructure management overhead and
may not be as cost-effective and scalable as using managed services like Kinesis Data Firehose and S3.

C involves delivering the alerts to an Amazon OpenSearch Service cluster and manually managing snapshots and data deletion. This
introduces additional complexity and manual overhead compared to the simpler solution of using Kinesis Data Firehose and S3.

D suggests using SQS to ingest the alerts, but it does not provide the same level of data persistence and durability as storing the alerts
directly in S3. Additionally, it requires manual processing and copying of messages to S3, which adds operational complexity.

Therefore, A provides the most operationally efficient solution that meets the company's requirements by leveraging Kinesis Data Firehose
to ingest the alerts, storing them in an S3 bucket, and using an S3 Lifecycle configuration to transition data to S3 Glacier for long-term
archival, all without the need for managing additional infrastructure.
upvoted 5 times

  Abrar2022 4 months, 2 weeks ago


Focus on keywords: Amazon Kinesis Data Firehose delivery stream to ingest the alerts. S3 Lifecycle configuration to transition data to
Amazon S3 Glacier after 14 days.
upvoted 2 times

  XenonDemon 6 months ago


Selected Answer: D
D is the correct answer. Check the link below
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
Amazon Kinesis Data Firehose is a fully managed service that can capture, transform, and deliver streaming data into storage systems or
analytics tools, making it an ideal solution for ingesting and storing status alerts. In this solution, the Kinesis Data Firehose delivery stream
ingests the alerts and delivers them to an S3 bucket, which is a cost-effective storage solution. An S3 Lifecycle configuration is set up to
transition the data to Amazon S3 Glacier after 14 days to minimize storage costs.
upvoted 2 times

  bilel500 6 months, 4 weeks ago


Selected Answer: A
The correct answer is A: Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose
stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14
days.
upvoted 1 times

  Ello2023 7 months, 3 weeks ago


This question was tricky, but after some reading my choice went from D to A, which is the operationally efficient option.
upvoted 1 times
  jannymacna 8 months, 3 weeks ago
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

This solution meets the company's requirements to minimize costs and not manage additional infrastructure while providing high
availability. Kinesis Data Firehose is a fully managed service that can automatically ingest streaming data and load it into Amazon S3,
Amazon Redshift, or Amazon Elasticsearch Service. By configuring the Firehose to deliver the alerts to an S3 bucket, the company can take
advantage of S3's high durability and availability. An S3 Lifecycle configuration can be set up to automatically transition data that is older
than 14 days to Amazon S3 Glacier, an extremely low-cost storage class for infrequently accessed data.
upvoted 2 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: A
Creating an Amazon Kinesis Data Firehose delivery stream to ingest the alerts and configuring it to deliver the alerts to an Amazon S3
bucket is the most operationally efficient solution that meets the requirements. Kinesis Data Firehose is a fully managed service for
delivering real-time streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk. It can automatically scale to
handle the volume and throughput of the alerts, and it can also batch, compress, and encrypt the data as it is delivered to S3. By
configuring a Lifecycle policy on the S3 bucket, the company can automatically transition data to Amazon S3 Glacier after 14 days, allowing
the company to store the data for longer periods of time at a lower cost. This solution requires minimal management and provides high
availability, making it the most operationally efficient choice.
upvoted 2 times

  career360guru 9 months, 1 week ago


Selected Answer: D
A is not the right answer, as Kinesis Firehose is not the right service to ingest small 2 KB events; the minimum message size for Kinesis Firehose is
5 MB. Kinesis Data Streams is the right service for this, but as that is not given as an option here, SQS with 14-day retention is the right answer.
upvoted 2 times

  secdaddy 9 months ago


"A record can be as large as 1,000 KB." and the diagrams shown in this URL support A as the answer.
https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
upvoted 2 times

  career360guru 9 months, 1 week ago


Option A:
Thinking about this more, with low operational overhead as the primary requirement, option A will be the better option, but it will have higher
latency compared to using Kinesis Data Streams.
upvoted 1 times
Question #41 Topic 1

A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2
instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the
data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to
improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple
Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send
events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the
rule's target. Create a second EventBridge (Cloud Watch Events) rule to send events when the upload to the S3 bucket is complete. Configure
an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.

D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service
(Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS)
topic when the upload to the S3 bucket is complete.

Correct Answer: B

Community vote distribution


B (75%) A (25%)
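The notification half of option B can be wired up with one API call. Here is a minimal boto3 sketch, assuming the AppFlow flow (or whatever delivers the files) already writes to the bucket, and using hypothetical bucket and topic names; the SNS topic's access policy must already allow S3 to publish to it.

```python
import boto3

s3 = boto3.client("s3")

# Have S3 publish an event to an SNS topic whenever an upload completes,
# so the notification no longer depends on the EC2 instance that received
# the data. Bucket name and topic ARN are hypothetical placeholders.
s3.put_bucket_notification_configuration(
    Bucket="example-analysis-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-complete",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```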

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: B
This question just screams AppFlow (SaaS integration)
https://aws.amazon.com/appflow/
upvoted 21 times

  Six_Fingered_Jose 11 months, 1 week ago


configuring Auto-Scaling also takes time when compared to AppFlow,
in AWS's words "in just a few clicks"
> Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service
(SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just
a few clicks
upvoted 10 times

  jdr75 Highly Voted  6 months ago


Selected Answer: A
It says "LEAST operational overhead" (ie do it in a way it's the less work for me).
If you know a little Amazon AppFlow (see the some videos) you'll see you'll need time to configure and test it, and at the end cope with the
errors during the extraction and load the info to the target.
The customer in the example ALREADY has some EC2 that do the work, the only problem is the performance, that WILL be improved
scaling out and adding a queue (SNS) to decouple the work of notify the user.
The operational load of doing this is LESS that configuring AppFlow.
upvoted 14 times

  Techi47 Most Recent  1 week, 3 days ago


Selected Answer: A
While option B utilizes managed services and can be a valid approach, it's important to note that Amazon AppFlow is primarily designed
for data integration and synchronization between various SaaS applications and AWS services. It may introduce an additional layer of
complexity compared to directly handling the uploads with EC2 instances.

Ultimately, the choice between Option A and Option B depends on specific factors such as the existing architecture, the nature of data
transfers, and any potential advantages offered by using Amazon AppFlow for data integration.

If the primary concern is to improve performance for data uploads and user notifications without introducing new services, Option A (Auto
Scaling group with S3 event notifications) would likely be the simpler and more operationally efficient choice. However, if data integration
between SaaS sources and the S3 bucket is a critical aspect of the application, Option B might be a more suitable approach.
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: B
SaaS Integration = Amazon AppFlow
upvoted 1 times
  hsinchang 2 months ago
Selected Answer: B
SaaS -> AppFlow
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer.
upvoted 1 times

  cookieMr 3 months, 2 weeks ago


Selected Answer: B
Option A suggests using an Auto Scaling group to scale out EC2 instances, but it does not address the potential bottleneck of slow
application performance and the notification process.

Option C involves using Amazon EventBridge (CloudWatch Events) rules for data output and S3 uploads, but it introduces additional
complexity with separate rules and does not specifically address the slow application performance.

Option D suggests containerizing the application and using Amazon Elastic Container Service (Amazon ECS) with CloudWatch Container
Insights, which may involve more operational overhead and setup compared to the simpler solution provided by Amazon AppFlow.

Therefore, option B offers the most streamlined solution with the least operational overhead by utilizing Amazon AppFlow for data
transfer, configuring S3 event notifications for upload completion, and leveraging Amazon SNS for notifications without requiring
additional infrastructure management.
upvoted 4 times

  Abrar2022 4 months, 2 weeks ago


So true, This question just screams AppFlow (Saas integration)
upvoted 1 times

  Rahulbit34 5 months ago


With Amazon AppFlow you can automate bi-directional data flows between SaaS applications and AWS services in just a few clicks. So B is the right
answer.
upvoted 1 times

  cheese929 5 months, 3 weeks ago


Selected Answer: B
Amazon AppFlow is a fully-managed integration service that enables you to securely exchange data between software as a service (SaaS)
applications, such as Salesforce, and AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift.
The use of Appflow helps to remove the ec2 as the middle layer which slows down the process of data transmission and introduce an
additional variable.
Appflow is also a fully managed AWS service, thus reducing the operational overhead.

https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
upvoted 2 times

  piavik 5 months, 4 weeks ago


Selected Answer: B
Keywords:
SaaS --> AppFlow
Operational overhead (B) vs configuration overhead (A)
upvoted 3 times

  xalien 6 months ago


Selected Answer: B
AppFlow is for SaaS integrations:
https://aws.amazon.com/appflow/
upvoted 2 times

  linux_admin 6 months ago


Selected Answer: B
Amazon AppFlow is a fully managed integration service that can help transfer data between SaaS applications and S3 buckets, making it
an ideal solution for data collection from multiple sources. By using Amazon AppFlow, the company can remove the burden of creating
and maintaining custom integrations, allowing them to focus on the core of their application. Additionally, by using S3 event notifications
to trigger an Amazon SNS topic, the company can improve notification delivery times by removing the dependency on the EC2 instances.
upvoted 3 times

  bullrem 8 months, 1 week ago


Selected Answer: A
This solution allows the EC2 instances to scale out as needed to handle the data processing and uploading, which will improve
performance. Additionally, by configuring an S3 event notification to send a notification to an SNS topic when the upload is complete, the
company can still receive the necessary notifications, but it eliminates the need for the same EC2 instance that is processing and
uploading the data to also send the notifications, which further improves performance. This solution has less operational overhead as it
only requires configuring S3 event notifications, SNS topic and AutoScaling group.
upvoted 4 times
  SilentMilli 8 months, 4 weeks ago
Selected Answer: B
Amazon AppFlow is a fully managed integration service that enables the secure and easy transfer of data between popular software-as-a-
service (SaaS) applications and AWS services. By using AppFlow, the company can easily set up integrations between SaaS sources and the
S3 bucket, and the service will automatically handle the data transfer and transformation. The S3 event notification can then be used to
send a notification to the user when the upload is complete, without the need to manage additional infrastructure or code. This solution
would provide the required performance improvement and require minimal management, making it the most operationally efficient
choice.
upvoted 4 times

  techhb 9 months ago


Selected Answer: B
Appflow only
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
To meet the requirements with the least operational overhead, the company could consider the following solution:

Option B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event
notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

Amazon AppFlow is a fully managed service that enables you to easily and securely transfer data between your SaaS applications and
Amazon S3. By creating an AppFlow flow to transfer the data between the SaaS sources and the S3 bucket, the company can improve the
performance of the application by offloading the data transfer process to a managed service.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


***INCORRECT ANSWERS***

Option A is incorrect because creating an Auto Scaling group and configuring an S3 event notification does not address the root cause
of the slow application performance, which is related to the data transfer process.

Option C is incorrect because creating multiple EventBridge (CloudWatch Events) rules and configuring them to send events to an SNS
topic is more complex and involves additional operational overhead.

Option D is incorrect because creating a Docker container and hosting it on ECS does not address the root cause of the slow application
performance, which is related to the data transfer process.
upvoted 6 times
Question #42 Topic 1

A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several
subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images
from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

A. Launch the NAT gateway in each Availability Zone.

B. Replace the NAT gateway with a NAT instance.

C. Deploy a gateway VPC endpoint for Amazon S3.

D. Provision an EC2 Dedicated Host to run the EC2 instances.

Correct Answer: C

Community vote distribution


C (98%)
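A minimal boto3 sketch of option C, with hypothetical VPC, route table, and Region values. Attaching the endpoint to the private subnets' route tables is what diverts S3 traffic away from the NAT gateway.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for S3 and attach it to the route tables used by
# the private subnets, so S3 traffic bypasses the NAT gateway entirely.
# The VPC ID, route table IDs, and Region are hypothetical placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111bbbb2222c", "rtb-0ddd3333eeee4444f"],
)
```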

  SilentMilli Highly Voted  8 months, 4 weeks ago


Selected Answer: C
Deploying a gateway VPC endpoint for Amazon S3 is the most cost-effective way for the company to avoid Regional data transfer charges.
A gateway VPC endpoint is a network gateway that allows communication between instances in a VPC and a service, such as Amazon S3,
without requiring an Internet gateway or a NAT device. Data transfer between the VPC and the service through a gateway VPC endpoint is
free of charge, while data transfer between the VPC and the Internet through an Internet gateway or NAT device is subject to data transfer
charges. By using a gateway VPC endpoint, the company can reduce its data transfer costs by eliminating the need to transfer data
through the NAT gateway to access Amazon S3. This option would provide the required connectivity to Amazon S3 and minimize data
transfer charges.
upvoted 38 times

  johne42 1 month, 1 week ago


https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 2 times

  Bmarodi 4 months ago


Very good explanation!
upvoted 5 times

  srinivasmn Most Recent  2 weeks ago


Answer is C: An S3 VPC endpoint provides a way for an S3 request to be routed through to the Amazon S3 service, without having to
connect a subnet to an internet gateway. The S3 VPC endpoint is what's known as a gateway endpoint.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
the EC2 instances are downloading and uploading images to S3, configuring a gateway VPC endpoint will allow them to access S3 without
crossing Availability Zones or regions, eliminating regional data transfer charges
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: C
Gateway VPC endpoints provide reliable connectivity to Amazon S3 without requiring an internet gateway or a NAT device for your VPC.
upvoted 2 times

  miki111 2 months, 2 weeks ago


Option C is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


By deploying a gateway VPC endpoint for S3, the company can establish a direct connection between their VPC and S3 without going
through the internet gateway or NAT gateway. This enables traffic between the EC2 and S3 to stay within the Amazon network, avoiding
Regional data transfer charges.

A suggests launching the NAT gateway in each AZ. While this can help with availability and redundancy, it does not address the issue of
data transfer charges, as the traffic would still traverse the NAT gateways and incur data transfer fees.

B suggests replacing the NAT gateway with a NAT instance. However, this solution still involves transferring data between the instances
and S3 through the NAT instance, which would result in data transfer charges.
D suggests provisioning an EC2 Dedicated Host to run the EC2. While this can provide dedicated hardware for the instances, it does not
directly address the issue of data transfer charges.
upvoted 2 times
  Bmarodi 4 months ago
Selected Answer: C
Option C is the answer.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: C
A gateway VPC endpoint is a fully managed service that allows connectivity from a VPC to AWS services such as S3 without the need for a
NAT gateway or a public internet gateway. By deploying a Gateway VPC endpoint for Amazon S3, the company can ensure that all S3 traffic
remains within the VPC and does not cross the regional boundary. This eliminates regional data transfer charges and provides a more
cost-effective solution for the company.
upvoted 1 times

  AndyMartinez 8 months ago


Selected Answer: C
C - gateway VPC endpoint.
upvoted 1 times

  secdaddy 9 months ago


'Regional' data transfer isn't clear but I think we have to assume this means the traffic stays in the region.
The two options that seem possible are NAT gateway per AZ vs privatelink gateway endpoints per AZ.
privatelink/endpoints do have costs (url below)
privatelink endpoint / LB costs look lower than NAT gateway costs
privatelink doesn't incur inter-AZ data transfer charges (if in the same region) as NAT gateways do which goes towards the key
requirement stated

good writeup here : https://www.vantage.sh/blog/nat-gateway-vpc-endpoint-savings

https://aws.amazon.com/privatelink/pricing/
https://aws.amazon.com/vpc/pricing/
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: C
C, privately connects vpc to aws services via privatelink. Doesn't require nat gateway, vpn or direct connect. Data doesn't leave amazon
network so there are no data transfer charges
A, used to enable instances in private subnets to connect to internet or aws services, data transfered is charged
B, similar to nat gateway
D, not related to data transfer
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
Option C (correct). Deploy a gateway VPC endpoint for Amazon S3.

A VPC endpoint for Amazon S3 allows you to access Amazon S3 resources within your VPC without using the Internet or a NAT gateway.
This means that data transfer between your EC2 instances and S3 will not incur Regional data transfer charges.

Option A (wrong), launching a NAT gateway in each Availability Zone, would not avoid data transfer charges because the NAT gateway
would still be used to access S3.

Option B (wrong), replacing the NAT gateway with a NAT instance, would also not avoid data transfer charges as it would still require using
the Internet or a NAT gateway to access S3.

Option D (wrong), provisioning an EC2 Dedicated Host, would not affect data transfer charges as it only pertains to the physical host that
the EC2 instances are running on and not the data transfer charges for accessing.
upvoted 3 times

  Morinator 9 months, 2 weeks ago


Selected Answer: C
VPC endpoint
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  shyam_yadav 10 months ago


The option is C because gateway endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a
NAT device for your VPC. Gateway endpoints do not enable AWS PrivateLink, and there is no additional charge for using gateway endpoints.
upvoted 2 times
  Shasha1 10 months, 1 week ago
C is correct
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


C is Correct
upvoted 1 times
Question #43 Topic 1

A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application
has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that
allows for both timely backups to Amazon S3 and with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?

A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.

B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.

C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.

D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.

Correct Answer: B

Community vote distribution


B (98%)

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: B
A: VPN also goes through the internet and uses the bandwidth
C: daily Snowball transfer is not really a long-term solution when it comes to cost and efficiency
D: S3 limits don't change anything here

So the answer is B
upvoted 26 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
Option B (correct). Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.

AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your on-premises data center
to AWS. This connection bypasses the public Internet and can provide more reliable, lower-latency communication between your on-
premises application and Amazon S3. By directing backup traffic through the AWS Direct Connect connection, you can minimize the
impact on your internet bandwidth and ensure timely backups to S3.
upvoted 15 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A (wrong), establishing AWS VPN connections and proxying all traffic through a VPC gateway endpoint, would not necessarily
minimize the impact on internet bandwidth as it would still utilize the public Internet to access S3.

Option C (wrong), using AWS Snowball devices, would not address the issue of internet bandwidth limitations as the data would still
need to be transferred over the Internet to and from the Snowball devices.

Option D (wrong), submitting a support ticket to request the removal of S3 service limits, would not address the issue of internet
bandwidth limitations and would not ensure timely backups to S3.
upvoted 5 times

  Bofi 7 months, 1 week ago


Option C is wrong, but so is your reason. You do not need the internet to load data onto Snowball devices. If you are using a Snowcone, for
example, you connect it to your on-premises equipment directly for loading, and AWS will load the data into the cloud. However, it is not effective to
do that every day, hence option B is the better choice.
upvoted 1 times

  Buruguduystunstugudunstuy 7 months, 1 week ago


You're right Option B is the correct answer. I answered Option B as the correct answer above.
upvoted 1 times

  srinivasmn Most Recent  1 week, 6 days ago


The right option is C. With AWS Direct Connect the network does not fluctuate and provides a consistent experience, while with AWS VPN the
connection goes over shared, public networks, so the bandwidth and latency fluctuate. Hence Direct Connect is a better choice than a virtual
(VPN) connection.
upvoted 1 times

  srinivasmn 1 week, 6 days ago


Typo correction to my above comment: the right option is B.
upvoted 1 times

  chandu7024 2 weeks ago


Option B is correct. The reason is that Direct Connect will not use the internet, but it will take a good amount of time to establish the connectivity.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
AWS Direct Connect is a dedicated network connection between your on-premises network and AWS. This provides a private, high-
bandwidth connection that is not subject to the same internet bandwidth limitations as traditional internet connections. This will allow for
timely backups to Amazon S3 without impacting internet connectivity for internal users.
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: B
AWS Direct Connect cloud service is the shortest path to your AWS resources. While in transit, your network traffic remains on the AWS
global network and never touches the public internet. This reduces the chance of hitting bottlenecks or unexpected increases in latency.

https://aws.amazon.com/directconnect/
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: B
This is long-term and provides solution for internet speed as well
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
AWS Direct Connect provides a dedicated network connection between on-premises and AWS, bypassing public internet. By establishing
this connection for backup traffic, company can ensure fast and reliable transfers between their on-premises and S3 without impacting
their internet connectivity for internal users. This provides a dedicated and high-speed connection that is well-suited for data transfers
and minimizes impact on internet bandwidth limitations.

While option A can provide a secure connection, it still utilizes internet bandwidth for data transfer and may not effectively address issue
of limited bandwidth.

While option C can work for occasional large data transfers, it may not be suitable for frequent backups and can introduce additional
operational overhead.

D, submitting a support ticket to request removal of S3 service limits, does not address issue of internet bandwidth limitations and is not a
relevant solution for given requirements.
upvoted 2 times

  emanuelmelis 3 months, 1 week ago


Cookie, I always see your comments! You're a legend!
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: B
Option B meets these requirements.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


This question can confuse you because it mentions the internet, while Direct Connect bypasses the internet and uses dedicated network connections. So
don't be fooled - the keyword in the question is "minimal impact on internet connectivity for internal users".
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: B
AWS Direct Connect is a dedicated network connection that provides a more reliable and consistent network experience compared to
internet-based connections. By establishing a new Direct Connect connection, the company can dedicate a portion of its network
bandwidth to transferring data to Amazon S3, ensuring timely backups while minimizing the impact on internal users.
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: B
Establishing a new AWS Direct Connect connection and directing backup traffic through this new connection would meet these
requirements. AWS Direct Connect is a network service that provides dedicated network connections from on-premises data centers to
AWS. It allows the company to bypass the public Internet and establish a direct connection to AWS, providing a more reliable and lower-
latency connection for data transfer. By directing backup traffic through the Direct Connect connection, the company can reduce the
impact on internet connectivity for internal users and improve the speed of backups to Amazon S3. This solution would provide a long-
term solution for timely backups with minimal impact on internet connectivity.
upvoted 4 times
  thensanity 8 months, 4 weeks ago
Only B and C are the correct choices here, and C is more costly than B, so B is the correct answer
upvoted 2 times

  QueTeddyJR 9 months, 2 weeks ago


Selected Answer: D
I thought Direct Connect was (or is) used to connect directly to AWS from on-premises machines, and USERS are mentioned, which means
they might have users who are not on premises and need connections.
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: B
B, low-latency, dedicated network connections between the on-premises data center and the AWS cloud. Directing backup traffic through Direct Connect would increase bandwidth and lower latency.
A, doesn't specifically address the needs of the backup traffic.
C, useful for transferring large amounts of data in short periods of time, not for ongoing backups.
D, doesn't directly address the bandwidth constraints.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times
Question #44 Topic 1

A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Enable versioning on the S3 bucket.

B. Enable MFA Delete on the S3 bucket.

C. Create a bucket policy on the S3 bucket.

D. Enable default encryption on the S3 bucket.

E. Create a lifecycle policy for the objects in the S3 bucket.

Correct Answer: BD

Community vote distribution


AB (100%)

  Uhrien Highly Voted  11 months, 3 weeks ago


Selected Answer: AB
The correct solution is AB, as you can see here:

https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

It states the following:

To prevent or mitigate future accidental deletions, consider the following features:

Enable versioning to keep historical versions of an object.


Enable Cross-Region Replication of objects.
Enable MFA delete to require multi-factor authentication (MFA) when deleting an object version.
upvoted 48 times

  paniya93 Most Recent  1 day, 1 hour ago


The correct solution is AB, as you can see here:

https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

It states the following:

To prevent or mitigate future accidental deletions, consider the following features:

Enable versioning to keep historical versions of an object.


Enable Cross-Region Replication of objects.
Enable MFA delete to require multi-factor authentication (MFA) when deleting an object version.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: AB
Enable versioning on the S3 bucket. This will create a history of all object versions in the bucket, including deleted objects. This way, even if
an object is deleted, it can be restored from a previous version.
Enable MFA Delete on the S3 bucket. This will require users to enter their MFA token in addition to their password in order to delete
objects from the bucket. This adds an extra layer of protection against accidental deletion.
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: AB
Enable versioning to ensure restore is possible, Enable two step verification of file deletion using MFA delete to ensure unwanted persons
are unable to perform this action.
upvoted 1 times

  Bill__ 2 months ago


AB is the correct answer.
Admin please don't make us fail the exam 😂
upvoted 4 times

  miki111 2 months, 2 weeks ago


Option AB is the right answer.
upvoted 2 times
  KFCR 2 months, 3 weeks ago
Selected Answer: AB
To prevent or mitigate future accidental deletions, consider the following features:

Enable versioning to keep historical versions of an object.


Enable Cross-Region Replication of objects.
Enable MFA delete to require multi-factor authentication (MFA) when deleting an object version.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AB
Enabling versioning on S3 ensures multiple versions of an object are stored in the bucket. When an object is updated or deleted, a new version is created, preserving the previous version.

Enabling MFA Delete adds an additional layer of protection by requiring an MFA device to be present when attempting to delete objects. This helps prevent accidental or unauthorized deletions by requiring an extra level of authentication.

C. Creating a bucket policy on S3 is more focused on defining access control and permissions for the bucket and its objects, rather than protecting against accidental deletion.

D. Enabling default encryption on S3 ensures that any new objects uploaded to the bucket are automatically encrypted. While encryption is important for data security, it does not directly address accidental deletion.

E. Creating a lifecycle policy for objects in S3 allows for automated management of objects based on predefined rules. While this can help with data retention and storage cost optimization, it does not directly protect against accidental deletion.
upvoted 4 times
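To make the two protections concrete, here is a minimal boto3 sketch. The bucket name and the MFA device serial/token are placeholders, and MFA Delete can only be enabled through the API/CLI using the bucket owner's (root) credentials:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "critical-data-bucket"  # placeholder bucket name

    # Enable versioning so every overwrite or delete keeps a recoverable prior version.
    s3.put_bucket_versioning(
        Bucket=BUCKET,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Enable MFA Delete as well; the MFA value is "<device-serial-arn> <current-token>"
    # and must be supplied by the root user of the bucket-owning account.
    s3.put_bucket_versioning(
        Bucket=BUCKET,
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )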

  Bmarodi 4 months ago


Selected Answer: AB
options A & B meet these requirements, hence A and B are the right answers.
upvoted 1 times

  Chaudhry1997 4 months, 1 week ago


Selected Answer: AB
The correct solution is AB
upvoted 1 times

  pbpally 4 months, 4 weeks ago


Admin out here trying to get people to fail lol.
A and B, folks. If somehow this presents as a question needing only one answer, MFA Delete is your go-to.
upvoted 1 times

  acuaws 5 months, 2 weeks ago


Selected Answer: AB
Had this question on TD exam... A and B. Period.
upvoted 1 times

  darn 5 months, 2 weeks ago


A and B seem like the good ones to me, but couldn't I create a policy to block all deletes and allow Put/Get, etc.?
upvoted 1 times

  cheese929 5 months, 3 weeks ago


Selected Answer: AB
A+B will solve the problem
upvoted 1 times

  piavik 5 months, 4 weeks ago


Selected Answer: AB
Policies and encryption do not affect delete protection
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: AB
A. Enable versioning on the S3 bucket. Versioning allows multiple versions of an object to be stored in the same bucket. When versioning
is enabled, every object uploaded to the bucket is automatically assigned a unique version ID. This provides protection against accidental
deletion or modification of objects.

B. Enable MFA Delete on the S3 bucket. MFA Delete requires the use of a multi-factor authentication (MFA) device to permanently delete
an object or suspend versioning on a bucket. This provides an additional layer of protection against accidental or malicious deletion of
objects.
upvoted 1 times

  GalileoEC2 7 months ago


There is no need to add default S3 encryption; this is already enabled.
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in
Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no
impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is
available in AWS CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response
header in the AWS Command Line Interface and AWS SDKs
upvoted 1 times
Question #45 Topic 1

A company has a data ingestion workflow that consists of the following:


• An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
• An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the
Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)

A. Deploy the Lambda function in multiple Availability Zones.

B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.

C. Increase the CPU and memory that are allocated to the Lambda function.

D. Increase provisioned throughput for the Lambda function.

E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.

Correct Answer: BE

Community vote distribution


BE (97%)

  Incognito013 Highly Voted  11 months, 2 weeks ago


Options A, C, and D are out, since Lambda is a fully managed service that provides high availability and scalability on its own

Answers are B and E


upvoted 20 times

  Oluseun 6 months, 3 weeks ago


There are times you do have to increase lambda memory for improved performance though. But not in this case.
upvoted 4 times

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: BE
BE so that the lambda function reads the SQS queue and nothing gets lost
upvoted 8 times

  Abdou1604 Most Recent  1 month, 3 weeks ago


B and E, the fan-out model; SQS will help to retry the work and handle delayed processing
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: BE
B) Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.

E) Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: BE
BE is most logical answer.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option BE is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: BE
A. Deploying the Lambda function in multiple Availability Zones improves availability and fault tolerance but does not guarantee ingestion
of all data.

C. Increasing CPU and memory allocated to the Lambda function may improve its performance but does not address the issue of
connectivity failures.

D. Increasing provisioned throughput for the Lambda function is not applicable as Lambda functions are automatically scaled by AWS and
provisioned throughput is not configurable.
Therefore, the correct combination of actions to ensure that the Lambda function ingests all data in the future is to create an SQS queue
and subscribe it to the SNS topic (option B) and modify the Lambda function to read from the SQS queue (option E).
upvoted 5 times
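As a rough sketch of the B + E combination (the topic ARN, queue name, and ingestion logic are placeholders; the SQS access policy that allows SNS to deliver messages is omitted for brevity):

    import boto3

    sqs = boto3.client("sqs")
    sns = boto3.client("sns")

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-data-deliveries"  # placeholder

    # Create a queue that buffers delivery notifications so none are lost during outages.
    queue_url = sqs.create_queue(QueueName="data-ingestion-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Subscribe the queue to the existing SNS topic (fan-out pattern).
    sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=queue_arn)

    # Lambda handler, deployed separately with the SQS queue as its event source.
    def handler(event, context):
        for record in event["Records"]:
            payload = record["body"]  # the SNS notification wrapped by SQS
            # Hand off to the existing processing/metadata logic here.
            print("Processing delivery notification:", payload)

Messages that fail processing become visible again in the queue and are retried, and a dead-letter queue can be attached for anything that repeatedly fails.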
  Bmarodi 4 months ago
Selected Answer: BE
The combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future, are by
Creating an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic, and Modifying the Lambda function to
read from an Amazon Simple Queue Service (Amazon SQS) queue
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: BE
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic. This will decouple the ingestion
workflow and provide a buffer to temporarily store the data in case of network connectivity issues.

E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue. This will allow the Lambda function to
process the data from the SQS queue at its own pace, decoupling the data ingestion from the data delivery and providing more flexibility
and fault tolerance.
upvoted 3 times

  Ello2023 7 months, 3 weeks ago


Help
Can SQS Queue have multiple consumers so SNS and Lambda can consume at the same time?
upvoted 1 times

  Lonojack 8 months, 1 week ago


How come no one's acknowledged the connection issue? Obviously we know we need SQS as a buffer for messages when the system fails. But shouldn't we consider provisioned IOPS to handle the connectivity, so maybe it will be less likely to lose connectivity and fail in the first place?
upvoted 2 times

  ProfXsamson 7 months, 4 weeks ago


What does connectivity have to do with Provisioned IOPS which is supposed to enhance I/O rate?
upvoted 2 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: BE
To ensure that the Lambda function ingests all data in the future, the solutions architect can create an Amazon Simple Queue Service
(Amazon SQS) queue and subscribe it to the SNS topic. This will allow the data notifications to be queued in the event of a network
connectivity issue, rather than being lost. The solutions architect can then modify the Lambda function to read from the SQS queue, rather
than from the SNS topic directly. This will allow the Lambda function to process any queued data as soon as the network connectivity issue
is resolved, without the need for manual intervention.

By using an SQS queue as a buffer between the SNS topic and the Lambda function, the company can improve the reliability and resilience
of the ingestion workflow. This approach will help ensure that the Lambda function ingests all data in the future, even when there are
network connectivity issues.
upvoted 3 times

  pazabal 9 months, 2 weeks ago


Selected Answer: BE
B and E, allow the data to be queued up in the event of a failure, rather than being lost, then by reading from the queue, the Lambda
function will be able to process the data
A, improves reliability but doesn't ensure all data is ingested
C and D, they improve performance but not ensure all data is ingested
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: BE
***CORRECT***
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.

An Amazon Simple Queue Service (SQS) queue can be used to decouple the data ingestion workflow and provide a buffer for data
deliveries. By subscribing the SQS queue to the SNS topic, you can ensure that notifications about new data deliveries are sent to the
queue even if the Lambda function is unavailable or experiencing connectivity issues. When the Lambda function is ready to process the
data, it can read from the SQS queue and process the data in the order in which it was received.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A, deploying the Lambda function in multiple Availability Zones, would not directly address the issue of connectivity failures.
Option C, increasing the CPU and memory that are allocated to the Lambda function, would not directly address the issue of
connectivity failures. Option D, increasing provisioned throughput for the Lambda function, would not directly address the issue of
connectivity failures.
upvoted 2 times
  career360guru 9 months, 2 weeks ago
Selected Answer: BE
B and E
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B and E
upvoted 1 times

  Six_Fingered_Jose 11 months, 1 week ago


Selected Answer: BE
B and E is the obvious answer here,
SQS ensures that message does not get lost
upvoted 4 times
Question #46 Topic 1

A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The
stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of
the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not
have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?

A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger
an S3 Lifecycle policy to remove the objects that contain PII.

B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use
Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.

C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects
that contain PII.

D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle
policy to remove the objects that contain PII.

Correct Answer: B

Community vote distribution


B (59%) D (41%)

  Gatt Highly Voted  10 months, 3 weeks ago


I have a problem with answer B. The question says: "automate remediation". B says that you inform the administrator and he removes the
data manually, that's not automating remediation. Very weird, that would mean that D is correct - but it's so much harder to implement.
upvoted 23 times

  acuaws 5 months, 2 weeks ago


the problem is... you'd have to write a Lambda function to detect PII. AWS has a product for that, and we know that's Macie
upvoted 4 times

  Maxpayne009 5 months, 1 week ago


Macie has a file size limit, and the question clearly mentions that 200 GB file sizes are possible. Lambda is the way to go.
upvoted 4 times

  RICK150 5 months, 1 week ago


"remediation" not necessarily means "deletions". Since the question stated "The company wants administrators to be alerted ", I
believe, in this case remediation can mean having automation to alert the administrator for every hit
upvoted 5 times

  Joxtat 8 months, 4 weeks ago


Pay attention to the entire question as in What should a solutions architect do to meet these requirements with the LEAST development
effort? That is why Macie is used. Answer is B
upvoted 4 times

  grzeev Highly Voted  10 months, 3 weeks ago


Selected Answer: B
Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect
your sensitive data
upvoted 10 times

  grzeev 10 months, 3 weeks ago


Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as
names, addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored
in Amazon S3
upvoted 8 times

  KarthikRock25 Most Recent  5 days, 21 hours ago


B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use
Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
upvoted 1 times
  Its_SaKar 1 week, 1 day ago
Selected Answer: B
PII = Macie
upvoted 1 times

  axelrodb 2 weeks, 6 days ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  RNess 3 weeks, 5 days ago


Selected Answer: B
AWS Macie
• Amazon Macie is a fully managed data security and data privacy service
that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
• Macie helps identify and alert you to sensitive data, such as personally identifiable information (PII)
upvoted 1 times

  Chiquitabandita 4 weeks, 1 day ago


The file size and the automated remediation requirement rule out B and make D the best choice out of these, I think.
upvoted 1 times

  Fakhrudin 1 month ago


AWS has Macie to discover PII, that's true. But it is also stated that the files sometimes reach up to 200 GB. Please note that Amazon Macie can only process uncompressed files up to 10 GB. So, I think it's D
https://docs.aws.amazon.com/macie/latest/user/macie-quotas.html
upvoted 1 times

  BrijMohan08 1 month ago


Selected Answer: D
File Size 200GB, Macie cannot process
Automate remediation - Lambda
upvoted 1 times

  Abdou1604 1 month, 3 weeks ago


Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and
classify sensitive data in Amazon S3. It can automatically identify personally identifiable information (PII) in the uploaded files. By using
Macie, you can avoid the need to implement custom scanning algorithms and reduce development effort, so it's B
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
B is the best option.

Using an S3 bucket for secure transfer is good. Amazon Macie can scan the objects for PII automatically without custom development.
SNS can notify administrators if PII is detected, allowing them to handle remediation.
upvoted 1 times

  TariqKipkemei 1 month, 4 weeks ago


Selected Answer: B
PII is to Amazon Macie as Amazon Macie is to PII
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer.
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: B
B is the correct answer. Although it informs administrators and does not automate remediation completely, Amazon Macie is specifically designed for PII
upvoted 1 times

  unhinged22 3 months ago


The answer is B
This quota applies only to the Amazon Macie console and the Amazon Macie API. There isn't a quota for the number of finding events that
Macie publishes to Amazon EventBridge or the number of sensitive data discovery results that Macie creates for each run of a job
upvoted 1 times

  live_reply_developers 3 months ago


Selected Answer: D
Answer is D.

1st of all because company wants to AUTOMATE REMEDIATION.


Second, Macie has a limit in the size of the file it can analyze:
Size of an individual file to analyze:

Adobe Portable Document Format (.pdf) file: 1,024 MB

Apache Avro object container (.avro) file: 8 GB

Apache Parquet (.parquet) file: 8 GB

Email message (.eml) file: 20 GB

GNU Zip compressed archive (.gz or .gzip) file: 8 GB

Microsoft Excel workbook (.xls or .xlsx) file: 512 MB

Microsoft Word document (.doc or .docx) file: 512 MB

Non-binary text file: 20 GB

TAR archive (.tar) file: 20 GB

ZIP compressed archive (.zip) file: 8 GB

If a file is larger than the applicable quota, Macie doesn't analyze any data in the file.
https://docs.aws.amazon.com/macie/latest/user/macie-quotas.html
upvoted 6 times

  slackbot 1 month, 2 weeks ago


do you think lambda will not timeout with such big files? 15 minutes to process 200GB of text looking for a string match?
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Amazon Macie is a managed service that automatically discovers, classifies, and protects sensitive data such as PII in AWS. By enabling
Macie on S3, it can scan the uploaded objects for PII.

A. Using Amazon Inspector to scan the objects in S3 is not the optimal choice for scanning PII data. Amazon Inspector is designed for
host-level vulnerability assessment rather than content scanning.

C. Implementing custom scanning algorithms in an AWS Lambda function would require significant development effort to handle
scanning large files.

D. Using SES for notification and triggering S3 Lifecycle policy may add unnecessary complexity to the solution.

Therefore, the best option that meets the requirements with the least development effort is to use an S3 as a secure transfer point, utilize
Amazon Macie for PII scanning, and trigger an SNS notification to the administrators (option B).
upvoted 3 times
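For illustration, here is a minimal boto3 sketch of enabling Macie and pointing a one-time sensitive-data discovery job at the transfer bucket. The account ID, bucket, and job name are placeholders; keep the per-file size limits discussed above in mind, and the resulting findings would be routed to administrators via an EventBridge rule that targets an SNS topic:

    import boto3

    macie = boto3.client("macie2", region_name="us-east-1")

    # Turn Macie on for the account (this call fails if Macie is already enabled,
    # in which case it can simply be skipped).
    macie.enable_macie(status="ENABLED")

    # One-time sensitive-data discovery job against the SFTP transfer bucket.
    macie.create_classification_job(
        jobType="ONE_TIME",
        name="scan-transfer-bucket-for-pii",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "123456789012", "buckets": ["sftp-transfer-bucket"]}
            ]
        },
    )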
Question #47 Topic 1

A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will
last 1 week.
What should the company do to guarantee the EC2 capacity?

A. Purchase Reserved Instances that specify the Region needed.

B. Create an On-Demand Capacity Reservation that specifies the Region needed.

C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.

D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

Correct Answer: D

Community vote distribution


D (100%)

  Incognito013 Highly Voted  11 months, 2 weeks ago


Reserved instances are for long term so on-demand will be the right choice - Answer D
upvoted 22 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
***CORRECT***
Option D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

An On-Demand Capacity Reservation is a type of Amazon EC2 reservation that enables you to create and manage reserved capacity on
Amazon EC2. With an On-Demand Capacity Reservation, you can specify the Region and Availability Zones where you want to reserve
capacity, and the number of EC2 instances you want to reserve. This allows you to guarantee capacity in specific Availability Zones in a
specific Region.

***WRONG***
Option A, purchasing Reserved Instances that specify the Region needed, would not guarantee capacity in specific Availability Zones.
Option B, creating an On-Demand Capacity Reservation that specifies the Region needed, would not guarantee capacity in specific
Availability Zones.
Option C, purchasing Reserved Instances that specify the Region and three Availability Zones needed, would not guarantee capacity in
specific Availability Zones as Reserved Instances do not provide capacity reservations.
upvoted 14 times

  BlueVolcano1 8 months, 2 weeks ago


Another reason as to why Reserved Instances aren't the solution here is that you have to commit to either a 1 year or 3 year term, not 1
week.
upvoted 13 times

  Abdou1604 Most Recent  1 month, 3 weeks ago


It's B; an On-Demand Capacity Reservation allows you to reserve capacity for Amazon EC2 instances in a specific AWS Region without specifying specific Availability Zones
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
D is the correct option to guarantee EC2 capacity in specific Availability Zones for a set timeframe.

On-Demand Capacity Reservations allow you to reserve EC2 capacity across specific Availability Zones for any duration. This guarantees
you will have access to those resources.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
The most appropriate option to guarantee EC2 capacity in three specific Availability Zones in the desired AWS Region for the 1-week event
is to create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones (option D).

A. Purchasing Reserved Instances that specify the Region needed does not guarantee capacity in specific Availability Zones.

B. Creating an On-Demand Capacity Reservation without specifying the Availability Zones would not guarantee capacity in the desired
zones.

C. Purchasing Reserved Instances that specify the Region and three Availability Zones is not necessary for a short-term event and involves
longer-term commitments.
upvoted 4 times
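A minimal boto3 sketch of option D, assuming a hypothetical instance type, count, and Region; one reservation is created per Availability Zone and released automatically after the one-week event:

    from datetime import datetime, timedelta

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Reserve capacity in each of the three Availability Zones needed for the event.
    for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
        ec2.create_capacity_reservation(
            InstanceType="m5.large",           # placeholder instance type
            InstancePlatform="Linux/UNIX",
            AvailabilityZone=az,
            InstanceCount=10,                  # placeholder count per AZ
            EndDateType="limited",
            EndDate=datetime.utcnow() + timedelta(days=7),  # release after the event
        )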
  Abrar2022 4 months, 2 weeks ago
Reserved Instances are for the long term.
An On-Demand Capacity Reservation enables you to reserve capacity in specific AZs for any duration.
upvoted 1 times

  Eden 6 months, 2 weeks ago


Just for 1 week so D on demand
upvoted 1 times

  killbots 6 months, 2 weeks ago


Selected Answer: D
I agree that the answer is D because it's only needed for a 1-week event. C would be right if it were a recurring event for 1 or more years, as Reserved Instances have to be purchased on long-term commitments but would satisfy the capacity requirements.
https://aws.amazon.com/ec2/pricing/reserved-instances/
upvoted 1 times

  Ello2023 7 months, 3 weeks ago


D. Reserved Instances are used for the long term, a minimum of 1-3 years, which makes them cheaper. An On-Demand Capacity Reservation, on the other hand, guarantees access to capacity in an AZ for any duration, whether 1 week or 1 month, but you pay the On-Demand price, meaning there is no discount.
upvoted 1 times

  BlueVolcano1 8 months, 2 weeks ago


Selected Answer: D
Correct answer is On-Demand Capacity Reservation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-
reservations.html
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: D
To guarantee EC2 capacity in specific Availability Zones, the company should create an On-Demand Capacity Reservation. On-Demand
Capacity Reservations are a type of EC2 resource that allows the company to reserve capacity for On-Demand instances in a specific
Availability Zone or set of Availability Zones. By creating an On-Demand Capacity Reservation that specifies the Region and three
Availability Zones needed, the company can guarantee that it will have the EC2 capacity it needs for the upcoming event. The reservation
will last for the duration of the event (1 week) and will ensure that the company has the capacity it needs to run its workloads.
upvoted 2 times

  pazabal 9 months, 2 weeks ago


Selected Answer: D
D, specify the number of instances and AZs for a period of 1 week and then use them whenever needed.
A and C, aren't designed to provide guaranteed capacity
B, doesn't guarantee that EC2 capacity will be available in the three specific AZs
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: D
Answer D is correct.
upvoted 1 times

  9014 10 months ago


Selected Answer: D
Yes answer is D
upvoted 1 times

  Wajif 10 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html#capacity-reservations-differences
upvoted 1 times
Question #48 Topic 1

A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly
available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?

A. Move the catalog to Amazon ElastiCache for Redis.

B. Deploy a larger EC2 instance with a larger instance store.

C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.

D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.

Correct Answer: A

Community vote distribution


D (93%) 7%

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: D
keyword is "durable" location
A and B is ephemeral storage
C takes forever so is not HA,
that leaves D
upvoted 32 times

  Fakhrudin 1 month ago


Yes, if you open EFS home page (https://aws.amazon.com/efs/), Amazon state, "Securely and reliably access your files with a fully
managed file system designed for 99.999999999 percent (11 9s) durability and up to 99.99 percent (4 9s) of availability."
upvoted 2 times

  rajendradba Highly Voted  11 months, 3 weeks ago


Selected Answer: D
Elasticache is in Memory, EFS is for durability
upvoted 14 times

  DebAwsAccount Most Recent  3 weeks, 5 days ago


Selected Answer: D
EFS is most durable solution
upvoted 1 times

  Fakhrudin 1 month ago


Selected Answer: D
The keyword is "durability" and "accessibility". If you open EFS home page (https://aws.amazon.com/efs/), Amazon state, "Securely and
reliably access your files with a fully managed file system designed for 99.999999999 percent (11 9s) durability and up to 99.99 percent (4
9s) of availability."
upvoted 1 times

  Hassaoo 1 month ago


Amazon EFS (Option D) provides the necessary combination of high availability and durability. The question states high availability with a durable location.
upvoted 1 times

  nafeez7950 1 month, 2 weeks ago


Selected Answer: A
If I'm not mistaken, Option A is the right answer because of its Redis technology. Redis can manage its durability using its AOF persistence, which logs changes to the catalog data so they can be replayed, even in the event of failure. As for availability, Redis also allows replication, so if one node fails, another is still working. Considering this question isn't about sharing file systems between instances but rather about a customer wanting to access a catalog, option A seems to be the more suitable option here.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.

The instance store on an EC2 instance is ephemeral storage that does not provide the durability or availability needed for the catalog.

Amazon EFS provides a scalable, high-performance file system that can be shared between EC2 instances. Data on EFS is stored
redundantly across multiple Availability Zones, providing high durability and availability.
EFS is a better solution for the catalog storage than ElastiCache, S3 Glacier, or a larger EC2 instance store. Moving the catalog to EFS would
meet the requirements for high availability and durable storage.
upvoted 1 times
  TariqKipkemei 1 month, 3 weeks ago
Selected Answer: D
Highly available and durable = Elastic File System (Amazon EFS)
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 1 times

  unhinged22 3 months ago


Selected Answer: A
By default, the data in a Redis node on ElastiCache resides only in memory and isn't persistent. If a node is rebooted, or if the underlying
physical server experiences a hardware failure, the data in the cache is lost.

If you require data durability, you can enable the Redis append-only file feature (AOF). When this feature is enabled, the node writes all of
the commands that change cache data to an append-only file. When a node is rebooted and the cache engine starts, the AOF is
"replayed." The result is a warm Redis cache with all of the data intact.

AOF is disabled by default. To enable AOF for a cluster running Redis, you must create a parameter group with the appendonly parameter
set to yes. You then assign that parameter group to your cluster. You can also modify the appendfsync parameter to control how often
Redis writes to the AOF file.
upvoted 3 times

  unhinged22 3 months ago


Selected Answer: A
Is Redis durable?
Durable Redis Persistence Storage | Redis Enterprise
Redis Enterprise is a fully durable database that serves all data directly from memory, using either RAM or Redis on Flash.
upvoted 1 times

  aadityaravi8 3 months, 1 week ago


Amazon Elastic File System (Amazon EFS) provides a scalable and durable file storage service that can be mounted on multiple EC2
instances simultaneously. By moving the catalog to an EFS file system, the data will be stored in a durable location with built-in
redundancy. It will also be accessible from multiple EC2 instances, ensuring high availability.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Option A is not suitable for storing the catalog as ElastiCache is an in-memory data store primarily used for caching and cannot provide
durable storage for the catalog.

Option B would not address the requirement for high availability or durability. Instance stores are ephemeral storage attached to EC2
instances and are not durable or replicated.

Option C would provide durability but not high availability. S3 Glacier Deep Archive is designed for long-term archival storage, and
accessing the data from Glacier can have significant retrieval times and costs.

Therefore, option D is the most suitable choice to ensure high availability and durability for the company's catalog.
upvoted 3 times
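As a sketch of option D under assumed subnet and security group IDs (placeholders), the catalog would move to an EFS file system with a mount target in each Availability Zone the web tier uses:

    import boto3

    efs = boto3.client("efs")

    # Create a durable, multi-AZ file system for the catalog.
    fs = efs.create_file_system(
        CreationToken="catalog-fs",
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )
    fs_id = fs["FileSystemId"]

    # One mount target per Availability Zone the web servers run in.
    for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
        efs.create_mount_target(
            FileSystemId=fs_id,
            SubnetId=subnet_id,
            SecurityGroups=["sg-0123456789abcdef0"],
        )

    # Each EC2 instance then mounts the file system, e.g. with amazon-efs-utils:
    #   sudo mount -t efs <file-system-id>:/ /var/www/catalog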

  Bmarodi 4 months ago


Selected Answer: A
Option A meets the requirements.
upvoted 1 times

  Rahulbit34 5 months ago


ElastiCache provides caching functionality; EFS is for durability.
upvoted 1 times

  acuaws 5 months, 2 weeks ago


Selected Answer: D
You can technically store data in A, but it's an in-memory option and nowhere near as durable as EFS, which is D.
upvoted 1 times

  piavik 5 months, 4 weeks ago


Selected Answer: D
weird question with D the least incorrect option
upvoted 1 times
Question #49 Topic 1

A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files
infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-
old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?

A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.

B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year.
Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3
Glacier Select.

C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage.
Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata
from Amazon S3.

D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year.
Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.

Correct Answer: C

Community vote distribution


B (72%) C (20%) 6%

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: B
I think the answer is B:
Users access the files randomly

S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object
size or retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes,
data analytics, new applications, and user-generated content.

https://aws.amazon.com/fr/s3/storage-classes/intelligent-tiering/
upvoted 35 times

  MutiverseAgent 2 months, 4 weeks ago


Agree, S3 Intelligent-Tiering meets all the requirements. The crucial consideration for ensuring that all files within a year remain instantly accessible is that the two options "Archive Access" and "Deep Archive Access" are not enabled in the "Archive rule actions" section of the bucket's "Intelligent-Tiering Archive configurations". Those options are not enabled by default, so this answer will work.
upvoted 1 times

  ssoffline 4 months, 2 weeks ago


Answer is C, why not intelligent Tiering

If the Intelligent-Tiering data transitions to Glacier after 180 days instead of 1 year, it would still be a cost-effective solution that meets
the requirements.

With files stored in Amazon S3 Intelligent-Tiering, the data is automatically moved to the appropriate storage class based on its access
patterns. In this case, if the data transitions to Glacier after 180 days, it means that files that are infrequently accessed beyond the
initial 180 days will be stored in Glacier, which is a lower-cost storage option compared to S3 Standard.
upvoted 4 times

  RupeC 2 months, 3 weeks ago


With S3 Intelligent-Tiering, you can define rules that determine when objects should be moved from the frequent access tier to the
infrequent access tier, or vice versa, within S3 Standard storage classes.
upvoted 1 times

  sachin 7 months ago


What if a file has not been accessed for 360 days, Intelligent-Tiering has moved it to Glacier, and on day 364 you want to access the file instantly?

I think C is the right choice


upvoted 3 times

  habibi03336 7 months, 1 week ago


It says "S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns".
However, the question says the access pattern is predictable: there is frequent access for about 1 year.
upvoted 1 times

  killbots 6 months, 2 weeks ago


It doesn't say predictable; it says files are accessed randomly. Random = unpredictable. Answer is B
upvoted 6 times

  Lilibell Highly Voted  11 months, 3 weeks ago


The answer is B
upvoted 12 times

  ABS_AWS Most Recent  2 days, 7 hours ago


Correct answer is C
B mentions Athena, which is not a good fit for this question.
upvoted 1 times

  anilkumarkm 3 weeks ago


Selected Answer: B
"For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier
Instant Retrieval storage class, an archive storage class that delivers the lowest cost
storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets
of data at no cost, such as backup or disaster recovery use cases, choose S3
Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours."
https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-glacier-instant-retrieval-storage-class/
upvoted 1 times

  benacert 3 weeks, 6 days ago


B is the right answer..
upvoted 1 times

  Wayne23Fang 4 weeks, 1 day ago


Selected Answer: C
The question is about cost-effectiveness. An Athena search of S3 is probably too much; it costs at least 2.5 times as much as a simple S3 SQL query.
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: C
C and not B just because Athena will be costly.
upvoted 3 times

  Yonimoni 1 month, 1 week ago


Exactly what i thought
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
I would recommend option B.

The key reasons are:

S3 Intelligent-Tiering automatically moves files between frequent and infrequent access tiers based on actual access patterns, optimizing
cost.
Lifecycle policies can move older files to Glacier Flexible Retrieval after 1 year, which has higher latency and lower cost than S3.
Athena allows querying the metadata of files in S3 without retrieving the files themselves.
Glacier Select can directly query files in Glacier without needing to restore the entire file.
upvoted 2 times
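To illustrate the querying half of option B, here is a minimal boto3 sketch that submits an Athena query over the transcript data; the database, table, columns, and results bucket are all assumed names:

    import boto3

    athena = boto3.client("athena")

    # Run a query against a transcript table defined in the Glue Data Catalog.
    response = athena.start_query_execution(
        QueryString=(
            "SELECT call_id, duration "
            "FROM transcripts "
            "WHERE call_date > date '2023-01-01'"
        ),
        QueryExecutionContext={"Database": "call_analytics"},
        ResultConfiguration={"OutputLocation": "s3://call-transcripts-query-results/"},
    )
    print("Query execution id:", response["QueryExecutionId"])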

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: B
Users access the files randomly = Amazon S3 Intelligent-Tiering
Users access the files infrequently = S3 Glacier Flexible Retrieval
Ability to query files as quickly as possible = Amazon Athena, S3 Glacier Select
upvoted 2 times

  miki111 2 months, 2 weeks ago


Option B is the right answer.
upvoted 1 times

  RupeC 2 months, 3 weeks ago


Selected Answer: B
With S3 Intelligent-Tiering, you can define rules that determine when objects should be moved from the frequent access tier to the infrequent access tier, or vice versa; that move could be set to 12 months.
upvoted 1 times

  VitaminNmineral 3 months, 1 week ago


Selected Answer: B
I asked ChatGPT. In conclusion, C is possible, but C is not cost-effective.
Both options B and C can meet the requirements, but option B with S3 Intelligent-Tiering may provide more cost savings as it optimizes
storage costs based on access patterns, automatically moving files to the most appropriate tier. However, if the priority is primarily on fast
retrieval times for files less than 1-year-old and the cost difference is not a significant concern, option C with Amazon S3 Standard storage
and S3 Glacier Instant Retrieval can be a valid and cost-effective choice as well.
upvoted 3 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A would not optimize the retrieval of files less than 1-year-old, as the files would be stored in S3 Glacier, which has longer retrieval
times compared to S3 Intelligent-Tiering.

Option C adds complexity by involving two storage classes and may not provide the most cost-effective solution.

Option D would require additional infrastructure with RDS for storing metadata and retrieval from S3 Glacier Deep Archive, which may not
be necessary and could incur higher costs.

Option B is the most suitable and cost-effective solution for optimizing file retrieval based on the access patterns described. Amazon S3
Intelligent-Tiering is a storage class that automatically moves objects between two access tiers: frequent access and infrequent access,
based on their access patterns. By storing the files in S3 Intelligent-Tiering, the files less than 1-year-old will be kept in the frequent access
tier, allowing for quick retrieval.
upvoted 3 times
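A minimal boto3 sketch of the storage side of option B (bucket and key names are placeholders): new transcripts are written to Intelligent-Tiering, and a lifecycle rule transitions them to Glacier Flexible Retrieval after one year:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "call-transcripts"  # placeholder bucket name

    # Upload new transcripts into Intelligent-Tiering so random access within the
    # first year stays fast while storage cost adapts to actual usage.
    s3.put_object(
        Bucket=BUCKET,
        Key="2024/05/transcript-0001.json",
        Body=b"{}",  # placeholder payload
        StorageClass="INTELLIGENT_TIERING",
    )

    # After one year, transition objects to Glacier Flexible Retrieval (the GLACIER
    # storage class), where a retrieval delay is acceptable.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-one-year",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )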

  HassanYoussef 3 months, 2 weeks ago


Selected Answer: B
The answer is (B) because of the random access on files and the query service needed is Athena.
upvoted 1 times

  ruqui 4 months, 2 weeks ago


For all of you that (incorrectly, in my opinion) select Answer B: you are forgetting that an object is moved to Deep Archive access after 180
days of inactivity (here's the link with the details: https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-
overview.html)

Considering the above, it could happen that an object is required after day 180 of the first year; in that case the object is not immediately reachable, so one of the requirements is not met.

The correct answer should be C


upvoted 3 times

  Itsume 3 months, 3 weeks ago


It says: "before your specified number of days of no access ("for example", 180 days)" so this number of days is just an example. Also
the files are to be deleted one year after being made and this would keep the files for a specified time after the last use which means
that if they are used within that year it would be saved for more than the intended year. Therefore, b is correct.
upvoted 2 times

  Abrar2022 4 months, 2 weeks ago


Random> unpredictable> Intelligent-tiering
upvoted 1 times

  oiccic99 3 months, 1 week ago


But it says that for the first year access must be instant, so it can't be Intelligent-Tiering, because if it moves the object to Deep Archive before 1 year, you can't access the resource instantly
upvoted 1 times

  pbpally 4 months, 4 weeks ago


Selected Answer: B
Key notes here:
1. "...randomly within 1 year of the call,.." Randomly = unpredictable -> Intelligent Tiering
2. "but users access the files infrequently after 1 year" coupled with "retrieve files that are less than 1-year-old as quickly as possible. A
delay in retrieving older files is acceptable" -> Glacier Flexible Retrieval (has options for expedited = 1-5 minutes, standard = 3-5 hours, and
bulk = 5-12 hours).
Last but not least is "giving users the ability to QUERY". Query = Athena. It's literally a serverless query service to analyze data stored
explicitly in S3.
upvoted 4 times
Question #50 Topic 1

A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The
company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?

A. Create an AWS Lambda function to apply the patch to all EC2 instances.

B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.

C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.

D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.

Correct Answer: D

Community vote distribution


D (71%) B (29%)

  tinyfoot Highly Voted  10 months, 3 weeks ago


The primary focus of Patch Manager, a capability of AWS Systems Manager, is on installing operating systems security-related updates on
managed nodes. By default, Patch Manager doesn't install all available patches, but rather a smaller set of patches focused on security.
(Ref https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-how-it-works-selection.html)

Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. (Ref
https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html)

Seems like patch manager is meant for OS level patches and not 3rd party applications. And this falls under run command wheelhouse to
carry out one-time configuration changes (update of 3rd part application) at scale.
upvoted 39 times

  Fakhrudin 1 month ago


3rd party applications are also supported by Patch Manager (https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-
manager.html).
You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is
limited to updates for applications released by Microsoft.) You can use Patch Manager to install Service Packs on Windows nodes and
perform minor version upgrades on Linux nodes. You can patch fleets of Amazon Elastic Compute Cloud (Amazon EC2) instances, edge
devices, on-premises servers, and virtual machines (VMs) by operating system type. This includes supported versions of several
operating systems, as listed in Patch Manager prerequisites.
upvoted 2 times

  Shasha1 Highly Voted  9 months, 3 weeks ago


D
AWS Systems Manager Run Command allows the company to run commands or scripts on multiple EC2 instances. By using Run
Command, the company can quickly and easily apply the patch to all 1,000 EC2 instances to remediate the security vulnerability.

Creating an AWS Lambda function to apply the patch to all EC2 instances would not be a suitable solution, as Lambda functions are not
designed to run on EC2 instances. Configuring AWS Systems Manager Patch Manager to apply the patch to all EC2 instances would not be
a suitable solution, as Patch Manager is not designed to apply third-party software patches. Scheduling an AWS Systems Manager
maintenance window to apply the patch to all EC2 instances would not be a suitable solution, as maintenance windows are not designed
to apply patches to third-party software
upvoted 18 times
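A minimal boto3 sketch of option D; the tag key/value, patch script path, and rollout percentages are placeholders, and the instances are assumed to already be SSM managed nodes:

    import boto3

    ssm = boto3.client("ssm")

    # Push the vendor's patch command to every instance tagged as part of the fleet.
    resp = ssm.send_command(
        Targets=[{"Key": "tag:Workload", "Values": ["production"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["sudo /opt/vendor/bin/apply-security-patch.sh"]},
        MaxConcurrency="10%",  # roll out in waves of roughly 100 instances
        MaxErrors="5%",        # stop automatically if too many instances fail
        Comment="Emergency third-party security patch",
    )
    print("Command id:", resp["Command"]["CommandId"])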

  gsax Most Recent  2 weeks, 4 days ago


Selected Answer: B
Make note of this requirement, "as quickly as possible to remediate a critical security vulnerability." Patch Manager would save time and
effort.
upvoted 2 times

  anilkumarkm 3 weeks ago


Selected Answer: D
Patching support for applications on Windows Server managed nodes is limited to applications released by Microsoft.
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-patching-windows-applications.html
upvoted 1 times

  Abdou1604 1 month, 3 weeks ago


AWS Systems Manager Patch Manager is designed to apply patches not only to the operating system but also to third-party software
running on Amazon EC2 instances, on-premises servers, and virtual machines. It allows you to manage and automate the process of
patching both operating systems and applications, including third-party applications so using the patch manager and scheduling a
maintenance window, you can ensure controlled and coordinated patching of the EC2 instances. This helps in minimizing disruptions and
managing the process effectively, so the answer is C :)
upvoted 2 times
  Guru4Cloud 1 month, 3 weeks ago
Selected Answer: D
Patch Manager is designed to patch the underlying OS and select AWS software like Amazon Linux, Windows, etc. It may not work well for
patching third-party software.
Run Command allows you to run arbitrary commands or scripts across your fleet of instances. So you can use it to run a command/script
that applies the specific patch or update for the third-party software.
Run Command can target the instances very quickly to apply the patch in an urgent scenario.
Since this is a critical vulnerability, the company likely needs more control over how the patch is applied versus relying on Patch Manager's
automated patching process.
Run Command allows checking the output/return code to verify if the patch was applied properly on each instance.
upvoted 2 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: B
Technically both 'Patch Manager' and 'Run Command' would work, but Patch Manager was built specifically to apply patches for both operating systems and applications.
upvoted 2 times

  johne42 1 month, 1 week ago


Some folk think the answer is D... but the Run Command is 'instance' level meaning it is connecting to one.
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
SSM Patch Manager offers a centralized and automated approach to patch management, allowing administrators to efficiently manage patching operations across a large number of instances. It provides features such as patch compliance reporting and the ability to specify maintenance windows to control the timing of patch installations.

A suggests using a Lambda function to apply the patch. It requires additional development effort to create and manage the function, handle errors and retries, and scale the solution appropriately to handle a large number of instances.

C suggests scheduling an SSM maintenance window. While maintenance windows can be used to orchestrate patching activities, they may not provide the fastest patching time for all instances, as execution is scheduled within the defined maintenance window timeframe.

D suggests using Run Command to run a custom command for patching. While it can be used for executing commands on multiple instances, it requires manual execution and may not provide the scalability and automation capabilities that Patch Manager offers.
upvoted 2 times

  Clouddon 1 month, 3 weeks ago


I found this: Patch Manager controls the deployment of updates to the operating system and 3rd-party applications (ONLY Microsoft) on network endpoints. https://aws.amazon.com/about-aws/whats-new/2019/05/aws-systems-manager-patch-manager-supports-
microsoft-application-patching/
upvoted 2 times

  Clouddon 1 month, 3 weeks ago


Is it true that Patch Manager is not designed to apply third-party software patches?
upvoted 1 times

  konieczny69 3 months, 3 weeks ago


Selected Answer: D
Answer D.
Keyword: "The workload is powered by third-party software."
Patch Manager patches the OS of AWS managed nodes.
We don't know what is running on the EC2 instances or what kind of vulnerability it is.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


Since it's a third-party application, use a custom command to apply the patch on all EC2 instances.
upvoted 1 times

  AlaTaftaf 5 months ago


Selected Answer: B
Answer of ChatGPT: "To remediate the critical security vulnerability in the third-party software running on 1,000 Amazon EC2 instances,
the most appropriate solution is to use AWS Systems Manager Patch Manager to apply the patch to all instances. AWS Systems Manager
Patch Manager automates the process of patching instances across hybrid environments and reduces the time and effort required to
patch instances. Patch Manager enables administrators to select and approve patches for automatic deployment to instances in a
controlled and secure manner. The patching process can be scheduled, tracked, and automated using Patch Manager, which also provides
compliance reporting and dashboards. By using Patch Manager, the solutions architect can quickly and efficiently patch all EC2 instances
and ensure that the workload remains secure."
upvoted 1 times
  jzam123 4 months, 2 weeks ago
Okay, here ChatGPT is insanely inaccurate. When I ask ChatGPT a question on this,

I first copy-paste the question, then I write "the correct answer is [whatever the correct answer is, as determined by the discussion]"

Then I get correct information on why the other answer choices are wrong and why the correct answer choice is correct
upvoted 3 times

  cheese929 5 months, 3 weeks ago


Selected Answer: D
My answer is D for 3rd party patching.
upvoted 2 times

  channn 6 months ago


Selected Answer: B
why not D: using AWS Systems Manager Run Command could also be used to run a custom command to apply the patch to all EC2
instances, but it requires creating and testing the command manually, which could be time-consuming. Additionally, 'B' option- Patch
Manager has more features and capabilities that can help in managing patches, including scheduling patch deployments and reporting on
patch compliance.
upvoted 1 times

  jdr75 6 months ago


Selected Answer: B
Systems Manager Patch Manager can patch Linux boxes, and ALL the instances are Linux. See:
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-windows-and-linux-differences.html

So, using Patch Manager you can manage the deployment (with policies, creating groups, etc.), so it's the best and most secure way to do it.
upvoted 3 times

  linux_admin 6 months ago


Selected Answer: D
AWS Systems Manager Run Command provides a simple way of automating common administrative tasks across groups of instances. It
allows users to execute scripts or commands across multiple instances simultaneously, without requiring SSH or RDP access to each
instance. With AWS Systems Manager Run Command, users can easily manage Amazon EC2 instances and instances running on-premises
or in other cloud environments.
upvoted 3 times
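
For readers who want to see what option D looks like in practice, here is a minimal boto3 sketch of pushing a patch command through SSM Run Command to instances selected by tag. The region, tag key/value, and shell command are illustrative placeholders, not values taken from the question.

```python
# Minimal sketch: use SSM Run Command to execute a custom patch command on
# every managed instance that carries a given tag. Tag and command are placeholders.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    Targets=[{"Key": "tag:Workload", "Values": ["third-party-app"]}],
    DocumentName="AWS-RunShellScript",           # generic document for custom commands
    Parameters={"commands": ["sudo yum update -y vulnerable-package"]},
    MaxConcurrency="10%",                        # throttle how many instances run at once
    MaxErrors="5%",                              # abort if too many invocations fail
    Comment="Emergency patch for critical CVE",
)
print(response["Command"]["CommandId"])          # track the command's status with this ID
```

The MaxConcurrency/MaxErrors settings are what keep a fleet-wide run like this controlled when you are targeting 1,000 instances at once.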

  jcramos 6 months ago


B.- Systems Manager – Patch Manager for OS updates, applications updates, security
updates. Supports Linux, macOS, and Windows
upvoted 1 times
Question #51 Topic 1

A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the
shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every
morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Configure the application to send the data to Amazon Kinesis Data Firehose.

B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.

C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API
for the data.

D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the
application's API for the data.

E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to
send the report by email.

Correct Answer: DE

Community vote distribution


BD (67%) DE (16%) Other

  whosawsome Highly Voted  11 months, 2 weeks ago


Selected Answer: BD
You can use SES to format the report in HTML.
https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
upvoted 25 times

  apchandana 6 months ago


This document is talking about the SES API, not SES itself. SES does not format data; it just sends emails.
https://aws.amazon.com/ses/
upvoted 3 times

  Clouddon 1 month, 3 weeks ago


When you send an email with Amazon SES, the email information you need to provide depends on how you call Amazon SES. You
can provide a minimal amount of information and have Amazon SES take care of all of the formatting for you. Or, if you want to do
something more advanced like send an attachment, you can provide the raw message yourself.
https://docs.aws.amazon.com/ses/latest/dg/send-email-concepts-email-format.html
upvoted 1 times

  backbencher2022 Highly Voted  11 months ago


Selected Answer: BD
B & D are the only 2 correct options. If you are choosing option E then you missed the daily morning schedule requirement mentioned in
the question, which can't be achieved with S3 events for SNS. EventBridge can be used to configure scheduled events (every morning in this
case). Option B fulfills the email-in-HTML-format requirement (by SES) and D fulfills the every-morning scheduled event requirement (by
EventBridge).
upvoted 18 times

  RupeC 2 months, 2 weeks ago


I don't believe you are correct when you say that E cannot meet the scheduling requirement. If the glue action is scheduled and
outputs to S3, then as the S3 event destination is SNS, in effect you have a way of getting SNS to have a scheduled release.
upvoted 1 times

  David_Ang Most Recent  2 days, 12 hours ago


Selected Answer: BD
the reason why "B" is more correct than "E" is because is more simple and you don't have to store data is not what they want, also SES is a
service that is meant for sending the data through email, and is exactly what the company wants. is not the first time the admin is wrong
with the answer
upvoted 1 times

  hieulam 1 week, 6 days ago


Selected Answer: DE
E should be correct:
https://saturncloud.io/blog/how-to-send-html-mails-using-amazon-sns/
upvoted 1 times
  h_sahu 1 week, 4 days ago
I believe BD are the answers. E can't be used because it can't help with email formatting, and E won't be the best choice even for
scheduling.
upvoted 2 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: BD
Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the
application's API for the data. Then use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option BD is the correct answer
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option BD is the right answer.
upvoted 1 times

  RupeC 2 months, 3 weeks ago


Selected Answer: CE
Glue - is scheduled to prep the docs using its ETL functionality. Then E. puts the data into S3 and uses sns to send it out by email.
upvoted 2 times

  RupeC 2 months, 1 week ago


On review, I think DE. D is better than C and glue is ETL but actually, the data needs to be queried, so Lambda is better. The eventbridge
is scheduled so S3 and SNS will also by default be run immediately after the eventbridge rule has run.
upvoted 2 times

  Mia2009687 2 months, 3 weeks ago


Selected Answer: DE
B - Neither Lambda nor SES can hold the data. After the data is handled by Lambda, it needs to be stored in S3 before being published to the
end users.
upvoted 1 times

  MutiverseAgent 2 months, 3 weeks ago


A: NOT (Firehose not needed here)
B: NOT (SES supports HTML but does NOT explicitly format data)
D: YES (Schedule process, extract & format)
E: YES (Save emails in S3 for further reference, Send email)
upvoted 1 times

  Dhaysindhu 3 months, 1 week ago


Selected Answer: DE
D: To schedule the event every morning and format the HTML
E: To store the HTML in S3 and send the email using SNS
upvoted 1 times

  Mia2009687 3 months, 1 week ago


Selected Answer: DE
SES cannot format the data.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: BD
D: Create an EventBridge (CloudWatch Events) scheduled event that invokes Lambda to query API for data. This scheduled event can be
set to trigger at desired time every morning to fetch shipping statistics from API.

B: Use SES to format data and send report by email. In Lambda, after retrieving shipping statistics, you can format data into an easy-to-
read HTML format using any HTML templating framework.

Options A, C, and E are not necessary for achieving the desired outcome. Option A is typically used for real-time streaming data ingestion
and delivery to data lakes or analytics services. Glue (C) is a fully managed extract, transform, and load (ETL) service, which may be an
overcomplication for this scenario. Storing the application data in S3 and using SNS (E) can be an alternative approach, but it adds
unnecessary complexity.
upvoted 2 times
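
To make the B + D combination concrete, the following is a minimal sketch of the Lambda function that a scheduled EventBridge rule (for example, a cron expression that fires every morning) could invoke: it queries the shipping-statistics API, builds a simple HTML table, and sends it to several recipients with SES. The API URL, email addresses, and field names are placeholders.

```python
# Minimal sketch of the Lambda handler behind options B + D.
import json
import urllib.request

import boto3

ses = boto3.client("ses", region_name="us-east-1")

API_URL = "https://example.com/shipping/stats"      # hypothetical REST endpoint
SENDER = "[email protected]"
RECIPIENTS = ["[email protected]", "[email protected]"]


def handler(event, context):
    # 1. Pull the statistics from the application's REST API.
    with urllib.request.urlopen(API_URL) as resp:
        stats = json.loads(resp.read())

    # 2. Organize the data into an easy-to-read HTML table.
    rows = "".join(
        f"<tr><td>{item['order_id']}</td><td>{item['status']}</td></tr>"
        for item in stats
    )
    html = f"<html><body><table border='1'>{rows}</table></body></html>"

    # 3. Send the same report to several addresses at once via SES.
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": RECIPIENTS},
        Message={
            "Subject": {"Data": "Daily shipping statistics"},
            "Body": {"Html": {"Data": html}},
        },
    )
```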

  Agustinr10 3 months, 2 weeks ago


Selected Answer: CE
Explanation:

D. By creating an Amazon EventBridge scheduled event that triggers an AWS Lambda function, you can automate the process of querying
the application's API for shipping statistics. The Lambda function can retrieve the data and perform any necessary formatting or
transformation before proceeding to the next step.
E. Storing the application data in Amazon S3 allows for easy accessibility and further processing. You can configure an S3 event
notification to trigger an Amazon Simple Notification Service (SNS) topic whenever new data is uploaded to the S3 bucket. The SNS topic
can be configured to send the report by email to the desired email addresses.
upvoted 1 times
  Bmarodi 3 months, 4 weeks ago
Selected Answer: BD
I go for BD options.
upvoted 1 times

  korn666 5 months ago


Selected Answer: BC
extract and transform the data
AWS Glue is not always used for ETL processes that deal with unstructured data. Glue can also be used for ETL processes that deal with
structured data. Glue provides a fully managed ETL service that makes it easy to move data between data stores. It can be used to
transform and clean data in a scalable and cost-effective manner, and it supports a wide range of data formats, including both structured
and unstructured data.
upvoted 4 times

Question #52 Topic 1

A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to
hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales
automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?

A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.

B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store
(Amazon EBS) for storage.

C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.

D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for
storage.

Correct Answer: C

Community vote distribution


C (100%)

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: C
EFS is a standard file system, it scales automatically and is highly available.
upvoted 23 times

  masetromain Highly Voted  11 months, 3 weeks ago


I have absolutely no idea...

Output files that vary in size from tens of gigabytes to hundreds of terabytes

Limit size for a single object:

S3: 5 TiB
https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/
EBS: 64 TiB
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html
EFS: 47.9 TiB
https://docs.aws.amazon.com/efs/latest/ug/limits.html
upvoted 9 times

  Help2023 7 months, 2 weeks ago


The answer to that is
Limit size for a single object:
S3, 5TiB is per object but you can have more than one object in a bucket, meaning infinity
https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/
EBS 64 Tib is per block of storage
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html
EFS: 47.9 TiB is per file, and the question says "files" (plural).
https://docs.aws.amazon.com/efs/latest/ug/limits.html
upvoted 1 times

  RBSK 9 months, 3 weeks ago


None meets 100s of TB / file. Bit confusing / misleading
upvoted 3 times

  JayBee65 10 months ago


S3 and EBS are block storage but you are looking to store files, so EFS is the correct option.
upvoted 1 times

  Ello2023 8 months, 3 weeks ago


S3 is object storage.
upvoted 11 times

  TariqKipkemei Most Recent  1 month, 3 weeks ago


Selected Answer: C
Standard file system structure, scales automatically, requires minimum operational overhead = Amazon Elastic File System (Amazon EFS)
upvoted 1 times
  miki111 2 months, 2 weeks ago
Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
EFS provides a scalable and fully managed file system that can be easily mounted to multiple EC2. It allows you to store and access files
using the standard file system structure, which aligns with the company's requirement for a standard file system. EFS automatically scales
with the size of your data.

A suggests using ECS for container orchestration and S3 for storage. ECS doesn't offer a native file system storage solution. S3 is an object
storage service and may not be the most suitable option for a standard file system structure.

B suggests using EKS for container orchestration and EBS for storage. Similar to A, EBS is block storage and not optimized for file system
access. While EKS can manage containers, it doesn't specifically address the file storage requirements.

D suggests using EC2 with EBS for storage. While EBS can provide block storage for EC2, it doesn't inherently offer a scalable file system
solution like EFS. You would need to manage and provision EBS volumes manually, which may introduce operational overhead.
upvoted 3 times
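
As a rough illustration of how the EFS side of option C could be provisioned, here is a minimal boto3 sketch that creates a file system and one mount target per Availability Zone. The subnet and security group IDs are placeholders.

```python
# Minimal sketch: provision the shared EFS file system from option C and expose
# it in both Availability Zones via mount targets.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="bursting",          # scales with stored data, no manual provisioning
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "app-output-files"}],
)

# One mount target per AZ so instances in the Multi-AZ Auto Scaling group
# always have a local NFS endpoint.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```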

  Bmarodi 3 months, 4 weeks ago


Selected Answer: C
Option C meets the requirements.
upvoted 1 times

  joshnort 5 months ago


Selected Answer: C
Keywords: file system structure, scales automatically, highly available, and minimal operational overhead
upvoted 1 times

  harirkmusa 7 months, 3 weeks ago


standard file system structure is the KEYWORD here, the S3 and EBS are not file based storage. EFS is. so the automatic answer is C
upvoted 1 times

  NitiATOS 8 months, 1 week ago


Selected Answer: C
I will go with C. If the app is deployed across multiple AZs, the compute instances are different but the storage needs to be common.
EFS is the easiest way to configure shared storage compared to shared EBS.
Hence C suits best.
upvoted 1 times

  Strk18 8 months, 3 weeks ago


Selected Answer: C
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.
upvoted 2 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.
upvoted 1 times

  pazabal 9 months, 2 weeks ago


Selected Answer: C
C = File storage system, Multi AZ ASG lets you maintain high availability
Not A, B or D because they don't meet the requirement of file system storage
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.

To meet the requirements, a solution that would allow the company to migrate its on-premises application to AWS and scale automatically,
be highly available, and require minimum operational overhead would be to migrate the application to Amazon Elastic Compute Cloud
(Amazon EC2) instances in a Multi-AZ (Availability Zone) Auto Scaling group.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


The Auto Scaling group would allow the application to automatically scale up or down based on demand, ensuring that the application
has the required capacity to handle incoming requests. To store the data produced by the application, the company could use Amazon
Elastic File System (Amazon EFS), which is a file storage service that allows the company to store and access file data in a standard file
system structure. Amazon EFS is highly available and scales automatically to support the workload of the application, making it a good
choice for storing the data produced by the application.
upvoted 2 times

  Futurebones 4 months, 3 weeks ago


My only question is: since EFS is also highly available and scalable, why not use EFS alone in this case? Is there any reason that
using Auto Scaling is a must?
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C. Using EBS as storage is not the right option, as it will not scale automatically.
Using ECS or EKS for running the application is not a requirement here, and it is not clearly stated whether the application can be
containerized or not.
upvoted 2 times

  benaws 9 months, 3 weeks ago


Selected Answer: C
Highly available & Autoscales == Multi-AZ Auto Scaling group.
Standard File System == Amazon Elastic File System (Amazon EFS)
upvoted 3 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  pspinelli19 10 months, 3 weeks ago


Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/84147-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #53 Topic 1

A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be
archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during
the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?

A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10
years.

B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to
allow deletion.

C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in
compliance mode for a period of 10 years.

D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use
S3 Object Lock in governance mode for a period of 10 years.

Correct Answer: C

Community vote distribution


C (100%)

  axelrodb 3 weeks ago


Selected Answer: C
To meet the requirements of immediately accessible records for 1 year and then archived for an additional 9 years with maximum
resiliency, we can use S3 Lifecycle policy to transition records from S3 Standard to S3 Glacier Deep Archive after 1 year. And to ensure that
the records cannot be deleted by anyone, including administrative and root users, we can use S3 Object Lock in compliance mode for a
period of 10 years. Therefore, the correct answer is option C.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.htmls
upvoted 2 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
The key reasons are:

The S3 Lifecycle policy transitions the data to Glacier Deep Archive after 1 year for long-term archival.
S3 Object Lock in compliance mode prevents any user from deleting or overwriting objects for the specified retention period.
Glacier Deep Archive provides very high durability and the lowest storage cost for long-term archival.
Compliance mode ensures no one can override or change the retention settings even if policies change.
This meets all the requirements - immediate access for 1 year, archived for 9 years, unable to delete for 10 years, maximum resiliency
upvoted 2 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
No one at the company, including administrative users and root users, can be able to delete the records during the entire 10-year period =
Compliance Mode
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 2 times

  MutiverseAgent 2 months, 3 weeks ago


Why not A? Move all files to S3 Glacier instant retrieval (Cheaper than S3) and then move files older than a year to S3 Deep archive.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
To prevent deletion of records during the entire 10-year period, you can utilize S3 Object Lock feature. By enabling it in compliance mode,
you can set a retention period on the objects, preventing any user, including administrative and root users, from deleting records.

A: S3 Glacier is suitable for long-term archival, it may not provide immediate accessibility for the first year as required.

B: Intelligent-Tiering may not offer the most cost-effective archival storage option for extended 9-year period. Changing the IAM policy
after 10 years to allow deletion also introduces manual steps and potential human error.
D: While S3 One Zone-IA can provide cost savings, it doesn't offer the same level of resiliency as S3 Glacier Deep Archive for long-term
archival.
upvoted 3 times
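
For illustration, here is a minimal boto3 sketch of the two parts of option C: a lifecycle rule that transitions objects to S3 Glacier Deep Archive after one year, and a default Object Lock retention of 10 years in compliance mode. The bucket name is a placeholder, and Object Lock must have been enabled when the bucket was created.

```python
# Minimal sketch of option C: lifecycle transition to Deep Archive plus a
# compliance-mode Object Lock default retention.
import boto3

s3 = boto3.client("s3")
BUCKET = "accounting-records-example"   # placeholder; bucket created with Object Lock enabled

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-1-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},            # apply to every object
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        # Compliance mode: no user, including root, can delete or shorten retention.
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```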
  11pantheman11 5 months, 1 week ago
Selected Answer: C
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 3 times

  athiha 6 months, 3 weeks ago


Selected Answer: C
Retention Period: A period is specified by Days & Years.
With Retention Compliance Mode, you can’t change/adjust (even by the account root user) the retention mode during the retention period
while all objects within the bucket are Locked.
With Retention Governance mode, a less restrictive mode, you can grant special permission to a group of users to adjust the Lock settings
by using S3:BypassGovernanceRetention.

Legal Hold: It’s On/Off setting on an object version. There is no retention period. If you enable Legal Hole on specific object version, you
will not be able to delete or override that specific object version. It needs S:PutObjectLegalHole as a permission.
upvoted 3 times

  WherecanIstart 7 months, 1 week ago


Selected Answer: C
S3 Glacier Deep Archive all day....
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in
compliance mode for a period of 10 years.
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: C
Use S3 Object Lock in compliance mode
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 3 times

  pazabal 9 months, 2 weeks ago


Selected Answer: C
C, A lifecycle set to transition from standard to Glacier deep archive and use lock for the delete requirement
A, B and D don't meet the requirements
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in
compliance mode for a period of 10 years.

To meet the requirements, the company could use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep
Archive after 1 year. S3 Glacier Deep Archive is Amazon's lowest-cost storage class, specifically designed for long-term retention of data
that is accessed rarely. This would allow the company to store the records with maximum resiliency and at the lowest possible cost.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


To ensure that the records are not deleted during the entire 10-year period, the company could use S3 Object Lock in compliance
mode. S3 Object Lock allows the company to apply a retention period to objects in S3, preventing the objects from being deleted until
the retention period expires. By using S3 Object Lock in compliance mode, the company can ensure that the records are not deleted by
anyone, including administrative users and root users, during the entire 10-year period.
upvoted 1 times

  Nandan747 9 months, 2 weeks ago


Selected Answer: C
A and B are ruled out: you need the records to be accessible for 1 year, and with an access control policy or IAM policies the administrator or root still
has the ability to delete them.
D is ruled out as it uses One Zone-IA, but the requirement says maximum resiliency.
So C should be the right answer.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times
  Marge_Simpson 9 months, 3 weeks ago
Selected Answer: C
They should've put Glacier Vault Lock into Option C to make it even more obvious
upvoted 1 times

  AlaN652 9 months, 4 weeks ago


Selected Answer: C
C is the answer that fulfill the requirements of immediate access for one year and data durability for 10 years
upvoted 2 times
Question #54 Topic 1

A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2
instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable
storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?

A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.

B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.

C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for
Windows File Server.

D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.

Correct Answer: C

Community vote distribution


C (98%)

  k1kavi1 Highly Voted  9 months, 1 week ago


Selected Answer: C
EFS is not supported on Windows instances
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 11 times

  Buruguduystunstugudunstuy Highly Voted  10 months, 2 weeks ago


Selected Answer: C
Windows file shares = Amazon FSx for Windows File Server
Hence, the correct answer is C
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Taking back this answer. As explained in the latest update.

***CORRECT***
D: Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.
upvoted 1 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
The key reasons are:

FSx for Windows provides fully managed Windows-native SMB file shares that are accessible from Windows clients.
It allows seamlessly migrating the existing Windows file shares to FSx shares without disrupting users.
The Multi-AZ configuration provides high availability and durability for file storage.
Users can continue to access files the same way over SMB without any changes.
It is optimized for Windows workloads and provides features like user quotas, ACLs, AD integration.
Data is stored on SSDs with automatic backups for resilience.
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
The company wants a highly available and durable storage solution that preserves how users currently access the files = Amazon FSx for
Windows File Server
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Migrating all the data to FSx for Windows File Server allows you to preserve existing user access method and maintain compatibility with
Windows file shares. Users can continue accessing files using the same method as before, without any disruptions.

A: S3 is a highly durable object storage service, it is not designed to directly host Windows file shares. Implementing IAM authentication
for file access would require significant changes to existing user access method.

B: S3 File Gateway can provide access to Amazon S3 objects through standard file protocols, it may not be ideal solution for preserving
existing user access method and maintaining Windows file shares.

D: Although Amazon EFS provides highly available and durable file storage, it may not directly support the existing Windows file shares
and their access method.
upvoted 3 times
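
As a rough sketch of option C, the following boto3 call creates a Multi-AZ FSx for Windows File Server file system. The subnet IDs, security group, and Active Directory ID are placeholders and would come from the company's existing environment.

```python
# Minimal sketch: create the Multi-AZ FSx for Windows File Server from option C.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                       # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",         # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,               # MB/s
        "ActiveDirectoryId": "d-1234567890",    # existing AWS Managed Microsoft AD (placeholder)
        "AutomaticBackupRetentionDays": 7,
    },
)
```

Because the share is still exposed over SMB, users keep mapping drives exactly as they do today; only the DNS name of the file server changes.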
  11pantheman11 5 months, 1 week ago
Selected Answer: C
https://aws.amazon.com/fsx/windows/faqs/
Thousands of compute instances and devices can access a file system concurrently.

EFS does not support Windows


upvoted 2 times

  cheese929 5 months, 1 week ago


Selected Answer: C
C is correct. Amazon FSx for Windows File Server.
upvoted 3 times

  satosis 5 months, 2 weeks ago


Selected Answer: C
EFS is not supported on Windows instances
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 3 times

  cheese929 5 months, 2 weeks ago


Selected Answer: C
C is correct. Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers.
upvoted 2 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.
upvoted 2 times

  dan80 9 months ago


Selected Answer: C
https://aws.amazon.com/blogs/aws/amazon-fsx-for-windows-file-server-update-new-enterprise-ready-features/
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: D
The best option to meet the requirements specified in the question is option D: Extend the file share environment to Amazon Elastic File
System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.

Amazon EFS is a fully managed, elastic file storage service that scales on demand. It is designed to be highly available, durable, and
secure, making it well-suited for hosting file shares. By using a Multi-AZ configuration, the file share will be automatically replicated across
multiple Availability Zones, providing high availability and durability for the data.

To migrate the data, you can use a variety of tools and techniques, such as Robocopy or AWS DataSync. Once the data has been migrated
to EFS, you can simply update the file share configuration on the existing EC2 instances to point to the EFS file system, and users will be
able to access the files in the same way they currently do.
upvoted 1 times

  Ello2023 8 months, 3 weeks ago


EFS is not supported on Windows.
upvoted 4 times

  Buruguduystunstugudunstuy 7 months ago


You're 100% right Ello2023. I humbly acknowledged my first answer was WRONG. I am changing my answer. "The correct answer is
Option C". Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the
data to FSx for Windows File Server.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, migrating all the data to Amazon S3 and setting up IAM authentication for user access, would not preserve the current file
share access methods and would require users to access the files in a different way.

Option B, setting up an Amazon S3 File Gateway, would not provide the high availability and durability needed for hosting file shares.

Option C, extending the file share environment to FSx for Windows File Server, would provide the desired high availability and
durability, but would also require users to access the files in a different way.
upvoted 3 times

  ronaldchow 9 months, 1 week ago


EFS is for Linux only not Windows
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


You're right Ronald Chow. Thanks! Option D is incorrect because Amazon Elastic File System (EFS) is a file storage service that is not
natively compatible with the Windows operating system, and would not preserve the existing access methods for users.

I am taking back my answer. "The correct answer is Option C". Extend the file share environment to Amazon FSx for Windows File
Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
upvoted 6 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


D
Amazon EFS is fully compatible with the SMB protocol that is used by Windows file shares, which means that users can continue to access
the files in the same way they currently do. Extending the file share environment to FSx for Windows File Server with a Multi-AZ
configuration would not be a suitable solution, as FSx for Windows File Server is not as scalable or cost-effective as Amazon EFS.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  Juhith 10 months, 2 weeks ago


Selected Answer: C
EFS is only for Linux.
upvoted 3 times
Question #55 Topic 1

A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2
instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a
public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the
RDS databases.
Which solution will meet these requirements?

A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.

B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the
security group to the DB instances.

C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the
security group to the DB instances.

D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the
private subnets and the database subnets.

Correct Answer: C

Community vote distribution


C (100%)

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: C
A: doesn't fully configure the traffic flow
B: security groups don't have deny rules
D: peering is mostly between VPCs, doesn't really help here

answer is C, most mainstream way


upvoted 32 times

  Gary_Phillips_2007 Highly Voted  7 months ago


Just took the exam today and EVERY ONE of the questions came from this dump. Memorize it all. Good luck.
upvoted 15 times

  orhan64 2 months, 1 week ago


Hey bro, did you buy premium access?
upvoted 3 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
The key reasons are:

Using security groups to control access between resources is a standard practice in VPCs.
The security group attached to the RDS DB instances can allow inbound traffic from the security group for the EC2 instances in the private
subnets.
This allows only those EC2 instances in the private subnets to connect to the databases, meeting the requirements.
Route tables, peering connections, and denying public subnet access would not achieve the needed selectivity of allowing only the private
subnet EC2 instances.
Security groups provide stateful filtering at the instance level for precise access control.
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
Security groups only have allow rules
upvoted 1 times

  praveenvky83 2 months ago


Selected Answer: C
Option C
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Creating security group that allows inbound traffic from security group assigned to instances in private subnets ensures that only EC2
running in private subnets can access the RDS databases. By associating security group with DB, you restrict access to only instances that
belong to designated security group.

A: This approach may help control routing within VPC, it does not address the specific access requirement between EC2 instances and RDS
databases.

B: Using a deny rule in a security group can lead to complexities and potential misconfigurations. It is generally recommended to use
allow rules to explicitly define access permissions.

D: Peering connections enable communication between different VPCs or VPCs in different regions, and they are not necessary for
restricting access between subnets within the same VPC.
upvoted 3 times
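
To show what option C looks like in code, here is a minimal boto3 sketch that creates the database security group and allows inbound database traffic only from the security group attached to the private-subnet instances. The VPC ID, port, and group IDs are placeholders.

```python
# Minimal sketch of option C: DB security group that only accepts traffic whose
# source is the app tier's security group (no CIDR-based rules).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

db_sg = ec2.create_security_group(
    GroupName="rds-db-sg",
    Description="Allow DB access only from private-subnet app instances",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Reference the app tier's security group instead of a CIDR block,
            # so only instances carrying that group can reach the databases.
            "UserIdGroupPairs": [{"GroupId": "sg-app-private-placeholder"}],
        }
    ],
)
```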

  Bmarodi 3 months, 4 weeks ago


Selected Answer: C
Option C meets the requirements.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


By default, a security group is set up with rules that deny all inbound traffic and permit all outbound traffic.
upvoted 1 times

  water314 5 months ago


Selected Answer: C
CCCCCCCCCCC
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: C
Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the
security group to the DB instances. This will allow the EC2 instances in the private subnets to have access to the RDS databases while
denying access to the EC2 instances in the public subnets.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
The solution that meets the requirements described in the question is option C: Create a security group that allows inbound traffic from
the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.

In this solution, the security group applied to the DB instances allows inbound traffic from the security group assigned to instances in the
private subnets. This ensures that only EC2 instances running in the private subnets can have access to the RDS databases.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, creating a new route table that excludes the route to the public subnets' CIDR blocks and associating it with the database
subnets, would not meet the requirements because it would block all traffic to the database subnets, not just traffic from the public
subnets.

Option B, creating a security group that denies inbound traffic from the security group assigned to instances in the public subnets and
attaching it to the DB instances, would not meet the requirements because it would allow all traffic from the private subnets to reach
the DB instances, not just traffic from the security group assigned to instances in the private subnets.

Option D, creating a new peering connection between the public subnets and the private subnets and a different peering connection
between the private subnets and the database subnets, would not meet the requirements because it would allow all traffic from the
private subnets to reach the DB instances, not just traffic from the security group assigned to instances in the private subnets.
upvoted 1 times

  Nandan747 9 months, 2 weeks ago


Selected Answer: C
The real trick is between B and C. A and D are ruled out for obvious reasons.
B is wrong as you cannot have deny type rules in Security groups.
So- C is the right answer.
upvoted 4 times

  ashish_t 10 months, 1 week ago


Selected Answer: C
The key is "Only EC2 instances that run in the private subnets can have access to the RDS databases"
The answer is C.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times
  17Master 11 months ago
Selected Answer: C
Ans correct.
upvoted 2 times

  KVK16 11 months, 2 weeks ago


Selected Answer: C
Inside a VPC, local traffic between different subnets cannot be restricted by routing, but if they were in different VPCs it would be
possible. This is an important thing to know about VPCs.
- So the only method is security groups - like EC2, RDS also has security groups to restrict traffic to database instances.
upvoted 6 times
Question #56 Topic 1

A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public
interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL
with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?

A. Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).

B. Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.

C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the
API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.

D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to
the API Gateway APIs. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.

Correct Answer: D

Community vote distribution


C (97%)

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: C
I think the answer is C. We don't need to attach a certificate in us-east-1 if it is not for CloudFront. In our case the target is ca-central-1.
upvoted 27 times

  MutiverseAgent 2 months, 3 weeks ago


Agree, C is correct by using the API Gateway option "Custom domain names"
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html
upvoted 1 times

  Valero_ 11 months, 3 weeks ago


I think that is C too, the target would be the same Region.
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html
upvoted 8 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
The correct solution to meet these requirements is option C.

To design the API Gateway URL with the company's domain name and corresponding certificate, the company needs to do the following:

1. Create a Regional API Gateway endpoint: This will allow the company to create an endpoint that is specific to a region.

2. Associate the API Gateway endpoint with the company's domain name: This will allow the company to use its own domain name for the
API Gateway URL.

3. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region: This
will allow the company to use HTTPS for secure communication with its APIs.

4. Attach the certificate to the API Gateway endpoint: This will allow the company to use the certificate for securing the API Gateway URL.

5. Configure Route 53 to route traffic to the API Gateway endpoint: This will allow the company to use Route 53 to route traffic to the API
Gateway URL using the company's domain name.
upvoted 24 times
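
Here is a minimal boto3 sketch of those steps for a REST API in ca-central-1. The certificate files, API ID, hosted zone ID, and domain name are placeholders; the important point is that the certificate is imported into ACM in the same Region as the Regional API Gateway endpoint.

```python
# Minimal sketch of option C for a Regional REST API.
import boto3

REGION = "ca-central-1"
DOMAIN = "api.example.com"                           # placeholder custom domain

acm = boto3.client("acm", region_name=REGION)
apigw = boto3.client("apigateway", region_name=REGION)
route53 = boto3.client("route53")

# 1. Import the public certificate for the company's domain into ACM (same Region).
cert = acm.import_certificate(
    Certificate=open("cert.pem", "rb").read(),
    PrivateKey=open("key.pem", "rb").read(),
    CertificateChain=open("chain.pem", "rb").read(),
)

# 2. Create the Regional custom domain name and attach the certificate.
domain = apigw.create_domain_name(
    domainName=DOMAIN,
    regionalCertificateArn=cert["CertificateArn"],
    endpointConfiguration={"types": ["REGIONAL"]},
)

# 3. Map the custom domain to the deployed API stage.
apigw.create_base_path_mapping(domainName=DOMAIN, restApiId="abc123defg", stage="prod")

# 4. Point Route 53 at the Regional API Gateway endpoint with an alias record.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890",                      # company's public hosted zone (placeholder)
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": domain["regionalHostedZoneId"],
                        "DNSName": domain["regionalDomainName"],
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```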

  aadityaravi8 3 months, 1 week ago


google bard reply..
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option C includes all the necessary steps to meet the requirements, hence it is the correct solution.

Options A and D do not include the necessary steps to associate the API Gateway endpoint with the company's domain name and
attach the certificate to the endpoint.

Option B includes the necessary steps to associate the API Gateway endpoint with the company's domain name and attach the
certificate, but it imports the certificate into the us-east-1 Region instead of the ca-central-1 Region where the API Gateway is located.
upvoted 5 times
  paniya93 Most Recent  23 hours, 53 minutes ago
Selected Answer: C
Can someone explain why this mentions a different Region that is not mentioned in the question?
upvoted 1 times

  Hassaoo 1 month ago


c is right
The other options have various issues:

Option A: Using stage variables and importing certificates into ACM is not sufficient for achieving the requirement of associating a custom
domain and certificate with the API Gateway endpoint.

Option B: While it mentions importing the certificate into ACM, it doesn't address the need for a Regional API Gateway or the appropriate
region for the certificate.

Option D: Using certificates from the us-east-1 region for a Regional API Gateway might cause issues. Additionally, it doesn't provide clear
details on how to associate the domain name and certificate with the API Gateway endpoint.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
C is the correct solution.

To use a custom domain name with HTTPS for API Gateway:

The API Gateway endpoint needs to be Regional, not private or edge-optimized.


The ACM certificate must be requested in the same region as the API Gateway endpoint.
The custom domain name is then mapped to the Regional API endpoint under API Gateway domain names.
Route 53 is configured to route traffic to the API Gateway regional domain.
The ACM certificate is attached to the API Gateway domain name to enable HTTPS.
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Option C encompasses all the necessary steps to design the API Gateway URL with the company's domain name and enable secure HTTPS
access using the appropriate certificate.

A. This approach does not involve using the company's domain name or a custom certificate. It does not provide a solution for enabling
HTTPS access with a corresponding certificate.

B. It suggests importing the certificate into ACM in the us-east-1 Region, which may not align with the desired ca-central-1 Region for this
scenario. It's important to use ACM in the same Region where API Gateway is deployed to simplify certificate management.

D. It suggests importing the certificate into ACM in the us-east-1 Region, which again does not align with the desired ca-central-1 Region.
Additionally, it mentions attaching the certificate to API Gateway, which is not necessary for achieving the desired outcome of enabling
HTTPS access for the API Gateway endpoint.
upvoted 2 times

  Bmarodi 3 months, 4 weeks ago


Selected Answer: C
I switch to option C too, which meets the requirements.
upvoted 1 times

  Bmarodi 3 months, 4 weeks ago


Selected Answer: D
I vote for option D.
upvoted 1 times

  dydzah 4 months, 1 week ago


https://www.youtube.com/watch?v=Ro0rgeLDkO4
upvoted 1 times
  Siva007 4 months, 2 weeks ago
Selected Answer: C
C: It should be in the same Region
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: C
In this scenario, the goal is to design the API Gateway URL with the company's domain name and corresponding certificate so that third-
party services can use HTTPS. To accomplish this, a solutions architect should create a Regional API Gateway endpoint and associate it
with the company's domain name. The public certificate associated with the company's domain name should be imported into AWS
Certificate Manager (ACM) in the same Region as the API Gateway endpoint. The certificate should then be attached to the API Gateway
endpoint to enable HTTPS. Finally, Route 53 should be configured to route traffic to the API Gateway endpoint.
upvoted 2 times

  gmehra 6 months, 3 weeks ago


ACM is always in US east 1
upvoted 2 times

  GalileoEC2 7 months ago


In the solution I provided, the region used for AWS Certificate Manager (ACM) is us-east-1, which is different from the ca-central-1 region
used for Amazon API Gateway in the question. This is because ACM certificates can only be issued in the us-east-1 region, which is a global
endpoint for ACM.

When creating a custom domain name in Amazon API Gateway and attaching an ACM certificate to it, the region of the certificate does not
have to match the region of the API Gateway deployment. However, it's worth noting that there may be additional latency or costs
associated with using a certificate from a different region.

In summary, the solution I provided is still valid and meets the requirements of the question, even though it uses a different region for
ACM...pum!
upvoted 1 times

  BlueVolcano1 8 months, 2 weeks ago


Selected Answer: C
It's C: You can use an ACM certificate in API Gateway.
https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html

Certificates are regional and have to be uploaded in the same AWS Region as the service you're using it for. (If you're using a certificate
with CloudFront, you have to upload it into US East (N. Virginia).)

https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
upvoted 3 times

  duriselvan 9 months, 2 weeks ago


Certificates in ACM are regional resources. To use a certificate with Elastic Load Balancing for the same fully qualified domain name
(FQDN) or set of FQDNs in more than one AWS region, you must request or import a certificate for each region. For certificates provided
by ACM, this means you must revalidate each domain name in the certificate for each region. You cannot copy a certificate between
regions
upvoted 1 times
Question #57 Topic 1

A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The
company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development
effort.
What should a solutions architect do to meet these requirements?

A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.

B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.

C. Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.

D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence
predictions.

Correct Answer: B

Community vote distribution


B (100%)

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: B
Good Answer is B :
https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html?pg=ln&sec=ft
upvoted 13 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
The best solution to meet these requirements would be option B: Use Amazon Rekognition to detect inappropriate content, and use
human review for low-confidence predictions.

Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-
trained label detection model. It can identify a wide range of inappropriate content, including explicit or suggestive adult content, violent
content, and offensive language. The service provides high accuracy and low latency, making it a good choice for this use case.
upvoted 8 times
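
As an illustration of option B, here is a minimal boto3 sketch that runs an uploaded image through Rekognition's moderation API and flags low-confidence results for human review. The bucket, key, and confidence threshold are placeholders, and the human-review step is represented only by a print statement (in practice it could be an Amazon A2I flow or a review queue).

```python
# Minimal sketch of option B: Rekognition content moderation with a
# human-review fallback for low-confidence labels.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

CONFIDENCE_THRESHOLD = 80.0          # placeholder cut-off for automatic rejection


def moderate_image(bucket: str, key: str) -> bool:
    """Return True if the image looks safe, False if it should be blocked."""
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=50,            # return even weak signals so they can be triaged
    )
    for label in response["ModerationLabels"]:
        if label["Confidence"] >= CONFIDENCE_THRESHOLD:
            return False             # high-confidence hit: reject automatically
        # Low-confidence hit: route to human review instead of auto-rejecting.
        print(f"Low-confidence label {label['Name']} -> send {key} to human review")
    return True


if __name__ == "__main__":
    print(moderate_image("user-uploads-example", "photos/12345.jpg"))
```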

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, using Amazon Comprehend, is not a good fit for this use case because Amazon Comprehend is a natural language
processing service that is designed to analyze text, not images.

Option C, using Amazon SageMaker to detect inappropriate content, would require significant development effort to build and train a
custom machine learning model. It would also require a large dataset of labeled images to train the model, which may be time-
consuming and expensive to obtain.

Option D, using AWS Fargate to deploy a custom machine learning model, would also require significant development effort and a
large dataset of labeled images. It may not be the most efficient or cost-effective solution for this use case.

In summary, the best solution is to use Amazon Rekognition to detect inappropriate content in images, and use human review for low-
confidence predictions to ensure that all inappropriate content is detected.
upvoted 7 times

  Syruis Most Recent  1 month, 2 weeks ago


Selected Answer: B
B is the best solution by far.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
Amazon Rekognition is a fully managed service that provides image and video analysis capabilities. It can be used to detect inappropriate
content in images, such as nudity, violence, and hate speech.

Amazon Rekognition is a good choice for this solution because it is a managed service, which means that the company does not have to
worry about managing the infrastructure or the machine learning model. Rekognition is also highly accurate, and it can be used to detect
a wide range of inappropriate content
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: B
Amazon Rekognition to the rescue...whooosh!
upvoted 1 times
  cookieMr 3 months, 1 week ago
Using Amazon Rekognition for content moderation is a cost-effective and efficient solution that reduces the need for developing and
training custom machine learning models, making it the best option in terms of minimizing development effort.

A. Amazon Comprehend is a natural language processing service provided by AWS, primarily focused on text analysis rather than image
analysis.

C. Amazon SageMaker is a comprehensive machine learning service that allows you to build, train, and deploy custom machine learning
models. It requires significant development effort to build and train a custom model. In addition, utilizing ground truth to label low-
confidence predictions would further add to the development complexity and maintenance overhead.

D. Similar to C, using AWS Fargate to deploy a custom machine learning model requires significant development effort.
upvoted 2 times

  krajar 6 months, 2 weeks ago


Selected Answer: B
Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-
trained label detection model. It can identify a wide range of inappropriate content, including explicit or suggestive adult content, violent
content, and offensive language.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


B
AWS Rekognition to detect inappropriate content and use human review for low-confidence predictions. This option minimizes
development effort because Amazon Rekognition is a pre-built machine learning service that can detect inappropriate content. Using
human review for low-confidence predictions allows for more accurate detection of inappropriate content.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  ArielSchivo 11 months, 2 weeks ago


Selected Answer: B
Option B.

https://docs.aws.amazon.com/rekognition/latest/dg/a2i-rekognition.html
upvoted 1 times
Question #58 Topic 1

A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus
on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying
infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?

A. Use Amazon EC2 instances, and install Docker on the instances.

B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.

C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.

D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).

Correct Answer: C

Community vote distribution


C (100%)

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: C
Good answer is C:
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without having to manage servers.
AWS Fargate is compatible with Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).

https://aws.amazon.com/fr/fargate/
upvoted 19 times

  Teruteru Most Recent  3 weeks ago


Option C is the correct answer.
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: C
C for Fargate
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized
workload = Serverless compute for containers = AWS Fargate
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Using ECS on Fargate allows you to run containers without the need to manage the underlying infrastructure. Fargate abstracts away the
underlying EC2 and provides serverless compute for containers.

A. This option would require manual provisioning and management of EC2, as well as installing and configuring Docker on those
instances. It would introduce additional overhead and responsibilities for maintaining the underlying infrastructure.

B. While this option leverages ECS to manage containers, it still requires provisioning and managing EC2 to serve as worker nodes. It adds
complexity and maintenance overhead compared to the serverless nature of Fargate.

D. This option still involves managing and provisioning EC2, even though an ECS-optimized AMI simplifies the process of setting up EC2 for
running ECS. It does not provide the level of serverless abstraction and ease of management offered by Fargate.
upvoted 3 times
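To make the Fargate point concrete, here is a minimal sketch (Python with boto3; the cluster name, task definition, subnet, and security group IDs are placeholders) of launching a containerized task with no servers to provision:

import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate: no EC2 worker nodes to provision, patch, or scale.
# All identifiers below are placeholders.
response = ecs.run_task(
    cluster="critical-apps",
    launchType="FARGATE",
    taskDefinition="critical-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

print(response["tasks"][0]["taskArn"])

The same task definition on the EC2 launch type (options B and D) would additionally require registering and maintaining container instances in the cluster.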

  cheese929 5 months, 2 weeks ago


Selected Answer: C
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon
EC2 instances.

https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times
  SilentMilli 8 months, 4 weeks ago
Selected Answer: C
ECS + Fargate
upvoted 3 times

  gustavtd 9 months ago


Selected Answer: C
AWS Fargate will hide all the complexity for you
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.

AWS Fargate is a fully managed container execution environment that runs containers without the need to provision and manage
underlying infrastructure. This makes it a good choice for companies that want to focus on maintaining their critical applications and do
not want to be responsible for provisioning and managing the underlying infrastructure.

Option A involves installing Docker on Amazon EC2 instances, which would still require the company to manage the underlying
infrastructure. Option B involves using Amazon ECS on Amazon EC2 worker nodes, which would also require the company to manage the
underlying infrastructure. Option D involves using Amazon EC2 instances from an Amazon ECS-optimized Amazon Machine Image (AMI),
which would also require the company to manage the underlying infrastructure.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  benaws 9 months, 3 weeks ago


Selected Answer: C
Obviously anything with EC2 in the answer is wrong...
upvoted 1 times

  ashish_t 10 months, 1 week ago


Selected Answer: C
The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized
workload.
Fargate is serverless and no need to manage.
Answer: C
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  PS_R 10 months, 4 weeks ago


Selected Answer: C
Agree Serverless Containerization Think Fargate
upvoted 2 times

  ArielSchivo 11 months, 2 weeks ago


Selected Answer: C
Option C. Fargate is serverless, no need to manage the underlying infrastructure.
upvoted 4 times
Question #59 Topic 1

A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream
data each day.
What should a solutions architect do to transmit and process the clickstream data?

A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate
analytics.

B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to
use for analysis.

C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS
Lambda function to process the data for analysis.

D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake.
Load the data in Amazon Redshift for analysis.

Correct Answer: D

Community vote distribution


D (90%) 10%

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: D
Option D.

https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 16 times

  RBSK 9 months, 4 weeks ago


Unsure if this is right URL for this scenario. Option D is referring to S3 and then Redshift. Whereas URL discuss about eliminating S3 :-
We’re excited to launch Amazon Redshift streaming ingestion for Amazon Kinesis Data Streams, which enables you to ingest data
directly from the Kinesis data stream without having to stage the data in Amazon Simple Storage Service (Amazon S3). Streaming
ingestion allows you to achieve low latency in the order of seconds while ingesting hundreds of megabytes of data into your Amazon
Redshift cluster.
upvoted 2 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
Option D is the most appropriate solution for transmitting and processing the clickstream data in this scenario.

Amazon Kinesis Data Streams is a highly scalable and durable service that enables real-time processing of streaming data at a high
volume and high rate. You can use Kinesis Data Streams to collect and process the clickstream data in real-time.

Amazon Kinesis Data Firehose is a fully managed service that loads streaming data into data stores and analytics tools. You can use
Kinesis Data Firehose to transmit the data from Kinesis Data Streams to an Amazon S3 data lake.

Once the data is in the data lake, you can use Amazon Redshift to load the data and perform analysis on it. Amazon Redshift is a fully
managed, petabyte-scale data warehouse service that allows you to quickly and efficiently analyze data using SQL and your existing
business intelligence tools.
upvoted 14 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, which involves using AWS Data Pipeline to archive the data to an Amazon S3 bucket and running an Amazon EMR cluster with
the data to generate analytics, is not the most appropriate solution because it does not involve real-time processing of the data.

Option B, which involves creating an Auto Scaling group of Amazon EC2 instances to process the data and sending it to an Amazon S3
data lake for Amazon Redshift to use for analysis, is not the most appropriate solution because it does not involve a fully managed
service for transmitting the data from the processing layer to the data lake.

Option C, which involves caching the data to Amazon CloudFront, storing the data in an Amazon S3 bucket, and running an AWS
Lambda function to process the data for analysis when an object is added to the S3 bucket, is not the most appropriate solution
because it does not involve a scalable and durable service for collecting and processing the data in real-time.
upvoted 3 times

  MutiverseAgent 2 months, 3 weeks ago


The question does not say that real-time is needed here
upvoted 2 times

  Reckless_Jas Most Recent  1 month, 2 weeks ago


when you see clickstream data, think about Kinesis Data Stream
upvoted 2 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The key reasons are:

Kinesis Data Streams can continuously capture and ingest high volumes of clickstream data in real-time. This handles the large 30TB daily
data intake.
Kinesis Firehose can automatically load the streaming data into S3. This creates a data lake for further analysis.
Firehose can transform and analyze the data in flight before loading to S3 using Lambda. This enables real-time processing.
The data in S3 can be easily loaded into Amazon Redshift for interactive analysis at scale.
Kinesis auto scales to handle the high data volumes. Minimal effort is needed for infrastructure management.
upvoted 1 times
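As a rough sketch of the ingestion side of option D (Python with boto3; the stream name and event fields are placeholders), each website writes clickstream events to a Kinesis data stream, and a separately configured Firehose delivery stream batches them into S3 for Redshift to load:

import json
import boto3

kinesis = boto3.client("kinesis")

# Send one clickstream event to a Kinesis data stream. The stream name and
# event fields are placeholders; a Kinesis Data Firehose delivery stream
# (configured separately) reads from the stream and delivers batches to S3.
event = {"user_id": "u-123", "page": "/pricing", "ts": "2023-01-01T00:00:00Z"}

kinesis.put_record(
    StreamName="clickstream-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)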

  miki111 2 months, 2 weeks ago


Option D is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. This option utilizes S3 for data storage and EMR for analytics, Data Pipeline is not ideal service for real-time streaming data ingestion
and processing. It is better suited for batch processing scenarios.

B. This option involves managing and scaling EC2, which adds operational overhead. It is also not real-time streaming solution.
Additionally, use of Redshift for analyzing clickstream data might not be most efficient or cost-effective approach.

C. CloudFront is CDN service and is not designed for real-time data processing or analytics. While using Lambda to process data can be an
option, it may not be most efficient solution for processing large volumes of clickstream data.

Therefore, collecting the data from Kinesis Data Streams, using Kinesis Data Firehose to transmit it to S3 data lake, and loading it into
Redshift for analysis is the recommended approach. This combination provides scalable, real-time streaming solution with storage and
analytics capabilities that can handle high volume of clickstream data.
upvoted 2 times

  Rahulbit34 5 months ago


Clickstream is the key - Answer is D
upvoted 1 times

  PaoloRoma 6 months, 1 week ago


Selected Answer: A
I am going to be unpopular here and I'll go for A). Even if here are other services that offer a better experience, data Pipeline can do the
job here. "you can use AWS Data Pipeline to archive your web server's logs to Amazon Simple Storage Service (Amazon S3) each day and
then run a weekly Amazon EMR (Amazon EMR) cluster over those logs to generate traffic reports"
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html In the question there is no specific timing
requirement for analytics. Also the EMR cluster job can be scheduled be executed daily.

Option D) is a valid answer too, however with Amazon Redshift Streaming Ingestion "you can connect to Amazon Kinesis Data Streams
data streams and pull data directly to Amazon Redshift without staging data in S3" https://aws.amazon.com/redshift/redshift-streaming-
ingestion. So in this scenario Kinesis Data Firehose and S3 are redundant.
upvoted 4 times

  MutiverseAgent 2 months, 3 weeks ago


I think I agree with you. It does not make sense in option D) to use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3
data lake and then to Redshift, as you can send the data directly from Firehose to Redshift.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  studis 9 months, 2 weeks ago


It is C.
The image in here https://aws.amazon.com/kinesis/data-firehose/ shows how kinesis can send data collected to firehose who can send it
to Redshift.
It is also possible to use an intermediary S3 bucket between firehose and redshift. See image in here
https://aws.amazon.com/blogs/big-data/stream-transform-and-analyze-xml-data-in-real-time-with-amazon-kinesis-aws-lambda-and-
amazon-redshift/
upvoted 1 times

  sebasta 10 months ago


Why not A?
You can collect data with AWS Data Pipeline and then analyze it with EMR. Whats wrong with this option?
upvoted 4 times
  bearcandy 9 months, 3 weeks ago
It's not A, the wording is tricky! It says "to archive the data to S3" - there is no mention of archiving in the question, so it has to be D :)
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  PS_R 10 months, 4 weeks ago


Clickstream & analyze/process - think Kinesis Data Streams (KDS).
upvoted 2 times

  BoboChow 11 months, 2 weeks ago


Selected Answer: D
D seems to make sense
upvoted 4 times

  JesseeS 11 months, 2 weeks ago


Option D is correct... See the resource. Thank you Ariel
upvoted 1 times
Question #60 Topic 1

A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and
HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?

A. Update the ALB's network ACL to accept only HTTPS traffic.

B. Create a rule that replaces the HTTP in the URL with HTTPS.

C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.

D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).

Correct Answer: C

Community vote distribution


C (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.

To meet the requirement of forwarding all requests to the website so that the requests will use HTTPS, a solutions architect can create a
listener rule on the ALB that redirects HTTP traffic to HTTPS. This can be done by creating a rule with a condition that matches all HTTP
traffic and a rule action that redirects the traffic to the HTTPS listener. The HTTPS listener should already be configured to accept HTTPS
traffic and forward it to the target group.
upvoted 13 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A. Updating the ALB's network ACL to accept only HTTPS traffic is not a valid solution because the network ACL is used to control
inbound and outbound traffic at the subnet level, not at the listener level.

Option B. Creating a rule that replaces the HTTP in the URL with HTTPS is not a valid solution because this would not redirect the traffic
to the HTTPS listener.

Option D. Replacing the ALB with a Network Load Balancer configured to use Server Name Indication (SNI) is not a valid solution
because it would not address the requirement to redirect HTTP traffic to HTTPS.
upvoted 9 times

  masetromain Highly Voted  11 months, 3 weeks ago


Selected Answer: C
Answer C :
https://docs.aws.amazon.com/fr_fr/elasticloadbalancing/latest/application/create-https-listener.html
https://aws.amazon.com/fr/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
upvoted 13 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
The best solution is to create a listener rule on the Application Load Balancer (ALB) to redirect HTTP traffic to HTTPS (option C).

Here is why:

ALB listener rules allow you to redirect traffic from one listener port (e.g. 80 for HTTP) to another (e.g. 443 for HTTPS). This achieves the
goal to forward all requests over HTTPS.
Network ACLs control traffic at the subnet level and cannot distinguish between HTTP and HTTPS requests to implement a redirect (option
A incorrect).
Replacing HTTP with HTTPS in the URL happens at the client side. It does not redirect at the ALB (option B incorrect).
Network Load Balancers work at the TCP level and do not understand HTTP or HTTPS protocols. So they cannot redirect in this manner
(option D incorrect).
upvoted 3 times

  miki111 2 months, 2 weeks ago


Option C is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
A. Network ACLs operate at subnet level and control inbound and outbound traffic. Updating the network ACL alone will not enforce the
redirection of HTTP to HTTPS.
B. This approach would require modifying application code or server configuration to perform URL rewrite. It is not an optimal solution as
it adds complexity and potential maintenance overhead. Moreover, it does not leverage the ALB's capabilities for handling HTTP-to-HTTPS
redirection.

D. While NLB can handle SSL/TLS termination using SNI for routing requests to different services, replacing the ALB solely to enforce
HTTP-to-HTTPS redirection would be an unnecessary and more complex solution.

Therefore, the recommended approach is to create a listener rule on the ALB to redirect HTTP traffic to HTTPS. By configuring a listener
rule, you can define a redirect action that automatically directs HTTP requests to their corresponding HTTPS versions.
upvoted 3 times
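For reference, a minimal sketch of the redirect listener from option C (Python with boto3; the load balancer ARN is a placeholder, and the HTTPS:443 listener with the certificate and target group is assumed to exist already):

import boto3

elbv2 = boto3.client("elbv2")

# HTTP:80 listener whose only (default) action is a permanent redirect to HTTPS:443.
# The load balancer ARN below is a placeholder.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "Host": "#{host}",
                "Path": "/#{path}",
                "Query": "#{query}",
                "StatusCode": "HTTP_301",
            },
        }
    ],
)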
  Abrar2022 4 months, 2 weeks ago
A solutions architect should create listen rules to direct http traffic to https.
upvoted 1 times

  cheese929 5 months, 2 weeks ago


Selected Answer: C
C is correct. Traffic redirection will solve it.
upvoted 2 times

  elearningtakai 6 months, 1 week ago


Selected Answer: C
This rule can be created in the following way:
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Load Balancers.
3. Select the ALB and choose Listeners.
4. Select the HTTP:80 listener and choose View/edit rules.
5. Add (or edit) a rule whose action is Redirect to HTTPS on port 443.
6. Choose Save rules.
This listener rule will redirect all HTTP requests to HTTPS, ensuring that all traffic is encrypted.
upvoted 2 times

  mell1222 6 months, 3 weeks ago


Selected Answer: C
Configure an HTTPS listener on the ALB: This step involves setting up an HTTPS listener on the ALB and configuring the security policy to
use a secure SSL/TLS protocol and cipher suite.

Create a redirect rule on the ALB: The redirect rule should be configured to redirect all incoming HTTP requests to HTTPS. This can be
done by creating a redirect rule that redirects HTTP requests on port 80 to HTTPS requests on port 443.

Update the DNS record: The DNS record for the website should be updated to point to the ALB's DNS name, so that all traffic is routed
through the ALB.

Verify the configuration: Once the configuration is complete, the website should be tested to ensure that all requests are being redirected
to HTTPS. This can be done by accessing the website using HTTP and verifying that the request is redirected to HTTPS.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


C
To redirect HTTP traffic to HTTPS, a solutions architect should create a listener rule on the ALB to redirect HTTP traffic to HTTPS. Option A
is not correct because network ACLs do not have the ability to redirect traffic. Option B is not correct because it does not redirect traffic, it
only replaces the URL. Option D is not correct because a Network Load Balancer operates at layer 4 and cannot redirect HTTP traffic to HTTPS.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  hanhdroid 11 months, 3 weeks ago


Selected Answer: C
Answer C: https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
upvoted 4 times
Question #61 Topic 1

A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2
instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The
company must also implement a solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?

A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled
AWS Lambda function that updates the RDS credentials and instance metadata at the same time.

B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch
Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the
same time. Use S3 Versioning to ensure the ability to fall back to previous values.

C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required
permission to the EC2 role to grant access to the secret.

D. Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the
encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.

Correct Answer: C

Community vote distribution


C (100%)

  KVK16 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
Secrets manager supports Autorotation unlike Parameter store.
upvoted 17 times

  JesseeS 11 months, 2 weeks ago


Parameter store does not support autorotation.
upvoted 8 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
The correct solution is C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret.
Attach the required permission to the EC2 role to grant access to the secret.

AWS Secrets Manager is a service that enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets
throughout their lifecycle. By storing the database credentials as a secret in Secrets Manager, you can ensure that they are not hardcoded
in the application and that they are automatically rotated on a regular basis. To grant the EC2 instance access to the secret, you can attach
the required permission to the EC2 role. This will allow the application to retrieve the secret from Secrets Manager as needed.
upvoted 9 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, storing the database credentials in the instance metadata and using a Lambda function to update them, would not meet the
requirement of not hardcoding the credentials in the application.

Option B, storing the database credentials in an encrypted S3 bucket and using a Lambda function to update them, would also not
meet this requirement, as the application would still need to access the credentials from the configuration file.

Option D, storing the database credentials as encrypted parameters in AWS Systems Manager Parameter Store, would also not meet
this requirement, as the application would still need to access the encrypted parameters in order to use them.
upvoted 5 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
Storing the credentials in AWS Secrets Manager and enabling automatic rotation meets the requirements with the least operational
overhead. The EC2 instance role just needs permission to access the secret, and Secrets Manager handles rotating the credentials
automatically on a schedule.
upvoted 1 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
Key Autorotation = AWS Secrets Manager
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Storing the credentials in Secrets Manager provides dedicated and secure management. With automatic rotation enabled, Secrets
Manager handles the credential updates automatically. Attaching the necessary permissions to the EC2 role allows the application to
securely access the secret.

This approach minimizes operational overhead and provides a secure and managed solution for credential management.
upvoted 2 times
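A minimal sketch of option C in code (Python with boto3; the secret name and rotation Lambda ARN are placeholders, and the EC2 instance role is assumed to have secretsmanager:GetSecretValue on the secret):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Application side: fetch the current credentials at runtime instead of hardcoding them.
secret = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
creds = json.loads(secret["SecretString"])
db_user, db_password = creds["username"], creds["password"]

# One-time setup: enable automatic rotation. The Lambda ARN is a placeholder;
# Secrets Manager provides rotation function templates for Amazon RDS.
secrets.rotate_secret(
    SecretId="prod/app/rds-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-credential-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)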

  Bmarodi 3 months, 4 weeks ago


Selected Answer: C
The solution that meets the requirements with the least operational overhead, is option C.
upvoted 1 times

  Bmarodi 4 months, 2 weeks ago


Selected Answer: C
My choice is c.
upvoted 1 times

  AndyMartinez 7 months, 4 weeks ago


Selected Answer: C
The right option is C.
upvoted 1 times

  Adios_Amigo 8 months ago


C is the most correct answer. Automatic rotation is performed by Secrets Manager.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C - as the requirement is to rotate the secrets automatically, Secrets Manager is the service that supports it.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 2 times

  BoboChow 11 months, 2 weeks ago


Selected Answer: C
AWS Secrets Manager is a newer service than SSM Parameter store
upvoted 3 times

  ArielSchivo 11 months, 2 weeks ago


Selected Answer: C
Option C.

https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
upvoted 3 times
Question #62 Topic 1

A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application
needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be
rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?

A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to
automatically rotate the certificate.

B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.

C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.

D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon
CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.

Correct Answer: D

Community vote distribution


D (96%)

  Sinaneos Highly Voted  11 months, 3 weeks ago


Selected Answer: D
It's a third-party certificate, hence AWS cannot manage renewal automatically. The closest thing you can do is to send a notification to
renew the 3rd party certificate.
upvoted 35 times

  mabotega Highly Voted  10 months, 4 weeks ago


Selected Answer: D
It is D, because ACM does not manage the renewal process for imported certificates. You are responsible for monitoring the expiration
date of your imported certificates and for renewing them before they expire.
Check this question on the link below:
Q: What types of certificates can I create and manage with ACM?
https://www.amazonaws.cn/en/certificate-manager/faqs/#Managed_renewal_and_deployment
upvoted 17 times

  est3la21 Most Recent  3 weeks, 3 days ago


answer is D
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The key points are:

Obtain certificate from external CA, not ACM


Import the external certificate into ACM
Apply imported certificate to the ALB
Set up EventBridge rule to trigger notification on certificate expiration
Manually renew and rotate the external certificate each year.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
D: With this approach, you import the third-party certificate into ACM, which allows you to centrally manage and apply it to the ALB. By
configuring CloudWatch Events, you can receive notifications when the certificate is close to expiring, prompting you to manually initiate
the rotation process.

A & B: These options assume that the SSL/TLS certificate can be issued directly by ACM. However, since the requirement specifies that the
certificate should be issued by an external certificate authority (CA), this option is not suitable.

C: ACM Private Certificate Authority is used when you want to create your own private CA and issue certificates from it. It does not support
certificates issued by external CAs. Therefore, this option is not suitable for the given requirement.
upvoted 3 times
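As a small sketch of the monitoring half of option D (Python with boto3; the certificate ARN and the 45-day threshold are placeholders), a scheduled function could check the imported certificate and raise a notification when renewal is due:

import datetime
import boto3

acm = boto3.client("acm")

# Check how many days remain before an imported certificate expires.
# The ARN is a placeholder; in practice this would run on a schedule and
# publish to SNS or EventBridge so someone renews and re-imports the cert.
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234"
cert = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]

days_left = (cert["NotAfter"] - datetime.datetime.now(datetime.timezone.utc)).days
if days_left < 45:
    print(f"{cert_arn} expires in {days_left} days - renew with the external CA and re-import.")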
  Router 3 months, 2 weeks ago
D is correct, since it's an external certificate
upvoted 1 times

  Bmarodi 3 months, 4 weeks ago


Selected Answer: D
Option D meets these requirements.
upvoted 1 times

  Bmarodi 4 months, 2 weeks ago


Since it is an external certificate, you can't automate the renewal. All you can do is get a notification and renew it manually; there is no other way around it.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


In the question it mentions that it's a third-party certificate. AWS has not got much control of third-party certificates and cannot manage
renewal automatically. The closest thing you can do is to send a notification to renew the 3rd party certificate.
upvoted 1 times

  Rahulbit34 5 months ago


EXTERNAL certificate is the key - manual rotation is required, so the answer is D
upvoted 3 times

  cheese929 5 months, 2 weeks ago


Selected Answer: D
A B and C are all using AWS issued cert. Only D uses cert issued by external CA, which meets the requirement.
upvoted 1 times

  channn 6 months ago


Selected Answer: D
Key word: External CA -> manually
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: D
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon
CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.

This option meets the requirements because it uses an SSL/TLS certificate issued by an external CA and involves a manual rotation process
that can be done yearly before the certificate expires. The other options involve using AWS Certificate Manager to issue the certificate,
which does not meet the requirement of using an external CA.
upvoted 1 times

  AndyMartinez 7 months, 4 weeks ago


Selected Answer: D
Option D. ACM cannot automatically renew imported certificates.
upvoted 1 times

  CSS85 9 months, 2 weeks ago


D
https://aws.amazon.com/certificate-manager/faqs/
Imported certificates – If you want to use a third-party certificate with Amazon CloudFront, Elastic Load Balancing, or Amazon API
Gateway, you may import it into ACM using the AWS Management Console, AWS CLI, or ACM APIs. ACM can not renew imported
certificates, but it can help you manage the renewal process. You are responsible for monitoring the expiration date of your imported
certificates and for renewing them before they expire. You can use ACM CloudWatch metrics to monitor the expiration dates of an
imported certificates and import a new third-party certificate to replace an expiring one.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
The correct answer is A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the
managed renewal feature to automatically rotate the certificate.

AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer
Security (SSL/TLS) certificates for use with AWS resources. ACM provides managed renewal for SSL/TLS certificates, which means that ACM
automatically renews your certificates before they expire.

To meet the requirements for the web application, you should use ACM to issue an SSL/TLS certificate and apply it to the Application Load
Balancer (ALB). Then, you can use the managed renewal feature to automatically rotate the certificate each year before it expires. This will
ensure that the web application is always encrypted at the edge with a valid SSL/TLS certificate.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


I am taking back my answer after reading the AWS documentation. The correct answer is Option D. Use AWS Certificate Manager (ACM)
to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a
notification when the certificate is nearing expiration. Rotate the certificate manually.

https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html
upvoted 4 times

  gustavtd 9 months ago


That is not good, because you would be applying a new certificate from AWS and discarding the still-valid certificate from the third party; there may be reasons they still want to use the third-party certificate.
upvoted 1 times

  PassNow1234 9 months, 1 week ago


NOT ELIGIBLE if it is a private certificate issued by calling the AWS Private CA IssueCertificate API.

NOT ELIGIBLE if imported.

NOT ELIGIBLE if already expired.


upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D, using ACM to import an SSL/TLS certificate and manually rotating the certificate, would not meet the requirement to rotate
the certificate before it expires each year.

Option C, using ACM Private Certificate Authority, is not necessary in this scenario because the requirement is to use a certificate issued
by an external certificate authority.

Option B, importing the key material from the certificate, is not a valid option because ACM does not allow you to import key material
for SSL/TLS certificates.
upvoted 1 times
Question #63 Topic 1

A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company
intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the
original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over
time.
Which solution meets these requirements MOST cost-effectively?

A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store
them back in Amazon S3.

B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg
format and store them back in DynamoDB.

C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon
EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg
files in the EBS store.

D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon
EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg
files in the EFS store.

Correct Answer: A

Community vote distribution


A (98%)

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Option A. Elastic BeanStalk is expensive, and DocumentDB has a 400KB max to upload files. So Lambda and S3 should be the one.
upvoted 34 times

  mrbottomwood 9 months, 2 weeks ago


I'm thinking when you wrote DocumentDB you meant it as DynamoDB...yes?
upvoted 2 times

  benjl 9 months, 1 week ago


Yes, DynamoDB has 400KB limit for the item.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ServiceQuotas.html
upvoted 4 times

  rob74 11 months ago


In addition to this Lambda is paid only when used....
upvoted 5 times

  raffaello44 11 months, 1 week ago


is lambda scalable as an EC2 ?
upvoted 4 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: A
Option A is the most cost-effective solution that meets the requirements. Here is why:

Storing the PDFs in Amazon S3 is inexpensive and scalable storage.


Using S3 events to trigger Lambda functions to do the file conversion is a serverless approach that scales automatically. No need to
manage EC2 instances.
Lambda usage is charged only for compute time used, which is cost-efficient for spiky workloads like this.
Storing the converted JPGs back in S3 keeps the storage scalable and cost-effective.
upvoted 1 times

  RDX19 2 months, 1 week ago


Selected Answer: A
Option A is right answer since Dynamo DB has size limitations.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option A is the right answer.
upvoted 1 times
  cookieMr 3 months, 1 week ago
Selected Answer: A
B. Using DynamoDB for storing and processing large .pdf files would not be cost-effective due to storage and throughput costs associated
with DynamoDB.

C. Using Elastic Beanstalk with EC2 and EBS storage can work, but it may not be most cost-effective solution. It involves managing the
underlying infrastructure and scaling manually.

D. Similar to C, using Elastic Beanstalk with EC2 and EFS storage can work, but it may not be most cost-effective solution. EFS is a shared
file storage service and may not provide optimal performance for conversion process, especially as demand and file sizes increase.

A. leverages Lambda and the scalable and cost-effective storage of S3. With Lambda, you only pay for actual compute time used during
the file conversion, and S3 provides durable and scalable storage for both .pdf files and .jpg files. The S3 PUT event triggers Lambda to
perform conversion, eliminating need to manage infrastructure and scaling, making it most cost-effective solution for this scenario.
upvoted 4 times
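To make option A concrete, a skeleton of the Lambda handler (Python with boto3; the destination bucket name is a placeholder, and the actual PDF rendering is stubbed out because it needs a library such as pdf2image/poppler packaged with the function). Writing the .jpg output to a separate bucket also avoids re-triggering the function with its own output:

import os
import boto3

s3 = boto3.client("s3")
DEST_BUCKET = os.environ.get("DEST_BUCKET", "example-converted-files")


def handler(event, context):
    # Invoked by the S3 PUT event; each record describes one uploaded .pdf object.
    # (Keys containing special characters would need URL-decoding via urllib.parse.unquote_plus.)
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        pdf_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        for i, page in enumerate(convert_pdf_to_jpeg(pdf_bytes)):
            s3.put_object(
                Bucket=DEST_BUCKET,
                Key=f"{key.rsplit('.', 1)[0]}-page{i + 1}.jpg",
                Body=page,
                ContentType="image/jpeg",
            )


def convert_pdf_to_jpeg(pdf_bytes):
    # Placeholder: should return a list of JPEG byte strings, one per page.
    # A real implementation would use a PDF rendering library bundled in a layer.
    raise NotImplementedError("plug in a PDF rendering library here")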

  Bmarodi 3 months, 4 weeks ago


Selected Answer: A
The solution meets these requirements most cost-effectively is option A.
upvoted 1 times

  Bmarodi 4 months, 2 weeks ago


Selected Answer: A
I think the best solution is A.
Ref. https://s3.amazonaws.com/doc/s3-developer-guide/RESTObjectPUT.html
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


Since this requires a cost-effective solution, you can use Lambda to convert the PDF files to JPEG and store them in S3. Lambda is serverless,
so you only pay when you use it, and it automatically scales to cope with demand.
upvoted 1 times

  srirajav 5 months, 1 week ago


If option A is correct, wouldn't storing the converted files back in the same S3 bucket cause an infinite loop? It is not best practice for a
Lambda function to write its output to the same bucket that triggers it. And for option B, couldn't we ask AWS to increase the DynamoDB
item size limit?
upvoted 2 times

  bedwal2020 5 months ago


In question, it is never mentioned that the jpg files will also be stored in same s3 bucket. We can have different s3 buckets right ?
upvoted 2 times

  cheese929 5 months, 2 weeks ago


Selected Answer: A
Answer A is the most cost effective solution that meets the requirement
upvoted 1 times

  channn 6 months ago


Selected Answer: A
Key words: MOST cost-effectively, so S3 + Lambda
upvoted 1 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: A
This solution will meet the company's requirements in a cost-effective manner because it uses a serverless architecture with AWS Lambda
to convert the files and store them in S3. The Lambda function will automatically scale to meet the demand for file conversions and S3 will
automatically scale to store the original and converted files as needed.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
Option A is the most cost-effective solution that meets the requirements.

In this solution, the .pdf files are saved to Amazon S3, which is an object storage service that is highly scalable, durable, and secure. S3 can
store unlimited amounts of data at a very low cost.

The S3 PUT event triggers an AWS Lambda function to convert the .pdf files to .jpg format. Lambda is a serverless compute service that
runs code in response to specific events and automatically scales to meet demand. This means that the conversion process can scale up or
down as needed, without the need for manual intervention.

The converted .jpg files are then stored back in S3, which allows the company to store both the original .pdf files and the converted .jpg
files in the same service. This reduces the complexity of the solution and helps to keep costs low.
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Option C is also a valid solution, but it may be more expensive due to the use of EC2 instances, EBS storage, and an Auto Scaling group.
These resources can add additional cost, especially if the demand for the conversion service grows rapidly.

Option D is not a valid solution because it uses Amazon EFS, which is a file storage service that is not suitable for storing large amounts
of data. EFS is designed for storing and accessing files that are accessed frequently, such as application logs and media files. It is not
designed for storing large files like .pdf or .jpg files.
upvoted 2 times

  karbob 8 months, 3 weeks ago


EFS is optimized for a wide range of workloads and file sizes, and it can store files of any size up to the capacity of the file system.
EFS scales automatically to meet your storage needs, and it can store petabyte-level capacity.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  JayBee65 9 months, 4 weeks ago


This gives an example, using GET rather than PUT, but the idea is the same:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-object-lambda-uppercase.html
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  TonyghostR05 10 months, 2 weeks ago


S3 is cost effective
upvoted 1 times
Question #64 Topic 1

A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-
premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant
changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?

A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server.
Reconfigure the workloads to use FSx for Windows File Server on AWS.

B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-
premises workloads and the cloud workloads to use the S3 File Gateway.

C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to
use either Amazon S3 directly or the S3 File Gateway. depending on each workload's location.

D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move
the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-
premises workloads to use the FSx File Gateway.

Correct Answer: A

Community vote distribution


D (82%) Other

  sba21 Highly Voted  11 months, 3 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/83281-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 18 times

  MutiverseAgent 2 months, 3 weeks ago


Agree answer is D)
---
Requirements are:
- "Users and applications interact with the data each day"
- "the company requires access to AWS and on-premises file storage with minimum latency"
---
Explanation: Answer A) would have the same on-prem <> AWS latency as answer D), since both use the VPN connection. That said,
by using an Amazon FSx File Gateway on premises as scenario D) describes, users benefit greatly from the gateway's local cache for
their daily workloads. And that matches the requirements: "users", "each day", "latency"
upvoted 2 times

  MrAWS Highly Voted  8 months, 2 weeks ago


D IS WRONG - the gateway is used for caching. You cannot 'Move the on-premises file data to the FSx File Gateway', which is what answer D states.
I'm pretty sure AWS employees are spamming this site with the wrong answers intentionally.
upvoted 10 times

  DarthVaper 6 days, 15 hours ago


What's the problem with it being a cache? They did say "the company requires access to AWS and on-premises file storage with
minimum latency."
Not discarding what you said but what's wrong here?
upvoted 1 times

  paniya93 Most Recent  22 hours, 27 minutes ago


Selected Answer: D
Answer is D
somewhere, the 6xx question gives the correct answer as D.
upvoted 1 times

  axelrodb 2 weeks, 6 days ago


Selected Answer: D
D is the correct answer
upvoted 1 times

  bahaa_shaker 1 month ago


Selected Answer: A
It's A. The company uses a Site-to-Site VPN, which means on-premises users and applications can connect to FSx directly.
upvoted 2 times
  Guru4Cloud 1 month, 3 weeks ago
Selected Answer: D
the requirements are to provide access to both on-premises and AWS file storage with minimum latency, while minimizing operational
overhead and avoiding significant changes to existing file access patterns. Additionally, an AWS Site-to-Site VPN connection is in place for
connectivity.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Amazon FSx File Gateway (FSx File Gateway) is a new File Gateway type that provides low latency and efficient access to in-cloud FSx for
Windows File Server file shares from your on-premises facility. If you maintain on-premises file storage because of latency or bandwidth
requirements, you can instead use FSx File Gateway for seamless access to fully managed, highly reliable, and virtually unlimited Windows
file shares provided in the AWS Cloud by FSx for Windows File Server.

FSx File Gateway provides the following benefits:


1. Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud
storage.
2. Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.
3. Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS,
without taxing your networks or impacting the latencies experienced by your most demanding applications.
upvoted 4 times
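For the AWS side of option D, a rough sketch of creating the FSx for Windows File Server file system (Python with boto3; every identifier, the storage size, and the throughput value are placeholders, and an existing AWS Managed Microsoft AD is assumed). The on-premises side would then deploy an Amazon FSx File Gateway appliance that caches and re-exports the same SMB shares locally:

import boto3

fsx = boto3.client("fsx")

# Create the in-cloud Windows file system. All identifiers are placeholders.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=5120,  # GiB, sized for the >5 TB of existing file data
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # existing AWS Managed Microsoft AD
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,
    },
)

print(response["FileSystem"]["DNSName"])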

  Bmarodi 3 months, 4 weeks ago


Selected Answer: D
Option D meets these requirements.
upvoted 1 times

  beginnercloud 4 months, 2 weeks ago


Selected Answer: D
D is correct

https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 1 times

  Rahulbit34 5 months ago


Amazon FSx File Gateway for low-latency and efficient access to in-cloud FSx for Windows File Server.
upvoted 1 times

  Thief 5 months, 3 weeks ago


Selected Answer: D
https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 1 times

  apchandana 6 months ago


Selected Answer: A
1. You cannot move the on-prem files to the FSx File Gateway itself, as it has limited storage and is used for caching only.
2. You need to migrate the on-prem file server to Amazon FSx for Windows File Server and let on-prem users access the file server through the FSx File Gateway.
3. Configure on-prem apps to use the AWS file server.
4. Configure AWS apps to access the FSx files directly.
5. A is the correct answer.
upvoted 3 times

  Loti2807 8 months, 1 week ago


Selected Answer: D
The company wants to move the data from on premises to AWS with 'low latency' and 'no changes to current file access patterns', so an
FSx File Gateway is still needed on premises to cache the data on its way to the cloud and to keep the data/file move secure. The
Site-to-Site VPN is what lets on-premises users reach the data in the cloud over the private network.

Check the Conclusion section for a summary: https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 2 times

  SilentMilli 8 months, 4 weeks ago


Selected Answer: D
This solution will meet the requirements because it allows the company to continue using a file server with minimal changes to the
existing file access patterns. FSx for Windows File Server integrates with the on-premises Active Directory, so users can continue accessing
the file data with their existing credentials. The Site-to-Site VPN connection can be used to establish low-latency connectivity between the
on-premises file servers and FSx for Windows File Server on AWS. FSx for Windows File Server is also highly available and scalable, so it can
handle the workloads' file storage needs.
upvoted 1 times
  gustavtd 9 months ago
Selected Answer: D
FSx is for Windows files; other options like S3 can certainly hold the files but might bring compatibility issues. An FSx File Gateway also has
a cache mechanism that makes users feel they are accessing a local file system.
upvoted 2 times

  PassNow1234 9 months, 1 week ago


Benefits of using Amazon FSx File Gateway ****WINDOWS FILE SERVERS***

FSx File Gateway provides the following benefits:

Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud
storage.

Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.

Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS,
without taxing your networks or impacting the latencies experienced by your most demanding applications.
upvoted 1 times
Question #65 Topic 1

A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload
reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in
the reports.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.

B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.

C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.

D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.

Correct Answer: C

Community vote distribution


C (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
The correct solution is C: Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI
from the extracted text.

Option C: Using Amazon Textract to extract the text from the reports, and Amazon Comprehend Medical to identify the PHI from the
extracted text, would be the most efficient solution as it would involve the least operational overhead. Textract is specifically designed for
extracting text from documents, and Comprehend Medical is a fully managed service that can accurately identify PHI in medical text. This
solution would require minimal maintenance and would not incur any additional costs beyond the usage fees for Textract and
Comprehend Medical.
upvoted 12 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A: Using existing Python libraries to extract the text and identify the PHI from the text would require the hospital to maintain
and update the libraries as needed. This would involve operational overhead in terms of keeping the libraries up to date and
debugging any issues that may arise.

Option B: Using Amazon SageMaker to identify the PHI from the extracted text would involve additional operational overhead in terms
of setting up and maintaining a SageMaker model, as well as potentially incurring additional costs for using SageMaker.

Option D: Using Amazon Rekognition to extract the text from the reports would not be an effective solution, as Rekognition is primarily
designed for image recognition and would not be able to accurately extract text from PDF or JPEG files.
upvoted 4 times

  Chiquitabandita Most Recent  4 weeks, 1 day ago


With the choices here, I would go with C, but if it were offered, I would use Amazon Textract for the text and Amazon Macie to scan the text
files, not Comprehend.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
Here's why:

Amazon Textract has built-in support to extract text from PDFs and images, eliminating the need to build this yourself with Python
libraries.
Amazon Comprehend Medical has pre-trained machine learning models to identify PHI entities out-of-the-box, avoiding the need to train
your own SageMaker model.
Using these fully managed AWS services minimizes operational overhead of maintaining machine learning models yourself.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the right answer.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
C leverages capabilities of Textract, which is a service that automatically extracts text and data from documents, including PDF and JPEG.
By using Textract, hospital can extract text content from reports without need for additional custom code or libraries.

Once text is extracted, hospital can then use Comprehend Medical, a natural language processing service specifically designed for medical
text, to analyze and identify PHI. It can recognize medical entities such as medical conditions, treatments, and patient information.
A. suggests using existing Python libraries, which would require hospital to develop and maintain custom code for text extraction and PHI
identification.

B and D involve using Textract along with SageMaker or Rekognition, respectively, for PHI identification. While these options could work,
they introduce additional complexity by incorporating machine learning models and training.
upvoted 2 times
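A minimal sketch of the two calls in option C (Python with boto3; the bucket and key are placeholders). Note that the synchronous Textract API shown here handles single-page documents; multi-page PDFs would use the asynchronous StartDocumentTextDetection API instead, and DetectPHI has a per-request text size limit, so long reports would be chunked:

import boto3

textract = boto3.client("textract")
comprehend_medical = boto3.client("comprehendmedical")

# 1) Extract the text from a report stored in S3 (bucket/key are placeholders).
extraction = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-hospital-reports", "Name": "reports/scan-001.jpg"}}
)
text = " ".join(
    block["Text"] for block in extraction["Blocks"] if block["BlockType"] == "LINE"
)

# 2) Ask Comprehend Medical which parts of the text are protected health information.
phi = comprehend_medical.detect_phi(Text=text)
for entity in phi["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))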
  channn 6 months ago
Key word: hospital!
upvoted 1 times

  alexiscloud 6 months, 1 week ago


Answer C:
upvoted 1 times

  Chirantan 9 months, 1 week ago


Selected Answer: C
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  SONA_M_ 9 months, 3 weeks ago


WHY OPTION D IS WRONG
upvoted 1 times

  mj61 8 months, 2 weeks ago


Because you use Textract to extract the text, not Rekognition.
upvoted 1 times

  s_fun 9 months ago


D is wrong because Amazon Rekognition analyzes image content (its text detection is aimed at text in scenes, not documents), so Textract is the right tool for extracting the report text.
upvoted 3 times

  k1kavi1 9 months, 3 weeks ago


Selected Answer: C
Agreed
upvoted 1 times

  Rameez1 10 months ago


C is correct
Textract- for extracting the text and Comprehend to identify the medical info
https://aws.amazon.com/comprehend/medical/
upvoted 3 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  bansalhp 11 months, 2 weeks ago


Selected Answer: C
Textract to extract the text and Comprehend Medical to identify the medical info
upvoted 3 times

  JesseeS 11 months, 2 weeks ago


Textract and Comprehend Medical are HIPAA eligible:
https://aws.amazon.com/blogs/machine-learning/amazon-textract-is-now-hipaa-eligible/
upvoted 1 times

  KVK16 11 months, 2 weeks ago


Selected Answer: C
Textract - Comprehend Medical for PHI info
upvoted 3 times
Question #66 Topic 1

A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3.
Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files
contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are
rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?

A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after
object creation.

B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from
object creation. Delete the files 4 years after object creation.

C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Delete the files 4 years after object creation.

D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Move the files to S3 Glacier 4 years after object creation.

Correct Answer: C

Community vote distribution


C (66%) A (18%) B (16%)

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: C
I think C should be the answer here.
> Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce

If they do not explicitly mention that they are using Glacier Instant Retrieval, we should assume plain Glacier, which takes more time to retrieve and may not meet the requirements.
upvoted 59 times

  Kumaran1508 4 months, 1 week ago


Yeah, the correct answer is C.
Even if you assume the Glacier class here means Instant Retrieval, that tier is intended for data accessed roughly once per quarter, but the question clearly says the files must be immediately accessible at any time.
upvoted 2 times

  JayBee65 9 months, 4 weeks ago


You can make that assumption, but I think it would be wrong to make it. It does not state they are not using Glacier Instant Retrieval, and its use would be the logical choice in this question, so I'm going for A.
upvoted 4 times

  syh_rapha 9 months, 3 weeks ago


I think his assumption is correct, because the AWS documentation (https://aws.amazon.com/s3/storage-classes/glacier/) clearly says "S3 Glacier Flexible Retrieval (formerly S3 Glacier)". So since this question doesn't specify the S3 Glacier class, it would default to Flexible Retrieval (which of course is not the same as Instant Retrieval).
upvoted 9 times

  slackbot 1 month, 1 week ago


Why has everybody assumed the files must be deleted after 4 years? It says the files "can" be deleted, not that they "must" be deleted. Ideally, store the files in Glacier after 4 years.
upvoted 1 times

  ninjawrz Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Most COST EFFECTIVE
A: S3 Glacier Instant Retrieval is a new storage class that delivers the fastest access to archive storage, with the same low latency and high-
throughput performance as the S3 Standard and S3 Standard-IA storage classes. You can save up to 68 percent on storage costs as
compared with using the S3 Standard-IA storage class when you use the S3 Glacier Instant Retrieval storage class and pay a low price to
retrieve data.
upvoted 19 times

  Help2023 7 months, 2 weeks ago


I would agree if that were one of the answers. However, many of these questions do have alternative solutions; they are written this way on purpose to check your knowledge. Here C is best.
upvoted 2 times
  wh1t4k3r 9 months, 3 weeks ago
On the other hand, you need to choose a tier when going for Glacier, so my previous comment was not well stated. The question is tricky; I change my mind and agree with you on this one.
upvoted 2 times

  wh1t4k3r 9 months, 3 weeks ago


Instant Retrieval was never mentioned. The exams always mention the tier when it matters. For A to be correct, the answer should at least include a step mentioning that Instant Retrieval would be used.
upvoted 6 times

  Pamban 10 months, 3 weeks ago


"Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce" is the key sentence.
answer is C.
upvoted 5 times

  Bala75krish 8 months ago


I agree with your key sentence, but One Zone-IA doesn't fit critical business data; it is meant for data that can be recreated.
upvoted 1 times

  JayBee65 9 months, 4 weeks ago


But S3 Glacier Instant Retrieval "is designed for rarely accessed data that still needs immediate access in performance-sensitive use
cases", so it offers lower cost and instant retrieval, so A
upvoted 1 times

  daniel1 Most Recent  3 days, 23 hours ago


A is the right answer. You would be paying much more for Standard-IA for almost 4 years compared to Glacier. I get the point that Instant Retrieval is not mentioned, but we also can't assume it's Deep Archive or Flexible Retrieval, hence A as the most cost-effective.
upvoted 1 times

  Subhrangsu 1 week, 1 day ago


Even though S3 Glacier Instant Retrieval (which is up to 68% cheaper than S3 Standard-IA) is not spelled out in the options, 'A' should still be the choice, since the question asks for the most COST-EFFECTIVE way.
upvoted 1 times

  Hassaoo 1 month ago


C is right in my view, because the question requires immediate accessibility for the full 4 years: "Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce".

That points to Standard-IA after 30 days.


upvoted 1 times

  mesutal 1 month ago


Selected Answer: A
AAAAAAAAAAAAAAAAAAA
upvoted 1 times

  Soumya198725 1 month ago


It will be Option D, according to Google Bard.
upvoted 1 times

  Syruis 1 month, 2 weeks ago


Selected Answer: B
One Zone-IA best fits the requirements.
upvoted 5 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: C
Key requirements: "Immediate accessibility is always required"
and "The files are rarely accessed after the first 30 days".

S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed.

https://aws.amazon.com/s3/storage-classes/#:~:text=S3%20Standard%2DIA)-,S3%20Standard%2DIA,-is%20for%20data
upvoted 2 times

  n43u435b543ht2b 1 month, 3 weeks ago


Selected Answer: B
B, because the question does not mention availability as a priority. It asks for the most COST effective. Besides, One Zone IA is designed
for 99.5% availability over a given year.
upvoted 8 times

  hsinchang 2 months ago


Selected Answer: C
For data stored in Glacier, retrieval usually takes hours; since immediate accessibility is required, that rules out A.
upvoted 1 times
  Babyface04 2 months, 1 week ago
The options with Glacier are wrong because the minimum storage duration is 90 days.
upvoted 1 times

  yeyulchoi 2 months, 1 week ago


I think the answer is A, because it says 'accessed rarely', not 'infrequently'. Looking at the documentation, 'rarely' is used for S3 Glacier Instant Retrieval while 'less frequently' or 'infrequently' is used for S3 Standard-IA, and the most cost-effective one is Glacier.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option C is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
In this option, the company utilizes the S3 bucket lifecycle policy to transition the files from the S3 Standard storage class to the S3
Standard-IA storage class after 30 days. S3 Standard-IA is designed for infrequently accessed data and offers a lower storage cost
compared to S3 Standard, making it cost-effective for files that are rarely accessed after the initial 30 days.
upvoted 3 times
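
As a rough illustration of what answer C describes, the lifecycle rule can be expressed with boto3 roughly as below. This is only a sketch: the bucket name and rule ID are placeholders, and 1460 days is used as an approximation of 4 years.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-critical-files-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-after-30-days-expire-after-4-years",  # placeholder rule ID
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                # Move to Standard-IA once the frequent-access window is over.
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # Delete roughly 4 years after object creation.
                "Expiration": {"Days": 1460},
            }
        ]
    },
)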

  antropaws 4 months ago


Selected Answer: A
A: With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3
Standard-IA) storage class, when your data is accessed once per quarter. S3 Glacier Instant Retrieval delivers the fastest access to archive
storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes.
upvoted 1 times

  omoakin 4 months, 1 week ago


BBBBBBBBBBBBBBB
upvoted 1 times
Question #67 Topic 1

A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to
an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not
contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?

A. Use the CreateQueue API call to create a new queue.

B. Use the AddPermission API call to add appropriate permissions.

C. Use the ReceiveMessage API call to set an appropriate wait time.

D. Use the ChangeMessageVisibility API call to increase the visibility timeout.

Correct Answer: D

Community vote distribution


D (100%)

  KVK16 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
In the case of SQS with multiple consumers, if one consumer has already picked up a message and is processing it, another consumer can pick it up in the meantime and process it as well, so two copies end up being written. To avoid this, the message is made invisible from the moment it is picked up and is deleted after processing. The visibility timeout should be increased to match the maximum time taken to process a message.
upvoted 35 times

  JayBee65 9 months, 4 weeks ago


To add to this "The VisibilityTimeout in SQS is a time frame that the message can be hidden so that no others can consume it except the
first consumer who calls the ReceiveMessageAPI." The API ChangeMesssageVisibility changes this value.
upvoted 12 times
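
A minimal boto3 sketch of the consumer-side behaviour discussed in this thread, assuming a placeholder queue URL: the consumer extends the visibility timeout while it works and deletes the message only after the RDS write succeeds.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

received = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in received.get("Messages", []):
    # Keep the message hidden from other consumers for up to 5 minutes while we work on it.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,
    )

    # ... write the record to the RDS table here ...

    # Delete only once processing has fully succeeded, so a failed attempt is retried later.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])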

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
To ensure that messages are being processed only once, a solutions architect should use the ChangeMessageVisibility API call to increase
the visibility timeout which is Option D.

The visibility timeout determines the amount of time that a message received from an SQS queue is hidden from other consumers while
the message is being processed. If the processing of a message takes longer than the visibility timeout, the message will become visible
to other consumers and may be processed again. By increasing the visibility timeout, the solutions architect can ensure that the message
is not made visible to other consumers until the processing is complete and the message can be safely deleted from the queue.

Option A (Use the CreateQueue API call to create a new queue) would not address the issue of duplicate message processing.

Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue.

Option C (Use the ReceiveMessage API call to set an appropriate wait time) is also not relevant to this issue.
upvoted 6 times

  karbob 8 months, 3 weeks ago


Not relevant to this issue? What is the added value of saying that?
upvoted 2 times

  Buruguduystunstugudunstuy 7 months ago


Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue because it deals with setting
permissions for accessing an SQS queue, which is not related to preventing duplicate records in the RDS table.

Option C (Use the ReceiveMessage API call to set an appropriate wait time) is not relevant to this issue because it is related to
configuring how long the ReceiveMessage API call should wait for new messages to arrive in the SQS queue before returning an
empty response. It does not address the issue of duplicate records in the RDS table.
upvoted 2 times

  Subhrangsu Most Recent  1 week, 1 day ago


I also opt for D, but I'm asking: is increasing the message visibility timeout always a good idea?
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option D is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
The visibility timeout is the duration during which SQS prevents other consumers from receiving and processing the same message. By
increasing the visibility timeout, you allow more time for the processing of a message to complete before it becomes visible to other
consumers.

Option A, creating a new queue, does not address the issue of concurrent processing and duplicate records. It would only create a new
queue, which is not necessary for solving the problem.

Option B, adding permissions, also does not directly address the issue of duplicate records. Permissions are necessary for accessing the
SQS queue but not for preventing concurrent processing.

Option C, setting an appropriate wait time using the ReceiveMessage API call, does not specifically prevent duplicate records. It can help
manage the rate at which messages are received from the queue but does not address the issue of concurrent processing.
upvoted 4 times

  cheese929 5 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  alexiscloud 6 months, 1 week ago


Answer D:
The visibility timeout begins when Amazon SQS returns a message.
upvoted 1 times

  test_devops_aws 6 months, 2 weeks ago


Selected Answer: D
D = ChangeMessageVisibility
upvoted 1 times

  dev1978 8 months, 2 weeks ago


In theory, between receiving the message and changing its visibility, you could still have multiple consumers. The question is not great, as this alone won't guarantee the message isn't processed twice.
upvoted 1 times

  techhb 8 months, 3 weeks ago


Selected Answer: D
Increasing the visibility timeout makes sure the message is not visible for the time it takes to process it.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  mabotega 10 months, 4 weeks ago


Selected Answer: D
D is the correct choice: increase the visibility timeout according to the maximum time taken to process the message into RDS.
upvoted 1 times

  Valero_ 11 months, 2 weeks ago


Selected Answer: D
True, it's D.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
upvoted 6 times
Question #68 Topic 1

A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a
highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower
traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?

A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection
fails.

B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a
backup if the primary VPN connection fails.

C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if
the primary Direct Connect connection fails.

D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create
a backup connection if the primary Direct Connect connection fails.

Correct Answer: A

Community vote distribution


A (91%) 9%

  KVK16 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Direct Connect + VPN best of both
upvoted 14 times

  mabotega Highly Voted  10 months, 4 weeks ago


Selected Answer: A
Direct Connect offers 1 Gbps, 10 Gbps, or 100 Gbps connections, while a VPN goes up to about 1.25 Gbps.

https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 11 times

  benacert Most Recent  3 weeks, 6 days ago


A is the right choice to save cost
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: A
Highly available connectivity using Direct Connect for consistent low latency and high throughput.
Cost optimization by using a VPN as a slower, lower cost backup for when Direct Connect fails.
Automatic failover to the VPN when Direct Connect fails.
upvoted 3 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: A
A highly available connection with consistent low latency = AWS Direct Connect
Minimize costs and accept slower traffic if the primary connection fails = VPN connection
upvoted 1 times

  hsinchang 2 months ago


Selected Answer: A
Slower traffic when primary fails, so the backup plan needs a cheaper solution, and the primary requires high performance, so A.
upvoted 1 times

  oguzbeliren 2 months, 1 week ago


Even though there are a lot of variables affecting the cost of the connection, a VPN connection is cheaper than Direct Connect most of the time, since a VPN doesn't require a dedicated physical circuit.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option A is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
Options B and C propose using multiple VPN connections for private connectivity and as backups. While VPNs can serve as backups, they
may not provide the same level of consistent low latency and high availability as Direct Connect connections. Additionally, provisioning
multiple VPN tunnels can increase operational complexity and costs.

Option D suggests using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary
Direct Connect connection fails. While this approach can be automated, it does not provide the same level of immediate failover
capabilities as having a separate backup connection in place.

Therefore, option A, provisioning an AWS Direct Connect connection to a Region and provisioning a VPN connection as a backup, is the
most suitable solution that meets the company's requirements for connectivity, cost-effectiveness, and high availability.
upvoted 4 times
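
For the backup leg of option A, a Site-to-Site VPN can be sketched with boto3 roughly as follows. The Direct Connect circuit itself is provisioned separately (console/partner), so only the VPN side is shown, and every ID and IP address below is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Represent the on-premises VPN device (placeholder public IP from a documentation range).
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")

# Virtual private gateway on the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

# The Site-to-Site VPN connection that acts as the slower backup path.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])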

  th3k33n 5 months, 1 week ago


Selected Answer: A
Highly available -> Direct Connect, because the connection can go up to 10 Gbps, with a VPN (up to about 1.25 Gbps) as backup.
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
Option A is the correct solution to meet the requirements of the company. Provisioning an AWS Direct Connect connection to a Region will
provide a private and dedicated connection with consistent low latency. As the company requires a highly available connection, a VPN
connection can be provisioned as a backup if the primary Direct Connect connection fails. This approach will minimize costs and provide
the required level of availability.
upvoted 1 times

  devonwho 8 months ago


Selected Answer: A
With AWS Direct Connect + VPN, you can combine AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This
solution combines the benefits of the end-to-end secure IPSec connection with low latency and increased bandwidth of the AWS Direct
Connect to provide a more consistent network experience than internet-based VPN connections.

https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 2 times

  dev1978 8 months, 2 weeks ago


Why not B? Two VPNs on different connections? Direct Connect costs a fortune?
upvoted 1 times

  J3nkinz 8 months, 2 weeks ago


The company requires a highly available connection with consistent low latency to an AWS Region, this is provided by Direct Connect as
primary connection. The company allows a slower connection only for the backup option, so A is the right answer
upvoted 2 times

  thanhch 9 months, 1 week ago


DX for the low-latency connection, and the company accepts slower traffic if the primary connection fails, so we should choose a VPN for backup purposes. The question also says to minimize cost.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
This is a tricky question, but let's try to understand the requirements of the question.

The company requires VS The company needs.

The main difference between need and require is that needs are goals and objectives a business must achieve, whereas require or
requirements are the things we need to do in order to achieve a need.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


To meet the requirements specified in the question, the best solution is to provision two AWS Direct Connect connections to the same
Region. This will provide a highly available connection with consistently low latency to the AWS Region and minimize costs by
eliminating internet usage fees. Provisioning a second Direct Connect connection as a backup will ensure that there is a failover option
available in case the primary connection fails.
upvoted 4 times

  studynoplay 5 months ago


2 Direct connections will not minimize costs. Correct Answer is A
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Using VPN connections as a backup, as described in options A and B, is not the best solution because VPN connections are typically
slower and less reliable than Direct Connect connections. Additionally, having two VPN connections to the same Region may not
provide the desired level of availability and may not meet the company's requirement for low latency.

Option D, which involves using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if
the primary Direct Connect connection fails, is not a valid option because the Direct Connect failover attribute is not available in the
AWS CLI.
upvoted 6 times

  ruqui 4 months, 2 weeks ago


You forgot to consider that "the company is willing to accept slower traffic if the primary connection fails", so option A is the best
answer
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


See pricing for more info.
https://aws.amazon.com/directconnect/pricing/
upvoted 1 times

  ocbn3wby 8 months ago


I love your comments!
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  koreanmonkey 10 months, 1 week ago


Selected Answer: A
A is rigth I thought wrong
upvoted 1 times
Question #69 Topic 1

A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are
in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The
company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?

A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-
Region Replication.

B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy
instance for the database.

C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the
snapshots in the event of a failure.

D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event
Notifications to launch an AWS Lambda function to write the data to the database.

Correct Answer: B

Community vote distribution


B (95%) 5%

  SilentMilli Highly Voted  8 months, 4 weeks ago


Selected Answer: B
By configuring the Auto Scaling group to use multiple Availability Zones, the application will be able to continue running even if one
Availability Zone goes down. Configuring the database as Multi-AZ will also ensure that the database remains available in the event of a
failure in one Availability Zone. Using an Amazon RDS Proxy instance for the database will allow the application to automatically route
traffic to healthy database instances, further increasing the availability of the application. This solution will meet the requirements for high
availability with minimal operational effort.
upvoted 13 times
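
A hedged boto3 sketch of the database side of option B, assuming an existing Aurora cluster named app-aurora-cluster: add a reader instance in a second AZ (which is how an Aurora cluster gets a Multi-AZ failover target) and create an RDS Proxy in front of it. All identifiers, ARNs and subnet IDs are placeholders.

import boto3

rds = boto3.client("rds")

# A reader in another AZ gives the cluster a failover target.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-az2",
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
    AvailabilityZone="us-east-1b",
)

# RDS Proxy pools connections and shortens failover impact for the application.
rds.create_db_proxy(
    DBProxyName="app-aurora-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-credentials",
    }],
    RoleArn="arn:aws:iam::123456789012:role/example-rds-proxy-role",
    VpcSubnetIds=["subnet-0aaa1111bbb2222cc", "subnet-0ddd3333eee4444ff"],
)
# A register_db_proxy_targets call would then point the proxy at the Aurora cluster.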

  KVK16 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
RDS Proxy for Aurora: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 8 times

  asulhi Most Recent  2 weeks ago


Selected Answer: B
ASG and MultiAZ is the best answer
upvoted 1 times

  benacert 3 weeks, 6 days ago


B is the right answer
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
Option B requires the least operational effort to meet the high availability and minimum downtime/data loss requirements.

The key points are:

Use an Auto Scaling group across multiple AZs for high availability of the EC2 instances.
Configure the Aurora DB as Multi-AZ for high availability, automatic failover, and minimum data loss.
Use RDS Proxy for connection pooling to the DB for performance
upvoted 2 times

  TariqKipkemei 1 month, 3 weeks ago


Selected Answer: B
Highly available, Minimum downtime and Minimum loss of data = Auto Scaling group on Multi-AZ, Database on Multi-AZ, Amazon RDS
Proxy.
upvoted 1 times

  miki111 2 months, 2 weeks ago


Option B is the right answer.
upvoted 1 times
  hiepdz98 3 months ago
Selected Answer: B
B is correct answer
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
A. This approach provides geographic redundancy, it introduces additional complexity and operational effort, including managing
replication, handling latency, and potentially higher data transfer costs.

C. While snapshots can be used for data backup and recovery, they do not provide real-time failover capabilities and can result in
significant data loss if a failure occurs between snapshots.

D. While this approach offers some decoupling and scalability benefits, it adds complexity to the data flow and introduces additional
overhead for data processing.

In comparison, option B provides a simpler and more streamlined solution by utilizing multiple AZs, Multi-AZ configuration for the
database, and RDS Proxy for improved connection management. It ensures high availability, minimal downtime, and minimum loss of
data with the least operational effort.
upvoted 4 times

  Abrar2022 4 months, 2 weeks ago


@Wajif the reason it's not A is that the question asks for high availability, which has nothing to do with Regions; you can achieve HA without spanning multiple Regions. Also, A is incorrect because ALBs are Region-specific and span multiple AZs within that specific Region (not cross-Region).
upvoted 1 times

  UnluckyDucky 7 months, 3 weeks ago


Selected Answer: B
RDS Proxy is fully managed by AWS for RDS/Aurora. It is auto-scaling and highly available by default.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
The correct solution is B: Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ.
Configure an Amazon RDS Proxy instance for the database.

This solution will meet the requirements of high availability with minimum downtime and minimum loss of data with the least operational
effort. By configuring the Auto Scaling group to use multiple Availability Zones, the web application will be able to withstand the failure of
one Availability Zone without any disruption to the service. By configuring the database as Multi-AZ, the database will automatically
failover to a standby instance in a different Availability Zone in the event of a failure, ensuring minimal downtime. Additionally, using an
RDS Proxy instance will help to improve the performance and scalability of the database.
upvoted 3 times

  k1kavi1 9 months, 1 week ago


Selected Answer: B
Aurora PostgreSQL DB clusters don't support Aurora Replicas in different AWS Regions
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Replication.html
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


Answer is B
it will ensure that the database is highly available by replicating the data to a secondary instance in a different Availability Zone. In the
event of a failure, the secondary instance will automatically take over and continue servicing database requests without any data loss.
Additionally, configuring an Amazon RDS Proxy instance for the database will help improve the availability and scalability of the database
upvoted 4 times

  Wajif 10 months, 1 week ago


Selected Answer: A
Why not A?
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Here is why Option A is not the correct solution:

Option A: Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora
PostgreSQL Cross-Region Replication.

While this solution would provide high availability with minimum downtime, it would involve significant operational effort and may
result in data loss. Placing the EC2 instances in different Regions would require significant infrastructure changes and could impact the
performance of the application. Additionally, Aurora PostgreSQL Cross-Region Replication is designed to provide disaster recovery
rather than high availability, and it may result in some data loss during the replication process.
upvoted 4 times
  koreanmonkey 10 months, 1 week ago
Maybe because of the load balancer, different Regions can't be the answer.
upvoted 2 times

  WZN 10 months ago


"The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability
Zones". Why not A?
upvoted 1 times

  javitech83 9 months, 4 weeks ago


They need to be in the same Region
upvoted 1 times

  JayBee65 9 months, 4 weeks ago


The question states multiple regions not multiple Availability Zones, a big difference!
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times
Question #70 Topic 1

A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling
group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances
that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?

A. Enable HTTP health checks on the NLB, supplying the URL of the company's application.

B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected. the application will
restart.

C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application.
Configure an Auto Scaling action to replace unhealthy instances.

D. Create an Amazon Cloud Watch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to
replace unhealthy instances when the alarm is in the ALARM state.

Correct Answer: C

Community vote distribution


C (87%) 13%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
I would choose A, as NLB supports HTTP and HTTPS Health Checks, BUT you can't put any URL (as proposed), only the node IP addresses.
So, the solution is C.
upvoted 23 times

  Ack3rman 10 months, 2 weeks ago


can you elaborate more pls
upvoted 2 times

  BlueVolcano1 8 months, 2 weeks ago


NLBs support HTTP, HTTPS and TCP health checks:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html (check HealthCheckProtocol)

But NLBs only accept either selecting EC2 instances or IP addresses directly as targets. You can't provide a URL to your endpoints,
only a health check path (if you're using HTTP or HTTPS health checks).
upvoted 6 times

  km142646 5 months, 1 week ago


What's the difference between endpoint URL and health check path?
upvoted 1 times

  majubmo 3 months, 3 weeks ago


A URL includes the hostname. The health check path is only the path portion. For example,
URL = https://i-0123456789abcdef.us-west-2.compute.internal/index.html
health check path= /index.html
upvoted 4 times

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: C
Option C. NLB works at Layer 4, so it does not support HTTP/HTTPS. Replacing it with an ALB is the best choice.
upvoted 12 times

  BlueVolcano1 8 months, 2 weeks ago


That's incorrect. NLB does support HTTP and HTTPS (and TCP) health checks.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html

There just isn't an answer option that reflects that. My guess is that the question and/or answer options are outdated.
upvoted 4 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
The key points are:

Use an Application Load Balancer (ALB) instead of a Network Load Balancer (NLB) since ALBs support HTTP health checks.
Configure HTTP health checks on the ALB to monitor the application health.
Use an Auto Scaling action triggered by the ALB health checks to automatically replace unhealthy instances.
upvoted 1 times
  miki111 2 months, 2 weeks ago
Option C is the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
A. This keeps the NLB, but NLB health checks are designed around TCP/UDP protocols and lack the advanced HTTP-specific features provided by an ALB.

B. This approach involves custom scripting and manual intervention, which contradicts the requirement of not writing custom scripts or
code.

D. Since the NLB does not detect HTTP errors, relying solely on the UnhealthyHostCount metric may not accurately capture the health of
the application instances.

Therefore, C is the recommended choice for improving the application's availability without custom scripting or code. By replacing the NLB
with an ALB, enabling HTTP health checks, and configuring Auto Scaling to replace unhealthy instances, the company can ensure that only
healthy instances are serving traffic, enhancing the application's availability automatically.
upvoted 5 times
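
To illustrate the mechanics of option C, here is a small boto3 sketch (the names, VPC ID and health-check path are placeholders): create an ALB target group that probes the application over HTTP, and switch the Auto Scaling group to ELB health checks so instances failing the probe get replaced automatically.

import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

target_group = elbv2.create_target_group(
    Name="web-service-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",        # path the ALB probes on each instance
    Matcher={"HttpCode": "200"},      # anything else marks the target unhealthy
)["TargetGroups"][0]
print(target_group["TargetGroupArn"])

# Let the Auto Scaling group act on the ALB's health verdict and replace bad instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)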

  Abrar2022 4 months, 2 weeks ago


Replace the NLB (Layer 4, UDP and TCP) with an Application Load Balancer; an ALB (Layer 7) supports HTTP and HTTPS requests.
upvoted 1 times

  datz 6 months, 2 weeks ago


Selected Answer: C
must be C

Application availability: NLB cannot assure the availability of the application. This is because it bases its decisions solely on network and
TCP-layer variables and has no awareness of the application at all. Generally, NLB determines availability based on the ability of a server to
respond to ICMP ping or to correctly complete the three-way TCP handshake. ALB goes much deeper and is capable of determining
availability based on not only a successful HTTP GET of a particular page but also the verification that the content is as was expected
based on the input parameters.
upvoted 1 times

  datz 6 months, 2 weeks ago


Also, A doesn't offer what C offers below:

Configure an Auto Scaling action to replace unhealthy instances


upvoted 1 times

  Tony1980 8 months ago


Answer is C

A solution architect can use Amazon EC2 Auto Scaling health checks to automatically detect and replace unhealthy instances in the EC2
Auto Scaling group. The health checks can be configured to check the HTTP errors returned by the application and terminate the
unhealthy instances. This will ensure that the application's availability is improved, without requiring custom scripts or code.
upvoted 1 times

  aakashkumar1999 8 months ago


I will go with A as Network load balancer supports HTTP and HTTPS health checks, maybe the answer is outdated.
upvoted 2 times

  John_Zhuang 9 months ago


Selected Answer: C
https://medium.com/awesome-cloud/aws-difference-between-application-load-balancer-and-network-load-balancer-cb8b6cd296a4
As NLB does not support HTTP health checks, you can only use ALB to do so.
upvoted 1 times

  BlueVolcano1 8 months, 2 weeks ago


That's incorrect. NLB does support HTTP and HTTPS (and TCP) health checks.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html

Just a general tip: Medium is not a reliable resource. Anyone can create content there. Rely only on official AWS documentation.
upvoted 3 times

  benjl 9 months, 1 week ago


Answer is C, and A is wrong because
In NLB, for HTTP or HTTPS health check requests, the host header contains the IP address of the load balancer node and the listener port,
not the IP address of the target and the health check port.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
upvoted 3 times
  Silvestr 9 months, 1 week ago
Selected Answer: C
Correct answer - C
Network load balancers (Layer 4) allow to:
• Forward TCP & UDP traffic to your instances
• Handle millions of request per seconds
• Less latency ~100 ms (vs 400 ms for ALB)
Best choice for HTTP traffic - replace to Application load balancer
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
The best option to meet the requirements is to enable HTTP health checks on the NLB by supplying the URL of the company's application.
This will allow the NLB to automatically detect HTTP errors and take action, such as marking the target instance as unhealthy and routing
traffic away from it.

Option A - Enable HTTP health checks on the NLB, supplying the URL of the company's application.
This is the correct solution as it allows the NLB to automatically detect HTTP errors and take action.
upvoted 4 times

  vipyodha 3 months, 2 weeks ago


Option C is right. A is not necessarily wrong, but it may not be the most effective solution to meet the requirements in this scenario.
Here's why:

Option A suggests enabling HTTP health checks on the Network Load Balancer (NLB) by supplying the URL of the company's
application. While this can help the NLB detect if the application is accessible or not, it does not directly address the specific
requirement of automatically restarting the EC2 instances when HTTP errors occur.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option B - Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the
application will restart.
This option involves writing custom scripts or code, which is not allowed by the requirements. Additionally, this solution may not be
reliable or efficient, as it relies on checking the logs locally on each instance and may not catch all errors.

Option C - Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's
application. Configure an Auto Scaling action to replace unhealthy instances.
While this option may improve the availability of the application, it is not necessary to replace the NLB with an Application Load
Balancer in order to enable HTTP health checks. The NLB can support HTTP health checks as well, and replacing it may involve
additional effort and cost.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D - Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto
Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
This option involves monitoring the UnhealthyHostCount metric, which only reflects the number of unhealthy targets that the NLB is
currently routing traffic away from. It does not directly monitor the health of the application or detects HTTP errors. Additionally,
this solution may not be sufficient to detect and respond to HTTP errors in a timely manner.
upvoted 1 times

  Schladde 6 months ago


This won't increase availability when instances become unavailable.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A is very much a valid option, as the Auto Scaling group can be configured to remove EC2 instances that fail the NLB's HTTP health check. AWS NLB supports HTTP-based health checks.
upvoted 1 times

  LeGloupier 10 months, 1 week ago


Selected Answer: A
A is the best option.
NLB supports HTTP health checks, so why do we need to move to an ALB?
Moreover, the sentence "Configure an Auto Scaling action to replace unhealthy instances" in C seems wrong, as Auto Scaling removes any unhealthy instance by default; you do not need to configure it.
upvoted 1 times

  JayBee65 9 months, 4 weeks ago


I would say A will not give you what you want. "If you add a TLS listener to your Network Load Balancer, we perform a listener
connectivity test." (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) So a check will
be made to see that something is listening on port 443. What it will not check is the status of the application e.g. HTTP 200 OK. Now the
Application Load Balancer HTTP health check using the URL of the company's application, will do this, so C is the correct answer.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  mabotega 10 months, 4 weeks ago


Selected Answer: C
C is correct!
NLB does not handle HTTP (Layer 7) listener errors, only TCP (Layer 4) listeners.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-nlb.html
upvoted 4 times
Question #71 Topic 1

A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions
architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?

A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.

B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.

C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.

D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the
DynamoDB table by using the EBS snapshot.

Correct Answer: B

Community vote distribution


B (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
A - DynamoDB global tables provide a multi-Region, multi-active database, but that is not valid "in case of data corruption". In that case, you need a backup, so this solution isn't valid.
**B** - Point-in-time recovery is designed as a continuous backup precisely so you can recover fast. It covers the RPO perfectly, and probably the RTO. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
C - A daily export will not cover the RPO of 15 min.
D - DynamoDB is serverless... so what would these EBS snapshots be taken from?
upvoted 34 times

  LionelSid 8 months, 1 week ago


Yes, it is possible to take EBS snapshots of a DynamoDB table. The process for doing this involves the following steps:

Create a new Amazon Elastic Block Store (EBS) volume from the DynamoDB table.

Stop the DynamoDB service on the instance.

Detach the EBS volume from the instance.

Create a snapshot of the EBS volume.

Reattach the EBS volume to the instance.

Start the DynamoDB service on the instance.

You can also use AWS Data pipeline to automate the above process and schedule regular snapshots of your DynamoDB table.

Note that, if your table is large and you want to take a snapshot of it, it could take a long time and consume a lot of bandwidth, so it's
recommended to use the Global Tables feature from DynamoDB in order to have a Multi-region and Multi-master DynamoDB table,
and you can snapshot each region separately.
upvoted 2 times

  piavik 5 months, 4 weeks ago


What is "DynamoDB service on the instance" ?
upvoted 1 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
The best solution to meet the RPO and RTO requirements would be to use DynamoDB point-in-time recovery (PITR). This feature allows
you to restore your DynamoDB table to any point in time within the last 35 days, with a granularity of seconds. To recover data within a 15-
minute RPO, you would simply restore the table to the desired point in time within the last 35 days.

To meet the RTO requirement of 1 hour, you can use the DynamoDB console, AWS CLI, or the AWS SDKs to enable PITR on your table. Once
enabled, PITR continuously captures point-in-time copies of your table data in an S3 bucket. You can then use these point-in-time copies to
restore your table to any point in time within the retention period.

***CORRECT***
Option B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A (configuring DynamoDB global tables) would not meet the RPO requirement, as global tables are designed to replicate data
to multiple regions for high availability, but they do not provide a way to restore data to a specific point in time.

Option C (exporting data to S3 Glacier) would not meet the RPO or RTO requirements, as S3 Glacier is a cold storage service with a
retrieval time of several hours.

Option D (scheduling EBS snapshots) would not meet the RPO requirement, as EBS snapshots are taken on a schedule, rather than
continuously. Additionally, restoring a DynamoDB table from an EBS snapshot can take longer than 1 hour, so it would not meet the
RTO requirement.
upvoted 3 times
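
As a small boto3 sketch of option B (the table names and the restore timestamp are placeholders): enable point-in-time recovery once, then, after corruption is detected, restore to a new table at a moment just before the bad writes and repoint the application.

from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

# One-time setup: turn on continuous backups / point-in-time recovery.
dynamodb.update_continuous_backups(
    TableName="CustomerInfo",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# During recovery: restore to a new table at the chosen second, then switch the app over.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="CustomerInfo",
    TargetTableName="CustomerInfo-restored",
    RestoreDateTime=datetime(2023, 1, 15, 10, 30, tzinfo=timezone.utc),  # placeholder timestamp
)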

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: B
The best option to meet the RPO of 15 minutes and RTO of 1 hour is B) Configure DynamoDB point-in-time recovery. For RPO recovery,
restore to the desired point in time.

The key points:

DynamoDB point-in-time recovery can restore to any point in time within the last 35 days. This supports an RPO of 15 minutes.
Restoring from a point-in-time backup meets the 1 hour RTO.
Point-in-time recovery is specifically designed to restore DynamoDB tables with second-level granularity.
upvoted 1 times

  cookieMr 3 months, 1 week ago


A. Global tables provide multi-region replication for disaster recovery purposes, they may not meet the desired RPO of 15 minutes without
additional configuration and potential data loss.

C. Exporting and importing data on a daily basis does not align with the desired RPO of 15 minutes.

D. EBS snapshots can be used for data backup, they are not directly applicable to DynamoDB and cannot provide the desired RPO and RTO
without custom implementation.

In comparison, option B utilizing DynamoDB's built-in point-in-time recovery functionality provides the most straightforward and effective
solution for meeting the specified RPO of 15 minutes and RTO of 1 hour. By enabling PITR and restoring the table to the desired point in
time, the company can recover the customer information with minimal data loss and within the required time frame.
upvoted 3 times

  Abrar2022 4 months, 2 weeks ago


The answer is in the question. Read the question again!!! Option B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore
to the desired point in time.
upvoted 1 times


  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


B is correct
DynamoDB point-in-time recovery allows the solutions architect to recover the DynamoDB table to a specific point in time, which would
meet the RPO of 15 minutes. This feature also provides an RTO of 1 hour, which is the desired recovery time objective for the application.
Additionally, configuring DynamoDB point-in-time recovery does not require any additional infrastructure or operational effort, making it
the best solution for this scenario.
Option D is not correct because scheduling Amazon EBS snapshots for the DynamoDB table every 15 minutes would not meet the RPO or
RTO requirements. While EBS snapshots can be used to recover data from a DynamoDB table, they are not designed to provide real-time
data protection or recovery capabilities
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  SimonPark 11 months, 1 week ago


Selected Answer: B
B is the answer
upvoted 1 times

  BoboChow 11 months, 2 weeks ago


Selected Answer: B
I think DynamoDB global tables also work here, but Point in Time Recovery is a better choice
upvoted 1 times
  Kikiokiki 11 months, 2 weeks ago
I THINK B.
https://dynobase.dev/dynamodb-point-in-time-recovery/
upvoted 1 times

  priya2224 11 months, 2 weeks ago


answer is D
upvoted 1 times

  [Removed] 11 months, 2 weeks ago


(Abusive comment in Hindi) It gives the wrong answers.
upvoted 1 times

  Az900500 11 months ago


Try communicating in English for the audience.
upvoted 4 times

  123jhl0 11 months, 2 weeks ago


DynamoDB is serverless, so no storage snapshots are available. https://aws.amazon.com/dynamodb/
upvoted 2 times
Question #72 Topic 1

A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located
in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce
these costs.
How can the solutions architect meet this requirement?

A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.

B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.

C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.

D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

Correct Answer: D

Community vote distribution


D (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
***CORRECT***
The correct answer is Option D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the
S3 buckets.

By deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC,
eliminating the need for data transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the
application. The endpoint policy can be used to specify which S3 buckets the application has access to.
upvoted 23 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A, deploying Amazon API Gateway into a public subnet and adjusting the route table, would not address the issue of data
transfer fees as the application would still be transferring data over the internet.

Option B, deploying a NAT gateway into a public subnet and attaching an endpoint policy, would not address the issue of data transfer
fees either as the NAT gateway is used to enable outbound internet access for instances in a private subnet, rather than for connecting
to S3.

Option C, deploying the application into a public subnet and allowing it to route through an internet gateway, would not reduce data
transfer fees as the application would still be transferring data over the internet.
upvoted 6 times
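
A minimal boto3 sketch of option D (the VPC ID, route table ID, Region and bucket name are placeholders): create a gateway endpoint for S3 on the VPC's route table and attach a policy that only allows the application's bucket.

import json

import boto3

ec2 = boto3.client("ec2")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-photo-bucket/*",
    }],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",                 # gateway endpoints for S3 carry no additional charge
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps(endpoint_policy),
)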

  KVK16 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
To reduce costs, get rid of the NAT gateway and use a VPC endpoint to S3.
upvoted 22 times

  TariqKipkemei Most Recent  1 month, 2 weeks ago


Selected Answer: D
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The best solution to reduce data transfer costs for an application frequently accessing S3 buckets in the same region is option D - Deploy
an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

The key points:


- S3 gateway endpoints allow private connections between VPCs and S3 without going over the public internet.
- This avoids data transfer fees for traffic between the VPC and S3 within the same region.
- An endpoint policy controls access to specific S3 buckets.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. While API Gateway can serve as a proxy for S3 requests, it adds unnecessary complexity and additional costs compared to a direct VPC endpoint.

B. Using a NAT gateway for accessing S3 introduces unnecessary data transfer costs as traffic would still flow over the internet.
C. This approach would incur data transfer fees as the traffic would go through the public internet.

In comparison, option D using an S3 VPC gateway endpoint provides a direct and cost-effective solution for accessing S3 buckets within
the same Region. By keeping the data transfer within the AWS network infrastructure, it helps reduce data transfer fees and provides
secure access to the S3 resources.
upvoted 2 times
  Bmarodi 3 months, 3 weeks ago
Selected Answer: D
Option D is correct answer.
upvoted 1 times

  Erbug 8 months ago


To answer this question, I need a cost comparison of the different gateway types; can someone give me a tip on that?
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  9014 9 months, 3 weeks ago


Selected Answer: D
The answer is D:- Actually, the Application (EC2) is running in the same region...instead of going to the internet, data can be copied
through the VPC endpoint...so there will be no cost because data is not leaving the AWS infra
upvoted 1 times

  JayBee65 9 months, 4 weeks ago


Can somebody please explain this question? Are we assuming the application is running in AWS and that adding the gateway endpoint
avoids the need for the EC2 instance to access the internet and thus avoid costs? Thanks a lot.
upvoted 2 times

  SR0611 9 months, 3 weeks ago


Yes correct
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  yd_h 11 months, 2 weeks ago


Selected Answer: D
FYI :
-There is no additional charge for using gateway endpoints.
-Interface endpoints are priced at ~ $0.01/per AZ/per hour. Cost depends on the Region
- S3 Interface Endpoints resolve to private VPC IP addresses and are routable from outside the VPC (e.g via VPN, Direct Connect, Transit
Gateway, etc). S3 Gateway Endpoints use public IP ranges and are only routable from resources within the VPC.
upvoted 5 times

  123jhl0 11 months, 2 weeks ago


Selected Answer: D
This question is close to Question #4, with the same solution.
upvoted 3 times
Question #73 Topic 1

A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on
an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the
company's internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security
groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.

B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.

C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.

D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of
the bastion host.

E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of
the bastion host.

Correct Answer: CD

Community vote distribution


CD (91%) 9%

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: CD
C because the connection from the on-prem network to the bastion goes over the internet, so the company's external (public) IP range is what the bastion sees.
D because the bastion and the EC2 application instances are in the same VPC, meaning the bastion can reach them via its private IP address.
upvoted 30 times

  Subhrangsu Most Recent  1 week, 1 day ago


Please check first comments from top of them:
Help2023
WherecanIstart
Buruguduystunstugudunstuy
upvoted 1 times

  TariqKipkemei 1 month, 2 weeks ago


Selected Answer: CD
Allows inbound access from the external IP range for the company. Then allow inbound SSH access from only the private IP address of the
bastion host.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: CD
C. This will restrict access to the bastion host from the specific IP range of the on-premises network, ensuring secure connectivity. This
step ensures that only authorized users from the on-premises network can access the bastion host.

D. This step enables SSH connectivity from the bastion host to the application instances in the private subnet. By allowing inbound SSH
access only from the private IP address of the bastion host, you ensure that SSH access is restricted to the bastion host only.
upvoted 2 times
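
In boto3 terms, steps C and D come down to two ingress rules. This is a hedged sketch only; the security group IDs, the company CIDR, and the bastion's private IP are made-up placeholders.

import boto3

ec2 = boto3.client("ec2")

# Step C: the bastion host's security group accepts SSH only from the company's
# external (public) IP range, since on-premises traffic arrives over the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0bastion0000000000",  # placeholder bastion SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "company external IP range"}],
    }],
)

# Step D: the application instances accept SSH only from the bastion host's
# private IP, because both sit in the same VPC.
ec2.authorize_security_group_ingress(
    GroupId="sg-0application000000",  # placeholder application SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.1.10/32",
                      "Description": "bastion host private IP"}],
    }],
)

An even tighter variant is to reference the bastion's security group ID as the source instead of its private IP, which one of the comments in this thread also points out.
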

  stanleyjade 5 months ago


the internal and external IP range is not clear
upvoted 4 times

  PLN6302 1 month, 1 week ago


yes same for me
upvoted 1 times

  km142646 5 months, 1 week ago


The private/public IP address thing is confusing. Ideally, the private instances inbound rule would just allow traffic from the security group
of the bastion host.
upvoted 2 times

  Spiffaz 7 months, 1 week ago


Why external and not internal?
upvoted 2 times
  TariqKipkemei 6 months, 4 weeks ago
Because the traffic goes through the public internet. In the public internet, public IP(external IP) is used.
upvoted 5 times

  Help2023 7 months, 2 weeks ago


Selected Answer: CE
Application is in private subnet
Bastion Host is in public subnet

D does not make sense because the bastion host is in the public subnet and does not have a private IP, only a public IP address attached to it. The IP wanting to connect is public as well.

The bastion host in the public subnet allows the company's external IP (via the internet) to access it. That then leaves us to give permission on the application's private subnet, which is done by having the application instances accept the traffic coming from the bastion host through a change to their SG. C & E
upvoted 2 times

  WherecanIstart 7 months, 1 week ago


The bastion host is in the public subnet because it has a public IP and a route to the internet gateway, which lets traffic leave your AWS VPC; but it also has a private IP and can reach the private subnet over that private address, since that traffic never leaves the VPC. So C & D are the right answers.
upvoted 2 times

  swolfgang 8 months, 3 weeks ago


I don't understand why it is not CE, because the question asks for access through the internet connection to both the servers and the bastion host. I understand they want to access both from the public internet, not from the servers to the bastion host.
upvoted 2 times

  RupeC 2 months, 2 weeks ago


E doesn't seem right to me as this is not a layered approach. i.e. on prem to public subnet, 1st then 2nd bastion to application. That
layering is missed in option E.
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: CD
https://www.examtopics.com/discussions/amazon/view/51356-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: CE
To meet the requirements, the solutions architect should take the following steps:

C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the
company. This will allow the solutions architect to connect to the bastion host from the company's on-premises network through the
internet connection.

E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address
of the bastion host. This will allow the solutions architect to connect to the application instances through the bastion host using SSH.

Note: It's important to ensure that the security groups for the bastion host and application instances are configured correctly to allow the
desired inbound traffic, while still protecting the instances from unwanted access.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Here is why the other options are not correct:

A. Replacing the current security group of the bastion host with one that only allows inbound access from the application instances
would not allow the solutions architect to connect to the bastion host from the company's on-premises network through the internet
connection. The bastion host needs to be accessible from the external network in order to allow the solutions architect to connect to it.

B. Replacing the current security group of the bastion host with one that only allows inbound access from the internal IP range for the
company would not allow the solutions architect to connect to the bastion host from the company's on-premises network through the
internet connection. The internal IP range is not accessible from the external network.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


D. Replacing the current security group of the application instances with one that allows inbound SSH access from only the private
IP address of the bastion host would not allow the solutions architect to connect to the application instances through the bastion
host using SSH. The private IP address of the bastion host is not accessible from the external network, so the solutions architect
would not be able to connect to it from the on-premises network.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: CD
C and D
upvoted 1 times
  Wpcorgan 10 months, 2 weeks ago
C and D
upvoted 1 times

  gcmrjbr 11 months, 1 week ago


CD is Ok.
upvoted 1 times

  Evangelia 11 months, 2 weeks ago


why C? External?
upvoted 2 times

  JayBee65 9 months, 4 weeks ago


Because the IP address exposed to the bastion host will be the external, not the internal, IP address. Google "what's my IP" and you will see that your IP address on the internet is NOT your internal IP.
upvoted 3 times

  ArielSchivo 11 months, 2 weeks ago


Selected Answer: CD
Option C (allow access just from the external IP) and D (allow inbound SSH from the private IP of the bastion host).
upvoted 2 times

  ninjawrz 11 months, 2 weeks ago


Selected Answer: CD
CD is correct
upvoted 2 times
Question #74 Topic 1

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public
subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the
company.
How should security groups be configured in this situation? (Choose two.)

A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.

B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.

C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.

D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.

E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

Correct Answer: AC

Community vote distribution


AC (98%)

  Athena Highly Voted  11 months ago


Selected Answer: AC
Web Server Rules: Inbound traffic from 443 (HTTPS) Source 0.0.0.0/0 - Allows inbound HTTPS access from any IPv4 address
Database Rules : 1433 (MS SQL)The default port to access a Microsoft SQL Server database, for example, on an Amazon RDS instance

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html
upvoted 18 times

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: AC
EC2 web on public subnets + EC2 SQL on private subnet + security is high priority. So, Option A to allow HTTPS from everywhere. Plus
option C to allow SQL connection from the web instance.
upvoted 14 times

  TariqKipkemei Most Recent  1 month, 2 weeks ago


Selected Answer: AC
Allow inbound traffic on port 443 from 0.0.0.0/0 on the web tier. Then allow inbound traffic on port 1433 from the security group for the
web tier on the database tier.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: AC
The security group for the web tier should allow inbound traffic on port 443 from 0.0.0.0/0. This will allow clients to connect to the web tier
using HTTPS. The security group for the web tier should also allow outbound traffic on port 443 to 0.0.0.0/0. This will allow the web tier to
connect to the internet to download updates and other resources.

The security group for the database tier should allow inbound traffic on port 1433 from the security group for the web tier. This will allow
the web tier to connect to the database tier to access data. The security group for the database tier should not allow outbound traffic on
ports 443 and 1433 to the security group for the web tier. This will prevent the database tier from being exposed to the public internet.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
A. This configuration allows external users to access the web tier over HTTPS (port 443). However, it's important to note that it is generally
recommended to restrict the source IP range to a more specific range rather than allowing access from 0.0.0.0/0 (anywhere). This would
limit access to only trusted sources.

C. By allowing inbound traffic on port 1433 (default port for Microsoft SQL Server) from the security group associated with the web tier,
you ensure that the database tier can only be accessed by the EC2 instances in the web tier. This provides a level of isolation and restricts
direct access to the database tier from external sources.
upvoted 2 times
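
As a concrete sketch of answers A and C, the two rules can be expressed like this in boto3 (the security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A: web tier accepts HTTPS (443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0webtier0000000000",  # placeholder web tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# C: database tier accepts SQL Server (1433) only from the web tier's security
# group, so the database is never reachable from the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dbtier00000000000",  # placeholder DB tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-0webtier0000000000"}],
    }],
)
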

  Abrar2022 4 months, 2 weeks ago


DB tier: port 1433 is the well-known default for SQL Server and should be used.
Web tier: port 443 (HTTPS).
upvoted 2 times
  beginnercloud 4 months, 2 weeks ago
Selected Answer: AC
AC is correct
upvoted 1 times

  WherecanIstart 7 months, 1 week ago


A & C are the correct answer.

Inbound traffic to the web tier on port 443 (HTTPS)


The web tier will then access the Database tier on port 1433 - inbound.
upvoted 1 times

  techhb 8 months, 3 weeks ago


Selected Answer: AC
AC: 443 (HTTPS) inbound and 1433 (SQL Server).
Security groups => focus on inbound traffic, since outbound traffic is allowed by default.
upvoted 2 times

  aba2s 8 months, 3 weeks ago


Selected Answer: AC
Security groups => focus on inbound traffic, since outbound traffic is allowed by default.
upvoted 2 times

  orionizzie 9 months, 1 week ago


why both are inbound rules
upvoted 1 times

  kraken21 6 months, 1 week ago


Because security groups are stateful.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: CE
***CORRECT***
The correct answers are C and E.

For security purposes, it is best practice to limit inbound and outbound traffic as much as possible. In this case, the web tier should only
be able to access the database tier and not the other way around. Therefore, the security group for the web tier should only allow
outbound traffic to the security group for the database tier on the necessary ports. Similarly, the security group for the database tier
should only allow inbound traffic from the security group for the web tier on the necessary ports.

Answer C: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
This is correct because the web tier needs to be able to connect to the database on port 1433 in order to access the data.
upvoted 1 times

  PassNow1234 9 months, 1 week ago


This is WRONG. Browse to a website and type :443 at the end of it. IT will translate to HTTPS. HTTPS = 443.

answers are A and C


upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Answer E: Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for
the web tier. This is correct because the web tier needs to be able to connect to the database on both port 443 and 1433 in order to
access the data.

***WRONG***
Answer A: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. This is not correct because
the web tier should not allow inbound traffic from the internet. Instead, it should only allow outbound traffic to the security group for
the database tier.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Answer B: Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0. This is not correct
because the web tier should not allow outbound traffic to the internet. Instead, it should only allow outbound traffic to the security
group for the database tier.

Answer D: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group
for the web tier. This is not correct because the database tier should not allow outbound traffic to the web tier. Instead, it should only
allow inbound traffic from the security group for the web tier on the necessary ports.
upvoted 1 times

  techhb 8 months, 3 weeks ago


ChatGPT is unreliable; this answer is from it.
upvoted 1 times
  career360guru 9 months, 2 weeks ago
Selected Answer: AC
A and C
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A and C
upvoted 1 times

  gcmrjbr 11 months, 1 week ago


Agree with AC.
upvoted 2 times

  srcshekhar 11 months, 3 weeks ago


Very good questions
upvoted 3 times
Question #75 Topic 1

A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application
consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes
overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?

A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service
(Amazon SQS) as the communication layer between application services.

B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the
performance failures. Increase the size of the application server's Amazon EC2 instances to meet the peak requirements.

C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an
Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.

D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto
Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.

Correct Answer: A

Community vote distribution


A (82%) D (18%)

  gcmrjbr Highly Voted  11 months, 1 week ago


Agree with A>>> Lambda = serverless + autoscale (modernize), SQS= decouple (no more drops)
upvoted 24 times

  LuckyAro Highly Voted  8 months ago


Selected Answer: A
The catch phrase is "scale up when communication failures are detected" Scaling should not be based on communication failures, that'll
be crying over spilled milk ! or rather too late. So D is wrong.
upvoted 11 times

  remand 8 months ago


it says "one tier becomes overloaded" , Not communication failure...
upvoted 2 times

  LuckyAro 7 months, 3 weeks ago


D says: "Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected".
upvoted 3 times

  vijaykamal Most Recent  5 days, 18 hours ago


I feel the answer is D. Lambda would increase the complexity and overhead, and it has a 15-minute execution limit.
upvoted 1 times

  TariqKipkemei 1 month, 2 weeks ago


Selected Answer: A
MOST operationally efficient = Serverless = AWS Lambda functions, Amazon Simple Queue Service
upvoted 1 times

  zjcorpuz 2 months, 1 week ago


A and D are both good solutions; however, A best satisfies the requirement because it is the most operationally efficient for modern applications. AWS Lambda scales elastically when the application becomes overloaded, and it is serverless. SQS will handle the queuing as well.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
This solution addresses the issue of dropped transactions by decoupling the communication between application tiers using SQS. It
ensures that transactions are not lost even if one tier becomes overloaded.

By using EC2 in ASG, the application can automatically scale based on the demand and the length of the SQS. This allows for efficient
utilization of resources and ensures that the application can handle increased workload and communication failures.

CloudWatch is used to monitor the length of SQS. When queue length exceeds a certain threshold, indicating potential communication
failures, the ASG can be configured to scale up by adding more instances to handle the load.

A. This solution utilizes Lambda and API Gateway, which can be a valid approach for building serverless applications. However, it may introduce additional complexity and operational overhead compared to the requirement of modernizing an existing multi-tiered application.
upvoted 3 times

  MutiverseAgent 2 months, 3 weeks ago


Supposing the solution is D, what is the point of monitoring the SQS queue length if the system then scales up when communication failures are detected? Why not monitor the number of failures? Is it OK to assume that when the queue grows the system is failing? What if the system is simply under more demand? So my guess is that the solution is A.
upvoted 1 times

  prakashiyyanarappan 5 months, 1 week ago


ANS: A Key word - RESTful services - Amazon API Gateway
upvoted 4 times

  ajaynaik44 5 months, 3 weeks ago


Must be D :
Please refer to thread https://pupuweb.com/aws-saa-c02-actual-exam-question-answer-dumps-3/6/
upvoted 1 times

  hemantjv 5 months, 4 weeks ago


@Buruguduystunstugudunstuy Kindly share your comments for this question
upvoted 1 times

  remand 8 months ago


Selected Answer: D
Must be D.
A is incorrect. Even though Lambda could auto scale, SQS communication between tiers does not address dropped transactions per se: SQS lets consumers read in a controlled way (serially with FIFO or not), but your application code determines how many threads are spawned to process those messages. That could still overload the tier.
upvoted 4 times

  bullrem 8 months, 2 weeks ago


D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an
Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
This solution allows for horizontal scaling of the application servers and allows for automatic scaling based on communication failures,
which can help prevent transactions from being dropped when one tier becomes overloaded. It also provides a more modern and
operationally efficient way to handle communication between application services compared to traditional RESTful services.
upvoted 3 times

  goodmail 8 months, 3 weeks ago


Selected Answer: A
Can be A only. Other 3 answers use CloudWatch, which does not make sense for this question.
upvoted 2 times

  techhb 8 months, 3 weeks ago


Selected Answer: A
Serverless and decoupled.
upvoted 2 times

  Parsons 9 months ago


Selected Answer: A
Serverless (Lambda) + decoupling (SQS) is what a modernized application looks like.
The flow is: API Gateway --> SQS (acts as the decoupling layer) --> Lambda functions (act as subscribers that pull messages from the queue and process them).
upvoted 3 times
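
To make the decoupling concrete, here is a minimal sketch of the Lambda side of that flow, assuming an SQS event source mapping is configured on the queue. The message format and the process_job helper are illustrative only.

import json

def handler(event, context):
    # Lambda receives a batch of SQS messages; anything not yet processed simply
    # stays in the queue, so no transactions are dropped when a tier is busy.
    for record in event["Records"]:
        job = json.loads(record["body"])  # the job item posted through API Gateway
        process_job(job)

def process_job(job):
    # Placeholder for the real, stateless business logic.
    print(f"processing job {job.get('id')}")
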

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  benaws 9 months, 3 weeks ago


Selected Answer: A
EC2 is not modern...
upvoted 1 times

  John_Zhuang 9 months ago


lmao...
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times
Question #76 Topic 1

A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files
stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon
S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because
the data is considered sensitive.
Which solution offers the MOST reliable data transfer?

A. AWS DataSync over public internet

B. AWS DataSync over AWS Direct Connect

C. AWS Database Migration Service (AWS DMS) over public internet

D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect

Correct Answer: B

Community vote distribution


B (100%)

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: B
DMS is for databases and here refers to “JSON files”. Public internet is not reliable. So best option is B.
upvoted 24 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
***CORRECT***
The most reliable solution for transferring the data in a secure manner would be option B: AWS DataSync over AWS Direct Connect.

AWS DataSync is a data transfer service that uses network optimization techniques to transfer data efficiently and securely between on-
premises storage systems and Amazon S3 or other storage targets. When used over AWS Direct Connect, DataSync can provide a
dedicated and secure network connection between your on-premises data center and AWS. This can help to ensure a more reliable and
secure data transfer compared to using the public internet.
upvoted 8 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A, AWS DataSync over the public internet, is not as reliable as using Direct Connect, as it can be subject to potential network
issues or congestion.

Option C, AWS Database Migration Service (DMS) over the public internet, is not a suitable solution for transferring large amounts of
data, as it is designed for migrating databases rather than transferring large amounts of data from a storage area network (SAN).

Option D, AWS DMS over AWS Direct Connect, is also not a suitable solution, as it is designed for migrating databases and may not be
efficient for transferring large amounts of data from a SAN.
upvoted 6 times

  doorahmie 8 months, 1 week ago


explanation about D option is good
upvoted 1 times

  TariqKipkemei Most Recent  1 month, 2 weeks ago


Selected Answer: B
Secure and Most reliable transfer = AWS DataSync over AWS Direct Connect
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
AWS DataSync is designed for large scale, high speed data transfer between on-prem and S3.
Using AWS Direct Connect provides a dedicated, private connection for reliable, consistent data transfer.
DataSync seamlessly handles data replication, encryption, recovery etc.
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


Not over public hence AC out / DMS is for databases and here refers to “JSON files”.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
DataSync is a service specifically designed for data transfer and synchronization between on-premises storage systems and AWS storage
services like S3. It provides reliable and efficient data transfer capabilities, ensuring the secure movement of large volumes of data.

By leveraging Direct Connect, which establishes a dedicated network connection between the on-premises data center and AWS, the data
transfer is conducted over a private and dedicated network link. This approach offers increased reliability, lower latency, and consistent
network performance compared to transferring data over the public internet.

Database Migration Service is primarily focused on database migration and replication, and it may not be the most appropriate tool for
general-purpose data transfer like JSON files.

Transferring data over the public internet may introduce potential security risks and performance variability due to factors like network
congestion, latency, and potential interruptions.
upvoted 2 times
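
For anyone curious what option B looks like in code, this is a rough boto3 sketch, assuming the SAN is exposed to a DataSync agent as an NFS share. Every hostname, ARN, and bucket name below is a placeholder.

import boto3

datasync = boto3.client("datasync")

# On-premises source: an NFS export on the factory SAN, reached via a DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="san.factory.example.com",
    Subdirectory="/exports/instrumentation",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"]},
)

# Destination: the S3 bucket that the analytics systems read from.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::factory-instrumentation-data",
    S3Config={"BucketAccessRoleArn":
              "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="factory-to-s3-daily",
)

# The transfer rides over Direct Connect when the agent's network path to the
# DataSync service endpoint is routed through that connection.
datasync.start_task_execution(TaskArn=task["TaskArn"])
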

  beginnercloud 4 months, 1 week ago


Best option and correct is B
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


Selected Answer: B
As Ariel suggested, and rightly so: DMS is for databases, and here the question refers to JSON files. The public internet is not reliable, so B.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B. DMS is not needed as there is no Database migration requirement.
upvoted 1 times

  Wajif 10 months, 1 week ago


Selected Answer: B
Public internet options automatically out being best-effort. DMS is for database migration service and here they have to just transfer the
data to S3. Hence B.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  yd_h 11 months, 2 weeks ago


B

- A SAN presents storage devices to a host such that the storage appears to be locally attached. ( NFS is, or can be, a SAN -
https://serverfault.com/questions/499185/is-san-storage-better-than-nfs )
- AWS Direct Connect does not encrypt your traffic that is in transit by default. But the connection is private
(https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html)
upvoted 4 times
Question #77 Topic 1

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms
data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?

A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data
Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the
Kinesis Data Firehose delivery stream to send the data to Amazon S3.

B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use
AWS Glue to transform the data and to send the data to Amazon S3.

C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery
stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose
delivery stream to send the data to Amazon S3.

D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send
the data to Amazon S3.

Correct Answer: C

Community vote distribution


C (96%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
(A) - You don't need to deploy an EC2 instance to host an API - Operational overhead
(B) - Same as A
(**C**) - Is the answer
(D) - AWS Glue gets data from S3, not from API GW. AWS Glue can do ETL by itself, so you don't need Lambda. Nonsense.
https://aws.amazon.com/glue/
upvoted 34 times

  Futurebones 4 months, 3 weeks ago


I don't understand why we should use Lambda in between to transform the data. To me, Kinesis Data Firehose is enough, as it is an extract, transform, and load (ETL) service.
upvoted 3 times

  TariqKipkemei Most Recent  1 month, 2 weeks ago


Selected Answer: C
The company needs an API = Amazon API Gateway API
A real-time data ingestion = Amazon Kinesis data stream
A process that transforms data = AWS Lambda functions
Kinesis Data Firehose delivery stream to send the data to Amazon S3
A storage solution for the data = Amazon S3
upvoted 4 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
Option C provides the least operational overhead to meet the requirements:

API Gateway provides the API


Kinesis Data Streams ingests the real-time data
Lambda functions transform the data
Firehose delivers the data to S3 storage
The key advantages are:

Serverless architecture requires minimal operational overhead


Fully managed ingestion, processing and storage services
No need to manage EC2 instances
upvoted 1 times

  diabloexodia 2 months, 2 weeks ago


Requirements:
API- API gateway
Real time data ingestion - AWS Kinesis data stream
ETL(Extract Transform Load) - Kinesis Firehose
Storage- S3
upvoted 3 times
  tamefi5512 3 months ago
Selected Answer: C
C - is the answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
C. By leveraging these services together, you can achieve a real-time data ingestion architecture with minimal operational overhead. The
data flows from the API Gateway to the Kinesis data stream, undergoes transformations with Lambda, and is then sent to S3 via the
Kinesis Data Firehose delivery stream for storage.

A. This adds operational overhead as you need to handle EC2 management, scaling, and maintenance. It is less efficient compared to
using a serverless solution like API Gateway.

B. It requires deploying and managing an EC2 to host the API and configuring Glue. This adds operational overhead, including EC2
management and potential scalability limitations.

D. It still requires managing and configuring Glue, which adds operational overhead. Additionally, it may not be the most efficient solution
as Glue is primarily used for ETL scenarios, and in this case, real-time data transformation is required.
upvoted 2 times
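
The "transform the data" piece of option C is typically a small Lambda function invoked by Kinesis Data Firehose. The sketch below follows the standard Firehose transformation contract (base64-encoded records in, records with a result flag out); the added "processed" field is just a placeholder transformation.

import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # placeholder transformation
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
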

  winzzhhzzhh 4 months, 2 weeks ago


Selected Answer: D
I am gonna choose D for this.
Kinesis Data Streams + Data Firehose adds to the operational overhead, plus Firehose is "near real-time", not a real-time solution.
Lambda functions scale automatically, so no management of scaling/compute resources is needed.
AWS Glue handles the data storage in S3, so no management of that is needed.
upvoted 2 times

  UnluckyDucky 6 months, 2 weeks ago


Gotta love all those chatgpt answers y'all are throwing at us.

Kinesis Firehose is NEAR real-time, not real-time like your bots tell you.
upvoted 2 times

  bullrem 8 months, 1 week ago


Selected Answer: C
option C is the best solution. It uses Amazon Kinesis Data Firehose which is a fully managed service for delivering real-time streaming data
to destinations such as Amazon S3. This service requires less operational overhead as compared to option A, B, and D. Additionally, it also
uses Amazon API Gateway which is a fully managed service for creating, deploying, and managing APIs. These services help in reducing
the operational overhead and automating the data ingestion process.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
Option C is the solution that meets the requirements with the least operational overhead.

In Option C, you can use Amazon API Gateway as a fully managed service to create, publish, maintain, monitor, and secure APIs. This
means that you don't have to worry about the operational overhead of deploying and maintaining an EC2 instance to host the API.

Option C also uses Amazon Kinesis Data Firehose, which is a fully managed service for delivering real-time streaming data to destinations
such as Amazon S3. With Kinesis Data Firehose, you don't have to worry about the operational overhead of setting up and maintaining a
data ingestion infrastructure.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Finally, Option C uses AWS Lambda, which is a fully managed service for running code in response to events. With AWS Lambda, you
don't have to worry about the operational overhead of setting up and maintaining a server to run the data transformation code.

Overall, Option C provides a fully managed solution for real-time data ingestion with minimal operational overhead.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A is incorrect because it involves deploying an EC2 instance to host an API, which adds operational overhead in the form of
maintaining and securing the instance.

Option B is incorrect because it involves deploying an EC2 instance to host an API and disabling source/destination checking on the
instance. Disabling source/destination checking can make the instance vulnerable to attacks, which adds operational overhead in
the form of securing the instance.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D is incorrect because it involves using AWS Glue to send the data to Amazon S3, which adds operational overhead in the
form of maintaining and securing the AWS Glue infrastructure.
Overall, Option C is the best choice because it uses fully managed services for the API, data transformation, and data delivery,
which minimizes operational overhead.
upvoted 2 times
  career360guru 9 months, 2 weeks ago
Selected Answer: C
Option C
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  Cristian93 11 months, 1 week ago


Selected Answer: C
C is correct answer
upvoted 2 times
Question #78 Topic 1

A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?

A. Use DynamoDB point-in-time recovery to back up the table continuously.

B. Use AWS Backup to create backup schedules and retention policies for the table.

C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle
configuration for the S3 bucket.

D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to
back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

Correct Answer: B

Community vote distribution


B (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
Answer is B
"Amazon DynamoDB offers two types of backups: point-in-time recovery (PITR) and on-demand backups. (==> D is not the answer)
PITR is used to recover your table to any point in time in a rolling 35 day window, which is used to help customers mitigate accidental
deletes or writes to their tables from bad code, malicious access, or user error. (==> A isn't the answer)
On demand backups are designed for long-term archiving and retention, which is typically used to help customers meet compliance and
regulatory requirements.
This is the second of a series of two blog posts about using AWS Backup to set up scheduled on-demand backups for Amazon DynamoDB.
Part 1 presents the steps to set up a scheduled backup for DynamoDB tables from the AWS Management Console." (==> Not the
DynamoBD console and C isn't the answer either)
https://aws.amazon.com/blogs/database/part-2-set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 38 times

  MutiverseAgent 2 months, 3 weeks ago


Dynamo backups cannot be scheduled or sent to S3, so answer should be B)
1) https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
2) https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Backup.Tutorial.html
upvoted 1 times

  LuckyAro 8 months, 1 week ago


I think the answer is C because of storage time.
upvoted 1 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
The most operationally efficient solution that meets these requirements would be to use option B, which is to use AWS Backup to create
backup schedules and retention policies for the table.

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS resources. It
allows you to create backup policies and schedules to automatically back up your DynamoDB tables on a regular basis. You can also
specify retention policies to ensure that your backups are retained for the required period of time. This solution is fully automated and
requires minimal maintenance, making it the most operationally efficient option.
upvoted 8 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, using DynamoDB point-in-time recovery, is also a viable option but it requires continuous backup, which may be more
resource-intensive and may incur higher costs compared to using AWS Backup.

Option C, creating an on-demand backup of the table and storing it in an S3 bucket, is also a viable option but it requires manual
intervention and does not provide the automation and scheduling capabilities of AWS Backup.

Option D, using Amazon EventBridge (CloudWatch Events) and a Lambda function to back up the table and store it in an S3 bucket, is
also a viable option but it requires more complex setup and maintenance compared to using AWS Backup.
upvoted 7 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: B
The key advantages of using AWS Backup are:

Fully managed backup service requiring minimal operational overhead


Built-in scheduling, retention policies, and backup monitoring
Supports point-in-time restore for DynamoDB
Automated and scalable solution
upvoted 1 times
  tamefi5512 3 months ago
Selected Answer: B
B is the answer because it's easy to set up via AWS Backup, and the question uses the keyword "MOST operationally efficient". The other answers lean toward cost efficiency.
upvoted 1 times

  cookieMr 3 months, 1 week ago


AWS Backup is a fully managed backup service that simplifies the process of creating and managing backups across various AWS services,
including DynamoDB. It allows you to define backup schedules and retention policies to automatically take backups and retain them for
the desired duration. By using AWS Backup, you can offload the operational overhead of managing backups to the service itself, ensuring
that your data is protected and retained according to the specified retention period.

This solution is more efficient compared to the other options because it provides a centralized and automated backup management
approach specifically designed for AWS services. It eliminates the need to manually configure and maintain backup processes, making it
easier to ensure data retention compliance without significant operational effort.
upvoted 2 times
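
A rough boto3 sketch of what that looks like: a daily backup rule with roughly 7 years of retention, plus a selection that assigns the DynamoDB table to the plan. The vault name, IAM role, account ID, and table ARN are placeholders.

import boto3

backup = boto3.client("backup")

# Daily backups kept for about 7 years (7 * 365 days).
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "dynamodb-7-year-retention",
    "Rules": [{
        "RuleName": "daily-7-years",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC every day
        "Lifecycle": {"DeleteAfterDays": 7 * 365},
    }],
})

# Assign the transaction table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "transactions-table",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/UserTransactions"],
    },
)
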

  Rahul2212 3 months, 2 weeks ago


A
PITR is used to recover your table to any point in time in a rolling 35 day window, which is used to help customers mitigate accidental
deletes or writes to their tables from bad code, malicious access, or user error. (==> A is the answer)
upvoted 1 times

  Abrar2022 4 months, 1 week ago


Using AWS Backup is cheaper than DynamoDB point-in-time recovery.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: B
With less overhead is AWS Backups:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/backuprestore_HowItWorksAWS.html
upvoted 1 times

  klayytech 6 months ago


Selected Answer: B
To retain data for 7 years in an Amazon DynamoDB table, you can use AWS Backup to create backup schedules and retention policies for
the table. You can also use DynamoDB point-in-time recovery to back up the table continuously.
upvoted 1 times

  test_devops_aws 6 months, 2 weeks ago


Selected Answer: B
B = AWS backup
upvoted 1 times

  Jiggs007 8 months, 2 weeks ago


C is correct because we have to store the data in S3 with an S3 Lifecycle configuration on the bucket for the 7 years, and it uses an on-demand backup of the table from the DynamoDB console: if you need to store backups of your data for longer than 35 days, you can use on-demand backup. On-demand backups give you a fully consistent snapshot of your table data and stay around forever (even after the table is deleted).
upvoted 2 times

  Mainroad4822 6 months, 2 weeks ago


In an AWS Backup plan, you can choose 7-year retention with a daily, weekly, or monthly frequency. From an operational perspective, I think B is correct.
upvoted 1 times

  LuckyAro 8 months, 1 week ago


I think you are correct
upvoted 1 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: B
B. Use AWS Backup to create backup schedules and retention policies for the table.

AWS Backup is a fully managed service that makes it easy to centralize and automate the backup of data across AWS resources. It can be
used to create backup schedules and retention policies for DynamoDB tables, which will ensure that the data is retained for the desired
period of 7 years. This solution will provide the most operationally efficient method for meeting the requirements because it requires
minimal effort to set up and manage.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B AWS Backup
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
AWS Backup
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 2 times

  mabotega 10 months, 2 weeks ago


Selected Answer: B
We recommend you use AWS Backup to automatically delete the backups that you no longer need by configuring your lifecycle when you
created your backup plan.
https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html
upvoted 1 times

  SimonPark 11 months, 1 week ago


Selected Answer: B
B is clear
upvoted 2 times
Question #79 Topic 1

A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not
be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very
quickly.
What should a solutions architect recommend?

A. Create a DynamoDB table in on-demand capacity mode.

B. Create a DynamoDB table with a global secondary index.

C. Create a DynamoDB table with provisioned capacity and auto scaling.

D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

Correct Answer: A

Community vote distribution


A (76%) C (24%)

  SimonPark Highly Voted  11 months, 1 week ago


Selected Answer: A
On-demand mode is a good option if any of the following are true:
- You create new tables with unknown workloads.
- You have unpredictable application traffic.
- You prefer the ease of paying for only what you use.
upvoted 32 times
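
In the API, "on-demand" is simply the PAY_PER_REQUEST billing mode on the table, as in this minimal sketch (table and attribute names are placeholders):

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity: nothing to provision, quiet mornings cost nothing per hour,
# and sudden evening spikes are absorbed automatically.
dynamodb.create_table(
    TableName="UnpredictableTrafficTable",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
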

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
**A** - On demand is the answer -
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDem
and
B - not related with the unpredictable traffic
C - provisioned capacity is recommended for known patterns. Not the case here.
D - same as C
upvoted 15 times

  NasosoAuxtyno 7 months ago


Thanks. Your reference link perfectly supports the option "A". 100% correct
upvoted 1 times

  clark777 Most Recent  1 week, 2 days ago


https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
With on-demand capacity mode, DynamoDB charges you for the data reads and writes your application performs on your tables. You do
not need to specify how much read and write throughput you expect your application to perform because DynamoDB instantly
accommodates your workloads as they ramp up or down.

With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and
you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB
provisioned capacity and optimize your costs even further.
upvoted 1 times

  TariqKipkemei 1 month, 2 weeks ago


Selected Answer: A
With on-demand capacity mode, DynamoDB instantly accommodates your workloads as they ramp up or down.
upvoted 1 times

  ontheyun 3 months ago


on-demand capacity : unpredictable application traffic
provisioned capacity : predictable application traffic, run applications whose traffic is consistent, and ramps up or down gradually.
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By choosing provisioned capacity, you can allocate a specific amount of read and write capacity units based on your expected usage
during peak times. This helps in cost optimization as you only pay for the provisioned capacity, which can be adjusted according to your
anticipated traffic.

Enabling auto scaling allows DynamoDB to automatically adjust the provisioned capacity up or down based on the actual usage. This is
beneficial in handling quick traffic spikes without manual intervention and ensuring that the required capacity is available to handle
increased load efficiently. Auto scaling helps to optimize costs by dynamically adjusting the capacity to match the demand, avoiding
overprovisioning during periods of low usage.

A. Creating a DynamoDB table in on-demand capacity mode, may not be the most cost-effective solution in this scenario. On-demand
capacity mode charges you based on the actual usage of read and write requests, which can be beneficial for sporadic or unpredictable
workloads. However, it may not be the optimal choice if the table is not used on most mornings.
upvoted 7 times
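
For comparison, option C would be provisioned capacity managed through Application Auto Scaling, roughly as sketched below (table name, limits, and target value are placeholders; the same calls would be repeated for write capacity). Because target tracking reacts over minutes rather than seconds, very fast spikes still tend to favor on-demand mode.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Make the table's read capacity a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/UnpredictableTrafficTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy that aims for ~70% read capacity utilization.
autoscaling.put_scaling_policy(
    PolicyName="read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/UnpredictableTrafficTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
)
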
  beginnercloud 4 months, 1 week ago
Selected Answer: A
Correct answer is A

- You create new tables with unknown workloads.
- You have unpredictable application traffic.
- You prefer the ease of paying for only what you use.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


Selected Answer: A
"On-demand" is a good option for applications that have unpredictable or sudden spikes, since it automatically provisions read/write
capacity.

"Provisioned capacity" is suitable for applications with predictable usage.


upvoted 1 times

  cheese929 5 months, 1 week ago


Selected Answer: A
Answer is A.
Provisioned capacity is best if you have relatively predictable application traffic, run applications whose traffic is consistent, and ramps up
or down gradually.
On-demand capacity mode is best when you have unknown workloads, unpredictable application traffic and also if you only want to pay
exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in
seconds or minutes, and when under-provisioned capacity would impact the user experience.
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 2 times

  velikivelicu 5 months, 3 weeks ago


Selected Answer: A
For unpredictable cases there's no way you can provision something, as it cannot be predicted, so the answer is A
upvoted 1 times

  linux_admin 6 months ago


Selected Answer: A
On-demand capacity mode allows a DynamoDB table to automatically scale up or down based on the traffic to the table. This means that
capacity will be allocated as needed and billing will be based on actual usage, providing flexibility in capacity while minimizing costs. This
is an ideal choice for a table that is not used on most mornings and has unpredictable traffic spikes in the evenings.
upvoted 1 times

  datz 6 months, 2 weeks ago


Selected Answer: A
Unpredictable application traffic means the answer is on-demand capacity.

"This means that provisioned capacity is probably best for you if you have relatively predictable application traffic, run applications whose
traffic is consistent, and ramps up or down gradually.

Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic
and also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable
workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience."
upvoted 2 times

  mell1222 6 months, 3 weeks ago


Selected Answer: A
Use on-demand capacity mode: With on-demand capacity mode, DynamoDB automatically scales up and down to handle the traffic
without requiring any capacity planning. This way, the company only pays for the actual amount of read and write capacity used, with no
minimums or upfront costs.
upvoted 1 times

  Help2023 7 months, 2 weeks ago


Selected Answer: A
A. This is because the traffic spikes have no set time; they can happen at any time, whether morning or evening.
upvoted 1 times

  bullrem 8 months, 1 week ago


Selected Answer: C
C. Create a DynamoDB table with provisioned capacity and auto scaling. This will allow the table to automatically scale its capacity based
on usage patterns, which will help to optimize costs by reducing the amount of unused capacity during low traffic times and ensuring that
sufficient capacity is available during traffic spikes.
upvoted 4 times
  LuckyAro 8 months, 1 week ago
Selected Answer: C
The usage pattern is not unknown; it was well laid out in the question. I think C is the correct answer.
upvoted 4 times

  BlueVolcano1 8 months, 2 weeks ago


Selected Answer: A
I have a feeling that the need for cost-optimisation is a distractor, and that people will jump on "provisioned with auto-scaling" without
considering that provisioned capacity mode is not a good fit for the requirements. On-demand may end up cheaper as you avoid over- or
underprovisioning capacity (when using auto-scaling, you still need to define a min and max). You can later switch capacity mode once
your usage pattern becomes stable (if it ever does).

AWS say that on-demand capacity mode is a good fit for:


- Unpredictable workloads with sudden spikes (mentioned in the requirements)
- Frequently idle workloads (where the DB isn't used at all; The requirements say that it won't be used most mornings)
- Events with unknown traffic (which this is - traffic in the evenings is unpredictable)

Whereas provisioned capacity mode is used for:


- Predictable workloads
- Gradual ramps (no sudden spikes, as auto-scaling isn't instant and can cause traffic to get throttled)
- Events with known traffic

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 3 times
Question #80 Topic 1

A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A
solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI
is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt
EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?

A. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner's AWS account to use the key.

B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to allow
the MSP Partner's AWS account to use the key.

C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to trust a
new KMS key that is owned by the MSP Partner for encryption.

D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account, Encrypt the S3 bucket with a new KMS
key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.

Correct Answer: B

Community vote distribution


B (89%) 6%

  Sauran Highly Voted  11 months, 2 weeks ago


Selected Answer: B
Share the existing KMS key with the MSP external account because it has already been used to encrypt the AMI snapshot.

https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 15 times

  ManoAni Highly Voted  11 months, 1 week ago


Selected Answer: B
If EBS snapshots are encrypted, then we need to share the same KMS key to partners to be able to access it. Read the note section in the
link
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
upvoted 5 times

  TariqKipkemei Most Recent  1 month, 2 weeks ago


Selected Answer: B
Share the AMI and Key with the MSP Partner's AWS account only
upvoted 1 times

  tamefi5512 3 months ago


Selected Answer: B
B - is the Answer
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
By modifying the launchPermission property of the AMI and sharing it with the MSP Partner's account only, the solutions architect
restricts access to the AMI and ensures that it is not publicly available.

Additionally, modifying the key policy to allow the MSP Partner's account to use KMS customer managed key used for encrypting the EBS
snapshots ensures that the MSP Partner has the necessary permissions to access and use the key for decryption.
upvoted 2 times
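
Put together, option B is three small pieces: share the AMI, share its backing snapshot, and let the partner account use the existing customer managed key. A hedged boto3 sketch, with every ID and account number below being a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Share the AMI with the partner account only (no public launch permission).
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "444455556666"}]},
)

# The encrypted EBS snapshot behind the AMI must be shared as well.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["444455556666"],
)

# Statement to merge into the existing customer managed key's policy so the
# partner account can decrypt the snapshot (applied with kms.put_key_policy or
# in the console; shown here as data only).
partner_key_statement = {
    "Sid": "AllowMSPPartnerUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
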

  Abrar2022 4 months, 2 weeks ago


CORRECTION to my last comment: Option B is correct, not A.

Explanation: making the AMI and snapshots publicly available is not a secure option, as it would allow anyone with access to the AMI to use it. Best practice is to share the AMI with the MSP Partner's AWS account only by modifying the launchPermission property of the AMI. This ensures that the AMI is shared only with the MSP Partner and is encrypted with a key that they are authorised to use.
upvoted 1 times

  Abrar2022 4 months, 2 weeks ago


Selected Answer: A
Option A, making the AMI and snapshots publicly available, is not a secure option as it would allow anyone with access to the AMI to use
it. Best practice would be to share the AMI with the MSP Partner's AWS account then Modify launchPermission property of the AMI. This
ensures that the AMI is shared only with the MSP Partner and is encrypted with a key that they are authorised to use.
upvoted 1 times


  draum010 6 months, 1 week ago


Selected Answer: D
Option D
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: B
***CORRECT***
B. Modify the launchPermission property of the AMI.

The most secure way for the solutions architect to share the AMI with the MSP Partner's AWS account would be to modify the
launchPermission property of the AMI and share it with the MSP Partner's AWS account only. The key policy should also be modified to
allow the MSP Partner's AWS account to use the key. This ensures that the AMI is only shared with the MSP Partner and is encrypted with a
key that they are authorized to use.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A, making the AMI and snapshots publicly available, is not a secure option as it would allow anyone with access to the AMI to
use it.

Option C, modifying the key policy to trust a new KMS key owned by the MSP Partner, is also not a secure option as it would involve
sharing the key with the MSP Partner, which could potentially compromise the security of the data encrypted with the key.

Option D, exporting the AMI to an S3 bucket in the MSP Partner's AWS account and encrypting the S3 bucket with a new KMS key
owned by the MSP Partner, is also not the most secure option as it involves sharing the AMI and a new key with the MSP Partner, which
could potentially compromise the security of the data.
upvoted 6 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: B
Must use and share the existing KMS key to decrypt the same key
upvoted 3 times

  flbcobra 10 months, 3 weeks ago


Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 1 times

  tubtab 11 months, 2 weeks ago


Selected Answer: C
MOST secure way should be C
upvoted 2 times

  Chunsli 11 months, 2 weeks ago


MOST secure way should be C, with a separate key, not the one already used.
upvoted 1 times

  Sauran 11 months, 2 weeks ago


A seperate/new key is not possible because it won't be able to decrypt the AMI snapshot which was already encrypted with the
existing/old key.
upvoted 10 times

  UWSFish 11 months, 1 week ago


This is truth
upvoted 2 times

  Jtic 10 months, 3 weeks ago


Must use and share the existing KMS key to decrypt the same key
upvoted 1 times
Question #81 Topic 1

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while
adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The
solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?

A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.

B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on network usage.

C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.

D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

Correct Answer: C

Community vote distribution


C (100%)

  Marge_Simpson Highly Voted  9 months, 3 weeks ago


Selected Answer: C
decoupled = SQS
Launch template = AMI
Launch configuration = EC2
upvoted 27 times

  darekw Most Recent  4 weeks, 1 day ago


https://aws.amazon.com/about-aws/whats-new/2021/03/aws-certificate-manager-provides-certificate-expiry-monitoring-through-amazon-
cloudwatch/
upvoted 1 times

  TariqKipkemei 1 month, 2 weeks ago


Selected Answer: C
Loosely coupled = Amazon SQS queue
New application being deployed = deploy on Amazon Machine Image
Adding and removing application nodes as needed based on the number of jobs to be processed = Auto Scaling group with launch
template
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
The recommended design is to use an SQS queue to store jobs (option C):

SQS provides a durable and decoupled queue to store job items


An Auto Scaling group with scaling policies based on SQS queue length will add/remove nodes as needed
Launch templates provide flexibility to update AMIs
The key points:

SQS enables loose coupling and stores jobs durably


Auto Scaling provides parallel processing
Scaling based on queue length manages nodes effectively
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
This design follows the best practices for loosely coupled and scalable architecture. By using SQS, the jobs are durably stored in the queue,
ensuring they are not lost. The processor application is stateless, which aligns with the design requirement. The AMI allows for consistent
deployment of the application. The launch template and ASG facilitate the dynamic scaling of the application based on the number of
items in the SQS, ensuring parallel processing of jobs.
Options A and D suggest using SNS, which is a publish/subscribe messaging service and may not provide the durability required for job
storage.

Option B suggests using network usage as a scaling metric, which may not be directly related to the number of jobs to be processed. The
number of items in the SQS provides a more accurate metric for scaling based on the workload.
upvoted 4 times
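
For illustration, a minimal sketch of the scaling portion of option C, assuming a hypothetical Auto Scaling group named "job-processors" (created from the launch template) and a queue named "jobs-queue". It attaches a target tracking policy that keeps the number of visible messages in the queue near a target value; a backlog-per-instance custom metric is another common variation.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target tracking: scale the group so the queue depth stays near 100 visible messages.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="job-processors",        # hypothetical ASG name
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Namespace": "AWS/SQS",
                "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
                "Statistic": "Average",
            },
            "TargetValue": 100.0,                     # illustrative target, not from the question
        },
    )
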
  Bmarodi 4 months, 2 weeks ago
Selected Answer: C
C for sure
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
***CORRECT***
The correct design is Option C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine
Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using
the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS
queue.

This design satisfies the requirements of the application by using Amazon Simple Queue Service (SQS) as durable storage for the job items
and Amazon Elastic Compute Cloud (EC2) Auto Scaling to add and remove nodes based on the number of items in the queue. The
processor application can be run in parallel on multiple nodes, and the use of launch templates allows for flexibility in the configuration of
the EC2 instances.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A is incorrect because it uses Amazon Simple Notification Service (SNS) instead of SQS, which is not a durable storage solution.

Option B is incorrect because it uses CPU usage as the scaling trigger instead of the number of items in the queue.

Option D is incorrect for the same reasons as option A.


upvoted 5 times

  graveend 1 month, 3 weeks ago


SNS provides durable storage of all messages that it receives.
Ref:
https://aws.amazon.com/sns/faqs/#:~:text=SNS%20provides%20durable%20storage%20of%20all%20messages%20that%20it%20rec
eives.

Why use SQS instead of SNS? In the question it says parallel execution of processes. SNS has that ability.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
SQS with EC2 autoscaling policy based number of messages in the queue.
upvoted 1 times

  Uhrien 9 months, 4 weeks ago


Selected Answer: C
C is correct
upvoted 2 times

  kelljons 10 months ago


what about the word "coupled"
upvoted 1 times

  kewl 10 months ago


Selected Answer: C
AWS strongly recommends that you do not use launch configurations hence answer is C
https://docs.amazonaws.cn/en_us/autoscaling/ec2/userguide/launch-configurations.html
upvoted 3 times

  HussamShokr 10 months ago


Selected Answer: C
answer is C as there is nothing called "Launch Configuration"; it's called "Launch Template", which is used by the Auto Scaling group to create the new instances.
upvoted 4 times

  lulzsec2019 8 months, 3 weeks ago


There's launch configuration. Search
upvoted 3 times

  Liliwood 10 months, 1 week ago


I was not sure between Launch template and Launch configuration.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  devopspro 10 months, 3 weeks ago


Selected Answer: C
answer is c
upvoted 1 times

  Wilson_S 11 months ago


https://www.examtopics.com/discussions/amazon/view/22139-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  wookchan 11 months, 1 week ago


It looks like C
upvoted 1 times
Question #82 Topic 1

A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into
AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?

A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30
days before any certificate will expire.

B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch
Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant
resource.

C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on
Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).

D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the
rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).

Correct Answer: D

Community vote distribution


D (51%) B (47%)

  LeGloupier Highly Voted  11 months, 2 weeks ago


B
AWS Config has a managed rule
named acm-certificate-expiration-check
to check for expiring certificates
(configurable number of days)
upvoted 48 times

  Mia2009687 2 months, 3 weeks ago


B costs more than D
To get a notification that your certificate is about to expire, use one of the following methods:

Use the ACM API in Amazon EventBridge to configure the ACM Certificate Approaching Expiration event.
Create a custom EventBridge rule to receive email notifications when certificates are nearing the expiration date.
Use AWS Config to check for certificates that are nearing the expiration date.
If you use AWS Config for this resolution, then be aware of the following:

Before you set up the AWS Config rule, create the Amazon Simple Notification Service (Amazon SNS) topic and EventBridge rule. This
makes sure that all non-compliant certificates invoke a notification before the expiration date.
Activating AWS Config incurs an additional cost based on usage. For more information, see AWS Config pricing.
https://repost.aws/knowledge-center/acm-certificate-expiration
upvoted 2 times

  ChrisG1454 6 months, 3 weeks ago


Answer B and answer D are possible according to this article.
So, need to read B & D carefully to determine the most suitable answer.

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 4 times

  TTaws 2 months, 3 weeks ago


It's B, simply because in option D EventBridge cannot "detect" anything.
upvoted 1 times

  darekw 4 weeks, 1 day ago


AWS Certificate Manager (ACM) now publishes certificate metrics and events through Amazon CloudWatch and Amazon
EventBridge.

https://aws.amazon.com/about-aws/whats-new/2021/03/aws-certificate-manager-provides-certificate-expiry-monitoring-
through-amazon-cloudwatch/
upvoted 1 times

  RupeC 2 months, 1 week ago


My understanding is that the ACM sends a Cert Expiration event to EventBridge. Thus EB. does not need to detect anything.
upvoted 1 times
  LeGloupier 11 months, 2 weeks ago
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 10 times

  ManoAni Highly Voted  11 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 10 times

  Subhrangsu Most Recent  1 week, 1 day ago


Why not A?
https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html(1st paragraph)
upvoted 1 times

  blurtiger320918 1 week, 5 days ago


Selected Answer: D
Tested in AWS account, answer is D
upvoted 1 times

  BrijMohan08 2 weeks, 2 days ago


Selected Answer: B
https://repost.aws/knowledge-center/acm-certificate-expiration
upvoted 1 times

  RDM10 2 weeks, 1 day ago


As per the above link, AWS Config incur additional charges so D is better.
upvoted 1 times

  Chiquitabandita 4 weeks ago


I believe it is D based on this article mentioning EventBridge event of a certificate expiring
https://docs.aws.amazon.com/acm/latest/userguide/supported-events.html
upvoted 1 times

  darekw 4 weeks, 1 day ago


https://aws.amazon.com/about-aws/whats-new/2021/03/aws-certificate-manager-provides-certificate-expiry-monitoring-through-amazon-
cloudwatch/
upvoted 1 times

  Hassaoo 1 month ago


Option D is a viable solution, but Option B provides a more direct approach by leveraging AWS Config's compliance checking capabilities
and integrating with CloudWatch Events and Amazon SNS for streamlined and automated alerting.
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: B
AWS Config Rule: acm-certificate-expiration-check, Checks if AWS Certificate Manager Certificates in your account are marked for
expiration within the specified number of days.
upvoted 1 times
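
For illustration, a minimal sketch of option B, assuming a hypothetical SNS topic ARN (whose topic policy already lets EventBridge publish). It creates the managed acm-certificate-expiration-check rule with a 30-day threshold and an EventBridge rule that forwards Config compliance-change events for it to the topic.

    import json
    import boto3

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cert-expiry-alerts"  # hypothetical topic

    config = boto3.client("config")
    events = boto3.client("events")

    # Managed AWS Config rule that flags ACM certificates expiring within 30 days.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "acm-certificate-expiration-check",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
            },
            "InputParameters": json.dumps({"daysToExpiration": "30"}),
        }
    )

    # EventBridge rule that fires when the Config rule reports a NON_COMPLIANT resource.
    events.put_rule(
        Name="acm-cert-noncompliant",
        EventPattern=json.dumps({
            "source": ["aws.config"],
            "detail-type": ["Config Rules Compliance Change"],
            "detail": {
                "configRuleName": ["acm-certificate-expiration-check"],
                "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
            },
        }),
    )
    events.put_targets(
        Rule="acm-cert-noncompliant",
        Targets=[{"Id": "notify-security-team", "Arn": TOPIC_ARN}],
    )
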


  Sat897 1 month, 2 weeks ago


Selected Answer: D
AWS Config - Used to check for configuration updates.
So D
upvoted 1 times

  cookieMr 1 month, 2 weeks ago


Selected Answer: B
B is correct, least op overhead
upvoted 2 times

  yhonatan2288 2 months ago


Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 1 times

  slimjago 2 months, 1 week ago


Selected Answer: A
could be A
https://aws.amazon.com/es/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
upvoted 1 times

  speedt115 2 months, 2 weeks ago


B is correct, same crosschecked with DOC
upvoted 1 times

  Kaab_B 2 months, 2 weeks ago


Selected Answer: D
Correct answer is D.

Exact wordings from AWS docs:

The first of the two options I describe is to use the ACM built-in Certificate Expiration event, which is raised through Amazon EventBridge,
to invoke a Lambda function. In this option, the function is configured to publish the result as a finding in Security Hub, and also as an SNS
topic used for email subscriptions. As a result, an administrator can be notified of a specific expiring certificate, or an IT service
management (ITSM) system can automatically open a case or incident through email or SNS.

https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
upvoted 6 times

  kaps57 2 months, 2 weeks ago


Selected Answer: B
Why are people getting confused even after reading https://repost.aws/knowledge-center/acm-certificate-expiration? The article never talks about a Lambda function. It's EventBridge + SNS, whereas option D clearly says EventBridge + Lambda + SNS.
The article also talks about Config rule + custom EventBridge + SNS, which is clearly option B.
upvoted 3 times
Question #83 Topic 1

A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it
wants to optimize site loading times for new European users. The site's backend must remain in the United States. The product is being launched
in a few days, and an immediate solution is needed.
What should the solutions architect recommend?

A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.

B. Move the website to Amazon S3. Use Cross-Region Replication between Regions.

C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

D. Use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers.

Correct Answer: C

Community vote distribution


C (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
***CORRECT***
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML,
CSS, JavaScript, images, and videos. By using CloudFront, the company can distribute the content of their website from edge locations that
are closer to the users in Europe, reducing the loading times for these users.

To use CloudFront, the company can set up a custom origin pointing to their on-premises servers in the United States. CloudFront will
then cache the content of the website at edge locations around the world and serve the content to users from the location that is closest
to them. This will allow the company to optimize the loading times for their European users without having to move the backend of the
website to a different region.
upvoted 19 times

  Euowelllima 2 weeks, 5 days ago


Excellent explanation
upvoted 1 times

  TariqKipkemei 6 months, 4 weeks ago


good explanation..thanks
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A (launch an Amazon EC2 instance in us-east-1 and migrate the site to it) would not address the issue of optimizing loading
times for European users.

Option B (move the website to Amazon S3 and use Cross-Region Replication between Regions) would not be an immediate solution as
it would require time to set up and migrate the website.

Option D (use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers) would not be suitable because it
would not improve the loading times for users in Europe.
upvoted 6 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: C
The key reasons are:

CloudFront can cache static content close to European users using edge locations, improving site performance.
The custom origin feature allows seamlessly integrating the CloudFront CDN with existing on-premises servers.
No changes are needed to the site backend or servers. CloudFront just acts as a globally distributed cache.
This can be set up very quickly, meeting the launch deadline.
Other options like migrating to EC2 or S3 would require more time and changes. CloudFront is an easier lift.
Route 53 geoproximity routing alone would not improve performance much without a CDN.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
C. This solution leverages the global network of CloudFront edge locations to cache and serve the website's static content from the edge
locations closest to the European users.
A. Hosting the website in a single region would still result in increased latency for European users accessing the site.

B. Moving the website to S3 and implementing Cross-Region Replication would distribute the website's content across multiple regions,
including Europe. S3 is primarily used for static content hosting, and it does not provide server-side processing capabilities necessary for
dynamic website functionality.

D. Using a geoproximity routing policy in Route 53 would allow you to direct traffic to the on-premises servers based on the geographic
location of the users. However, this option does not optimize site loading times for European users as it still requires them to access the
website from the on-premises servers in the United States. It does not leverage the benefits of content caching and edge locations for
improved performance.
upvoted 2 times
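
For illustration, a minimal boto3 sketch of option C, assuming the on-premises site is reachable at a hypothetical hostname origin.example.com over HTTPS. It uses the legacy ForwardedValues settings to keep the example short; in practice a cache policy tuned for dynamic content would usually be preferred.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),          # any unique string
            "Comment": "Accelerate on-premises dynamic site for European users",
            "Enabled": True,
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "onprem-origin",
                    "DomainName": "origin.example.com",   # hypothetical on-premises hostname
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "onprem-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
                "MinTTL": 0,
            },
        }
    )
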
  Bmarodi 4 months, 1 week ago
Selected Answer: C
C is best solution.
upvoted 1 times

  gustavtd 9 months ago


Selected Answer: C
With only a few days before launch, you cannot do much more than put CloudFront in front of the site.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  kajal1206 10 months ago


Selected Answer: C
C is correct answer
upvoted 1 times

  koreanmonkey 10 months, 1 week ago


Selected Answer: C
CloudFront = CDN Service
upvoted 3 times

  Liliwood 10 months, 1 week ago


C.
S3 Cross-Region Replication minimizes latency but also copies objects across Amazon S3 buckets in different AWS Regions (and the data has to remain at the origin, though), so B is wrong.
Route 53 geoproximity routing does not help reduce the latency.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  Hunkie 11 months ago


Same question with detailed explanation

https://www.examtopics.com/discussions/amazon/view/27898-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times

  ArielSchivo 11 months, 2 weeks ago


Selected Answer: C
Option C, use CloudFront.
upvoted 3 times
Question #84 Topic 1

A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon
EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10%
CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans
to implement automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?

A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.

B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.

C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.

D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.

Correct Answer: B

Community vote distribution


B (94%) 6%

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: B
Spot blocks are no longer available, and you can't use Spot Instances on prod machines running 24x7, so option B should be valid.
upvoted 13 times

  cookieMr Most Recent  3 months, 1 week ago


Selected Answer: B
Option B, would indeed be the most cost-effective solution. Reserved Instances provide cost savings for instances that run consistently,
such as the production environment in this case, while On-Demand Instances offer flexibility and are suitable for instances with variable
usage patterns like the development and test environments. This combination ensures cost optimization based on the specific
requirements and usage patterns described in the question.
upvoted 4 times

  devmon 3 weeks, 5 days ago


In addition to this, we can set up an automated process to start and stop the EC2 instances in the test and dev environment
upvoted 1 times
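
The automation mentioned in the question could be as simple as a scheduled Lambda function like the sketch below, assuming a hypothetical Environment tag on the dev and test instances; an EventBridge schedule would invoke it outside working hours.

    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        # Find running dev/test instances by a hypothetical Environment tag.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:Environment", "Values": ["dev", "test"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}
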

  Bmarodi 4 months, 1 week ago


Selected Answer: B
B meets the requirements, and most cost-effective.
upvoted 1 times

  ChanghyeonYoon 5 months, 2 weeks ago


Selected Answer: B
Spot instances are not suitable for production due to the possibility of not running.
upvoted 2 times

  alexiscloud 6 months ago


Answer B:
Spot blocks are no longer available and you can't use Spot Instances in production.
upvoted 1 times

  Nandan747 9 months, 1 week ago


Selected Answer: B
Well, AWS has DISCONTINUED the Spot-Block option. so that rules out the two options that use spot-block. Wait, this question must be
from SAA-C02 or even 01. STALE QUESTION. I don't think this will feature in SAA-C03. Anyhow, the most cost-effective solution would be
Option "b"
upvoted 4 times

  Wajif 9 months, 1 week ago


Selected Answer: B
Choosing B as spot blocks (Spot instances with a finite duration) are no longer offered since July 2021
upvoted 1 times

  sparky231 4 months, 1 week ago


https://aws.amazon.com/ec2/spot/
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
The most cost-effective solution for the company's requirements would be to use Spot Instances for the development and test EC2
instances and Reserved Instances for the production EC2 instances.

Spot Instances are a cost-effective choice for non-critical, flexible workloads that can be interrupted. Since the development and test EC2
instances are only needed for at least 8 hours per day and can be stopped when not in use, they would be a good fit for Spot Instances.
upvoted 2 times

  PassNow1234 9 months, 1 week ago


The production EC2 instances run 24 hours a day.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Reserved Instances are a good fit for production EC2 instances that need to run 24 hours a day, as they offer a significant discount
compared to On-Demand Instances in exchange for a one-time payment and a commitment to use the instances for a certain period of
time.

Option A is the correct answer because it meets the company's requirements for cost-effectively running the development and test EC2
instances and the production EC2 instances.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option B is not the most cost-effective solution because it suggests using On-Demand Instances for the development and test EC2
instances, which would be more expensive than using Spot Instances. On-Demand Instances are a good choice for workloads that
require a guaranteed capacity and can't be interrupted, but they are more expensive than Spot Instances.

Option C is not the correct solution because Spot blocks are a variant of Spot Instances that offer a guaranteed capacity and
duration, but they are not available for all instance types and are not necessarily the most cost-effective option in all cases. In this
case, it would be more cost-effective to use Spot Instances for the development and test EC2 instances, as they can be interrupted
when not in use.
upvoted 1 times

  WherecanIstart 7 months, 1 week ago


Can't use Spot instances for Production environment that needs to run 24/7. That should tell you that Production instances can't
have a downtime. Spot instances are used when an application or service can allow disruption and 24/7 production environment
won't allow that.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D is not the correct solution because it suggests using On-Demand Instances for the production EC2 instances, which
would be more expensive than using Reserved Instances. On-Demand Instances are a good choice for workloads that require a
guaranteed capacity and can't be interrupted, but they are more expensive than Reserved Instances in the long run. Using
Reserved Instances for the production EC2 instances would offer a significant discount compared to On-Demand Instances in
exchange for a one-time payment and a commitment to use the instances for a certain period of time.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times


  Vickysss 9 months, 2 weeks ago


Selected Answer: B
Reserved Instances for 24/7 production instances seem reasonable. By exclusion I will choose On-Demand for dev and test (despite thinking that Spot Fleet may be an even better solution from a cost perspective).
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: B
Reserved Instances and On-demand

Spot is out, as the use case requires continuous instance running


upvoted 1 times
  Nigma 10 months, 4 weeks ago
B is the answer

https://www.examtopics.com/discussions/amazon/view/80956-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #85 Topic 1

A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?

A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.

B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically.

C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only.

D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-
only mode.

Correct Answer: A

Community vote distribution


A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
You can use S3 Object Lock to store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from
being deleted or overwritten for a fixed amount of time or indefinitely. You can use S3 Object Lock to meet regulatory requirements that
require WORM storage, or add an extra layer of protection against object changes and deletion.
Versioning is required and automatically activated as Object Lock is enabled.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 23 times
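
For illustration, a minimal boto3 sketch of option A, assuming a hypothetical bucket name and a seven-year compliance retention period chosen only for the example. Object Lock has to be enabled when the bucket is created, and versioning is turned on automatically.

    import boto3

    s3 = boto3.client("s3")

    # Object Lock must be enabled at bucket creation time (this also enables versioning).
    s3.create_bucket(
        Bucket="regulated-documents-example",                       # hypothetical bucket name
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
        ObjectLockEnabledForBucket=True,
    )

    # Default retention in compliance mode: objects cannot be overwritten or deleted
    # by any user for the retention period.
    s3.put_object_lock_configuration(
        Bucket="regulated-documents-example",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
        },
    )
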

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: A
***CORRECT***
A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.

S3 Versioning allows multiple versions of an object to be stored in the same bucket. This means that when an object is modified or
deleted, the previous version is preserved. S3 Object Lock adds additional protection by allowing objects to be placed under a legal hold or
retention period, during which they cannot be deleted or modified. Together, S3 Versioning and S3 Object Lock can be used to meet the
requirement of not allowing documents to be modified or deleted after they are stored.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option B, storing the documents in an S3 bucket and configuring an S3 Lifecycle policy to archive them periodically, would not prevent
the documents from being modified or deleted.

Option C, storing the documents in an S3 bucket with S3 Versioning enabled and configuring an ACL to restrict all access to read-only,
would also not prevent the documents from being modified or deleted, since an ACL only controls access to the object and does not
prevent it from being modified or deleted.

Option D, storing the documents on an Amazon Elastic File System (Amazon EFS) volume and accessing the data in read-only mode,
would prevent the documents from being modified, but would not prevent them from being deleted.
upvoted 2 times

  Guru4Cloud Most Recent  1 month, 3 weeks ago


Selected Answer: A
S3 Versioning ensures that all versions of an object are retained when overwritten or deleted - this prevents deletion.
S3 Object Lock can be used to apply a retention period and legal hold on objects to prevent them from being overwritten or deleted, even
by users with full permissions.
Option B only archives objects on a schedule but does not prevent modification or deletion.
Option C uses ACLs which can still be overridden by users with full permissions.
Option D relies on the application to enforce mounting as read-only, which is not as robust as using S3 Object Lock.
upvoted 2 times

  Subhrangsu 1 week, 1 day ago


Liked the explanation for option C.Thanks!
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
S3 Versioning allows you to preserve every version of a document as it is uploaded or modified. This prevents accidental or intentional
modifications or deletions of the documents.
S3 Object Lock allows you to set a retention period or legal hold on the objects, making them immutable during the specified period. This
ensures that the stored documents cannot be modified or deleted, even by privileged users or administrators.

B. Configuring an S3 Lifecycle policy to archive documents periodically does not guarantee the prevention of document modification or
deletion after they are stored.

C. Enabling S3 Versioning alone does not prevent modifications or deletions of objects. Configuring an ACL does not guarantee the
prevention of modifications or deletions by authorized users.

D. Using EFS does not prevent modifications or deletions of the documents by users or processes with write permissions.
upvoted 2 times
  Bmarodi 4 months, 1 week ago
Selected Answer: A
S3 Versioning and S3 Object Lock enabled meet the requirements, hence A is correct ans.
upvoted 2 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: A
Option A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled. This will ensure that the
documents cannot be modified or deleted after they are stored, and will meet the regulatory requirement. S3 Versioning allows you to
store multiple versions of an object in the same bucket, and S3 Object Lock enables you to apply a retention policy to objects in the bucket
to prevent their deletion.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A. Object Lock will prevent modifications to documents
upvoted 1 times

  HarryZ 9 months, 4 weeks ago


Why not C
upvoted 3 times

  JayBee65 9 months, 2 weeks ago


Configuring an ACL to restrict all access to read-only would mean you could not write the docs to the bucket in the first place.
upvoted 2 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  flbcobra 10 months, 3 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 1 times

  Evangelia 11 months, 2 weeks ago


Selected Answer: A
upvoted 1 times

Question #86 Topic 1

A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a
secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.
Which solution meets these requirements?

A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS
Secrets Manager.

B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to
access OpsCenter.

C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to
retrieve credentials and access the database.

D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The
web server should be able to decrypt the files and access the database.

Correct Answer: A

Community vote distribution


A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to
retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the
secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a
specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
upvoted 20 times
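
For illustration, a minimal sketch of how a web server could pick up the rotated credentials at connection time, assuming a hypothetical secret named prod/web/mysql that stores a JSON document with username and password fields (the layout used by the standard RDS rotation templates).

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    def get_db_credentials(secret_id="prod/web/mysql"):   # hypothetical secret name
        # Fetch the current (possibly just-rotated) credentials each time a connection is opened.
        value = secrets.get_secret_value(SecretId=secret_id)
        secret = json.loads(value["SecretString"])
        return secret["username"], secret["password"]

Because the lookup happens at connect time, a rotation performed by Secrets Manager is picked up automatically on the next connection, with no redeploy of the web servers.
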

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: A
AWS Secrets Manager to the rescue....up up and awaaaay
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: A
The correct answer is A.

Here is the explanation:

AWS Secrets Manager is a service that helps you store, manage, and rotate secrets. Secrets Manager is a good choice for storing database
user credentials because it is secure and scalable.
IAM permissions can be used to grant web servers access to AWS Secrets Manager. This will allow the web servers to retrieve the database
user credentials from Secrets Manager and use them to connect to the database.
Rotation of user credentials can be automated using Secrets Manager. This will ensure that the database user credentials are rotated on a
regular basis, meeting the security requirement.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
B. SSM OpsCenter is primarily used for managing and resolving operational issues. It is not designed to securely store and manage
credentials like AWS Secrets Manager.

C. Storing credentials in an S3 bucket may provide some level of security, but it lacks the additional features and security controls offered
by AWS Secrets Manager.

D. While using KMS for encryption is a good practice, managing credentials directly on the web server file system can introduce
complexities and potential security risks. It can be challenging to securely manage and rotate credentials across multiple web servers,
especially when considering scalability and automation.

In summary, option A is the recommended solution as it leverages AWS Secrets Manager, which is purpose-built for securely storing and
managing secrets, and provides the necessary IAM permissions to allow the web servers to access the credentials securely.
upvoted 3 times

  Bmarodi 4 months, 1 week ago


Selected Answer: A
Option A is ans.
upvoted 2 times
  vherman 7 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times

  thensanity 9 months ago


Literally screams for AWS Secrets Manager to rotate the credentials.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
***CORRECT***
Option A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to
access AWS Secrets Manager.

Option A is correct because it meets the requirements specified in the question: a secure method for the web servers to connect to the
database while meeting a security requirement to rotate user credentials frequently. AWS Secrets Manager is designed specifically to store
and manage secrets like database credentials, and it provides an automated way to rotate secrets every time they are used, ensuring that
the secrets are always fresh and secure. This makes it a good choice for storing and managing the database user credentials in a secure
way.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option B, storing the database user credentials in AWS Systems Manager OpsCenter, is not a good fit for this use case because
OpsCenter is a tool for managing and monitoring systems, and it is not designed for storing and managing secrets.

Option C, storing the database user credentials in a secure Amazon S3 bucket, is not a secure option because S3 buckets are not
designed to store secrets. While it is possible to store secrets in S3, it is not recommended because S3 is not a secure secrets
management service and does not provide the same level of security and automation as AWS Secrets Manager.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D, storing the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server
file system, is not a secure option because it relies on the security of the web server file system, which may not be as secure as a
dedicated secrets management service like AWS Secrets Manager. Additionally, this option does not meet the requirement to rotate
user credentials frequently because it does not provide an automated way to rotate the credentials.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  kewl 10 months ago


Selected Answer: A
Rotate credentials = Secrets Manager
upvoted 3 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  renekton 10 months, 2 weeks ago


Selected Answer: A
Answer is A
upvoted 2 times
Question #87 Topic 1

A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save
customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish
database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?

A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the
RDS proxy.

B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the
database.

C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to
the database.

D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the
queue and stores the customer data in the database.

Correct Answer: A

Community vote distribution


D (60%) A (40%)

  brushek Highly Voted  11 months, 3 weeks ago


Selected Answer: A
https://aws.amazon.com/rds/proxy/

RDS Proxy minimizes application disruption from outages affecting the availability of your database by automatically connecting to a new
database instance while preserving application connections. When failovers occur, RDS Proxy routes requests directly to the new database
instance. This reduces failover times for Aurora and RDS databases by up to 66%.
upvoted 35 times

  aaroncelestin 1 month, 2 weeks ago


You are going to tell your boss that the customer is going to occasionally lose //only// 33% of their data, as if that's just acceptable?
upvoted 5 times

  bgsanata 4 months, 3 weeks ago


This is incorrect, as nowhere in the question is it mentioned that the RDS cluster has more than one instance. So when the instance is down for maintenance there is no second instance to which RDS Proxy can redirect the requests.

The correct answer is D.


upvoted 9 times

  Abdou1604 1 month, 2 weeks ago


RDS Proxy supports RDS (MySQL, PostgreSQL, MariaDB, MS SQL Server) and Aurora (MySQL, PostgreSQL)
upvoted 1 times

  attila9778 10 months, 1 week ago


Aurora supports RDS proxy!
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 5 times

  PassNow1234 9 months, 1 week ago


This is MySQL Database. RDS proxy = no no
upvoted 1 times

  Robrobtutu 5 months, 2 weeks ago


It literally says RDS Proxy is available for Aurora MySQL on the link in the comment you're replying to.
upvoted 4 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
The answer is D.
RDS Proxy doesn't support Aurora DBs. See limitations at:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 22 times
  adeyinkaamole 1 month, 1 week ago
This not RDS supports Aurora mysl database. All the limitations listed in the link you posted above are not related to the question,
hence the answer is B
upvoted 1 times

  adeyinkaamole 1 month, 1 week ago


I meant the answer is A
upvoted 1 times

  tinyfoot 10 months, 2 weeks ago


Actually RDS Proxy supports Aurora DBs running on PostgreSQL and MySQL.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.RDS_Proxy.html

With RDS Proxy, you only expose a single endpoint for requests to hit, and any failure of the primary DB in a Multi-AZ configuration will be managed automatically by RDS Proxy, which points to the new primary DB. Hence RDS Proxy is the most efficient way of solving the issue, as no additional code change is required.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.howitworks.html
upvoted 8 times

  Duke_YU 6 months ago


The question doesn't say the RDS is deployed in a Mutli-AZ mode. which means RDS is not accessible during upgrade anyway. RDS
proxy couldn't resolve the DB HA issue. The question is looking for a solution to store the data during DB upgrade. I don't know RDS
proxy very well, but the RDS proxy introduction doesn't mention it has the capability of storing data. So, answer A couldn't store the
data created during the DB upgrade.
I'm assuming this is a bad question design. The expected answer is A, but the question designer missed some important
information.
upvoted 5 times

  rismail 4 months, 3 weeks ago


https://aws.amazon.com/rds/proxy/, if you go down the page, you will see that RDS Proxy is deployed in Multi-AZ (Amazon RDS Proxy is
highly available and deployed over multiple Availability Zones (AZs) to protect you from infrastructure failure. Each AZ runs on its
own physically distinct, independent infrastructure and is engineered to be highly reliable. In the unlikely event of an
infrastructure failure, the RDS Proxy endpoint remains online and consistent allowing your application to continue to run
database operations.) from the link.
upvoted 1 times

  JayBee65 9 months, 2 weeks ago


It does, according to that link
upvoted 1 times

  gcmrjbr 10 months, 1 week ago


You can use RDS Proxy with Aurora Serverless v2 clusters but not with Aurora Serverless v1 clusters.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 4 times

  joshik Most Recent  5 days, 6 hours ago


Selected Answer: A
I think this is most suitable, as both SQS and RDS would not store customer data in the right format for RDS
upvoted 1 times

  vijaykamal 5 days, 16 hours ago


Answer is D. RDS Proxy would not help if the DB is down, unless Multi-AZ is used.
upvoted 1 times

  BrijMohan08 2 weeks, 2 days ago


Selected Answer: A
Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region. Aurora stores these copies
regardless of whether the instances in the DB cluster span multiple Availability Zones.
upvoted 1 times

  underdogpex 4 weeks ago


Selected Answer: D
The data needs to be stored somewhere until the DB is up again, and it can be kept in SQS. So the ideal solution would be to send the data to SQS, poll the queue with Lambda, and save the data in the DB. The data can stay in the queue until it is successfully processed.
upvoted 1 times

  Hassaoo 1 month ago


Option B (increasing runtime and adding a retry mechanism) could help reduce the impact of connection disruptions, but it doesn't
address the requirement to seamlessly store customer data during database upgrades.

Options C and D involve storing data locally or using Amazon SQS, but these approaches might not ensure data consistency and
availability during database upgrades, which is a critical requirement.

In summary, using Amazon RDS Proxy (Option A) is the best approach to address the challenge of maintaining data availability and
consistency during database upgrades for Lambda functions that interact with the Amazon Aurora MySQL database.
upvoted 1 times
  zjcorpuz 2 months, 1 week ago
A. Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon
RDS for MariaDB, Amazon RDS..

https://aws.amazon.com/rds/proxy/
upvoted 1 times

  Undisputed 2 months, 1 week ago


Selected Answer: D
To ensure that customer data is stored during database upgrades in an AWS Lambda and Amazon Aurora MySQL setup, you should store
the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and
store the customer data in the database.
upvoted 2 times

  RupeC 2 months, 1 week ago


Selected Answer: D
The proxy does not address the specific issue of data loss during database upgrades. RDS Proxy can help improve database connections
and scaling, it does not store customer data during the upgrade process.
upvoted 3 times

  TTaws 2 months, 3 weeks ago


D is the best answer here, Proxy in A does not make sense if the database is DOWN!
upvoted 2 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: D
I was of the opinion that A is the answer, but after going through https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html

this clearly states that if there is a standby instance, then RDS Proxy would help to connect to that instance. In the question, it's not mentioned that the database is highly available or in a Multi-AZ environment
upvoted 1 times

  diabloexodia 2 months, 2 weeks ago


it does mention that the DB is MULTI AZ deployed.
upvoted 1 times

  narddrer 3 months ago


I vote for A. The question says it's an "Amazon Aurora MySQL database", so it's Multi-AZ, which means the proxy can be used
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. It does not address the issue of storing customer data during database upgrades. The problem lies in the Lambda failing to establish
connections during upgrades.

B. Increasing the Lambda run time and implementing a retry mechanism can help mitigate some failures, but it does not provide a reliable
solution for storing customer data during database upgrades. The issue is not with the Lambda functions' execution time or retry logic,
but with the database connection failures during upgrades.

C. Lambda local storage is temporary and is not designed for durable data storage. It is not a reliable solution for persisting customer
data, especially during database upgrades.

In summary, option D is the recommended solution as it utilizes an SQS FIFO queue to store customer data. By decoupling the data
storage from the database connection, the Lambda can store the data reliably in the queue even during database upgrades. A separate
Lambda can then poll the queue and save the customer data to the database, ensuring no data loss during upgrade periods.
upvoted 2 times
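
For illustration, a minimal sketch of option D's producer side, assuming a hypothetical FIFO queue URL and record fields; a second Lambda function subscribed to the queue would perform the actual write to Aurora and only let the message be deleted on success.

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/customer-data.fifo"  # hypothetical

    def api_handler(event, context):
        # Invoked by API Gateway: buffer the record instead of writing to Aurora directly,
        # so nothing is lost while the database is being upgraded.
        record = json.loads(event["body"])
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(record),
            MessageGroupId="customer-data",                  # single FIFO ordering group
            MessageDeduplicationId=record["request_id"],     # hypothetical idempotency key
        )
        return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
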

  elmogy 4 months ago


Selected Answer: D
The best solution is to store the customer data in an Amazon SQS queue when the Lambda functions can't connect to the database during
an upgrade. A new Lambda function can then poll the SQS queue and store the customer data in the database once the upgrade is
complete.
The other solution:
A) An RDS proxy would not buffer/store the data during an outage.
B) Increasing Lambda run time and retries would not store the data that fails during the retries.
C) Lambda local storage is ephemeral and data would be lost after a function execution.
upvoted 1 times

  mandragon 4 months, 1 week ago


Selected Answer: A
https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/

Supports Aurora MySQL or Amazon RDS MySQL. It is designed for that reason.
upvoted 1 times
  MostafaWardany 4 months, 1 week ago
Selected Answer: D
RDS Proxy is for HA but is not suitable for storing data during a DB outage; I think D is the correct answer
upvoted 1 times
Question #88 Topic 1

A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that
is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants
to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?

A. Configure the Requester Pays feature on the company's S3 bucket.

B. Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.

C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.

D. Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.

Correct Answer: B

Community vote distribution


A (48%) B (45%) 6%

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: B
this question is too vague imho
if the question is looking for a way to incur charges to the European company instead of the US company, then requester pay makes
sense.

if they are looking to reduce overall data transfer cost, then B makes sense because the data does not leave the AWS network, thus data
transfer cost should be lower technically?

A. makes sense because the US company saves money, but the European company is paying for the charges so there is no overall saving
in cost when you look at the big picture

I will go for B because they are not explicitly stating that they want the other company to pay for the charges
upvoted 44 times

  thwvthunder 1 month ago


is S3 Cross-Region Replication works between 2 separate aws accounts? shouldn't the answer is C?
upvoted 2 times

  Kp88 2 months, 1 week ago


I would go with A , If I am an SA of the company I would prefer to have other people pay the data transfer fees because same scenario
can happen with multiple different firms in future.
upvoted 2 times

  MutiverseAgent 2 months, 3 weeks ago


I agree, also the question says that the target firm "has S3 buckets." so I think that is a clue to say they can accept replication data on
any of those buckets.
upvoted 1 times

  rushi0611 5 months ago


Agree, B) Cross Region Replication: $0.02/GB
A) over the internet it is $0.09/GB
Answer is B
upvoted 5 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
"Typically, you configure buckets to be Requester Pays buckets when you want to share data but not incur charges associated with others
accessing the data. For example, you might use Requester Pays buckets when making available large datasets, such as zip code
directories, reference data, geospatial information, or web crawling data."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 27 times
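
For readers who want to see how option A is switched on, here is a minimal boto3 sketch of enabling Requester Pays; the bucket name is hypothetical:

import boto3

s3 = boto3.client('s3')
# After this call, requesters (e.g. the marketing firm) pay the request and data
# transfer charges; the bucket owner keeps paying only for storage.
s3.put_bucket_request_payment(
    Bucket='survey-data-bucket',  # hypothetical bucket name
    RequestPaymentConfiguration={'Payer': 'Requester'},
)

Requesters then have to include the x-amz-request-payer header (or RequestPayer='requester' in boto3 calls) to confirm they accept the charges.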

  Ramdi1 Most Recent  2 weeks, 4 days ago


Selected Answer: A
A similar question came up on Tutorials Dojo and I first assumed it was B, configure S3 Cross-Region Replication. However, they said the
right answer was A in this case: configure the Requester Pays feature.
upvoted 1 times

  anhthang17 3 weeks, 3 days ago


Selected Answer: C
C is my answer
upvoted 1 times

  anhthang17 3 weeks, 3 days ago


I think the answer must be C.
upvoted 1 times

  oayoade 1 month ago


Selected Answer: C
S3 Cross account access
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html
upvoted 2 times

  karloscetina007 1 month, 3 weeks ago


Selected Answer: B
Cross-Region Replication still has a lower cost than other ways to transfer and share S3 resources
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
The best solution here is to configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3
buckets.

The key requirements are to minimize data transfer costs while sharing large amounts of data with the marketing firm.

S3 Cross-Region Replication will replicate objects from the source bucket to a destination bucket in a different region. This avoids any data
transfer charges for the company when the marketing firm accesses the replicated data in their own region
upvoted 1 times
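
If option B is chosen instead, replication is configured on the source bucket roughly as in the sketch below; both buckets must have versioning enabled, the role ARN, bucket names, and rule ID are hypothetical, and a cross-account destination additionally needs a bucket policy that trusts the replication role:

import boto3

s3 = boto3.client('s3')
s3.put_bucket_replication(
    Bucket='survey-data-bucket',  # hypothetical source bucket
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111122223333:role/s3-replication-role',  # hypothetical role
        'Rules': [
            {
                'ID': 'replicate-to-eu-marketing',
                'Priority': 1,
                'Filter': {'Prefix': ''},  # empty prefix = replicate all objects
                'Status': 'Enabled',
                'Destination': {'Bucket': 'arn:aws:s3:::marketing-firm-bucket'},
                'DeleteMarkerReplication': {'Status': 'Disabled'},
            }
        ],
    },
)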

  Monu11394 2 months, 2 weeks ago


The company (US) is looking to reduce "its" data transfer costs. So A.
upvoted 1 times

  HassanYoussef 3 months ago


Selected Answer: A
A is the right answer: the bucket owner keeps paying for the data hosted in the S3 bucket, but data transfer to the
other account is charged to the requester, so in this case the bucket owner minimizes its cost:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times

  LePMGC 3 months ago


Selected Answer: B
The statement says that "the company wants its data transfer costs to remain as low as possible", so the point is about reducing the
company's cost, not the European marketing firm's.
upvoted 1 times

  haoAWS 3 months, 1 week ago


Selected Answer: A
Enabling the Requester Pays feature costs the company nothing for transfers, which is the cheapest option.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
A. Enabling the Requester Pays feature would shift the data transfer costs to the European marketing firm, but it may not be the most
cost-effective solution.

B. Enabling cross-region replication would copy the data from the company's S3 to the marketing firm's S3, but it would incur additional
data transfer costs. This solution doesn't focus on minimizing data transfer costs for the company.

D. Using S3 Intelligent-Tiering and syncing the bucket to the marketing firm's S3 may help optimize storage costs by automatically moving
objects to the most cost-effective storage class. However, it does not specifically address the goal of minimizing data transfer costs for the
company.

In summary, option C is the recommended solution as it allows the marketing firm to access the company's S3 through cross-account
access. This enables the marketing firm to retrieve the data directly from the company's bucket without incurring additional data transfer
costs. It ensures that the survey company retains control over its data and can minimize its own data transfer expenses.
upvoted 6 times

  Router 3 months, 2 weeks ago


B makes more sense, the question didn't state you should do away with the cost completely
upvoted 1 times
  samsoft556 3 months, 4 weeks ago
Selected Answer: B
The question asks you to find a way to decrease the expense, not to transfer the expense to someone else.
Cross Region Replication: $0.02/GB
Over the internet, it is $0.09/GB
upvoted 2 times

  antropaws 4 months ago


Selected Answer: A
European company should pay for the transfer costs.
upvoted 1 times

  mandragon 4 months, 1 week ago


Selected Answer: A
With Requester Pays, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket.
upvoted 1 times
Question #89 Topic 1

A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM
user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3
bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?

A. Enable the versioning and MFA Delete features on the S3 bucket.

B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.

C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates.

D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS
key.

Correct Answer: A

Community vote distribution


A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Same as Question #44
upvoted 10 times

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: A
Enable the versioning to ensure restoration in case of accidental deletion and MFA Delete for double verification before deletion.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: A
Versioning will keep multiple variants of an object in case one is accidentally or intentionally deleted - the previous versions can still be
restored.

MFA Delete requires additional authentication to permanently delete an object version. This prevents accidental deletion
upvoted 2 times
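
As a reference, a minimal boto3 sketch of option A; the bucket name and MFA device serial are hypothetical, and MFA Delete can only be enabled with the bucket owner's root credentials:

import boto3

s3 = boto3.client('s3')  # assumed to be called with the account's root credentials
s3.put_bucket_versioning(
    Bucket='audit-documents-bucket',  # hypothetical bucket name
    # MFA parameter format is "<mfa-device-serial> <current-token-code>"
    MFA='arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456',
    VersioningConfiguration={'Status': 'Enabled', 'MFADelete': 'Enabled'},
)

After this, permanently deleting an object version or suspending versioning requires a valid MFA token, while a normal delete only adds a recoverable delete marker.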

  cookieMr 3 months, 1 week ago


B. Enabling MFA on the IAM user credentials adds an extra layer of security to the user authentication process. However, it does not
specifically address the concern of accidental deletion of documents in the S3 bucket.

C. Adding an S3 Lifecycle policy to deny the delete action during audit dates would prevent intentional deletions during specific time
periods. However, it does not address accidental deletions that can occur at any time.

D. Using KMS for encryption and restricting access to the KMS key provides additional security for the data stored in the S3 bucket. However, it
does not directly prevent accidental deletion of documents in the S3 bucket.

Enabling versioning and MFA Delete on the S3 (option A) is the most appropriate solution for securing the audit documents. Versioning
ensures that multiple versions of the documents are stored, allowing for easy recovery in case of accidental deletions. Enabling MFA
Delete requires the use of multi-factor authentication to authorize deletion actions, adding an extra layer of protection against unintended
deletions.
upvoted 2 times

  beginnercloud 4 months, 1 week ago


Selected Answer: A
A is answer.
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: A
A is answer.
upvoted 1 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: A
A is correct.
upvoted 1 times
  remand 8 months, 2 weeks ago
Selected Answer: A
Only accidental deletion should be avoided. An IAM policy would completely remove their access; hence, MFA is the right choice.
upvoted 1 times

  karbob 8 months, 3 weeks ago


what about : IAM policies are used to specify permissions for AWS resources, and they can be used to allow or deny specific actions on
those resources.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteObject",
      "Effect": "Deny",
      "Action": "s3:DeleteObject",
      "Resource": [
        "arn:aws:s3:::my-bucket/my-object",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
upvoted 2 times

  remand 8 months, 2 weeks ago


Only accidental deletion should be avoided. An IAM policy would completely remove their access; hence, MFA is the right choice.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
The solution architect should do Option A: Enable the versioning and MFA Delete features on the S3 bucket.

This will secure the audit documents by providing an additional layer of protection against accidental deletion. With versioning enabled,
any deleted or overwritten objects in the S3 bucket will be preserved as previous versions, allowing the company to recover them if
needed. With MFA Delete enabled, any delete request made to the S3 bucket will require the use of an MFA code, which provides an
additional layer of security.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option B: Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account, would not
provide protection against accidental deletion.

Option C: Adding an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates,
which would not provide protection against accidental deletion outside of the specified audit dates.

Option D: Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from
accessing the KMS key, would not provide protection against accidental deletion.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
A is the right answer
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: A
Enable the versioning and MFA Delete features on the S3 bucket.
upvoted 1 times
Question #90 Topic 1

A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance.
A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must
report a final total during business hours.
The company's development team notices that the database performance is inadequate for development tasks when the script is running. A
solutions architect must recommend a solution to resolve this issue.
Which solution will meet this requirement with the LEAST operational overhead?

A. Modify the DB instance to be a Multi-AZ deployment.

B. Create a read replica of the database. Configure the script to query only the read replica.

C. Instruct the development team to manually export the entries in the database at the end of each day.

D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.

Correct Answer: D

Community vote distribution


B (95%) 5%

  alvarez100 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
ElastiCache is for reading common results. The script is looking for new movies added. A read replica would be the best choice.
upvoted 26 times

  Gil80 Highly Voted  10 months, 4 weeks ago


Selected Answer: B
• You have a production DB that is taking on a normal load
• You want to run a reporting application to run some analytics
• You create a read replica to run the new workload there
• The prod application is unaffected
• Read replicas are used for SELECT (=read) only kind of statements
Therefore I believe B to be the better answer.

As for "D" - ElastiCache use cases are:


1. Your data is slow or expensive to get when compared to cache retrieval.
2. Users access your data often.
3. Your data stays relatively the same, or if it changes quickly staleness is not a large issue.

1 - Somewhat true.
2 - Not true for our case.
3 - Also not true. The data changes throughout the day.

For my understanding, caching has to do with millisecond results, high-performance reads. These are not the issues mentioned in the
questions, therefore B.
upvoted 11 times

  NitiATOS 8 months ago


I will support this by pointing to the question: "with the LEAST operational overhead?"

Configuring the read replica is much easier than configuring and integrating a new service.
upvoted 1 times

  joshik Most Recent  5 days, 7 hours ago


Selected Answer: B
- Cached data might not always be up-to-date, so you need to manage cache expiry and invalidation carefully.
- It may require some code changes to implement caching logic in your script.
- ElastiCache comes with additional costs, so you should assess the cost implications based on your usage.
upvoted 1 times

  underdogpex 4 weeks ago


Selected Answer: B
Why not D:
While ElastiCache can be relatively easy to set up, it still requires ongoing management, monitoring, and potentially scaling as the dataset
and query load grow. This introduces operational overhead that may not align with the goal of minimizing operational work.
upvoted 1 times

  Router 1 month ago


the correct answer should be A, you can't create a read replica on a single-AZ DB instance
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: B
a read replica is always fit for these type of scenarios.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
The key requirements are:

The script must report a final total during business hours


Resolve the issue of inadequate database performance for development tasks when the script is running
With the least operational overhead
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
A. Modifying the DB to be a Multi-AZ deployment improves high availability and fault tolerance but does not directly address the
performance issue during the script execution.

C. Instructing the development team to manually export the entries in the database introduces manual effort and is not a scalable or
efficient solution.

D. While using ElastiCache for caching can improve read performance for common queries, it may not be the most suitable solution for
the scenario described. Caching is effective for reducing the load on the database for frequently accessed data, but it may not directly
address the performance issue during the script execution.

Creating a read replica of the database (option B) provides a scalable solution that offloads read traffic from the primary database. The
script can be configured to query the read replica, reducing the impact on the primary database during the script execution.
upvoted 4 times
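
A rough illustration of option B from the script's side: once the replica exists, the reporting script simply connects to the replica endpoint instead of the primary. The endpoint, credentials, and table/column names below are hypothetical:

import pymysql

REPLICA_ENDPOINT = 'movies-replica.abc123xyz.us-east-1.rds.amazonaws.com'  # hypothetical

conn = pymysql.connect(
    host=REPLICA_ENDPOINT,
    user='report_user',
    password='example-password',
    database='movies',
)
with conn.cursor() as cur:
    # Count movies added today; the primary instance never sees this read traffic.
    cur.execute("SELECT COUNT(*) FROM movies WHERE created_at >= CURDATE()")
    (new_movies,) = cur.fetchone()
print(f"New movies added today: {new_movies}")
conn.close()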

  MostafaWardany 4 months, 1 week ago


Selected Answer: B
For LEAST operational overhead, I recommend using a read replica DB
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: B
Option B will reduce the burden on the DB, because the script will read only from the replica, not from the primary DB; hence option B is the correct answer.
upvoted 1 times

  Siva007 4 months, 2 weeks ago


Selected Answer: B
B is correct. Use a read replica for the read-only script and any analytical loads.
upvoted 1 times

  cheese929 5 months, 1 week ago


Selected Answer: B
B is correct. Run the script on the read replica.
upvoted 1 times

  alexiscloud 6 months ago


B:
read replica would be the best choice
upvoted 1 times

  Mahadeva 9 months ago


Selected Answer: B
Reason to have a Read Replica is improved performance (key word) which is native to RDS. Elastic Cache may have misses.

The other way of looking at this question is : Elastic Cache could be beneficial for development tasks (and hence improve the overall DB
performance). But then, Option D mentions that the queries for scripts are cached, and not the DB content (or metadata). This may not
necessarily improve the performance of the DB.

So, Option B is the best answer.


upvoted 1 times

  DavidNamy 9 months ago


Selected Answer: B
The correct answer would be option B
upvoted 1 times
  Nandan747 9 months, 1 week ago
Selected Answer: B
D is incorrect. The requirement says LEAST OPERATIONAL OVERHEAD. Here, using ElastiCache you need to heavily modify your
scripts/code to accommodate ElastiCache into the architecture, which is higher operational overhead compared to turning the DB into Multi-AZ
mode.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: B
***CORRECT***
The best solution to meet the requirement with the least operational overhead would be to create a read replica of the database and
configure the script to query only the read replica. Option B.

A read replica is a fully managed database that is kept in sync with the primary database. Read replicas allow you to scale out read-heavy
workloads by distributing read queries across multiple databases. This can help improve the performance of the database and reduce the
impact on the primary database.

By configuring the script to query the read replica, the development team can continue to use the primary database for development
tasks, while the script's queries will be directed to the read replica. This will reduce the load on the primary database and improve its
performance.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***WRONG***
Option A (modifying the DB instance to be a Multi-AZ deployment) would not address the issue of the script's queries impacting the
primary database.

Option C (instructing the development team to manually export the entries in the database at the end of each day) would not be an
efficient solution as it would require manual effort and could lead to data loss if the export process is not done properly.

Option D (using Amazon ElastiCache to cache the common queries) could improve the performance of the script's queries, but it would
not address the issue of the script's queries impacting the primary database.
upvoted 4 times
Question #91 Topic 1

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and
read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?

A. Configure an S3 gateway endpoint.

B. Create an S3 bucket in a private subnet.

C. Create an S3 bucket in the same AWS Region as the EC2 instances.

D. Configure a NAT gateway in the same subnet as the EC2 instances.

Correct Answer: A

Community vote distribution


A (100%)

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Gateway endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT device for
your VPC. It should be option A.

https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 23 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: A
***CORRECT***
The correct solution is Option A (Configure an S3 gateway endpoint.)

A gateway endpoint is a VPC endpoint that you can use to connect to Amazon S3 from within your VPC. Traffic between your VPC and
Amazon S3 never leaves the Amazon network, so it doesn't traverse the internet. This means you can access Amazon S3 without the need
to use a NAT gateway or a VPN connection.

***WRONG***
Option B (creating an S3 bucket in a private subnet) is not a valid solution because S3 buckets do not have subnets.

Option C (creating an S3 bucket in the same AWS Region as the EC2 instances) is not a requirement for meeting the given security
regulations.

Option D (configuring a NAT gateway in the same subnet as the EC2 instances) is not a valid solution because it would allow traffic to leave
the VPC and travel across the Internet.
upvoted 11 times
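
A minimal boto3 sketch of option A; the VPC ID, route table ID, and Region are hypothetical:

import boto3

ec2 = boto3.client('ec2')
# A gateway endpoint adds a route for the S3 prefix list to the selected route tables,
# so S3 API calls from the instances stay on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',             # hypothetical VPC
    ServiceName='com.amazonaws.us-east-1.s3',  # match the VPC's Region
    RouteTableIds=['rtb-0123456789abcdef0'],   # route tables of the application subnets
)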

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: A
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: A
Configure an S3 gateway endpoint
upvoted 1 times

  tamefi5512 3 months ago


Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


B. Creating an S3 bucket in a private subnet restricts direct internet access to the bucket but does not provide a direct and secure connection
between the EC2 instances and the S3 bucket. The application would still need to traverse the internet to access the S3 API.

C. Creating an S3 in the same Region as the EC2 does not inherently prevent traffic from traversing the internet.

D. Configuring a NAT gateway allows outbound internet connectivity for resources in private subnets, but it does not provide a direct and
secure connection to the S3 service. The traffic from the EC2 to the S3 API would still traverse the internet.

The most suitable solution is to configure an S3 gateway endpoint (option A). It provides a secure and private connection between the VPC
and the S3 service without requiring the traffic to traverse the internet. With an S3 gateway endpoint, the EC2 can access the S3 API
directly within the VPC, meeting the security requirement of preventing traffic from traveling across the internet.
upvoted 2 times
  Bmarodi 4 months, 1 week ago
Selected Answer: A
Configure an S3 gateway endpoint is answer.
upvoted 1 times

  gustavtd 9 months ago


Selected Answer: A
S3 Gateway Endpoint is a VPC endpoint,
upvoted 1 times

  langiac 9 months, 4 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times
Question #92 Topic 1

A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the
application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)

A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.

B. Create a bucket policy to make the objects in the S3 bucket public.

C. Create a bucket policy that limits access to only the application tier running in the VPC.

D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.

E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.

Correct Answer: AC

Community vote distribution


AC (84%)

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AC
The key requirements are to provide secure access to the S3 bucket only from the application tier EC2 instances inside the VPC.

A VPC gateway endpoint allows private access to S3 from within the VPC without needing internet access. This keeps the traffic secure
within the AWS network.

The bucket policy should limit access to only the application tier, not make the objects public. This restricts access to the sensitive data to
only the authorized application tier.
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: AC
The correct options are:

A) Configure a VPC gateway endpoint for Amazon S3 within the VPC.

C) Create a bucket policy that limits access to only the application tier running in the VPC.

The key requirements are secure access to the S3 bucket from EC2 instances in the VPC.

A VPC endpoint for S3 allows connectivity from the VPC to S3 without needing internet access. The bucket policy should limit access only to
the VPC by whitelisting the VPC endpoint.
upvoted 2 times

  sohailn 1 month, 3 weeks ago


AC is the correct answer. As far as I know, people are confused by the IAM user option; we can use an IAM role for secure access instead.
upvoted 1 times

  tamefi5512 3 months ago


Selected Answer: AC
AC is the right answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
A. This eliminates the need for the traffic to go over the internet, providing an added layer of security.

B. It is important to restrict access to the bucket and its objects only to authorized entities.

C. This helps maintain the confidentiality of the sensitive user information by limiting access to authorized resources.

D. In this case, since the EC2 instances are accessing the S3 bucket from within the VPC, using IAM user credentials is unnecessary and can
introduce additional security risks.

E. Using a NAT instance to access the S3 bucket adds unnecessary complexity and overhead.

In summary, the recommended steps to provide secure access to the S3 from the application tier running on EC2 inside a VPC are to
configure a VPC gateway endpoint for S3 within the VPC (option A) and create a bucket policy that limits access to only the application tier
running in the VPC (option C).
upvoted 2 times
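
To illustrate how A and C fit together, here is a hedged sketch of a bucket policy that denies any access that does not arrive through a specific S3 gateway endpoint; the bucket name and endpoint ID are hypothetical:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-user-data-bucket",
                "arn:aws:s3:::sensitive-user-data-bucket/*",
            ],
            # Deny everything that does not come through the application VPC's endpoint.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
        }
    ],
}

boto3.client('s3').put_bucket_policy(
    Bucket='sensitive-user-data-bucket',
    Policy=json.dumps(policy),
)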
  Bmarodi 4 months, 1 week ago
Selected Answer: AC
A & C are the correct solutions.
upvoted 2 times

  TillieEhaung 4 months, 2 weeks ago


Selected Answer: AC
A and C
upvoted 1 times

  annabellehiro 6 months, 1 week ago


Selected Answer: AC
A and C
upvoted 1 times

  Help2023 7 months, 2 weeks ago


Selected Answer: AC
The key part that many miss is 'combination'.
The other answers are not necessarily wrong, but
A works with C and not with the rest, as they need an internet connection.
upvoted 2 times

  vherman 7 months, 2 weeks ago


Selected Answer: AC
AC is correct
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: AC
https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-noauthentication/
upvoted 2 times

  remand 8 months, 2 weeks ago


Selected Answer: CD
c & D for security. A addresses accessibility which is not a concern here imo
upvoted 2 times

  goodmail 8 months, 3 weeks ago


Selected Answer: AC
A & C.
When the question is about security, do not select the answer that stores credentials on the EC2 instance. This should be done by using an IAM policy + role or Secrets Manager.
upvoted 2 times

  mhmt4438 9 months ago


C and D
To provide secure access to the S3 bucket from the application tier running on EC2 instances inside a VPC, you should create a bucket
policy that limits access to only the application tier running in the VPC. This will ensure that only the application tier has access to the
bucket and its contents.

Additionally, you should create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance. This will allow the
EC2 instance to access the S3 bucket using the IAM user's permissions.

Option A, configuring a VPC gateway endpoint for Amazon S3 within the VPC, would not provide any additional security for the S3 bucket.

Option B, creating a bucket policy to make the objects in the S3 bucket public, would not provide sufficient security for sensitive user
information.

Option E, creating a NAT instance and having the EC2 instances use the NAT instance to access the S3 bucket, would not provide any
additional security for the S3 bucket
upvoted 1 times

  career360guru 9 months, 1 week ago


Selected Answer: AC
A and C is right among the choice.
But instead of having bucket policy for VPC access better option would be to create a role with specific S3 bucket access and attach that
role EC2 instances that needs access to S3 buckets.
upvoted 3 times

  k1kavi1 9 months, 1 week ago


Selected Answer: AC
A & C looks correct
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Selected Answer: CD
***CORRECT***
The solutions architect should take the following steps to accomplish secure access to the S3 bucket from the application tier running on
Amazon EC2 instances inside a VPC:

C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


After reviewing thoroughly the AWS documentation and the other answers in the discussion, I am taking back my previous answer. The
correct answer for me is Option A and Option C.

To provide secure access to the S3 bucket from the application tier running on Amazon EC2 instances inside the VPC, the solutions
architect should take the following combination of steps:

Option A: Configure a VPC gateway endpoint for Amazon S3 within the VPC.

Amazon S3 VPC Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html

Option C: Create a bucket policy that limits access to only the application tier running in the VPC.

Amazon S3 Bucket Policies: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html

AWS Identity and Access Management (IAM) Policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html


upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


***INCORRECT***
Option C ensures that the S3 bucket is only accessible to the application tier running in the VPC, while Option D allows the EC2
instances to access the S3 bucket using the IAM credentials of the IAM user. This ensures that access to the S3 bucket is secure and
controlled through IAM.

Option A is incorrect because configuring a VPC gateway endpoint for Amazon S3 does not directly control access to the S3 bucket.

Option B is incorrect because making the objects in the S3 bucket public would not provide secure access to the bucket.

Option E is incorrect because creating a NAT instance is not necessary to provide secure access to the S3 bucket from the application
tier running on EC2 instances in the VPC.
upvoted 1 times
Question #93 Topic 1

A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase
the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development
team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience
unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend replacement architecture that alleviates the application latency issue. The replacement architecture also
must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?

A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and
restore process that uses the mysqldump utility.

B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.

C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging
database.

D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a
backup and restore process that uses the mysqldump utility.

Correct Answer: B

Community vote distribution


B (85%) C (15%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: B
The recommended solution is Option B: Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to
create the staging database on-demand.

To alleviate the application latency issue, the recommended solution is to use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for
production, and use database cloning to create the staging database on-demand. This allows the development team to continue using the
staging environment without delay, while also providing elasticity and availability for the production application.

Therefore, Options A, C, and D are not recommended


upvoted 10 times

  MutiverseAgent 2 months, 3 weeks ago


Agree, solution it seems to be the B)
1) Because the company wants "elasticity and availability" as the question mentioned, so I think this leaves us in the two questions
related to Aurora discarding the RDS Mysql solution.
2) Accoding AWS documentation (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html)
"Aurora cloning is especially useful for quickly setting up test environments using your production data, without risking data
corruption"
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A: Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populating the staging database by implementing a
backup and restore process that uses the mysqldump utility is not the recommended solution because it involves taking a full export of
the production database, which can cause unacceptable application latency.

Option C: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Using the standby instance for the
staging database is not the recommended solution because it does not give the development team the ability to continue using the
staging environment without delay. The standby instance is used for failover in case of a production instance failure, and it is not
intended for use as a staging environment.
upvoted 12 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option D: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populating the staging
database by implementing a backup and restore process that uses the mysqldump utility is not the recommended solution
because it involves taking a full export of the production database, which can cause unacceptable application latency.
upvoted 5 times

  Modulopi Most Recent  3 days, 13 hours ago


Selected Answer: B
B is the correct
upvoted 1 times
  TariqKipkemei 1 month, 1 week ago
Selected Answer: C
No mention of cost, so technically both options B & C would work.

C. https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-
option/#:~:text=read%20replicas.-,Amazon%20RDS,-now%20offers%20Multi

B.https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html#:~:text=cloning%20works.-,Aurora%20
cloning,-is%20especially%20useful
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: B
Option B is the best solution that meets all the requirements:

Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-
demand.

The key requirements are to:

Alleviate application latency caused by database exports


Give development immediate access to a staging environment
Aurora Multi-AZ replicas improves availability and provides fast failover.

Database cloning creates an instantly available copy of the production database that can be used for staging. This avoids any export or
restoration del
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
A. Populating the staging database through a backup and restore process using the mysqldump utility would still result in delays and
impact application latency.

B. With Aurora, you can create a clone of the production database quickly and efficiently, without the need for time-consuming backup
and restore processes. The development team can spin up the staging database on-demand, eliminating delays and allowing them to
continue using the staging environment without interruption.

C. Using the standby instance for the staging database would not provide the development team with the ability to use the staging
environment without delay. The standby instance is designed for failover purposes and may not be readily available for immediate use.

D. Relying on a backup and restore process using the mysqldump utility would still introduce delays and impact application latency during
the data population phase.
upvoted 2 times

  linux_admin 6 months ago


Selected Answer: B
With Amazon Aurora MySQL, creating a staging database using database cloning is an easy process. Using database cloning will eliminate
the performance issues that occur when a full export is done, and the new database is created. In addition, Amazon Aurora's high
availability is provided through Multi-AZ deployment, and read replicas can be used to serve the heavy read traffic without affecting the
production database. This solution provides better scalability, elasticity, and availability than the current architecture.
upvoted 3 times
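
A minimal boto3 sketch of option B's on-demand clone; the cluster identifiers and instance class are hypothetical. Aurora cloning is exposed through a copy-on-write point-in-time restore, and the clone still needs at least one DB instance before it can be queried:

import boto3

rds = boto3.client('rds')

# Create a copy-on-write clone of the production cluster for the staging environment.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier='staging-clone',
    SourceDBClusterIdentifier='production-aurora-cluster',
    RestoreType='copy-on-write',
    UseLatestRestorableTime=True,
)

# Add an instance so the development team can connect to the clone.
rds.create_db_instance(
    DBInstanceIdentifier='staging-clone-instance-1',
    DBClusterIdentifier='staging-clone',
    DBInstanceClass='db.r5.large',
    Engine='aurora-mysql',
)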

  alexiscloud 6 months ago


Answer B:
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: B
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
upvoted 3 times

  john2323 7 months, 3 weeks ago


Selected Answer: B
Database cloning is the best answer
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: B
Database cloning is right answer here.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Option B is right.
You can not access Standby instance for Read in RDS Multi-AZ Deployments.
upvoted 3 times
  aadi7 9 months, 2 weeks ago
This is correct, stand by instances cannot be used for read/write and is for failover targets. Read Replicas can be used for that so B is
correct.
upvoted 2 times

  aadi7 9 months, 2 weeks ago


In a RDS Multi-AZ deployment, you can use the standby instance for read-only purposes, such as running queries and reporting. This is
known as a "read replica." You can create one or more read replicas of a DB instance and use them to offload read traffic from the
primary instance.
https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-support-multi-az-deployments/
upvoted 3 times

  333666999 9 months, 3 weeks ago


Selected Answer: C
why not C
upvoted 4 times

  MutiverseAgent 2 months, 3 weeks ago


Also the company wants "elasticity and availability" as the question mentioned, so I think this leaves us in the two questions related to
Aurora discarding the RDS Mysql solution.
upvoted 1 times

  MutiverseAgent 2 months, 3 weeks ago


Because standby instances are not writable, and at least on my side I have occasionally used the staging database for reproducing
bugs. So being able to write might be a thing to consider.
upvoted 1 times

  TTaws 2 months, 3 weeks ago


You don't need to write anything as they are only pulling the reports. (READ requests)
The Best answer here is C
upvoted 1 times

  DivaLight 10 months, 1 week ago


Selected Answer: B
Option B
upvoted 1 times

  pspinelli19 10 months, 3 weeks ago


Selected Answer: B
Amazon Aurora Fast Database Cloning is what is required here.
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
upvoted 1 times

  KLLIM 11 months ago


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
upvoted 2 times

  LeGloupier 11 months, 2 weeks ago


Selected Answer: B
B
Database cloning
upvoted 4 times
Question #94 Topic 1

A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple
processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files.
On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?

A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.

B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances
to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.

C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda
function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.

D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is
uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.

Correct Answer: C

Community vote distribution


C (100%)

  rjam Highly Voted  11 months ago


Option C
DynamoDB is NoSQL and supports JSON
upvoted 10 times

  rjam 11 months ago


Also, AWS Lambda is serverless - less operational overhead
upvoted 8 times

  cookieMr Highly Voted  3 months, 1 week ago


Selected Answer: C
A. Configuring EMR and an Aurora DB cluster for this use case would introduce unnecessary complexity and operational overhead. EMR is
typically used for processing large datasets and running big data frameworks like Apache Spark or Hadoop.

B. While using S3 event notifications and SQS for decoupling is a good approach, using EC2 to process the data would introduce
operational overhead in terms of managing and scaling the EC2.

D. Using EventBridge and Kinesis Data Streams for this use case would introduce additional complexity and operational overhead
compared to the other options. EventBridge and Kinesis are typically used for real-time streaming and processing of large volumes of
data.

In summary, option C is the recommended solution as it provides a serverless and scalable approach for processing uploaded files using
S3 event notifications, SQS, and Lambda. It offers low operational overhead, automatic scaling, and efficient handling of varying demand.
Storing the resulting JSON file in DynamoDB aligns with the requirement of saving the data for later analysis.
upvoted 5 times
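
A hedged sketch of the Lambda function in option C, assuming the SQS queue is configured as the function's event source; the table name and the transformation are placeholders:

import json
import boto3

s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('processed-files')  # hypothetical table

def handler(event, context):
    # Each SQS record body wraps the original S3 event notification.
    for sqs_record in event['Records']:
        s3_event = json.loads(sqs_record['body'])
        for s3_record in s3_event.get('Records', []):  # skips s3:TestEvent messages
            bucket = s3_record['s3']['bucket']['name']
            key = s3_record['s3']['object']['key']
            body = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
            # Placeholder one-time transformation into a JSON-shaped item.
            item = {'source_key': key, 'content': body}
            table.put_item(Item=item)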

  Modulopi Most Recent  3 days, 13 hours ago


Selected Answer: C
C: Lambdas are made for that
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: C
C is best
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
Option C is the best solution that meets the requirements with the least operational overhead:

Configure Amazon S3 to send event notification to SQS queue


Use Lambda function triggered by SQS to process each file
Store output JSON in DynamoDB
This leverages serverless components like S3, SQS, Lambda, and DynamoDB to provide automated file processing without needing to
provision and manage servers.

SQS queues the notifications and Lambda scales automatically to handle spikes and drops in file uploads. No EMR cluster or EC2 Fleet is
needed to manage.
upvoted 1 times
  beginnercloud 4 months, 1 week ago
Selected Answer: C
Option C is correct - DynamoDB is NoSQL and supports JSON
upvoted 1 times

  Abrar2022 4 months, 1 week ago


Selected Answer: C
SQS + Lambda + JSON >>>>>> DynamoDB
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
The option C is right answer.
upvoted 1 times

  jy190 5 months, 1 week ago


Can someone explain why SQS? It's poll-based messaging; does it guarantee reacting to the event ASAP?
upvoted 1 times

  Zerotn3 9 months ago


Selected Answer: C
DynamoDB is NoSQL and supports JSON
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
Option C, Configuring Amazon S3 to send an event notification to an Amazon Simple Queue Service (SQS) queue and using an AWS
Lambda function to read from the queue and process the data, would likely be the solution with the least operational overhead.

AWS Lambda is a serverless computing service that allows you to run code without the need to provision or manage infrastructure. When
a new file is uploaded to Amazon S3, it can trigger an event notification which sends a message to an SQS queue. The Lambda function
can then be set up to be triggered by messages in the queue, and it can process the data and store the resulting JSON file in Amazon
DynamoDB.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Using a serverless solution like AWS Lambda can help to reduce operational overhead because it automatically scales to meet demand
and does not require you to provision and manage infrastructure. Additionally, using an SQS queue as a buffer between the S3 event
notification and the Lambda function can help to decouple the processing of the data from the uploading of the data, allowing the
processing to happen asynchronously and improving the overall efficiency of the system.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C as JSON is supported by DynamoDB. RDS or AuroraDB are not suitable for JSON data.
A - because this is not a big data analytics use case.
upvoted 1 times

  gloritown 9 months, 3 weeks ago


Selected Answer: C
CCCCCCCC
upvoted 1 times

  AlaN652 9 months, 3 weeks ago


Selected Answer: C
Answer C
upvoted 1 times

  HussamShokr 10 months ago


Selected Answer: C
answer is C
upvoted 1 times

  Kapello10 10 months, 1 week ago


Selected Answer: C
cccccccccccc
upvoted 1 times
  DivaLight 10 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
Question #95 Topic 1

An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB
instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions
architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?

A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.

B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.

C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.

D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.

Correct Answer: D

Community vote distribution


D (96%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
The solutions architect should recommend option D: Create read replicas for the database. Configure the read replicas with the same
compute and storage resources as the source database.

Creating read replicas allows the application to offload read traffic from the source database, improving its performance. The read replicas
should be configured with the same compute and storage resources as the source database to ensure that they can handle the read
workload effectively.
upvoted 12 times
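
A minimal boto3 sketch of option D; the instance identifiers are hypothetical, and the replica deliberately reuses the source's instance class (the storage configuration is inherited from the source automatically):

import boto3

rds = boto3.client('rds')
source = rds.describe_db_instances(DBInstanceIdentifier='product-db')['DBInstances'][0]

rds.create_db_instance_read_replica(
    DBInstanceIdentifier='product-db-replica-1',
    SourceDBInstanceIdentifier='product-db',
    DBInstanceClass=source['DBInstanceClass'],  # same compute as the source instance
)

The application's read-only queries are then pointed at the replica's endpoint, while writes continue to go to the source instance.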

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: B
Both B and D would work.

Amazon RDS now offers Multi-AZ deployments with readable standby instances (also called Multi-AZ DB cluster deployments) . You should
consider using Multi-AZ DB cluster deployments with two readable DB instances if you need additional read capacity in your Amazon RDS
Multi-AZ deployment and if your application workload has strict transaction latency requirements such as single-digit milliseconds
transactions.

https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-
option/#:~:text=read%20replicas.-,Amazon%20RDS,-now%20offers%20Multi
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The best solution is to create read replicas for the database and configure them with the same compute and storage resources as the
source database.

The key requirements are to quickly optimize performance by isolating reads from writes.

Read replicas allow read-only workloads to be directed to one or more replicas of the source RDS instance. This separates reporting or
analytics queries from transactional workloads.

The read replicas should have the same compute and storage as the source to provide equivalent performance for reads. Scaling down
the replicas would limit read performance.

Using Multi-AZ alone does not achieve read/write separation. The secondary AZ instance is for disaster recovery, not performance.
upvoted 4 times

  MNotABot 2 months, 2 weeks ago


Read replica + same resources, as we may need to promote the replica to primary in some cases
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. In a Multi-AZ deployment, a standby replica of the database is created in a different AZ for high availability and automatic failover
purposes. However, serving read requests from the primary AZ alone would not effectively separate read and write traffic. Both read and
write traffic would still be directed to the primary database instance, which might not fully optimize performance.

B. The secondary instance in a Multi-AZ deployment is intended for failover and backup purposes, not for actively serving read traffic. It
operates in a standby mode and is not optimized for handling read queries efficiently.
C. Configuring the read replicas with half of the compute and storage resources as the source database might not be optimal. It's
generally recommended to configure the read replicas with the same compute and storage resources as the source database to ensure
they can handle the read workload effectively.

D. Configuring the read replicas with the same compute and storage resources as the source database ensures that they can handle the
read workload efficiently and provide the required performance boost.
upvoted 3 times
  Bmarodi 4 months, 1 week ago
Selected Answer: D
D meets the requiremets.
upvoted 1 times

  Adeshina 4 months, 3 weeks ago


Option C suggests creating read replicas for the database and configuring them with half of the compute and storage resources as the
source database. This is a better option as it allows read traffic to be offloaded from the primary database, separating read traffic from
write traffic. Configuring the read replicas with half the resources will also save on costs.
upvoted 1 times

  Charlesleeee 4 months, 1 week ago


Err, just curious, what if the production database is 51% full? Your half storage read replica would explode…?
upvoted 4 times

  Oldman2023 6 months, 1 week ago


Can anyone explain why B is not an option?
upvoted 4 times

  caffee 5 months, 3 weeks ago


Multi-AZ: Synchronous replication occurs, meaning that synchronizing data between DB instances immediately can slow down the
application's performance. But this method increases high availability.
Read Replicas: Asynchronous replication occurs, meaning that replicating data at other moments rather than at write time maintains the
application's performance. Although the data won't be as highly available as with a Multi-AZ deployment, this method increases scalability.
Good for read-heavy workloads.
upvoted 3 times

  draum010 6 months ago


CHATGPT says:

To optimize the application's performance and separate read traffic from write traffic, the solutions architect should recommend
creating read replicas for the database and configuring them to serve read requests. Option C and D both suggest creating read
replicas, but option D is a better choice because it configures the read replicas with the same compute and storage resources as the
source database.

Option A and B suggest changing the existing database to a Multi-AZ deployment, which would provide high availability by replicating
the database across multiple Availability Zones. However, it would not separate read and write traffic, so it is not the best solution for
optimizing application performance in this scenario.
upvoted 4 times

  SuketuKohli 6 months, 2 weeks ago


You can create up to 15 read replicas from one DB instance within the same Region. For replication to operate effectively, each read replica
should have the same amount of compute and storage resources as the source DB instance. If you scale the source DB instance, also scale
the read replicas.
upvoted 2 times

  dhuno 4 months, 2 weeks ago


I think for RDS it is 5 read replicas. 15 is for aurora serverless
upvoted 1 times

  DivaLight 10 months, 1 week ago


Selected Answer: D
Option D
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  Nigma 10 months, 4 weeks ago


D

https://www.examtopics.com/discussions/amazon/view/46461-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Hunkie 11 months ago


Selected Answer: D
If you scale the source DB instance, also scale the read replicas.
upvoted 2 times
  ArielSchivo 11 months, 2 weeks ago
Selected Answer: D
D is correct.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
upvoted 2 times
Question #96 Topic 1

An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:

What is the effect of this policy?

A. Users can terminate an EC2 instance in any AWS Region except us-east-1.

B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.

C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.

D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.

Correct Answer: C

Community vote distribution


C (62%) D (38%)

  Joxtat Highly Voted  9 months ago


What the policy means:
1. Allow termination of any instance if the user's source IP address is 10.100.100.254.
2. Deny termination of instances that are not in the us-east-1 region.
Combining these two, you get:
"Allow instance termination in the us-east-1 region if the user's source IP address is 10.100.100.254. Deny the termination operation in other
regions."
upvoted 31 times

  KMohsoe 4 months, 3 weeks ago


Nice explanation. Thanks
upvoted 3 times
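
The policy image is not reproduced in this dump; based on the description in the comments above, it is approximately the following. This is a hedged reconstruction inferred from the discussion, not the exact document shown in the exam:

import json

policy = {  # approximate reconstruction inferred from the discussion above
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "10.100.100.0/24"}},
        },
        {
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"ec2:Region": "us-east-1"}},
        },
    ],
}
print(json.dumps(policy, indent=2))

Read together, the explicit Deny only matches Regions other than us-east-1, and the Allow only matches callers from 10.100.100.0/24, which is why answer C holds.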

  Subh_fidelity Highly Voted  10 months ago


C is correct.
In a /24 range such as 10.100.100.0/24, the following five IP addresses are reserved:
.0: Network address.
.1: Reserved by AWS for the VPC router.
.2: Reserved by AWS. The IP address of the DNS server is the base of the VPC network range plus two.
.3: Reserved by AWS for future use.
.255: Network broadcast address.
upvoted 14 times

  Bmarodi 4 months, 1 week ago


A good explanation!
upvoted 2 times

  Subhrangsu Most Recent  1 week, 1 day ago


D is not because of Deny & NOT Equals
upvoted 1 times

  Valder21 1 month ago


I went for C for obvious reasons

Wondering though: does this policy also allow terminating EC2 instances in us-east-1 even if your source IP is not 10.100.100.254?
The idea is that since it does not deny this for the other source IP addresses, is the Allow statement obsolete?
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: C
Deny all actions on the EC2 instances in the us-east1 region, but let anyone with source IP 10.100.100.254 be able to terminate the EC2
instances.
upvoted 1 times

  prudhvi08 1 month, 3 weeks ago


Answer C:
Example 4: Granting access to a specific version of an object
https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html
upvoted 1 times

  RupeC 2 months, 1 week ago


Selected Answer: D
The effect of the policy is:

D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.

The policy allows users to terminate EC2 instances only when their source IP is within the range 10.100.100.0/24.
However, there is a Deny statement that blocks users from terminating any EC2 instance in regions other than us-east-1.
So, when a user tries to terminate an EC2 instance from the IP 10.100.100.254 in the us-east-1 region, the Deny statement will take effect,
and the action will be denied. However, if the user tries to terminate an instance from the 10.100.100.0/24 IP range in any region other
than us-east-1, the Deny statement will not apply, and the Allow statement will permit the action.
upvoted 4 times

  JoeGuan 1 month, 3 weeks ago


The Deny statement will not take effect here, because it uses StringNotEquals with us-east-1: any region that does not equal us-east-1 is denied. So us-east-1 itself is allowed!
upvoted 3 times

  Subhrangsu 1 week, 1 day ago


oh, ok got it now.
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


https://cidr.xyz/
upvoted 1 times

  beginnercloud 4 months, 1 week ago


Selected Answer: C
Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Option C is correct
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Option C is right one.
upvoted 2 times

  Moccorso 4 months, 3 weeks ago


I think D
upvoted 1 times

  darn 5 months, 1 week ago


Selected Answer: C
It's C.
Deny & StringNotEquals means you CAN terminate in us-east-1 (basic logic, folks).
upvoted 2 times

  shinejh0528 5 months, 3 weeks ago


Selected Answer: C
Oh... tricky.. TT... C is correct ...
upvoted 1 times
  xalien 6 months ago
It's C:
the Deny on all EC2 actions uses StringNotEquals, which means everything is denied unless the region is us-east-1
upvoted 1 times

  alexiscloud 6 months ago


Answer C:
upvoted 1 times

  Hemanthgowda1932 6 months, 2 weeks ago


C is correct answer
upvoted 1 times

  Mainroad4822 6 months, 2 weeks ago


Selected Answer: D
10.100.100.254 is within the allowed CIDR block. However, it's in the us-east-1 Region, and the Deny overrides everything.
upvoted 3 times

  Robrobtutu 5 months, 2 weeks ago


The deny rule blocks everyone EXCEPT us-east-1 from deleting EC2 instances.
upvoted 1 times
Question #97 Topic 1

A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company
wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and
integrated with Active Directory for access control.
Which solution will satisfy these requirements?

A. Configure Amazon EFS storage and set the Active Directory domain for authentication.

B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.

C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.

D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

Correct Answer: D

Community vote distribution


D (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

Amazon FSx for Windows File Server is a fully managed file storage service that is designed to be used with Microsoft Windows workloads.
It is integrated with Active Directory for access control and is highly available, as it stores data across multiple availability zones.
Additionally, FSx can be used to migrate data from on-premises Microsoft Windows file servers to the AWS Cloud. This makes it a good fit
for the requirements described in the question.
upvoted 14 times
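As a rough sketch of what option D looks like in practice, the following boto3 call provisions a Multi-AZ Amazon FSx for Windows File Server file system joined to an AWS Managed Microsoft AD for authentication. All IDs and sizes below are placeholders, not values taken from the question:

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Placeholder IDs: substitute your own directory, subnets, and security group.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                     # GiB of SSD storage
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD used for access control
        "DeploymentType": "MULTI_AZ_1",       # standby file server in a second AZ for high availability
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,             # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])

SharePoint's Windows servers can then map the resulting SMB share with their existing AD credentials.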

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: D
Microsoft Windows shared file storage = Amazon FSx for Windows File Server
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The best solution that satisfies the requirements is D) Create an Amazon FSx for Windows File Server file system on AWS and set the Active
Directory domain for authentication.

The key requirements are:

Shared Windows file storage for SharePoint


High availability
Integrated Active Directory authentication
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. EFS does not provide native integration with AD for access control. While you can configure EFS to work with AD, it requires additional
setup and is not as straightforward as using a dedicated Windows file system like FSx for Windows File Server.

B. It may introduce additional complexity for this use case. Creating an SMB file share using AWS Storage Gateway would require
maintaining the gateway and managing the synchronization between on-premises and AWS storage.

C. S3 does not natively provide the SMB file protocol required for MS SharePoint and Windows shared file storage. While it is possible to
mount an S3 as a volume using 3rd-party tools or configurations, it is not the recommended.

D. FSx for Windows File Server is a fully managed, highly available file storage service that is compatible with MSWindows shared file
storage requirements. It provides native integration with AD, allowing for seamless access control and authentication using existing AD
user accounts.
upvoted 2 times

  cheese929 5 months, 1 week ago


Selected Answer: D
D is correct. FSx is for windows and supports AD authentication
upvoted 1 times

  kakka22 5 months, 2 weeks ago


Why not B? The workload is being migrated; maybe a hybrid cloud solution is needed?
upvoted 1 times
  gx2222 6 months ago
Selected Answer: D
One solution that can satisfy the mentioned requirements is to use Amazon FSx for Windows File Server. Amazon FSx is a fully managed
service that provides highly available and scalable file storage for Windows-based applications. It is designed to be fully integrated with
Active Directory, which allows you to use your existing domain users and groups to control access to your file shares.

Amazon FSx provides the ability to migrate data from on-premises file servers to the cloud, using tools like AWS DataSync, Robocopy or
PowerShell. Once the data is migrated, you can continue to use the same tools and processes to manage and access the file shares as you
would on-premises.

Amazon FSx also provides features such as automatic backups, data encryption, and native multi-Availability Zone (AZ) deployments for
high availability. It can be easily integrated with other AWS services, such as Amazon S3, Amazon EFS, and AWS Backup, for additional
functionality and backup options.
upvoted 2 times

  psr83 9 months, 2 weeks ago


Selected Answer: D
FSx is for Windows
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  xeun88 9 months, 3 weeks ago


I'm going for D as the answer because FSx is compatible with Windows
upvoted 1 times

  kajal1206 10 months ago


Selected Answer: D
Answer is D
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  TonyghostR05 10 months, 2 weeks ago


Windows shared file storage is only available by using FSx
upvoted 3 times

  Nigma 10 months, 4 weeks ago


D. Windows is the keyword

https://www.examtopics.com/discussions/amazon/view/29780-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  Nigma 10 months, 4 weeks ago


EFS is for Linux
FSx is for Windows
upvoted 6 times

  Hunkie 11 months ago


Selected Answer: D
DDDDDDDD
upvoted 1 times

  dokaedu 11 months, 1 week ago


Correct Answer: D
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 2 times
Question #98 Topic 1

An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3
bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS)
standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users
through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are
invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?

A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.

B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.

C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window
timeout.

D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.

Correct Answer: A

Community vote distribution


C (80%) 11% 6%

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: C
The answer should be C.
Users get duplicate messages because Lambda polls a message and starts processing it. However, before the first Lambda invocation can finish processing the message, the visibility timeout runs out on SQS, and SQS returns the message to the queue, causing another Lambda invocation to process that same message.
By increasing the visibility timeout, SQS is prevented from returning a message to the queue before Lambda can finish processing it.
upvoted 36 times

  JoeGuan 1 month, 3 weeks ago


The FIFO SQS is for solving a different problem, where items in the queue require order. You cannot simply switch from a standard
queue to fifo queue. Duplicate emails are a common issue with a standard queue. The documentation consistently reminds us that
duplicate emails can occur, and the solution is not to create a FIFO queue, but rather adjust the configuration parameters accordingly.
upvoted 1 times

  PLN6302 1 month, 1 week ago


Amazon S3 event notifications don't support FIFO queues as a destination
upvoted 1 times

  MutiverseAgent 2 months, 3 weeks ago


I agree the solution seems to be C. Although SQS FIFO might sound reasonable, the deduplication ID would make no sense here because the system putting messages in the queue is S3 event notifications, and as far as I know S3 does not send duplicate events. Also, the question mentions that users are complaining about receiving multiple emails for every uploaded image, which is different from saying they occasionally receive a repeated email; so my guess is that SQS FIFO is not needed.
upvoted 1 times

  Ello2023 8 months, 2 weeks ago


I am confused. If the email has already been sent many times, why would they need more time?
I believe an SQS FIFO queue will keep messages in order and any duplicates with the same ID will be discarded. Can you tell me where I am going wrong?
Thanks
upvoted 3 times

  Robrobtutu 5 months, 2 weeks ago


Increasing the visibility timeout would give the Lambda function time to finish processing the message, which would make it disappear from the queue, and therefore only one email would be sent to the user.
If the visibility timeout ends while the Lambda function is still processing the message, the message is returned to the queue, another Lambda invocation picks it up and processes it again, and the user receives two or more emails about the same thing.
upvoted 3 times

  Abdou1604 1 month, 2 weeks ago


I agree, because the issue is multiple emails received for a single uploaded image
upvoted 1 times

  aadityaravi8 3 months ago


I agree with your answer explanation
upvoted 1 times

  MrAWS 8 months, 2 weeks ago


I tend to agree with you. See my comments above.
upvoted 1 times

  brushek Highly Voted  11 months, 3 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html

this is important part:


Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again,
Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing
the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
upvoted 13 times
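To make option C concrete, the following boto3 sketch raises the queue's visibility timeout so that it comfortably exceeds the Lambda function timeout plus the batch window; the queue URL and the timing values are assumptions for illustration only:

import boto3

# Assumed timings for illustration.
function_timeout_seconds = 120   # Lambda function timeout
batch_window_seconds = 30        # MaximumBatchingWindowInSeconds on the event source mapping
visibility_timeout = function_timeout_seconds + batch_window_seconds + 30   # extra headroom

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-upload-events",   # placeholder
    Attributes={"VisibilityTimeout": str(visibility_timeout)},
)

With the message hidden for longer than any single invocation can run, SQS no longer redelivers it to a second consumer mid-processing, so users receive only one email per image.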

  vijaykamal Most Recent  5 days, 15 hours ago


Long polling is incorrect. It just means the consumer waits up to the configured wait time for messages to arrive instead of polling the queue repeatedly at very short intervals. Long polling reduces empty responses and saves money, but it does not help remove duplicates.

Correct Answer: C
upvoted 1 times

  hieulam 1 week, 5 days ago


Selected Answer: A
I think A is correct.
https://aws.amazon.com/blogs/developer/polling-messages-from-a-amazon-sqs-queue/#:~:text=When%20disabling,more%20API%20calls.
upvoted 1 times

  kwang312 1 month, 1 week ago


D is an incorrect answer because Lambda automatically deletes the message from the queue when it finishes processing successfully
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: C
Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again,
Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents all consumers from receiving and processing
the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html#:~:text=SQS%20sets%20a-,visibility%20timeout,-%2C%20a%20period%20of
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: C
I would go with the C.
upvoted 1 times

  Olaunfazed 3 months, 1 week ago


Answer is B.

B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.

By changing the SQS standard queue to an SQS FIFO (First-In-First-Out) queue, you can ensure that messages are processed in the order
they are received and that each message is processed only once. FIFO queues provide exactly-once processing and eliminate duplicates.

Using the message deduplication ID feature of SQS FIFO queues, you can assign a unique identifier (such as the S3 object key) to each
message. SQS will check the deduplication ID of incoming messages and discard duplicate messages with the same deduplication ID. This
ensures that only unique messages are processed by the Lambda function.

This solution requires minimal operational overhead as it mainly involves changing the queue type and using the deduplication ID feature,
without requiring modifications to the Lambda function or adjusting timeouts.
upvoted 3 times

  dangvanduc90 3 weeks ago


Compared with C, the SQS FIFO change takes more effort; B matters when you are concerned about ordering.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
A. Long polling doesn't directly address the issue of multiple invocations of the Lambda for the same message. Increasing the
ReceiveMessage may not completely prevent duplicate invocations.

B. Changing the queue type from standard to FIFO requires additional considerations and changes to the application architecture. It may
involve modifying the event configuration and handling message deduplication IDs, which can introduce operational overhead.

D. Deleting messages immediately after reading them may lead to message loss if the Lambda encounters an error or fails to process the
image successfully. It does not guarantee message processing and can result in data loss.

C. By setting the visibility timeout to a value greater than the total time required for the Lambda to process the image and send the email,
you ensure that the message is not made visible to other consumers during processing. This prevents duplicate invocations of the
Lambda for the same message.
upvoted 2 times
  Abrar2022 4 months, 1 week ago
FIFO - IS A SOLUTION BUT REQUIRES OPERATIONAL OVERHEAD.
INCREASING VISIBILITY TIMEOUT - REQUIRES FAR LESS OPERATIONAL OVERHEAD.
upvoted 3 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
I go for option C.
upvoted 2 times

  Rahulbit34 5 months ago


The SQS visibility timeout can help prevent reprocessing of the message from the queue. By default the timeout is 30 seconds; the minimum is 0 and the maximum is 12 hours.
upvoted 1 times

  quanbui 5 months, 1 week ago


Selected Answer: C
ccccccc
upvoted 2 times

  tikytaka 5 months, 2 weeks ago


Apologies, I meant A is wrong
upvoted 1 times

  tikytaka 5 months, 2 weeks ago


C is wrong:
'When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is
20 seconds.'
upvoted 1 times

  kuls91 5 months, 2 weeks ago


I took the exam in April 2023; out of 65 questions only 25-30 were from the dumps. I passed (820!) but these dumps alone are not reliable. One thing is sure: going through these questions and working out the answers yourself will help you pass the actual exam.
upvoted 2 times

  lquintero 5 months ago


Hi KUl91, what other practice-test sources did you use?
upvoted 1 times

  kraken21 6 months ago


Selected Answer: C
Key is minimal operational overhead.
upvoted 1 times
Question #99 Topic 1

A company is implementing a shared storage solution for a gaming application that is hosted in an on-premises data center. The company needs
the ability to use Lustre clients to access data. The solution must be fully managed.
Which solution meets these requirements?

A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the
file share.

B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.

C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin
server. Connect the application server to the file system.

D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.

Correct Answer: D

Community vote distribution


D (95%) 5%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
Answer is D.
Lustre in the question is only available as FSx
https://aws.amazon.com/fsx/lustre/
upvoted 22 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
Option D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file
system.

Amazon FSx for Lustre is a fully managed file system that is designed for high-performance workloads, such as gaming applications. It
provides a high-performance, scalable, and fully managed file system that is optimized for Lustre clients, and it is fully integrated with
Amazon EC2. It is the only option that meets the requirements of being fully managed and able to support Lustre clients.
upvoted 9 times
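For reference, a minimal boto3 sketch of option D: create an Amazon FSx for Lustre file system and mount it from a Lustre client. The subnet, security group, and sizing values are placeholders:

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Placeholder values: adjust subnet, security group, and capacity to your environment.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                     # GiB
    SubnetIds=["subnet-aaaa1111"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 125,      # MB/s per TiB
    },
)
fs = response["FileSystem"]
print(fs["FileSystemId"], fs["DNSName"], fs["LustreConfiguration"]["MountName"])

# A Lustre client (on premises over VPN/Direct Connect, or on EC2) then mounts it roughly like:
#   sudo mount -t lustre <DNSName>@tcp:/<MountName> /mnt/fsx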

  TariqKipkemei Most Recent  1 month, 1 week ago


Selected Answer: D
Lustre clients = Amazon FSx for Lustre file system
upvoted 1 times

  Guru4Cloud 1 month, 3 weeks ago


Selected Answer: D
The correct solution is D) Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application
server to the file system.

The key requirements are:

Shared storage solution


Support Lustre clients
Fully managed service
Amazon FSx for Lustre provides a fully managed file system that is optimized for Lustre workloads. It allows Lustre clients to seamlessly
connect to the file system.
upvoted 2 times

  RupeC 2 months, 1 week ago


Selected Answer: A
Sorry, but I disagree with everyone. The question states "a gaming application that is hosted in an on-premises data center". Option D
does not address this and cannot to my knowledge address it. Thus:

A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to
the file share.

By using AWS Storage Gateway in file gateway mode, you can extend your on-premises data center storage into the AWS cloud. The file
share created on AWS Storage Gateway can use the necessary client protocol (such as Lustre), which would allow the Lustre clients in your
on-premises data center to access the data stored on AWS Storage Gateway.
This solution enables you to use Lustre clients to access data, while still keeping the gaming application hosted in your on-premises data
center. AWS Storage Gateway provides a fully managed solution for this hybrid scenario, allowing seamless integration between on-
premises and AWS cloud storage.
upvoted 2 times

  JoeGuan 1 month, 3 weeks ago


So, I think that the FSx File Gateway is currently only available for Windows? I don't think Lustre is part of this offering yet as of
8/8/2023
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: D
Content of "Amazon FSx for Lustre" at this link https://aws.amazon.com/fsx/lustre/ . Focus at image, section: "On-premises clients".
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
A. Lustre client access is not supported by AWS Storage Gateway file gateway.

B. Creating a Windows file share on an EC2 Windows instance is suitable for Windows-based file sharing, but it does not provide the
required Lustre client access. Lustre is a high-performance parallel file system primarily used in high-performance computing (HPC)
environments.

C. EFS does not natively support Lustre client access. Although EFS is a managed file storage service, it is designed for general-purpose file
storage and is not optimized for Lustre workloads.

D. Amazon FSx for Lustre is a fully managed file system optimized for high-performance computing workloads, including Lustre clients. It
provides the ability to use Lustre clients to access data in a managed and scalable manner. By choosing this option, the company can
benefit from the performance and manageability of Amazon FSx for Lustre while meeting the requirement of Lustre client access.
upvoted 2 times

  Musti35 5 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/fsx/lustre/?nc1=h_ls#:~:text=Amazon%20FSx%20for%20Lustre%20provides%20fully%20managed%20shared%20storage%20with%20the%20scalability%20and%20performance%20of%20the%20popular%20Lustre%20file%20system.
upvoted 1 times

  jdr75 5 months, 4 weeks ago


Selected Answer: D
Option D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file
system.

BUT the on-premises server couldn't see or get good performance from the file system, so the question is a bit absurd!
upvoted 1 times

  fkie4 6 months, 3 weeks ago


Selected Answer: D
seriously? it spells out "Lustre" for you
upvoted 1 times

  CaoMengde09 7 months, 4 weeks ago


D is the most logical solution. But the app is still on premises, so Amazon FSx for Lustre alone is not enough to connect the storage to the app; we'll need a File Gateway to use with FSx for Lustre.
upvoted 2 times

  Chalamalli 8 months ago


D is correct
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times
Question #100 Topic 1

A company's containerized application runs on an Amazon EC2 instance. The application needs to download security certificates before it can
communicate with other business applications. The company wants a highly secure solution to encrypt and decrypt the certificates in near real
time. The solution also needs to store data in highly available storage after the data is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as needed. Control access to the data by
using fine-grained IAM access.

B. Create an AWS Lambda function that uses the Python cryptography library to receive and perform encryption operations. Store the function
in an Amazon S3 bucket.

C. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon S3.

D. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon Elastic Block Store (Amazon EBS) volumes.

Correct Answer: D

Community vote distribution


C (77%) D (23%)

  Chunsli Highly Voted  11 months, 2 weeks ago


C makes a better sense. Between C (S3) and D (EBS), S3 is highly available with LEAST operational overhead.
upvoted 30 times

  MutiverseAgent 2 months, 3 weeks ago


Agree, also the data in EBS will be accessible only to the EC2 instance and that is not as available as S3 would be.
upvoted 1 times

  MXB05 Highly Voted  11 months, 3 weeks ago


Selected Answer: C
Correct Answer is C: EBS is not highly available
upvoted 17 times

  Ello2023 8 months, 2 weeks ago


EBS is Highly Available as it stores in multi AZ and S3 is regional.
upvoted 1 times

  oguz11 8 months, 1 week ago


EBS also has Multi-AZ capability, but it does not replicate the data across multiple availability zones by default. When Multi-AZ is
enabled, it creates a replica of the EBS volume in a different availability zone and automatically failover to the replica in case of a
failure. However, this requires additional configuration and management. In comparison, Amazon S3 automatically replicates data
across multiple availability zones without any additional configuration. Therefore, storing the data on Amazon S3 provides a simpler
and more efficient solution for high availability.
upvoted 7 times

  FNJ1111 9 months, 1 week ago


Per AWS: "Amazon EBS volumes are designed to be highly available, reliable, and durable"

https://aws.amazon.com/ebs/features/
upvoted 2 times

  JayBee65 9 months, 2 weeks ago


Yes it is!
upvoted 1 times

  joshik Most Recent  5 days, 7 hours ago


C. when it comes to availability, Amazon S3 is generally more highly available than Amazon EBS because S3 replicates data across multiple
AZs by default, providing greater resilience to failures. However, the choice between S3 and EBS depends on your specific use case and
whether you need block storage for EC2 instances (EBS) or object storage for storing and retrieving data (S3).
upvoted 1 times

  Ramdi1 2 weeks, 4 days ago


Selected Answer: D
I selected D, even though S3 has high availability to 11 9’s. The question started with EC2 Instance. EBS provides block level storage that is
attached to EC2 Instances. They are also designed for High Availability.
upvoted 1 times
  Guru4Cloud 1 month, 3 weeks ago
Selected Answer: C
Option C is the best solution that meets all the requirements with the least operational overhead:

Use AWS KMS customer managed key for encryption


Allow EC2 instance role access to use the KMS key
Store encrypted data in Amazon S3
upvoted 1 times

  mr_D3v1n3 2 months, 1 week ago


All data within EBS is stored in equally sized blocks. This system offers some performance advantages over traditional storage, and
generally boasts lower latency, too. This would meet the near real time requirement over the S3 option
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: C
A: The manual process is missing automated encrypt/decrypt. B: "Store the function in an Amazon S3 bucket" is meaningless. D: Amazon Elastic Block Store (Amazon EBS) is block storage for whole volumes (disks). The context of the question requires near real-time handling of small pieces of data, not a big volume. --> Choose C (S3 with AWS Key Management Service - AWS KMS).

See https://docs.aws.amazon.com/kms/index.html . Decrypt process:


https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
A. Manual - no no no!
B. External (python) library - no no no!
C. yeap.
D. S3 over EBS (see answer C)
upvoted 2 times

  Futurebones 4 months, 2 weeks ago


I will go for D, as mentioned in the question ' an EC2 instance' , ' near real-time', 'LEAST operational overhead' all refer to EBS rather than
S3.
upvoted 2 times

  bgsanata 4 months, 3 weeks ago


The correct answer is D...
Using a containerized application on EC2 means it's easier to use EBS. S3 requires extra work to be done, and the question is about least operational overhead.
upvoted 1 times

  studynoplay 4 months, 4 weeks ago


Selected Answer: C
The moment you see storage, think S3. It is default unless there is a very specific requirement where S3 does not fit which will be explicitly
described in the question
upvoted 2 times

  Rahulbit34 5 months ago


C makes sense, as it's asking for least operational overhead
upvoted 1 times

  channn 5 months, 3 weeks ago


Selected Answer: C
A. manual put <> near real time
C. chooses as S3 is highly available
D: only for that EC2
upvoted 1 times

  gx2222 6 months ago


Selected Answer: C
To meet the requirements of securely downloading, encrypting, decrypting, and storing certificates with minimal operational overhead,
you can use AWS Key Management Service (KMS) and Amazon S3.

Here's how this solution would work:

Store the security certificates in an S3 bucket with Server-Side Encryption enabled.


Create a KMS Customer Master Key (CMK) for encrypting and decrypting the certificates.
Grant permission to the EC2 instance to access the CMK.
Have the application running on the EC2 instance retrieve the security certificates from the S3 bucket.
Use the KMS API to encrypt and decrypt the certificates as needed.
Store the encrypted certificates in another S3 bucket with Server-Side Encryption enabled.
This solution provides a highly secure way to encrypt and decrypt certificates and store them in highly available storage with minimal
operational overhead. AWS KMS handles the encryption and decryption of data, while S3 provides highly available storage for the
encrypted data. The only operational overhead involved is setting up the KMS CMK and S3 buckets, which is a one-time setup task.
upvoted 2 times
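A minimal boto3 sketch of option C, assuming a customer managed KMS key already exists; the key alias, bucket name, and file path are placeholders:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

KEY_ID = "alias/app-certificates"     # placeholder customer managed key
BUCKET = "example-encrypted-certs"    # placeholder bucket name

# Encrypt the certificate bytes with the customer managed key (near real time).
with open("service-cert.pem", "rb") as f:
    ciphertext = kms.encrypt(KeyId=KEY_ID, Plaintext=f.read())["CiphertextBlob"]

# Store the encrypted blob in highly available S3 storage.
s3.put_object(Bucket=BUCKET, Key="certs/service-cert.pem.enc", Body=ciphertext)

# Later, the container on EC2 (via its instance role) retrieves and decrypts it.
obj = s3.get_object(Bucket=BUCKET, Key="certs/service-cert.pem.enc")
plaintext = kms.decrypt(CiphertextBlob=obj["Body"].read())["Plaintext"]

Note that the KMS Encrypt API accepts at most 4 KB of plaintext, so larger certificates would normally use envelope encryption via GenerateDataKey, or simply S3 server-side encryption with the same key (SSE-KMS).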
  alexiscloud 6 months ago
C: S3 is highly available
upvoted 1 times

  AHUI 8 months, 2 weeks ago


Ans is C:
Security certificates are just normal files; this is not about SSL certificates etc. Confusing!
upvoted 1 times

  goodmail 8 months, 3 weeks ago


Selected Answer: C
Is this the real question from Exam? It is typically vague. Usually S3 would be chosen when the situation mentioned "high availability". But
AWS official website states that EBS volume has 99.999% availability.
upvoted 3 times
Question #101 Topic 1

A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet
and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the
public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.
What should the solutions architect do to enable Internet access for the private subnets?

A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to
the NAT gateway in its AZ.

B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic
to the NAT instance in its AZ.

C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic
to the private internet gateway.

D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC
traffic to the egress-only Internet gateway.

Correct Answer: A

Community vote distribution


A (97%)

  Gil80 Highly Voted  10 months, 3 weeks ago


Selected Answer: A
NAT Instances - OUTDATED BUT CAN STILL APPEAR IN THE EXAM!
However, given that A provides the newer option of NAT Gateway, then A is the correct answer.

B would be correct if NAT Gateway wasn't an option.


upvoted 10 times

  Shrestwt 5 months, 1 week ago


A NAT instance or NAT gateway is always created in a public subnet to provide internet access to a private subnet. In option B they are creating the NAT instances in the private subnets, which is not correct.
upvoted 8 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: A
The best solution is to create a NAT gateway in each public subnet (one per availability zone), and update the route tables for the private
subnets to send internet traffic to the NAT gateway.

NAT gateways allow private subnets to access the internet for things like software updates, without exposing those instances directly to the internet. An egress-only internet gateway only works for IPv6 traffic, and this VPC uses IPv4 CIDR blocks, so it is not suitable for the private subnets.
upvoted 1 times

  james2033 2 months, 1 week ago


Selected Answer: A
"Egress" means outbound connection, remove D. "Second gateway", remove C.

Now only A and B remain. The difference between A and B is one NAT gateway per public subnet in each AZ (A) versus one NAT instance per private subnet in each AZ (B).

Choose A.
upvoted 3 times

  cookieMr 3 months, 1 week ago


By creating a NAT gateway in each public subnet, the private subnets can route their Internet-bound traffic through the NAT gateways.
This allows EC2 in the private subnets to download software updates and access other resources on the Internet.

Additionally, a separate private route table should be created for each AZ. The private route tables should have a default route that
forwards non-VPC traffic (0.0.0.0/0) to the corresponding NAT gateway in the same AZ. This ensures that the private subnets use the
appropriate NAT gateway for Internet access.

B is incorrect because NAT instances require manual management and configuration compared to NAT gateways, which are a fully
managed service. NAT instances are also being deprecated in favor of NAT gateways.

C is incorrect because creating a second internet gateway on a private subnet is not a valid solution. Internet gateways are associated with
public subnets and cannot be directly associated with private subnets.
D is incorrect because egress-only internet gateways are used for IPv6 traffic.
upvoted 3 times
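A rough boto3 sketch of option A for one Availability Zone (repeat per AZ); the subnet and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# 1. Allocate an Elastic IP and create the NAT gateway in the PUBLIC subnet of this AZ.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-az1",             # placeholder public subnet ID
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# 2. Point the PRIVATE subnet's route table at the NAT gateway for all non-VPC traffic.
ec2.create_route(
    RouteTableId="rtb-private-az1",           # placeholder private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)

Keeping one NAT gateway and one private route table per AZ means an AZ outage only affects that AZ's private subnet.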
  Jeeva28 4 months, 1 week ago
The NAT gateway is created in the public subnet and provides internet access to the private subnet
upvoted 1 times

  cheese929 5 months ago


Selected Answer: A
A is correct.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html
upvoted 1 times

  Heric 5 months, 2 weeks ago


Selected Answer: A
NAT instances are now discouraged by AWS, so choose the NAT gateway
upvoted 3 times

  alexiscloud 6 months ago


A: NAT Gateway
upvoted 1 times

  Rudraman 6 months, 1 week ago


Selected Answer: A
NAT Gateway - AWS-managed NAT, higher bandwidth, high availability, no administration
upvoted 1 times

  RODCCN 7 months ago


You should create 3 NAT gateways, but not in the public subnets. So even though NAT instances are already deprecated, B is the right answer in this case, since it relates to creating them in a private subnet, not a public one.
upvoted 2 times

  Ben2008 7 months, 1 week ago


Refer:
https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html#public-nat-gateway-overview
Should be A.
upvoted 1 times

  erik29 9 months ago


aaaaaa
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: A
Networking 101, A is only right option
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: A
The correct answer is option A.

To enable Internet access for the private subnets, the solutions architect should create three NAT gateways, one for each public subnet in
each Availability Zone (AZ). NAT gateways allow private instances to initiate outbound traffic to the Internet but do not allow inbound
traffic from the Internet to reach the private instances.

The solutions architect should then create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. This
will allow instances in the private subnets to access the Internet through the NAT gateways in the public subnets.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Option A
A NAT gateway needs to be configured in the public subnet in each AZ.
upvoted 1 times

  Deplake 10 months, 4 weeks ago


Selected Answer: B
Should be B
upvoted 1 times

  Nigma 10 months, 4 weeks ago


https://www.examtopics.com/discussions/amazon/view/35679-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #102 Topic 1

A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file
system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an
Amazon Elastic File System (Amazon EFS) file system.
Which combination of steps should a solutions architect take to automate this task? (Choose two.)

A. Launch the EC2 instance into the same Availability Zone as the EFS file system.

B. Install an AWS DataSync agent in the on-premises data center.

C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.

D. Manually use an operating system copy command to push the data to the EC2 instance.

E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.

Correct Answer: AB

Community vote distribution


BE (52%) AB (44%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: AB
**A**. Launch the EC2 instance into the same Availability Zone as the EFS file system.
Makes sense to have the instance in the same AZ the EFS storage is.
**B**. Install an AWS DataSync agent in the on-premises data center.
The DataSync with move the data to the EFS, which already uses the EC2 instance (see the info provided). No more things are required...
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
This secondary EBS volume isn't required... the data should be move on to EFS...
D. Manually use an operating system copy command to push the data to the EC2 instance.
Potentially possible (instead of A), BUT the "automate this task" premise goes against any "manually" action. So, we should keep A.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
I don't get the relationship between DataSync and the configuration for SFTP "on-prem"! Nonsense.
So, anwers are A&B
upvoted 39 times

  Iconique 1 week, 3 days ago


Just go to the AWS Console, open DataSync, and choose "Create Location Configuration". Location configurations are endpoints used in a DataSync task. A location can be the source endpoint of the task, e.g. an NFS on-premises file system. So E is helping the automation process. A is not even part of this automation process; having EC2 with EFS is already agreed as part of the solution, and how you connect EC2 to EFS is not part of it!
upvoted 1 times

  attila9778 10 months, 1 week ago


Can someone explain why A is correct?
EFS is spread across Availability Zones in a region, as per https://aws.amazon.com/blogs/gametech/gearbox-entertainment-goes-remote-with-aws-and-perforce/
My question then is whether it makes sense to launch EC2 instances in the *same Availability Zone as the EFS file system* ?
upvoted 6 times

  lovelazur 6 months, 1 week ago


However, launching the EC2 instance in the same AZ as the EFS file system can provide some performance benefits, such as reduced
network latency and improved throughput. Therefore, it may be a best practice to launch the EC2 instance in the same AZ as the EFS
file system if performance is a concern.
upvoted 2 times

  BlueVolcano1 8 months, 2 weeks ago


Yes exactly, that's why A doesn't make sense. I voted for B and E.
upvoted 3 times

  Lalo 7 months, 2 weeks ago


CORRECT ANSWER: B&E
Steps 4 & 5:
https://aws.amazon.com/datasync/getting-started/?nc1=h_ls
upvoted 9 times

  RBSK 9 months, 3 weeks ago


will A,B work without E?
upvoted 3 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: BE
Answer and HOW-TO

B. Install an AWS DataSync agent in the on-premises data center.


E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.

To automate the process of transferring the data from the on-premises SFTP server to an EC2 instance with an EFS file system, you can use
AWS DataSync. AWS DataSync is a fully managed data transfer service that simplifies, automates, and accelerates transferring data
between on-premises storage systems and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.

To use AWS DataSync for this task, you should first install an AWS DataSync agent in the on-premises data center. This agent is a
lightweight software application that you install on your on-premises data source. The agent communicates with the AWS DataSync
service to transfer data between the data source and target locations.
upvoted 22 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Next, you should use AWS DataSync to create a suitable location configuration for the on-premises SFTP server. A location represents a
data source or a data destination in an AWS DataSync task. You can create a location for the on-premises SFTP server by specifying the
IP address, the path to the data, and the necessary credentials to access the data.

Once you have created the location configuration for the on-premises SFTP server, you can use AWS DataSync to transfer the data to
the EC2 instance with the EFS file system. AWS DataSync handles the data transfer process automatically and efficiently, transferring
the data at high speeds and minimizing downtime.
upvoted 9 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Explanation of other options

A. Launch the EC2 instance into the same Availability Zone as the EFS file system.

This option is not wrong, but it is not directly related to automating the process of transferring the data from the on-premises SFTP
server to the EC2 instance with the EFS file system. Launching the EC2 instance into the same Availability Zone as the EFS file system
can improve the performance and reliability of the file system, as it reduces the latency between the EC2 instance and the file
system. However, it is not necessary for automating the data transfer process.
upvoted 5 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.

This option is incorrect because Amazon EBS is a block-level storage service that is designed for use with Amazon EC2 instances.
It is not suitable for storing large amounts of data that need to be accessed by multiple EC2 instances, like in the case of the NFS-
based file system on the on-premises SFTP server. Instead, you should use Amazon EFS, which is a fully managed, scalable, and
distributed file system that can be accessed by multiple EC2 instances concurrently.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


D. Manually use an operating system copy command to push the data to the EC2 instance.

This option is not wrong, but it is not the most efficient or automated way to transfer the data from the on-premises SFTP
server to the EC2 instance with the EFS file system. Manually transferring the data using an operating system copy command
would require manual intervention and would not scale well for large amounts of data. It would also not provide the same
level of performance and reliability as a fully managed service like AWS DataSync.
upvoted 3 times
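To ground options B and E, here is a rough boto3 sketch of what happens after the DataSync agent is installed on premises and activated: create an NFS source location that is reached through the agent, an EFS destination location, and a task between them. All ARNs, hostnames, and paths are placeholders:

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the installed DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="sftp-server.example.internal",   # placeholder on-premises host
    Subdirectory="/exports/sftp-data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0example"]},
)

# Destination: the Amazon EFS file system that the EC2 instance mounts.
destination = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0example",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0example",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:123456789012:security-group/sg-0example"],
    },
)

# Task: copies the 200 GB from the NFS source into EFS; run it on demand or on a schedule.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="migrate-sftp-data-to-efs",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])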

  axelrodb Most Recent  2 weeks, 4 days ago


Selected Answer: BD
BE is the correct answer
upvoted 1 times

  SuperDuperPooperScooper 1 month, 1 week ago


https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c03/view/11/#
upvoted 1 times

  Raggz 1 month, 2 weeks ago


Selected Answer: AE
A. Launch the EC2 instance into the same Availability Zone as the EFS file system and E. Use AWS DataSync to create a suitable location
configuration for the on-premises SFTP server.
These two steps in combination should be taken to automate this task. Launching the EC2 instance into the same Availability Zone as the
EFS file system ensures that the instance has low latency access to the file system. AWS DataSync can then be used to automate the
transfer of data from the on-premises SFTP server to the EFS file system on the EC2 instance. DataSync is an easy-to-use data transfer
service that simplifies, automates, and accelerates moving large amounts of data into and out of AWS services such as Amazon S3,
Amazon Elastic File System (Amazon EFS), and Amazon FSx for Windows File Server. The other options are not relevant or do not provide
an automated solution for migrating the data center to AWS.

This is AI response, Is this correct?


upvoted 1 times

  Abdou1604 1 month, 2 weeks ago


C is good , Amazon CloudFront is a content delivery network (CDN) service that helps distribute content globally with low latency and high
data transfer speeds. By configuring your website to use CloudFront, your website's traffic can be distributed across multiple edge
locations around the world. This not only improves user experience by reducing latency but also provides protection against DDoS attacks.
CloudFront is designed to absorb and mitigate DDoS attacks by distributing traffic across its network of edge locations.
upvoted 1 times

  james2033 2 months, 1 week ago


Selected Answer: BE
Keyword "AWS DataSync" . Choose B and E, where has this keyword. NFS stands for "Network File System". SFTP stands for "Secure Fiel
Transfer Protocol". AWS DataSync https://aws.amazon.com/datasync/ , it is suitable for migration data from on-premises to AWS cloud.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: BE
B. By installing an AWS DataSync agent in the on-premises data center, the architect can establish a secure connection between the on-
premises environment and AWS.

E. Once the DataSync agent is installed, the solutions architect should configure it to create a suitable location configuration that specifies
the source location as the on-premises SFTP server and the target location as the EFS. AWS DataSync will handle the secure and efficient
transfer of the data from the on-premises server to the EC2 using EFS.

A. Launching EC2 into the same AZ as the EFS is not directly related to automating the migration task.

C. Creating a secondary EBS on the EC2 for the data is not necessary when using EFS. EFS provides a scalable, fully managed NFS-based
file system that can be mounted directly on the EC2, eliminating the need for separate EBS.

D. It would require manual intervention and could be error-prone, especially for large amounts of data.
upvoted 2 times

  Anmol_1010 3 months, 2 weeks ago


EFS is launched in the same Region, so the answer is option AB
upvoted 1 times

  fishy_resolver 3 months, 3 weeks ago


Selected Answer: BE
B: DataSync to copy the data automatically
E: DataSync discovery job to identify how / where to store your data automatically
https://docs.aws.amazon.com/datasync/latest/userguide/getting-started-discovery-job.html
upvoted 1 times

  antropaws 4 months ago


Selected Answer: BE
A is irrelevant given the scenario.
upvoted 1 times

  Pradeepdekhane 4 months, 4 weeks ago


Selected Answer: BE
Datasync configuration are required
upvoted 1 times

  shinejh0528 5 months ago


Selected Answer: BE
A : same AZ? why?
upvoted 1 times

  kruasan 5 months, 1 week ago


Selected Answer: BE
B* To access your self-managed on-premises or cloud storage, you need an AWS DataSync agent that's associated with your AWS account.
https://docs.aws.amazon.com/datasync/latest/userguide/configure-agent.html

E* A location is a storage system or service that AWS DataSync reads from or writes to. Each DataSync transfer has a source and
destination location.
https://docs.aws.amazon.com/datasync/latest/userguide/configure-agent.html
upvoted 1 times

  kamx44 5 months, 2 weeks ago


Selected Answer: BE
You need to install the agent and provide a location, so BE
upvoted 1 times

  darn 5 months, 2 weeks ago


Selected Answer: AB
A needs instance
B needs Datasync
upvoted 1 times
  ErfanKh 5 months, 3 weeks ago
Selected Answer: AB
I chose AB and Chat GPT said AB
upvoted 1 times
Question #103 Topic 1

A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an
Amazon S3 bucket. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during
each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?

A. Edit the job to use job bookmarks.

B. Edit the job to delete data after the data is processed.

C. Edit the job by setting the NumberOfWorkers field to 1.

D. Use a FindMatches machine learning (ML) transform.

Correct Answer: A

Community vote distribution


A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
This is the purpose of bookmarks: "AWS Glue tracks data that has already been processed during a previous run of an ETL job by persisting
state information from the job run. This persisted state information is called a job bookmark. Job bookmarks help AWS Glue maintain state
information and prevent the reprocessing of old data."
https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
upvoted 32 times
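Concretely, job bookmarks are enabled through the job's default arguments. A minimal boto3 sketch follows; the job name, role, and script location are placeholders, and UpdateJob replaces the whole job definition, so the existing Role and Command must be kept:

import boto3

glue = boto3.client("glue")

# Enable job bookmarks so each run only processes data added since the previous run.
glue.update_job(
    JobName="daily-xml-etl",                                   # placeholder job name
    JobUpdate={
        "Role": "arn:aws:iam::123456789012:role/GlueJobRole",  # keep the job's existing role
        "Command": {
            "Name": "glueetl",
            "ScriptLocation": "s3://example-bucket/scripts/daily_xml_etl.py",
        },
        "DefaultArguments": {"--job-bookmark-option": "job-bookmark-enable"},
    },
)

Inside the ETL script, the sources need a transformation_ctx and the job must call job.commit() at the end so that the bookmark state is persisted.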

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: A
The best solution is to edit the AWS Glue job to use job bookmarks.

Job bookmarks allow AWS Glue ETL jobs to track which data has already been processed during previous runs. This prevents reprocessing
of old data.

Deleting the data after processing would cause the data to be lost and unavailable for future processing. Reducing the number of workers
may improve performance but does not prevent reprocessing of old data. A FindMatches ML transform is used for record matching, not for preventing reprocessing.

So the solutions architect should enable job bookmarks in the AWS Glue job configuration. This will allow the ETL job to keep track of
processed data and only transform the new data added since the last run.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
A. Job bookmarks in Glue allow you to track the last-processed data in a job. By enabling job bookmarks, Glue keeps track of the processed
data and automatically resumes processing from where it left off in subsequent job runs.

B. Results in the permanent removal of the data from the S3, making it unavailable for future job runs. This is not desirable if the data
needs to be retained or used for subsequent analysis.

C.It would only affect the parallelism of the job but would not address the issue of reprocessing old data. It does not provide a mechanism
to track the processed data or skip already processed data.

D. It is not directly related to preventing Glue from reprocessing old data. The FindMatches transform is used for identifying and matching
duplicate or matching records in a dataset. While it can be used in data processing pipelines, it does not address the specific requirement
of avoiding reprocessing old data in this scenario.
upvoted 4 times

  bedwal2020 5 months ago


Selected Answer: A
Job bookmarks make sure that the Glue job will not process already processed files.
upvoted 1 times

  Heric 5 months, 2 weeks ago


Selected Answer: A
Job bookmarks are used in AWS Glue ETL jobs to keep track of the data that has already been processed in a previous job run. With
bookmarks enabled, AWS Glue will read the bookmark information from the previous job run and will only process the new data that has
been added to the data source since the last job run. This saves time and reduces costs by eliminating the need to reprocess old data.
Therefore, a solutions architect should edit the AWS Glue ETL job to use job bookmarks so that it will only process new data added to the
S3 bucket since the last job run.
upvoted 2 times
  linux_admin 6 months ago
Selected Answer: A
Job bookmarks enable AWS Glue to track the data that has been processed in a previous run of the job. With job bookmarks enabled, AWS
Glue will only process new data that has been added to the S3 bucket since the previous run of the job, rather than reprocessing all data
every time the job runs.
upvoted 2 times

  gustavtd 9 months ago


Deleting files in S3 freely is not good, so B is not correct.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: A
A is correct
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
Option A. Edit the job to use job bookmarks.

Job bookmarks in AWS Glue allow the ETL job to track the data that has been processed and to skip data that has already been processed.
This can prevent AWS Glue from reprocessing old data and can improve the performance of the ETL job by only processing new data. To
use job bookmarks, the solutions architect can edit the job and set the "Use job bookmark" option to "True". The ETL job will then use the
job bookmark to track the data that has been processed and skip data that has already been processed in subsequent runs.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  SilentMilli 9 months, 3 weeks ago


Selected Answer: A
It's obviously A. Bookmarks serve this purpose
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 2 times

  LeGloupier 11 months, 2 weeks ago


Selected Answer: A
A
https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
upvoted 3 times
Question #104 Topic 1

A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on
Amazon EC2 instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from
thousands of IP addresses. Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Choose two.)

A. Use AWS Shield Advanced to stop the DDoS attack.

B. Configure Amazon GuardDuty to automatically block the attackers.

C. Configure the website to use Amazon CloudFront for both static and dynamic content.

D. Use an AWS Lambda function to automatically add attacker IP addresses to VPC network ACLs.

E. Use EC2 Spot Instances in an Auto Scaling group with a target tracking scaling policy that is set to 80% CPU utilization.

Correct Answer: AC

Community vote distribution


AC (82%) Other

  alvarez100 Highly Voted  11 months, 2 weeks ago


Selected Answer: AC
I think it is AC, reason is they require a solution that is highly available. AWS Shield can handle the DDoS attacks. To make the solution HA
you can use cloud front. AC seems to be the best answer imo.
AB seem like redundant answers. How do those answers make the solution HA?
upvoted 23 times

  attila9778 10 months, 1 week ago


A - AWS Shield Advanced
C - (defending this option) IMO: AWS Shield Advanced has to be attached to a resource, but it cannot be attached directly to EC2 instances.
According to the docs: https://aws.amazon.com/shield/
It needs to be attached to services such as CloudFront, Route 53, Global Accelerator, ELB or (in the most direct way) an Elastic IP attached to the EC2 instance
upvoted 18 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: AC
Option A. Use AWS Shield Advanced to stop the DDoS attack.

It provides always-on protection for Amazon EC2 instances, Elastic Load Balancers, and Amazon Route 53 resources. By using AWS Shield
Advanced, the solutions architect can help protect the website from large-scale DDoS attacks.

Option C. Configure the website to use Amazon CloudFront for both static and dynamic content.

CloudFront is a content delivery network (CDN) that integrates with other Amazon Web Services products, such as Amazon S3 and Amazon
EC2, to deliver content to users with low latency and high data transfer speeds. By using CloudFront, the solutions architect can distribute
the website's content across multiple edge locations, which can help absorb the impact of a DDoS attack and reduce the risk of downtime
for the website.
upvoted 8 times
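
To make the Shield Advanced + CloudFront pairing concrete, here is a minimal boto3 sketch (it assumes the account already has a Shield Advanced subscription; the protection name and distribution ARN are placeholders):

# Minimal sketch: register a CloudFront distribution as a Shield Advanced
# protected resource.
import boto3

shield = boto3.client("shield")

resp = shield.create_protection(
    Name="website-cloudfront-protection",  # hypothetical protection name
    ResourceArn="arn:aws:cloudfront::123456789012:distribution/EXAMPLE123",  # placeholder ARN
)
print("ProtectionId:", resp["ProtectionId"])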

  Devsin2000 Most Recent  4 days, 5 hours ago


Selected Answer: AE
A - no brainer
E = "must design a highly available infrastructure". I am not sure if CloudFront addresses this requirement.
upvoted 1 times

  TariqKipkemei 1 month, 1 week ago


Selected Answer: AC
Mitigate a large-scale DDoS attack = AWS Shield Advanced
Downtime is not acceptable for the website = high availability = Amazon CloudFront
upvoted 1 times

  mtmayer 1 month, 1 week ago


Selected Answer: D
yeah , AWS Shield Advanced can be used directly on EC2.....
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-protections-by-resource-type.html
upvoted 1 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: AC
Cloud front supports SHIELD ADVANCED integration
upvoted 1 times

  diabloexodia 2 months, 2 weeks ago


Cloud front supports SHIELD ADVANCED integration
upvoted 1 times

  Aash24 2 months, 3 weeks ago


Selected Answer: D
D should be the one here
upvoted 3 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
A. AWS Shield Advanced provides advanced DDoS protection for AWS resources, including EC2. It includes features such as real-time threat
intelligence, automatic protection, and DDoS cost protection.

C. CloudFront is a CDN service that can help mitigate DDoS attacks. By routing traffic through CloudFront, requests to the website are
distributed across multiple edge locations, which can absorb and mitigate DDoS attacks more effectively. CloudFront also provides
additional DDoS protection features, such as rate limiting, SSL/TLS termination, and custom security policies.

B. While GuardDuty can detect and provide insights into potential malicious activity, it is not specifically designed for DDoS mitigation.

D. Network ACLs are not designed to handle high-volume traffic or DDoS attacks efficiently.

E. Spot Instances are a cost optimization strategy and may not provide the necessary availability and protection against DDoS attacks
compared to using dedicated instances with DDoS protection mechanisms like Shield Advanced and CloudFront.
upvoted 2 times

  Heric 5 months, 2 weeks ago


Selected Answer: AC
Key word:
DDoS attack will choose the AWS Shield Advanced
Cloudfront have attached the WAF
upvoted 2 times

  jdr75 5 months, 4 weeks ago


Selected Answer: AC
A&C
but I don't fully understand why CloudFront is chosen.
The customer does not need it, and it's not exactly cheap.
Yes, it could serve the cached content to the attacker, lightening the load on the backend, but as I said it's not cheap, and the out-of-the-box
AWS Shield (Standard) is free and can cope with the attack (as long as it isn't a WAF-style attack).
upvoted 1 times

  Khushna 7 months, 2 weeks ago


Selected Answer: AC
DDoS mitigation is best handled with Shield, and CloudFront also provides protection against DDoS.
upvoted 1 times

  CloudForFun 9 months, 1 week ago


AC
"AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations
worldwide. You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your
application. Your origin servers can be Amazon Simple Storage Service (S3), Amazon EC2, Elastic Load Balancing, or a custom server
outside of AWS."
https://aws.amazon.com/shield/faqs/
upvoted 1 times

  career360guru 9 months, 2 weeks ago


A and C as your will need to configure Cloudfront to activate AWS Advance Shield
upvoted 1 times

  ishitamodi4 9 months, 2 weeks ago


AC, AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations
worldwide
upvoted 1 times

  333666999 9 months, 3 weeks ago


Selected Answer: AC
c not b. b is wrong because it's not malicious activity, just annoying activity
upvoted 1 times
  Newptone 10 months, 1 week ago
Selected Answer: AC
I thought it was AB. But after I read the docs, I vote for AC.

Amazon GuardDuty is a threat detection service, it can NOT take action directly, it needs to work with Lambda.
upvoted 1 times
Question #105 Topic 1

A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure
permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?

A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.

B. Add an execution role to the function with lambda:InvokeFunction as the action and Service: lambda.amazonaws.com as the principal.

C. Add a resource-based policy to the function with lambda:* as the action and Service: events.amazonaws.com as the principal.

D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service: events.amazonaws.com as the
principal.

Correct Answer: D

Community vote distribution


D (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
Best way to check it... The question is taken from the example shown here in the documentation:
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html#eb-lambda-permissions
upvoted 25 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: D
The correct solution is D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:
events.amazonaws.com as the principal.

The principle of least privilege requires that permissions are granted only to the minimum necessary to perform a task. In this case, the
Lambda function needs to be able to be invoked by Amazon EventBridge (Amazon CloudWatch Events). To meet these requirements, you
can add a resource-based policy to the function that allows the InvokeFunction action to be performed by the Service:
events.amazonaws.com principal. This will allow Amazon EventBridge to invoke the function, but will not grant any additional permissions
to the function.
upvoted 13 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Why other options are wrong

Option A is incorrect because it grants the lambda:InvokeFunction action to any principal (*), which would allow any entity to invoke the
function and goes beyond the minimum permissions needed.

Option B is incorrect because it grants the lambda:InvokeFunction action to the Service: lambda.amazonaws.com principal, which
would allow any Lambda function to invoke the function and goes beyond the minimum permissions needed.

Option C is incorrect because it grants the lambda:* action to the Service: events.amazonaws.com principal, which would allow Amazon
EventBridge to perform any action on the function and goes beyond the minimum permissions needed.
upvoted 11 times
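
To illustrate the least-privilege resource-based policy described above, here is a minimal boto3 sketch (function name, statement ID, and rule ARN are placeholders):

# Minimal sketch: add a resource-based policy statement that lets only the
# EventBridge service principal invoke this one function, and nothing more.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="my-serverless-workload",                             # hypothetical
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/my-rule",    # hypothetical rule ARN
)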

  Evonne_HY Most Recent  2 weeks, 1 day ago


Why not choose B? An execution role is attached to the Lambda function, and a policy is attached to the execution role.
upvoted 1 times

  Georgeyp 1 week, 3 days ago


B would be the wrong choice, as both the role and its permission are tied to Lambda; however, the question requires EventBridge to be able to call the Lambda function.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
lambda:InvokeFunction is the action needed to invoke the Lambda function.
Service: events.amazonaws.com is the principal (the AWS service) that is allowed to invoke the Lambda function. In this case, you're
explicitly allowing CloudWatch Events to invoke the function.
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


D
* is BIG NO. And we are talking about policy --> hence D
upvoted 2 times
  cookieMr 3 months, 1 week ago
Selected Answer: D
In this solution, a resource-based policy is added to the Lambda function, which allows the specified principal (events.amazonaws.com) to
invoke the function. The lambda:InvokeFunction action provides the necessary permission for the Amazon EventBridge rule to trigger the
Lambda function.

Option A is incorrect because it assigns the lambda:InvokeFunction action to all principals (*), which grants permission to invoke the
function to any entity, which is broader than necessary.

Option B is incorrect because it assigns the lambda:InvokeFunction action to the specific principal "lambda.amazonaws.com," which is the
service principal for AWS Lambda. However, the requirement is for the EventBridge service principal to invoke the function.

Option C is incorrect because it assigns the lambda:* action to the specific principal "events.amazonaws.com," which is the service
principal for Amazon EventBridge. However, it grants broader permissions than necessary, allowing any Lambda function action, not just
lambda:InvokeFunction.
upvoted 2 times

  Abrar2022 4 months, 1 week ago


Option C is incorrect, the reason is that, firstly, lambda:* allows Amazon EventBridge to perform any action on the function and this is
beyond the minimum permissions needed.
upvoted 1 times

  Rahulbit34 5 months ago


Since its for Lamda which is a resource, resource policy is the trick
upvoted 2 times

  bdp123 7 months, 3 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions
upvoted 1 times

  gustavtd 9 months ago


Selected Answer: D
The permission scope of D is the smallest, so D it is.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: D
events.amazonaws.com is principal for eventbridge
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  wly_al 9 months, 3 weeks ago


Least privilege means the principal cannot be "*". Answer B only mentions Lambda as the principal, so the answer is D.
upvoted 1 times

  ocbn3wby 10 months, 1 week ago


Selected Answer: D
My answer was D, as this is the most specific answer.
And then there's this guy's answer (123jhl0) which provides more details.
upvoted 1 times
Question #106 Topic 1

A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key
usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?

A. Server-side encryption with customer-provided keys (SSE-C)

B. Server-side encryption with Amazon S3 managed keys (SSE-S3)

C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation

D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation

Correct Answer: D

Community vote distribution


D (93%) 7%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
The MOST operationally efficient one is D.
Automating the key rotation is the most efficient.
Just to confirm, the A and B options don't allow automate the rotation as explained here:
https://aws.amazon.com/kms/faqs/#:~:text=You%20can%20choose%20to%20have%20AWS%20KMS%20automatically%20rotate%20KMS,K
MS%20custom%20key%20store%20feature
upvoted 15 times

  vadiminski_a 9 months, 2 weeks ago


In addition you cannot log key usage in B, for A I am not certain
upvoted 1 times

  ocbn3wby 10 months, 1 week ago


Thank you for the explanation.
upvoted 1 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: D
The correct answer is D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation.

SSE-KMS is the most secure way to encrypt data in Amazon S3. It uses AWS KMS, which is a highly secure key management service that is
managed by AWS. AWS KMS logs all key usage, so the company can meet its compliance requirements. AWS KMS also rotates keys
automatically, so the company does not have to worry about manually rotating keys.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
SSE-KMS provides a secure and efficient way to encrypt data at rest in S3. SSE-KMS uses KMS to manage the encryption keys securely. With
SSE-KMS, encryption keys can be automatically rotated using KMS key rotation feature, which simplifies the key management process and
ensures compliance with the requirement to rotate keys every year.

Additionally, SSE-KMS provides built-in audit logging for encryption key usage through CloudTrail, which captures API calls related to the
management and usage of KMS keys. This meets the requirement for logging key usage for auditing purposes.

Option A (SSE-C) requires customers to provide their own encryption keys, but it does not provide key rotation or built-in logging of key
usage.
Option B (SSE-S3) uses Amazon S3 managed keys for encryption, which simplifies key management but does not provide key rotation or
detailed key usage logging.
Option C (SSE-KMS with manual rotation) uses AWS KMS keys but requires manual rotation, which is less operationally efficient than the
automatic key rotation available with option D.
upvoted 3 times
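
As a quick sketch of option D in practice (bucket name and KMS key ID are placeholders; it assumes the key is a customer managed key created in AWS KMS):

# Minimal sketch: make SSE-KMS the bucket default and enable yearly key rotation.
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical KMS key ID

# Default encryption with SSE-KMS for all new objects in the bucket.
s3.put_bucket_encryption(
    Bucket="confidential-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)

# Turn on automatic rotation for the customer managed key.
kms.enable_key_rotation(KeyId=key_id)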

  SilentMilli 8 months, 3 weeks ago


Selected Answer: D
Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation meets the requirements and is the most operationally
efficient solution. This option allows you to use AWS KMS to automatically rotate the keys every year, which simplifies key management. In
addition, key usage is logged for auditing purposes, and the data is encrypted at rest to meet compliance requirements.
upvoted 2 times

  Zerotn3 9 months ago


Selected Answer: B
Amazon API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. You
can use API Gateway to create a REST API that exposes the location data as an API endpoint, allowing you to access the data from your
analytics platform.

AWS Lambda is a serverless compute service that lets you run code in response to events or HTTP requests. You can use Lambda to write
the code that retrieves the location data from your data store and returns it to API Gateway as a response to API requests. This allows you
to scale the API to handle a large number of requests without the need to provision or manage any infrastructure.
upvoted 2 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Selected Answer: D
The most operationally efficient solution that meets the requirements listed would be option D: Server-side encryption with AWS KMS keys
(SSE-KMS) with automatic rotation.

SSE-KMS allows you to use keys that are managed by the AWS Key Management Service (KMS) to encrypt your data at rest. KMS is a fully
managed service that makes it easy to create and control the encryption keys used to encrypt your data. With automatic key rotation
enabled, KMS will automatically create a new key for you on a regular basis, typically every year, and use it to encrypt your data. This
simplifies the key rotation process and reduces the operational burden on your team.

In addition, SSE-KMS provides logging of key usage through AWS CloudTrail, which can be used for auditing purposes.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Why other options are wrong

Option A: Server-side encryption with customer-provided keys (SSE-C) would require you to manage the encryption keys yourself, which
can be more operationally burdensome.

Option B: Server-side encryption with Amazon S3 managed keys (SSE-S3) does not allow for key rotation or logging of the key usage.

Option C: Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation would require you to manually initiate the key
rotation process, which can be more operationally burdensome compared to automatic rotation.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  Berny 9 months, 3 weeks ago


You can choose to have AWS KMS automatically rotate KMS keys every year, provided that those keys were generated within AWS KMS
HSMs. Automatic key rotation is not supported for imported keys, asymmetric keys, or keys generated in a CloudHSM cluster using the
AWS KMS custom key store feature. If you choose to import keys to AWS KMS or asymmetric keys or use a custom key store, you can
manually rotate them by creating a new KMS key and mapping an existing key alias from the old KMS key to the new KMS key.
upvoted 1 times

  PavelTech 9 months, 3 weeks ago


Can anybody correct me if I'm wrong, KMS does not offer automatic rotations but SSE-KMS only allows automatic rotation once in 3 years
thus if we want rotation every year we need to rotate it manually?
upvoted 2 times

  JayBee65 9 months, 2 weeks ago


You're wrong :) "All AWS managed keys are automatically rotated every year. You cannot change this rotation schedule."
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk
upvoted 1 times

  PS_R 10 months, 3 weeks ago


Selected Answer: D
Agree Also, SSE-S3 cannot be audited.
upvoted 2 times
Question #107 Topic 1

A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company
wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support
this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?

A. Use Amazon Athena with Amazon S3.

B. Use Amazon API Gateway with AWS Lambda.

C. Use Amazon QuickSight with Amazon Redshift.

D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.

Correct Answer: D

Community vote distribution


B (50%) D (41%) 9%

  ArielSchivo Highly Voted  11 months, 2 weeks ago


Selected Answer: B
API Gateway is needed to get the data so option A and C are out.
“The company wants to use these data points in its existing analytics platform,” so there is no need to add Kinesis. Option D is also out.
This leaves us with option B as the correct one.
upvoted 63 times

  MutiverseAgent 2 months, 3 weeks ago


B might work but D works better. B requires API Gateway + Lambda for data input & output, whereas D is a broader solution, as Kinesis
Data Analytics APIs can be used to extract and process data better than API Gateway + Lambdas. Also, Kinesis is highly recommended
for telemetry data, which is the question scenario. @See Kinesis flexible API (https://aws.amazon.com/documentation-overview/kinesis-data-analytics/)
upvoted 4 times

  MutiverseAgent 2 months, 3 weeks ago


Also, by using Kinesis the analytics platform will have a storage buffer and can take and process data through the Kinesis API. The Lambda
approach in the B scenario is too broad and leaves many loose ends.
upvoted 2 times

  alfonso_ciampa 2 months, 3 weeks ago


You are right, but it clearly says "store data".
AWS Lambda doesn't store data; Kinesis could.
upvoted 4 times

  ces26015 8 months, 1 week ago


i dont understand the use of a lambda function here, maybe if there would be need to transform the data, can you explain?
upvoted 3 times

  bullrem 8 months, 1 week ago


AWS Lambda is a serverless compute service that can be used to run code in response to specific events, such as changes to data in an
Amazon S3 bucket or updates to a DynamoDB table. It could be used to process the location data, but it doesn't provide storage
solution. Therefore, it would not be the best option for storing and retrieving location data in this scenario.
upvoted 4 times

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: D
I don't understand why you would vote B.
How are you going to store data with just Lambda?
> Which action meets these requirements for storing and retrieving location data

In this use case there will obviously be a ton of data and you want to get real-time location data of the bicycles, and to analyze all these
info kinesis is the one that makes most sense here.
upvoted 38 times

  JackLo 2 weeks, 5 days ago


B is more appropriate: the question asks to retrieve data, Kinesis doesn't have such a function, and with Lambda you can write whatever custom function you like.
upvoted 1 times

  MutiverseAgent 2 months, 3 weeks ago


B might work but D works better. B requires API Gateway + Lambda for data input & output, whereas D is a broader solution, as Kinesis
Data Analytics APIs can be used to extract and process data better than API Gateway + Lambdas. The API supports integration with several
languages. Also, Kinesis is highly recommended for telemetry data, which is the question scenario. @See Kinesis flexible API
(https://aws.amazon.com/documentation-overview/kinesis-data-analytics/)
upvoted 1 times

  MutiverseAgent 2 months, 3 weeks ago


Also, by using Kinesis the analytics platform will have a storage buffer and can take and process data through the Kinesis API. The Lambda
approach in the B scenario is too broad and leaves many loose ends.
upvoted 1 times

  aadityaravi8 3 months ago


100% agree
upvoted 1 times

  JiyuKim 7 months, 3 weeks ago


But KDA also cannot store data.
upvoted 2 times

  vipyodha 3 months, 2 weeks ago


kda can store data with retention period
upvoted 1 times

  vijaykamal Most Recent  5 days, 3 hours ago


Selected Answer: D
Lambda does not store the information, and since real-time tracking is needed during peak hours, Kinesis would work better.
upvoted 1 times

  rushiwaman95 6 days, 23 hours ago


tracking = real time
upvoted 1 times

  chandu7024 1 week, 6 days ago


It should be D.
upvoted 1 times

  kambarami 3 weeks, 3 days ago


Answer is D.
Amazon Kinesis Data Streams is a serverless streaming data service that simplifies the capture, processing, and storage of data streams at
any scale. Kinesis Data Firehose.
upvoted 1 times

  sonyaws 1 month ago


Selected Answer: B
Correction required in question: Which action meets these requirements for sorting(*not storing) and retrieving location data?
- Lambda function does the sorting and retrieving
- API Gateway exposes a RestAPI to call the Lambda function
upvoted 1 times

  Stevey 1 month, 1 week ago


Selected Answer: D
D, because a storage solution is required.
B: Lambda is not a storage solution.
upvoted 2 times

  karloscetina007 1 month, 2 weeks ago


Selected Answer: D
D is the correct answer. Lambda cannot store the information for analysis, whereas API Gateway and Kinesis have been able to do that for years.
upvoted 2 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The best option is to use Amazon API Gateway with Amazon Kinesis Data Analytics.

Amazon API Gateway provides a REST API that can be used to ingest and retrieve the location data points. Kinesis Data Analytics can then
process and analyze those data streams in real-time. The results can be queried through the API Gateway, meeting the requirements.
upvoted 2 times

  ack1 1 month, 3 weeks ago


Selected Answer: D
D is right answer
upvoted 2 times

  oeufmeister 2 months ago


Selected Answer: D
Kinesis will probably do the job better
upvoted 1 times
  RupeC 2 months, 1 week ago
Selected Answer: D
The last line of the question states: "for storing and retrieving location data?" Thus as B has no storage capability unless you add
DynamoDB which is out of scope. D is the only answer that fulfils the brief. Reading the other answers, I think B is more elegant and has
less redundancy given the existing dev but you would have to address the storage.
upvoted 1 times

  Aash24 2 months, 3 weeks ago


Selected Answer: D
D should be
upvoted 2 times

  small_zipgeniuis 3 months ago


Selected Answer: B
B is correct. With option D, API Gateway cannot connect directly to Kinesis Data Analytics. If this were Kinesis Data Streams, it would
be feasible.
upvoted 4 times

  aadityaravi8 3 months ago


D should be right answer.
Using Amazon API Gateway with Amazon Kinesis Data Analytics allows you to capture and process streaming data in real-time.
This architecture provides the ability to capture and analyze location data in real-time, allowing the bicycle sharing company to track the
location of its bicycles during peak operating hours. The REST API exposed through Amazon API Gateway enables easy access to the
location data.
Option B, using Amazon API Gateway with AWS Lambda, can be a valid choice for handling REST API requests, but it may not be the
optimal solution for real-time data processing and analytics.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Combination of API Gateway and Lambda provides a scalable and cost-effective solution for handling the REST API and processing the
location data. API Gateway can handle the API request and response management, while Lambda can process and store/retrieve the
location data from a suitable data store like Amazon S3.

Options A, C, and D are not the most suitable options for storing and retrieving location data in this scenario:

Option A suggests using Amazon Athena with Amazon S3, which is a query service for data in S3 but does not provide a direct REST API
integration for real-time location data retrieval.

Option C suggests using Amazon QuickSight with Amazon Redshift, which is more suitable for data analytics and visualization rather than
real-time data retrieval through a REST API.

Option D suggests using Amazon API Gateway with Amazon Kinesis Data Analytics, which is more suitable for real-time streaming
analytics rather than data storage and retrieval for REST APIs.
upvoted 4 times
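
For those leaning toward the API Gateway + Lambda option, here is a minimal handler sketch (the data-store query is a hypothetical stub; the point is the proxy-integration response shape that API Gateway expects):

# Minimal sketch: a Lambda handler behind an API Gateway proxy integration
# that returns location data points as a JSON REST response.
import json

def handler(event, context):
    # In a real implementation this would query whatever data store backs the
    # analytics platform; here it is stubbed out.
    locations = [{"bicycle_id": "b-001", "lat": 52.52, "lon": 13.40}]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(locations),
    }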
Question #108 Topic 1

A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs
to be removed from the website and the data must be sent to multiple target systems.
Which design should a solutions architect recommend?

A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) queue for the targets to consume.

B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) FIFO queue for the targets to consume.

C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon
Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.

D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon
Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

Correct Answer: C

Community vote distribution


A (64%) D (35%)

  romko Highly Voted  10 months, 2 weeks ago


Selected Answer: A
Interesting point that Amazon RDS event notification doesn't support any notification when data inside DB is updated.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.overview.html
So subscription to RDS events doesn't give any value for Fanout = SNS => SQS

B is out because FIFO is not required here.

A is left as correct answer


upvoted 63 times

  Evonne_HY 2 weeks, 1 day ago


RDS event notification supports deletion events. What's more, the question says the listing will be removed rather than updated, so D is
correct.
Here's the link for event notification categories link:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.ListingCategories.html
upvoted 1 times

  ruqui 4 months, 1 week ago


I don't think A is a valid solution ... how do you send the data to multiple targets using a single SQS?
upvoted 5 times

  studynoplay 4 months, 4 weeks ago


Wow, great find romko. Didn't realize that Event notification doesn't notify when the data is changed, it notifies when major changes at
DB level occur like settings etc
upvoted 1 times

  nauman001 6 months ago


Listing the Amazon RDS event notification categories.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.ListingCategories.html:
upvoted 1 times

  ksolovyov Highly Voted  9 months ago


Selected Answer: A
RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB
snapshot events. What we need in the scenario is to capture data-modifying events (INSERT, DELETE, UPDATE) which can be achieved thru
native functions or stored procedures.
upvoted 8 times

  BlueVolcano1 8 months, 2 weeks ago


I agree with it requiring a native function or stored procedure, but can they in turn invoke a Lambda function? I have only seen this
being possible with Aurora, but not RDS - and I'm not able to find anything googling for it either. I guess it has to be possible, since
there's no other option that fits either.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
  BlueVolcano1 8 months, 2 weeks ago
To add to that though, A also states to only use SQS (no SNS to SQS fan-out), which doesn't seem right as the message needs to go
to multiple targets?
upvoted 4 times

  rlaisqls Most Recent  1 week ago


Selected Answer: D
I think D is more recommended by AWS than C.
The solution should send data to multiple consumers, and A definitely does not.
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
upvoted 2 times

  mtmayer 1 month, 1 week ago


Selected Answer: D
..... data must be sent to multiple target systems. = SNS
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
There is RDS Fanout to SNS, but not specifically for DB level events (write, reads, etc).
It can fan out events at instance level (turn on, restart, update), cluster level (added to cluster, removed from cluster, etc). But not at DB
level.

More detailed event list here:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html

Correct answer is A
upvoted 1 times

  cookieMr 2 months, 3 weeks ago


Selected Answer: D
Fanout -> SNS + SQS
upvoted 2 times

  aadityaravi8 3 months ago


Answer should be C
When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems. This
can be done through the SQS polling option: as soon as a message is processed, it is removed and won't be picked up by another Lambda
node for further processing, i.e. an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification
Service (Amazon SNS) topics.
upvoted 3 times

  Mia2009687 3 months ago


Selected Answer: D
SNS Fan out to multiple SQS
upvoted 2 times

  Anmol_1010 3 months, 2 weeks ago


SQS does not send notifications to multiple targets. Answer is D.
upvoted 3 times

  northyork 3 months, 3 weeks ago


D: RDS event to SNS notification
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.overview.html
upvoted 2 times

  zmaster 4 months ago


Option D is the most suitable for this scenario:
AWS Lambda can then process these messages in SQS to update the target systems.
The other options aren't as suitable because:

Option A and B: Lambda functions are not directly triggered by Amazon RDS updates.
Option C: SQS does not fan out to multiple SNS topics. It's the other way around; an SNS topic can fan out to multiple SQS queues.
upvoted 1 times
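
To show the SNS-to-SQS fan-out pattern from option D, here is a minimal boto3 sketch (topic and queue ARNs are placeholders; it assumes the queue policies already allow sns.amazonaws.com to send messages):

# Minimal sketch: subscribe several SQS queues to one SNS topic (fan-out).
import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:listing-sold"      # hypothetical
target_queues = [
    "arn:aws:sqs:us-east-1:123456789012:billing-queue",             # hypothetical
    "arn:aws:sqs:us-east-1:123456789012:inventory-queue",           # hypothetical
]

for queue_arn in target_queues:
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},  # deliver the message body as-is
    )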

  PrasanthVarada 4 months, 1 week ago


Answer is D. Fanout pattern is basically from SNS -> to -> multiple SQS.
Option C is wrong - because SQS doesn't support push.
upvoted 1 times

  Jesuisleon 4 months, 1 week ago


Selected Answer: D
A is WRONG. The question requires the message to go to "multiple target systems". How would a message in SQS be routed to multiple
systems? You need SNS to fan out.
upvoted 4 times
  Siva007 4 months, 1 week ago
Selected Answer: A
RDS event notification subscriptions don't send any notification when data is removed/deleted. SQS FIFO doesn't make sense here.
So A is the closest answer.
upvoted 1 times

  ealpuche 4 months, 2 weeks ago


Selected Answer: C
The answer is C
upvoted 1 times

  bedwal2020 5 months ago


Selected Answer: D
For me "Multiple target systems" and SNS is the key
upvoted 2 times

  kruasan 5 months, 1 week ago


Selected Answer: A
RDS resources eligible for event subscription
-DB instance
-DB snapshot
-DB parameter group
-DB security group
-RDS Proxy
-Custom engine version
upvoted 2 times
Question #109 Topic 1

A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded
to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in
the company's AWS account can have the ability 10 delete the objects.
What should a solutions architect do to meet these requirements?

A. Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.

B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3
bucket’s default retention mode for new objects.

C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects
from any backup versions that the company has.

D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold
permission to the IAM policies of users who need to delete the objects.

Correct Answer: D

Community vote distribution


D (81%) B (19%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
A - No as "specific users can delete"
B - No as "nonspecific amount of time"
C - No as "prevent the data from being change"
D - The answer: "The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention
period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated
retention period and remains in effect until removed." https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-
hold.html
upvoted 23 times

  PassNow1234 9 months, 1 week ago


The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold
prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and
remains in effect until removed.

Correct
upvoted 1 times

  Chunsli Highly Voted  11 months, 2 weeks ago


typo -- 10 delete the objects => TO delete the objects
upvoted 13 times

  oddnoises 1 week ago


they were trying to speak in binary lol
upvoted 1 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: D
"The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the
company decides to modify the objects" = A legal hold prevents an object version from being overwritten or deleted. However, a legal hold
doesn't have an associated retention period and remains in effect until removed.
s3:PutObjectLegalHold permission is required in your IAM role to add or remove legal hold from objects.
upvoted 1 times
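
A minimal sketch of the legal-hold flow described above (bucket and key are placeholders; it assumes the bucket was created with Object Lock enabled and versioning on, and that the caller has s3:PutObjectLegalHold):

# Minimal sketch: place and later remove a legal hold on an object version.
import boto3

s3 = boto3.client("s3")

# Place the hold; it stays in effect until explicitly removed.
s3.put_object_legal_hold(
    Bucket="compliance-bucket",      # hypothetical
    Key="reports/2023/q1.pdf",       # hypothetical
    LegalHold={"Status": "ON"},
)

# Later, a user with the same permission lifts the hold before deleting.
s3.put_object_legal_hold(
    Bucket="compliance-bucket",
    Key="reports/2023/q1.pdf",
    LegalHold={"Status": "OFF"},
)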

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold
prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and
remains in effect until removed.
upvoted 1 times

  RupeC 2 months, 1 week ago


Selected Answer: D
My understanding is that the s3:PutObjectLegalHold permission allows certain users to apply or remove the legal hold on objects in the S3
bucket. However, having the permission to apply or remove the legal hold does not necessarily mean users can override the hold set by
another user.

Once the legal hold is set on an object, it is in effect until the hold is removed by the user who applied it or an admin with the necessary
permissions. Other users, even if they have the s3:PutObjectLegalHold permission, won't be able to remove the hold unless they are
granted access by the user who originally applied it.
upvoted 2 times
  omoakin 4 months, 1 week ago
I go with option B as they still need some specific users to be able to make changes so Gov mode is the best choice and 100 yrs is like
infinity as well haha
upvoted 3 times

  KZM 7 months ago


Selected Answer: D
The correct answer is D.
upvoted 1 times

  WherecanIstart 7 months ago


Selected Answer: D
Option B specifies a retention period of 100 years which contradicts what the question asked for.....
"The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the
company decides to modify the objects"
Setting the retention period of 100 years is specific and the company wants new data/objects to remain unchanged for nonspecific
amount of time.

Correct answer is D

https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 3 times

  slackbot 1 month, 1 week ago


FFS 100 years = indefinitely. no company has a policy of keeping data for more than 10 years.
having specific admins run 2 additional commands every time they want to modify an object, is really in sync with nowadays
automation processes.
instead of commenting each letter from the question, start thinking. if you were to decide, would you make your users always run
commands before modifying or would you rather allow them to directly modify?
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold
prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and
remains in effect until removed." https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 1 times

  Yelizaveta 7 months, 3 weeks ago


Selected Answer: D
A retention period of 100 years prevents the object from being deleted before the retention period expires, so it's not a good fit.
upvoted 1 times

  nadir_kh 8 months, 3 weeks ago


it is B.
Once a legal hold is enabled, regardless of the object's retention date or retention mode, the object version cannot be deleted until the
legal hold is removed.

Question says: "Specific users must have ability to delete objects"


upvoted 5 times

  MutiverseAgent 2 months, 2 weeks ago


If users have the policy s3:PutObjectLegalHold then they can remove the legal hold before deleting.
upvoted 1 times

  John_Zhuang 8 months, 4 weeks ago


Selected Answer: D
While S3 bucket governance mode does allow certain users with permissions to alter retention/delete objects, the 100 years in Option B
makes it invalid.

Correct answer is option D.


"With Object Lock you can also place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from
being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. "
https://aws.amazon.com/s3/features/object-lock/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-legal-holds
upvoted 1 times

  aba2s 9 months ago


Selected Answer: D
With Object Lock, you can also place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from
being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. Legal
holds can be freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
B - No as "nonspecific amount of time" otherwise B will meet the requirement with legal hold attached.
upvoted 1 times
  FNJ1111 9 months, 1 week ago
Wouldn't D require s3:GetBucketObjectLockConfiguration IAM permission? If so, D is incomplete and wouldn't meet the requirement.
(from the link shared above)
upvoted 1 times

  Silvestr 9 months, 1 week ago


Selected Answer: B
Correct answer : B
Retention mode - Governance:
• Most users can't overwrite or delete an object version or alter its lock settings
• Some users have special permissions to change the retention or delete the object
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: B
To meet the requirements specified in the question, the solution architect should choose Option B: Create an S3 bucket with S3 Object
Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket's default retention mode for
new objects.

S3 Object Lock is a feature of Amazon S3 that allows you to apply a retention period to objects in your bucket, during which time the
objects cannot be deleted or overwritten. By enabling versioning on the bucket, you can ensure that all versions of an object are retained,
including any deletions or overwrites. By setting a retention period of 100 years, you can ensure that the objects remain unchangeable for
a long time.

By using governance mode as the default retention mode for new objects, you can ensure that the retention period is applied to all new
objects that are uploaded to the bucket. This will prevent the objects from being deleted or overwritten until the retention period expires.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Why other options are wrong
Option A (creating an S3 Glacier vault and applying a WORM vault lock policy) would not meet the requirement to prevent the objects
from being changed, because S3 Glacier is a storage class for long-term data archival and does not support read-write operations.

Option C (using CloudTrail to track API events and restoring modified objects from backup versions) would not prevent the objects from
being changed in the first place.

Option D (adding a legal hold and the s3:PutObjectLegalHold permission to IAM policies) would not meet the requirement to prevent
the objects from being changed for a nonspecific amount of time.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Legal holds are used to prevent objects that are subject to legal or compliance requirements from being deleted or overwritten,
even if their retention period has expired. While legal holds can be useful for preventing the accidental deletion of important
objects, they do not prevent the objects from being changed. S3 Object Lock can be used to prevent objects from being deleted or
overwritten for a specified retention period, but a legal hold does not provide this capability.

In addition, the s3:PutObjectLegalHold permission allows users to place a legal hold on an object, but it does not prevent the object
from being changed. To prevent the objects from being changed for a nonspecific amount of time, the solution architect should use
S3 Object Lock and set a longer retention period on the objects.
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times
Question #110 Topic 1

A social media company allows users to upload images to its website. The website runs on Amazon EC2 instances. During upload requests, the
website resizes the images to a standard size and stores the resized images in Amazon S3. Users are experiencing slow upload requests to the
website.
The company needs to reduce coupling within the application and improve website performance. A solutions architect must design the most
operationally efficient process for image uploads.
Which combination of actions should the solutions architect take to meet these requirements? (Choose two.)

A. Configure the application to upload images to S3 Glacier.

B. Configure the web server to upload the original images to Amazon S3.

C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL

D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.

E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize uploaded
images.

Correct Answer: BD

Community vote distribution


BD (50%) CD (49%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: CD
To meet the requirements of reducing coupling within the application and improving website performance, the solutions architect should
consider taking the following actions:

C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a pre-signed URL. This
will allow the application to upload images directly to S3 without having to go through the web server, which can reduce the load on the
web server and improve performance.

D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
This will allow the application to resize images asynchronously, rather than having to do it synchronously during the upload request,
which can improve performance.
upvoted 30 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Why other options are wrong
Option A, Configuring the application to upload images to S3 Glacier, is not relevant to improving the performance of image uploads.

Option B, Configuring the webserver to upload the original images to Amazon S3, is not a recommended solution as it would not
reduce coupling within the application or improve performance.

Option E, Creating an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to
resize uploaded images, is not a recommended solution as it would not be able to resize images in a timely manner and would not
improve performance.
upvoted 3 times

  MutiverseAgent 2 months, 2 weeks ago


About your comments regarding option B)... But if images are being saved directly to S3 instead of the EBS/SSD storage of E2
instances as they originally were, the new approach will reduce coupling and improve performance. Also you have to consider the
security concerns about presign URLs as the question does not mention if users are public or private.
upvoted 1 times

  Yelizaveta 7 months, 3 weeks ago


Here it means decoupling the processes so that the web server doesn't have to do the resizing and doesn't slow down. The customers
access the web server, so the web server has to be involved in the process, and as others already wrote, the pre-signed URL is not
the right solution for the reasons explained in the other comments.

Additionally: "Configure the application to upload images directly from EACH USER'S BROWSER to Amazon S3 through the use of
a pre-signed URL"

I am not an expert, but I can't imagine that you can store an image that a user uploads from their browser this way.
upvoted 3 times

  jdr75 5 months, 4 weeks ago


A presigned URL is for downloading data from S3, not for uploads, so the user does not upload anything. C is not correct.
upvoted 4 times
  EricYu2023 5 months, 2 weeks ago
Presigned URL can be use for upload.
upvoted 3 times

  PoisonBlack 5 months ago


https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
upvoted 3 times

  AF_1221 5 months ago


A presigned URL is for upload or download, for a temporary time and for specific users outside the company
upvoted 2 times

  AF_1221 5 months ago


but for temporary purpose not for permanent
upvoted 3 times

  fkie4 Highly Voted  6 months, 3 weeks ago


Selected Answer: BD
Why would anyone vote C? signed URL is for temporary access. also, look at the vote here:
https://www.examtopics.com/discussions/amazon/view/82971-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 14 times

  baggam Most Recent  1 week, 5 days ago


Selected Answer: CD
CD is correct
upvoted 1 times

  numark 3 weeks, 3 days ago


This is a social media company, so random users are uploading images. These are not employees. The signed URL has to be sent to the
user and they only have a certain amount of time to use it. That's a disaster for a social media company. No way C is the answer. Lambda
all the way.
upvoted 2 times

  MarcusLEK 3 weeks, 4 days ago


Selected Answer: BD
While technically it's possible to upload with pre-signed URLs, it's also worth mentioning that pre-signed URLs have a limited validity
period, so I think they might not be suitable for long-term use.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-
url.html#:~:text=User%20Guide.-,Expiration%20time%20for%20presigned%20URLs,-A%20presigned%20URL
upvoted 3 times

  judyda 3 weeks, 5 days ago


Selected Answer: CD
https://docs.aws.amazon.com/ko_kr/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
upvoted 1 times

  KawtarZ 1 month ago


C is not correct. the pre-signed urls are for download only, not upload.
upvoted 1 times

  Iconique 1 week, 3 days ago


Wrong, they work for both upload and download.
upvoted 1 times

  TariqKipkemei 1 month ago


Selected Answer: CD
Main requirement is decoupling and improve performance for which option C&D suit best.
You may use presigned URLs to allow someone to upload an object to your Amazon S3 bucket.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html#:~:text=You%20may%20use-,presigned%20UR
Ls,-to%20allow%20someone

Technically option D would work, but with the overhead of EC2/HDD/SDD.


upvoted 1 times
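
To make options C and D concrete, here is a minimal boto3 sketch (bucket name, object key, and Lambda ARN are placeholders; it assumes the Lambda resource policy already allows s3.amazonaws.com to invoke the function):

# Minimal sketch: presigned PUT URL for direct browser upload, plus an S3
# event notification that invokes a resize Lambda on each new original.
import boto3

s3 = boto3.client("s3")

# 1) Hand the browser a short-lived URL for a direct PUT upload to S3.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "image-uploads", "Key": "originals/photo.jpg"},  # placeholders
    ExpiresIn=300,  # valid for 5 minutes
)

# 2) Invoke the resize Lambda whenever a new original lands in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="image-uploads",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:resize-image",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
print(upload_url)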

  mtmayer 1 month, 1 week ago


Selected Answer: CD
CD is much more efficient.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: BD
Correct answers are BD
upvoted 1 times
  ofdengiz 2 months, 1 week ago
Selected Answer: CD
I'll go with C,D
B still involves the EC2 instances handling the image uploads and resizing, which does not improve website performance and increases
coupling within the application.
upvoted 1 times

  sosda 2 months, 3 weeks ago


Selected Answer: BD
A presigned URL is not operationally efficient.
upvoted 1 times

  vini15 2 months, 3 weeks ago


I will go with B and D. Pre signed URL is temporary thing.

A presigned URL remains valid for the period of time specified when the URL is generated. If you create a presigned URL with the Amazon
S3 console, the expiration time can be set between 1 minute and 12 hours. If you use the AWS CLI or AWS SDKs, the expiration time can be
set as high as 7 days.
upvoted 1 times

  Kostya 3 months, 2 weeks ago


Selected Answer: BD
Correct answers are BD
upvoted 1 times

  omoakin 4 months, 1 week ago


BC BC BC
upvoted 1 times

  Abrar2022 4 months, 1 week ago


pre-signed URL is not the correct answer as it allows you to grant temporary access to users who don't have permission to directly run
AWS operations in your account.
upvoted 1 times

  rushi0611 5 months ago


Selected Answer: BD
B D are correct options.
upvoted 3 times
Question #111 Topic 1

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an
Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the
messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low
operational complexity.
Which architecture offers the HIGHEST availability?

A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone.
Replicate the MySQL database to another Availability Zone.

B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another
Availability Zone. Replicate the MySQL database to another Availability Zone.

C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in
another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.

D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2
instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

Correct Answer: D

Community vote distribution


D (97%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
The answer is D, as it is the "HIGHEST availability" and least "operationally complex" option.
The "Amazon RDS for MySQL with Multi-AZ enabled" requirement excludes A and B.
The "Auto Scaling group" is more available and reduces operational complexity in case of incidents (since remediation is automated) compared
to just adding one more instance. This excludes C.

That leaves C and D to choose from, and D wins over C since the scaling across Availability Zones is configured automatically.
upvoted 15 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: D
HIGHEST availability. Definitely option D.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The key reasons are:

Amazon MQ active/standby brokers across AZs for queue high availability


Auto Scaling group with consumer EC2 instances across AZs for redundant processing
RDS MySQL with Multi-AZ for database high availability
This combines the HA capabilities of MQ, EC2 and RDS to maximize fault tolerance across all components. The auto scaling also provides
flexibility to scale processing capacity as needed.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
D is the correct answer.

Using Amazon MQ with active/standby brokers provides highly available message queuing across AZs.

Adding an Auto Scaling group for consumer EC2 instances across 2 AZs provides highly available processing.

Using RDS MySQL with Multi-AZ provides a highly available database.

This architecture provides high availability for all components of the system - queue, processing, and database.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: D
The keyword "Amazon RDS" narrows it down to C and D. Then D adds an "Auto Scaling group", so choose D.
upvoted 2 times
  MNotABot 2 months, 2 weeks ago
D
With 3 options with Amazon MQ --> A is odd one out / Then ASG with M-AZ was an easy choice
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Amazon MQ with active/standby brokers configured across two AZ ensures high availability for the message broker. In case of a failure in
one AZ, the other AZ's broker can take over seamlessly.

Adding an ASG for the consumer EC2 instances across two AZ provides redundancy and automatic scaling based on demand. If one
consumer instance becomes unavailable or if the message load increases, the ASG can automatically launch additional instances to
handle the workload.

Using RDS for MySQL with Multi-AZ enabled ensures high availability for the database. Multi-AZ automatically replicates the database to a
standby instance in another AZ. If a failure occurs, RDS automatically fails over to the standby instance without manual intervention.

This architecture combines high availability for the message broker (Amazon MQ), scalability and redundancy for the consumer EC2
instances (ASG), and high availability for the database (RDS Multi-AZ). It offers the highest availability with low operational complexity by
leveraging managed services and automated failover mechanisms.
upvoted 2 times
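To make the winning combination concrete, here is a minimal Python (boto3) sketch of the two pieces that give D its edge: an RDS for MySQL instance with Multi-AZ enabled and an Auto Scaling group whose consumer instances span two Availability Zones. All identifiers (DB name, launch template, subnet IDs, credentials) are placeholders rather than values from the question.

import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

# Highly available database: RDS for MySQL with a synchronous standby in a second AZ.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",            # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,                                # automatic failover to the standby AZ
)

# Redundant consumers: an Auto Scaling group spread across two AZs via their subnets.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mq-consumers",
    LaunchTemplate={"LaunchTemplateName": "consumer-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # one subnet per AZ
)

The Amazon MQ active/standby broker itself would be created with mq.create_broker using DeploymentMode="ACTIVE_STANDBY_MULTI_AZ"; it is left out here to keep the sketch short.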

  Kostya 3 months, 2 weeks ago


Selected Answer: D
Correct answer D
upvoted 1 times

  Bmarodi 3 months, 3 weeks ago


Selected Answer: D
to achieve ha + low operational complexity, the solution architect has to choose option D, which fulfill these requirements.
upvoted 1 times

  Abrar2022 4 months, 1 week ago


Auto scaling and Multi-AZ enabled for high availability.
upvoted 1 times

  Erbug 6 months, 2 weeks ago


you can find some details about Amazon MQ active/standby broker for high availability https://docs.aws.amazon.com/amazon-
mq/latest/developer-guide/active-standby-broker-deployment.html
upvoted 1 times

  Abdel42 8 months, 1 week ago


Selected Answer: D
D as the Auto Scaling group offer the highest availability between all solutions
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: D
Option D offers the highest availability because it addresses all potential points of failure in the system:

Amazon MQ with active/standby brokers configured across two Availability Zones ensures that the message queue is available even if one
Availability Zone experiences an outage.

An Auto Scaling group for the consumer EC2 instances across two Availability Zones ensures that the consumer application is able to
continue processing messages even if one Availability Zone experiences an outage.

Amazon RDS for MySQL with Multi-AZ enabled ensures that the database is available even if one Availability Zone experiences an outage.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Option A addresses some potential points of failure, but it does not address the potential for the consumer application to become
unavailable due to an Availability Zone outage.

Option B addresses some potential points of failure, but it does not address the potential for the database to become unavailable due
to an Availability Zone outage.

Option C addresses some potential points of failure, but it does not address the potential for the consumer application to become
unavailable due to an Availability Zone outage.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 2 times
  Wpcorgan 10 months, 2 weeks ago
D is correct
upvoted 1 times

  UWSFish 11 months, 1 week ago


Selected Answer: A
I don't know about D. Active/Standby adds to fault tolerance but does nothing for HA.
upvoted 1 times

  Wajif 10 months ago


Fault tolerance goes up a level from HA. Active Standby contributes to HA.
upvoted 1 times

  nullvoiddeath 10 months, 3 weeks ago


Amazon RDS > MySQL, hence A and B are eliminated
upvoted 1 times

  Six_Fingered_Jose 11 months, 1 week ago


Selected Answer: D
agree with D
upvoted 1 times
Question #112 Topic 1

A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is
growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS
with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling.
Use an Application Load Balancer to distribute the incoming requests.

B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming
requests.

C. Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use
Amazon API Gateway as an entry point to the Lambda functions.

D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming
requests at the appropriate scale.

Correct Answer: A

Community vote distribution: A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Less operational overhead means A: Fargate (no EC2 to manage), move the containers to ECS, auto scaling for growth, and an ALB to balance traffic.
B - requires configuring EC2 instances
C - requires adding code (developers)
D - seems like the most complex approach, essentially re-architecting the app to take advantage of an HPC platform.
upvoted 14 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: A
LEAST operational overhead = AWS Fargate
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
A is the best solution to meet the requirements with the least operational overhead. The key reasons are:

AWS Fargate removes the need to provision and manage servers. Fargate will automatically scale the application based on demand. This
removes a significant operational burden.
Using ECS along with Fargate provides a managed orchestration layer to easily run and scale the containerized application.
The Application Load Balancer handles distribution of traffic without additional effort.
No code changes are required to move the application to Fargate. The containers can run as-is.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
A is the correct answer.

AWS Fargate removes the need to provision and manage servers, allowing you to focus on deploying and running applications. Fargate
will scale compute capacity up and down automatically based on application load. This removes the operational overhead of managing
servers.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Existing: "containerized web-app", "minimum code changes + minimum development effort" --> AWS Fargate + Amazon Elastic Container
Services (ECS). Easy question.
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


A
Fargate, ECS, auto scaling, ALB… what else would one need for a nice sleep?
upvoted 2 times
  cookieMr 3 months, 1 week ago
Selected Answer: A
Option A (AWS Fargate on Amazon ECS with Service Auto Scaling) is the best choice as it provides a serverless and managed environment
for your containerized web application. It requires minimal code changes, offers automatic scaling, and utilizes an Application Load
Balancer for request distribution.

Option B (Amazon EC2 instances with an Application Load Balancer) requires manual management of EC2 instances, resulting in more
operational overhead compared to option A.

Option C (AWS Lambda with API Gateway) may require significant code changes and restructuring, introducing complexity and potentially
increasing development effort.

Option D (AWS ParallelCluster) is not suitable for a containerized web application and involves significant setup and configuration
overhead.
upvoted 2 times
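For anyone who wants to see option A outside the console, below is a rough boto3 sketch of an ECS service on Fargate behind an ALB target group, with Service Auto Scaling tracking average CPU. The cluster, task definition, subnets, and target group ARN are placeholders, and the ALB and task definition are assumed to exist already.

import boto3

ecs = boto3.client("ecs")
appscaling = boto3.client("application-autoscaling")

# Run the existing container image as a Fargate service registered with an ALB target group.
ecs.create_service(
    cluster="web-cluster",
    serviceName="web-app",
    taskDefinition="web-app-task:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "assignPublicIp": "ENABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
        "containerName": "web",
        "containerPort": 80,
    }],
)

# Service Auto Scaling: keep average CPU near 60% by scaling the task count between 2 and 10.
appscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)
appscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)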

  Jeeva28 4 months, 1 week ago


Selected Answer: A
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon
EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This
removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 1 times

  studynoplay 4 months, 4 weeks ago


Selected Answer: A
Least Operational Overhead = Serverless
upvoted 1 times

  airraid2010 6 months, 3 weeks ago


Selected Answer: A
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.

https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times

  Chalamalli 7 months, 3 weeks ago


A is correct
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
The best solution to meet the requirements with the least operational overhead is Option A: Use AWS Fargate on Amazon Elastic Container
Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute
the incoming requests.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A has minimum operational overhead and almost no application code changes.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  Six_Fingered_Jose 11 months, 1 week ago


Selected Answer: A
Agreed with A,
lambda will work too but requires more operational overhead (more chores)

with A, you are just moving from an on-prem container to AWS container
upvoted 3 times
Question #113 Topic 1

A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the
company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and
needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must
configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.

B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.

C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS
Glue.

D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2
instance on AWS to run the transformation application.

Correct Answer: C

Community vote distribution: C (71%) D (29%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue. - No BW available for DataSync, so "asap"
will be weeks/months (?)
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device. - Snowcone will just store 14TB
(SSD configuration).
**C**. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using
AWS Glue. - SnowBall can store 80TB (ok), takes around 1 week to move the device (faster than A), and AWS Glue allows to do ETL jobs. This
is the answer.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new
EC2 instance on AWS to run the transformation application. - Same as C, but the ETL job requires the
deployment/configuration/maintenance of an EC2 instance, while Glue is serverless. This means D has more operational overhead than C.
upvoted 38 times

  jdr75 5 months, 4 weeks ago


I agree. When it says "with the least operational overhead", it does not take into account the "migration activities" necessary to reach the final scenario. For operational overhead you place yourself in the final scenario and consider only how you operate it; if that scheme takes less effort to operate than the original one, that is the desired state.
upvoted 2 times

  remand 8 months ago


I disagree; D is better. The transformation job is already in place, so all you have to do is deploy it and run it on EC2. C takes more effort to build the Glue process, which is like reinventing the wheel and is unnecessary.
upvoted 5 times

  goodmail Highly Voted  8 months, 2 weeks ago


Selected Answer: D
Why C? That answer misses the step between Snowball and AWS Glue.
D at least provides a complete solution: copy the data onto the Snowball device and install the custom application on the device's EC2 compute to do the transformation job.
upvoted 9 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: C
Snowball Edge has storage and compute capabilities, can be used to support workload in offline locations.

Technically option D will work but with the overhead of EC2, negating the requirement for LEAST ops.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
The Snowball Edge Storage Optimized device allows transferring a large amount of data without using network bandwidth.
Once the data is copied to the Snowball, AWS Glue can be used to create a custom ETL job to transform the data, avoiding the need to
reconfigure the existing on-premises application.
This meets the requirements to transfer the data with minimal operational overhead and configure the data transformation job to run in
AWS
upvoted 1 times
  james2033 2 months, 2 weeks ago
Selected Answer: C
AWS Glue for ETL (Extract, Transform, Load) https://docs.aws.amazon.com/glue/latest/dg/how-it-works.html is good for this case
(transformation). Keyword "50 TB", "AWS Snowball". Choose C. Easy question.
upvoted 1 times

  small_zipgeniuis 3 months ago


Selected Answer: C
A - no bandwidth available, so it is out
B - Snowcone SSD has a maximum of 14 TB of capacity
C - is the correct one here
D - cannot use Compute Optimized, as its maximum capacity is 39.5 TB, and only for that reason ;-)
https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html
upvoted 4 times

  rcarmin 2 months, 3 weeks ago


D answer says Snowball Edge STORAGE Optimized, which supports 80TB. 39.5TB is for the Snowball Edge COMPUTE Optimized.
upvoted 2 times

  live_reply_developers 3 months ago


Selected Answer: D
"A custom application in the company’s data center runs a weekly data transformation job."

"A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud."

LEAST operational overhead -> just take app and put on EC2, instead of configuring Glue
upvoted 1 times

  rcarmin 2 months, 3 weeks ago


IMHO, that's the least CONFIGURATION overhead, not operational. After you configure Glue, the operation should be easier than
maintaining the EC2 and the transformation job.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Option A (AWS DataSync with AWS Glue) involves using AWS DataSync for data transfer, which requires available network bandwidth. Since
the data center has no additional network bandwidth, this option is not suitable.

Option B (AWS Snowcone device with deployment) is designed for smaller workloads and may not have enough storage capacity for
transferring 50 TB of data. Additionally, deploying the transformation application on the Snowcone device could introduce complexity and
operational overhead.

Option D (AWS Snowball Edge with EC2 compute) involves transferring the data using a Snowball Edge device and then creating a new EC2
instance in AWS to run the transformation application. This option adds additional complexity and operational overhead of managing an
EC2 instance.

In comparison, option C offers a straightforward and efficient approach. The Snowball Edge Storage Optimized device can handle the
large data transfer without relying on network bandwidth. Once the data is transferred, AWS Glue can be used to create the
transformation job, ensuring the continuity of the application's processing in the AWS Cloud.
upvoted 4 times

  rcarmin 2 months, 3 weeks ago


My thoughts exactly. I think people are confusing CONFIGURATION overhead with OPERATIONAL overhead.
upvoted 1 times
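To illustrate the Glue side of option C, here is a minimal boto3 sketch that registers the weekly transformation as a serverless Glue job with a scheduled trigger. The role ARN, script location, worker settings, and cron expression are assumptions for illustration, not details from the question.

import boto3

glue = boto3.client("glue")

# Recreate the weekly transformation as a Glue (serverless Spark) job.
glue.create_job(
    Name="weekly-transform",
    Role="arn:aws:iam::123456789012:role/GlueJobRole",     # placeholder role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://reporting-bucket/scripts/transform.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
)

# Keep the weekly cadence with a scheduled trigger (cron is in UTC).
glue.create_trigger(
    Name="weekly-transform-trigger",
    Type="SCHEDULED",
    Schedule="cron(0 3 ? * MON *)",
    Actions=[{"JobName": "weekly-transform"}],
    StartOnCreation=True,
)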

  beginnercloud 3 months, 3 weeks ago


Selected Answer: C
Correctly answer is C.

“The data center does not have any available network bandwidth for additional workloads.”
upvoted 1 times

  KMohsoe 4 months, 2 weeks ago


Option C.
“The data center does not have any available network bandwidth for additional workloads.”
D requires a new EC2 instance to be created, so I choose option C.
upvoted 1 times

  studynoplay 4 months, 4 weeks ago


Selected Answer: C
LEAST operational overhead = Serverless = Glue
upvoted 3 times
  SkyZeroZx 5 months ago
Selected Answer: D
Exist " A custom application in the company’s data center runs a weekly data transformation job"
Because existing previous app rebuild with Glue is more effort

Ans D
upvoted 1 times

  darn 5 months, 2 weeks ago


Selected Answer: C
D is far too manual, lots of overhead
upvoted 2 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: C
I'm voting C and not D because creating a new EC2 instance in Snowball to run the transformation application has more overhead than
running Glue. Another thing to consider is that answer C does not mandate us to install Glue in Snowball, we can run Glue after the data
has been uploaded from Snowball to AWS.
upvoted 2 times

  Bang3R 6 months, 1 week ago


Selected Answer: C
C has less operational overhead than D. Managing EC2 has higher operational overhead than serverless AWS Glue
upvoted 2 times

  StuMoz 6 months, 4 weeks ago


I was originally going to vote for C, however it is D because of 2 reasons. 1) AWS love to promote their own products, so Glue is most likely
and 2) because Glue presents the least operational overhead moving forward as it is serverless unlike an EC2 instance which requires
patching, feeding and watering
upvoted 3 times

  Robrobtutu 5 months, 2 weeks ago


Answer C uses Glue, answer D uses EC2, so I believe you probably meant you're voting for C.
upvoted 4 times

  Dody 7 months ago


Selected Answer: C
Using the EC2 instance created on the Snowball Edge would run the transformation job only during the transfer. However, the solutions architect must configure the transformation job to continue to run in the AWS Cloud, so it's AWS Glue.
upvoted 1 times
Question #114 Topic 1

A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload
images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and
Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary
significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the
growing user base.
Which solution meets these requirements?

A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.

B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.

C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.

D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store
the photos and metadata.

Correct Answer: A

Community vote distribution: C (100%)

  MXB05 Highly Voted  11 months, 3 weeks ago


Selected Answer: C
Do not store images in databases ;)... correct answer should be C
upvoted 31 times

  cookieMr Highly Voted  3 months, 1 week ago


Selected Answer: C
Solution C offloads the photo processing to Lambda. Storing the photos in S3 ensures scalability and durability, while keeping the
metadata in DynamoDB allows for efficient querying of the associated information.

Option A does not provide an appropriate solution for storing the photos, as DynamoDB is not suitable for storing large binary data like
images.

Option B is more focused on real-time streaming data processing and is not the ideal service for processing and storing photos and
metadata in this use case.

Option D involves manual scaling and management of EC2 instances, which is less flexible and more labor-intensive compared to the
serverless nature of Lambda. It may not efficiently handle the varying number of concurrent users and can introduce higher operational
overhead.

In conclusion, option C provides the best solution for scaling the application to meet the needs of the growing user base by leveraging the
scalability and durability of Lambda, S3, and DynamoDB.
upvoted 5 times
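As a rough illustration of option C, here is a minimal Lambda handler that writes the uploaded photo to S3 and keeps only the metadata in DynamoDB. The bucket name, table name, and event shape (a base64 image plus a frame field) are assumptions, not details given in the question.

import base64
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("PhotoMetadata")   # placeholder table name

def handler(event, context):
    # Assumed event shape: JSON body with photoId, imageBase64, and frame.
    body = json.loads(event["body"])
    photo_id = body["photoId"]
    key = f"uploads/{photo_id}.jpg"

    # The large binary goes to S3 ...
    s3.put_object(
        Bucket="photo-frames-bucket",                       # placeholder bucket
        Key=key,
        Body=base64.b64decode(body["imageBase64"]),
        ContentType="image/jpeg",
    )
    # ... and only the small metadata record goes to DynamoDB.
    table.put_item(Item={"photoId": photo_id, "frame": body["frame"], "s3Key": key})

    return {"statusCode": 200, "body": json.dumps({"photoId": photo_id})}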

  TariqKipkemei Most Recent  1 month ago


Selected Answer: C
I stopped at option C
upvoted 1 times

  sand444 1 month ago


Selected Answer: C
c is correct
upvoted 1 times

  Abdou1604 1 month, 2 weeks ago


DynamoDB can technically store images as binary data (BLOBs)
upvoted 1 times

  RajkumarTatipaka 2 months, 2 weeks ago


Selected Answer: C
Why one would store photos in DB
upvoted 2 times

  MNotABot 2 months, 2 weeks ago


This one is in exam
upvoted 5 times
  beginnercloud 3 months, 3 weeks ago
Selected Answer: C
Option C is the best.
upvoted 1 times

  MostafaWardany 4 months, 1 week ago


Selected Answer: C
C is the correct answer, A can't store images in DB
upvoted 1 times

  cheese929 5 months ago


Selected Answer: C
Go for C which is able to scale
upvoted 1 times

  TheAbsoluteTruth 6 months ago


Selected Answer: C
Option A is not the most suitable solution for handling a potentially high load of concurrent users, since Lambda instances have an execution time limit and a high load can cause significant delays in the application's response. In addition, it does not provide a scalable solution for storing the images.

Option C provides a scalable solution for processing and storing the images and metadata. The application can use AWS Lambda to process the photos and store the images in Amazon S3, which is a scalable and highly available storage service. The metadata can be stored in DynamoDB, a scalable, high-performance database service that can handle a large number of concurrent requests.
upvoted 3 times

  cookieMr 3 months, 1 week ago


Si Señior Siarra!
upvoted 1 times


  rdss11 6 months, 3 weeks ago


C is the answer
upvoted 1 times

  Sdraju 7 months ago


Selected Answer: C
most optimal solution
upvoted 1 times

  aba2s 9 months ago


Selected Answer: C
Have look in that discution https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
upvoted 1 times

  DavidNamy 9 months, 1 week ago


Selected Answer: C
Option C involves using AWS Lambda to process the photos and storing the photos in Amazon S3, which can handle a large amount of
data and scale to meet the needs of the growing user base. Retaining DynamoDB to store the metadata allows the application to continue
to use a fast and highly available database for this purpose.
upvoted 1 times

  DavidNamy 9 months, 1 week ago


Selected Answer: C
According to the Well-Architected Framework, option C is the safest and most efficient option.
upvoted 1 times
Question #115 Topic 1

A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on
Amazon S3. The EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any
other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?

A. Create a NAT gateway. Configure the route table for the public subnets to send traffic to Amazon S3 through the NAT gateway.

B. Configure the security group for the EC2 instances to restrict outbound traffic so that only traffic to the S3 prefix list is permitted.

C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private
subnets.

D. Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to Amazon S3 over the Direct
Connect connection.

Correct Answer: C

Community vote distribution: C (100%)

  TariqKipkemei 1 month ago


Selected Answer: C
Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private
subnets.
upvoted 1 times

  sand444 1 month ago


Selected Answer: C
Link the VPC endpoint in the route tables so the EC2 instances communicate with S3 over a private connection inside the VPC.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Option A (creating a NAT gateway) would not meet the requirement since it still involves sending traffic to S3 over the internet. NAT
gateway is used for outbound internet connectivity from private subnets, but it doesn't provide a private route for accessing S3.

Option B (configuring security groups) focuses on controlling outbound traffic using security groups. While it can restrict outbound traffic,
it doesn't provide a private route for accessing S3.

Option D (setting up Direct Connect) involves establishing a dedicated private network connection between the on-premises environment
and AWS. While it offers private connectivity, it is more suitable for hybrid scenarios and not necessary for achieving private access to S3
within the VPC.

In summary, option C provides a straightforward solution by moving the EC2 instances to private subnets, creating a VPC endpoint for S3,
and linking the endpoint to the route table for private subnets. This ensures that file transfer traffic between the EC2 instances and S3
remains within the private network without going over the internet.
upvoted 4 times
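Option C boils down to a single API call once the instances sit in private subnets: create an S3 gateway endpoint and attach it to the private subnets' route table. A minimal boto3 sketch, with placeholder VPC, Region, and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; traffic to S3 then stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",      # match the VPC's Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],       # route table of the private subnets
)

Gateway endpoints for S3 carry no hourly charge, which is another point in their favour over a NAT gateway here.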

  DavidNamy 9 months, 1 week ago


Selected Answer: C
According to the Well-Architected Framework, option C is the safest and most efficient option.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: C
The correct answer is C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the
route table for the private subnets.

To meet the new requirement of transferring files over a private route, the EC2 instances should be moved to private subnets, which do
not have direct access to the internet. This ensures that the traffic for file transfers does not go over the internet.

To enable the EC2 instances to access Amazon S3, a VPC endpoint for Amazon S3 can be created. VPC endpoints allow resources within a
VPC to communicate with resources in other services without the traffic being sent over the internet. By linking the VPC endpoint to the
route table for the private subnets, the EC2 instances can access Amazon S3 over a private connection within the VPC.
upvoted 3 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Option A (Create a NAT gateway) would not work, as a NAT gateway is used to allow resources in private subnets to access the internet,
while the requirement is to prevent traffic from going over the internet.

Option B (Configure the security group for the EC2 instances to restrict outbound traffic) would not achieve the goal of routing traffic
over a private connection, as the traffic would still be sent over the internet.

Option D (Remove the internet gateway from the VPC and set up an AWS Direct Connect connection) would not be necessary, as the
requirement can be met by simply creating a VPC endpoint for Amazon S3 and routing traffic through it.
upvoted 1 times

  Kayamables 8 months, 3 weeks ago


What about moving the instances across subnets? According to AWS you can't do it:
https://aws.amazon.com/premiumsupport/knowledge-center/move-ec2-
instance/#:~:text=It%27s%20not%20possible%20to%20move,%2C%20Availability%20Zone%2C%20or%20VPC.
Kindly clarify. Maybe I missed something.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  ocbn3wby 10 months, 1 week ago


C is correct.
There is no requirement for public access from the internet.

The application must be moved to a private subnet. This is a prerequisite for using VPC endpoints with S3 in this design.
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 4 times

  Wpcorgan 10 months, 2 weeks ago


C is correct
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: C
Use VPC endpoint
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: C
Use a VPC endpoint and make the EC2 instances private
upvoted 1 times


  backbencher2022 11 months ago


Selected Answer: C
VPC endpoint is the best choice to route S3 traffic without traversing internet. Option A alone can't be used as NAT Gateway requires an
Internet gateway for outbound internet traffic. Option B would still require traversing through internet and option D is also not a suitable
solution
upvoted 3 times
Question #116 Topic 1

A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are
burdensome. The company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need
to have any dynamic content available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)

A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.

B. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality.

C. Create and deploy an AWS Lambda function to manage and serve the website content.

D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.

E. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.

Correct Answer: AD

Community vote distribution: AD (80%) 10% 7%

  palermo777 Highly Voted  11 months, 2 weeks ago


A -> We can configure CloudFront to require HTTPS from clients (enhanced security)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html
D -> storing the static website on S3 provides scalability and less operational overhead than configuring an Application Load Balancer and EC2 instances (hence E is out)

B is out since an AWS WAF web ACL does not provide HTTPS functionality; it only protects web traffic.
upvoted 25 times

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: AD
agree with A and D

static website -> obviously S3, and S3 is super scalable


CDN -> CloudFront obviously as well, and with HTTPS security is enhanced.

B does not make sense because you are not replacing the CDN with anything,
E works too but takes too much effort and compared to S3, S3 still wins in term of scalability. plus why use EC2 when you are only hosting
static website
upvoted 5 times

  Lalo 3 months, 3 weeks ago


Amazon CloudFront is for securely delivering content with low latency and high transfer speeds.
But what about SQL injection and XSS attacks? We use WAF and also use HTTPS.
https://www.f5.com/glossary/web-application-firewall-
waf#:~:text=A%20WAF%20protects%20your%20web,and%20what%20traffic%20is%20safe.
WAF protects your web apps by filtering, monitoring, and blocking any malicious HTTP/S traffic traveling to the web application, and
prevents any unauthorized data from leaving the app.
The answer is WAF, not CloudFront.
upvoted 1 times

  aussiehoa 4 months, 3 weeks ago


does not need to have any dynamic content available
upvoted 1 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: AD
Scalability, enhanced security and less operational overhead = CloudFront with HTTPS
Scalability and less operational overhead = S3 bucket with static website hosting
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AD
A. Amazon CloudFront provides scalable content delivery with HTTPS functionality, meeting security and scalability requirements.

D. Deploying the website on an Amazon S3 bucket with static website hosting reduces operational overhead by eliminating server
maintenance and patching.

Why other options are incorrect:


B. AWS WAF does not provide HTTPS functionality or address patching and maintenance.

C. Using AWS Lambda introduces complexity and does not directly address patching and maintenance.

E. Managing EC2 instances and an Application Load Balancer increases operational overhead and does not minimize patching and
maintenance tasks.

In summary, configuring Amazon CloudFront for HTTPS and deploying on Amazon S3 with static website hosting provide security,
scalability, and reduced operational overhead.
upvoted 1 times
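The S3 half of the A + D combination is a one-call change; a minimal boto3 sketch is below. The bucket name is a placeholder, and the bucket policy plus the CloudFront distribution that fronts the bucket over HTTPS are omitted because their configuration is lengthy.

import boto3

s3 = boto3.client("s3")

# Serve the redesigned static site straight from the bucket.
s3.put_bucket_website(
    Bucket="corporate-site-bucket",                # placeholder name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

In front of this, the CloudFront distribution would use the bucket as its origin and set the viewer protocol policy to redirect HTTP to HTTPS, which is what option A refers to.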
  beginnercloud 3 months, 3 weeks ago
Selected Answer: AD
AD

A for enhanced security D for static content


upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: AD
LEAST operational overhead = Serverless
https://aws.amazon.com/serverless/
upvoted 2 times

  angolateoria 5 months ago


AD misses the operational part, how can the app work without a lambda function, an EC2 instance or something?
upvoted 1 times

  darn 5 months, 2 weeks ago


Selected Answer: AD
People do not seem to get the LEAST OPERATIONAL OVERHEAD statement; many keep voting for options that bring far too much operational work.
upvoted 1 times

  channn 5 months, 3 weeks ago


Selected Answer: AD
A for enhanced security
D for static content
upvoted 2 times

  Erbug 6 months, 2 weeks ago


Since Amazon S3 is unlimited and pay-as-you-go, there is no limit on scale as your data grows, so D is one of the correct answers. The other correct answer is A, because of this:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html

so my answer is AD.
upvoted 1 times

  ManOnTheMoon 7 months, 1 week ago


I vote A & C for the reason being least operational overhead.
upvoted 1 times

  Yelizaveta 7 months, 3 weeks ago


Selected Answer: AD
Here a perfect explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
upvoted 2 times

  Abdel42 8 months, 1 week ago


Selected Answer: AD
Simple and secure
upvoted 1 times

  remand 8 months, 2 weeks ago


Selected Answer: AD
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.

By deploying the website on an S3 bucket with static website hosting enabled, the company can take advantage of the high scalability and
cost-efficiency of S3 while also reducing the operational overhead of managing and patching a CMS.
By configuring Amazon CloudFront in front of the website, it will automatically handle the HTTPS functionality, this way the company can
have a secure website with very low operational overhead.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: CD
KEYWORD: LEAST operational overhead

D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.

C. Create and deploy an AWS Lambda function to manage and serve the website content.

Option D (using Amazon S3 with static website hosting) would provide high scalability and enhanced security with minimal operational
overhead because it requires little maintenance and can automatically scale to meet increased demand.

Option C (using an AWS Lambda function) would also provide high scalability and enhanced security with minimal operational overhead.
AWS Lambda is a serverless compute service that runs your code in response to events and automatically scales to meet demand. It is
easy to set up and requires minimal maintenance.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Why other options are not correct?

Option A (using Amazon CloudFront) and Option B (using an AWS WAF web ACL) would provide HTTPS functionality but would require
additional configuration and maintenance to ensure that they are set up correctly and remain secure.

Option E (using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer) would provide high scalability,
but it would require more operational overhead because it involves managing and maintaining EC2 instances.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AD
A and D
upvoted 1 times

  AlaN652 9 months, 3 weeks ago


Selected Answer: AD
A: for high availability and security through cloudfront HTTPS
D: Scalable storge solution and support of static hosting
upvoted 1 times
Question #117 Topic 1

A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs
in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?

A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).

B. Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon
Elasticsearch Service).

C. Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon
OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.

D. Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure
Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).

Correct Answer: C

Community vote distribution: A (65%) C (33%)

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: A
answer is A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html

> You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in NEAR REAL-
TIME through a CloudWatch Logs subscription

least overhead compared to kinesis


upvoted 59 times

  Zerotn3 9 months ago


Option A (Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service))
is not a suitable option, as a CloudWatch Logs subscription is designed to send log events to a destination such as an Amazon Simple
Notification Service (Amazon SNS) topic or an AWS Lambda function. It is not designed to write logs directly to Amazon Elasticsearch
Service (Amazon ES).
upvoted 3 times

  kucyk 7 months, 2 weeks ago


that is not true, you can stream logs from CloudWatch Logs directly to OpenSearch
upvoted 5 times

  HayLLlHuK 9 months ago


Zerotn3 is right! There should be a Lambda for writing into ES
upvoted 1 times

  UWSFish 11 months, 1 week ago


Great link. Convinced me
upvoted 5 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 2 weeks ago


Selected Answer: C
The correct answer is C: Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream source.
Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.

This solution uses Amazon Kinesis Data Firehose, which is a fully managed service for streaming data to Amazon OpenSearch Service
(Amazon Elasticsearch Service) and other destinations. You can configure the log group as the source of the delivery stream and Amazon
OpenSearch Service as the destination. This solution requires minimal operational overhead, as Kinesis Data Firehose automatically scales
and handles data delivery, transformation, and indexing.
upvoted 14 times

  Lalo 3 months, 3 weeks ago


ANSWER A
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/integrations.html
You can use CloudWatch or Kinesis, but in the Kinesis description it never says real time, however in the Cloudwatch description it does
say Real time ""You can load streaming data from CloudWatch Logs to your OpenSearch Service domain by using a CloudWatch Logs
subscription . For information about Amazon CloudWatch subscriptions, see Real-time processing of log data with subscriptions.""
upvoted 2 times
  Buruguduystunstugudunstuy 9 months, 2 weeks ago
Option A: Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service)
would also work, but it may require more operational overhead as you would need to set up and manage the subscription and ensure
that the logs are delivered in near-real time.

Option B: Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service
(Amazon Elasticsearch Service) would also work, but it may require more operational overhead as you would need to set up and
manage the Lambda function and ensure that it scales to handle the incoming logs.

Option D: Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams.
Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service) would also work, but
it may require more operational overhead as you would need to install and configure the Kinesis Agent on each application server and
set up and manage the Kinesis Data Streams.
upvoted 2 times

  ocbn3wby 8 months ago


https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times

  JKevin778 Most Recent  1 week, 1 day ago


Selected Answer: C
100% C.
CloudWatch logs cannot be send to OpenSearch directly, need KDS or KDF works in the middle.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
upvoted 1 times

  hootani 2 weeks, 6 days ago


Selected Answer: C
The answer is C
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
C is the correct answer.

Using Kinesis Data Firehose will allow near real-time delivery of the CloudWatch logs to Amazon Elasticsearch Service with the least
operational overhead compared to the other options.

Firehose can be configured to automatically ingest data from CloudWatch Logs into Elasticsearch without needing to run Lambda
functions or install agents on the application servers. This makes it the most operationally simple way to meet the stated requirements.
upvoted 1 times

  npraveen 2 months, 2 weeks ago


Selected Answer: C
Near real time: CloudWatch Logs --> subscription filter --> Kinesis Data Firehose --> S3
Real time: CloudWatch Logs --> subscription filter --> Lambda --> S3
upvoted 2 times

  Cloudnative9990 2 months, 2 weeks ago


We need to consider the "least operational overhead". The CloudWatch Logs log group and OpenSearch already exist in the system and just need to be integrated. Kinesis is preferable for near-real-time streaming, but it would be additional overhead. Hence the answer should be A.
upvoted 2 times

  bala_s 2 months, 3 weeks ago


Answer is A . The question says near real time and not real time
You can also use a CloudWatch Logs subscription to stream log data in near real time to an Amazon OpenSearch Service cluster. For more
information, see Streaming CloudWatch Logs data to Amazon OpenSearch Service.
upvoted 1 times

  bigboi23 2 months, 3 weeks ago


Selected Answer: C
OPTION C

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services
such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading
to other systems.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


By configuring a CloudWatch Logs subscription, you can stream the logs from CloudWatch Logs to Amazon OpenSearch Service in near-
real-time. This solution requires minimal operational overhead as it leverages the built-in functionality of CloudWatch Logs and Amazon
OpenSearch Service for log streaming and indexing.
Option B (Creating an AWS Lambda function) would involve additional development effort and maintenance of a custom Lambda function
to write the logs to Amazon OpenSearch Service.

Option C (Creating an Amazon Kinesis Data Firehose delivery stream) introduces an additional service (Kinesis Data Firehose) that may not
be necessary for this specific requirement, adding unnecessary complexity.

Option D (Installing and configuring Amazon Kinesis Agent) also introduces additional overhead in terms of manual installation and
configuration on each application server, which may not be needed if the logs are already stored in CloudWatch Logs.

In summary, option A is the correct choice as it provides a straightforward and efficient way to stream logs from CloudWatch Logs to
Amazon OpenSearch Service with minimal operational overhead.
upvoted 3 times
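For the option C camp, this is roughly what the wiring looks like with boto3: a subscription filter that forwards the log group to an existing Kinesis Data Firehose delivery stream whose destination is the OpenSearch domain. The log group name, delivery stream ARN, and IAM role are placeholders.

import boto3

logs = boto3.client("logs")

# Forward every event from the log group to the Firehose delivery stream.
logs.put_subscription_filter(
    logGroupName="/app/production",                # placeholder log group
    filterName="to-opensearch",
    filterPattern="",                              # empty pattern matches all events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/app-logs",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)

Option A reaches the same end state through the console's OpenSearch streaming wizard instead of an explicit Firehose stream.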

  srijrao 3 months, 1 week ago


https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
upvoted 1 times

  konieczny69 3 months, 2 weeks ago


Selected Answer: C
I vote for C.
Solution A adds an unnecessary hop.
upvoted 1 times

  ruqui 3 months, 4 weeks ago


Selected Answer: C
A is wrong because subscriptions cannot be sent directly to Opensearch, see 'destination arn' in
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html

Correct answer is C
upvoted 1 times

  Abrar2022 4 months, 1 week ago


@six _fingers is right!!!! You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service
cluster in near real-time through a CloudWatch Logs subscription.

answer is A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times

  Rud90 4 months, 2 weeks ago


Selected Answer: C
This should be C. OpenSearch is one of the main destinations for Kinesis Data Firehose.
upvoted 1 times

  ErfanKh 5 months, 3 weeks ago


Selected Answer: C
C for me and ChatGPT
upvoted 2 times

  channn 5 months, 3 weeks ago


Selected Answer: C
choose C after seeing all comments from community
upvoted 2 times

  jayce5 6 months, 1 week ago


Selected Answer: C
Must be C, https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
"You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in near real-time
through a CloudWatch Logs subscription. For more information, see Real-time processing of log data with subscriptions.".
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
"You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services
such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading
to other systems."

CloudWatch cannot stream directly to Amazon OpenSearch Service.


upvoted 3 times

  fishy_resolver 3 months, 3 weeks ago


The link above supports answer A not C, there is no mention of Kinesis
upvoted 1 times
Question #118 Topic 1

A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide
access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods
of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the
application at all times. The company is concerned about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon Elastic File System (Amazon EFS)

C. Amazon OpenSearch Service (Amazon Elasticsearch Service)

D. Amazon S3

Correct Answer: D

Community vote distribution: D (94%) 6%

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
MOST cost-effective = S3 (unless explicitly stated in the requirements)
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Amazon S3 (Simple Storage Service) is a highly scalable and cost-effective storage service. It is well-suited for storing large amounts of
data, such as the 900 TB of text documents mentioned in the scenario. S3 provides high durability, availability, and performance.

Option A (Amazon EBS) is block storage designed for individual EC2 instances and may not scale as seamlessly and cost-effectively as S3
for large amounts of data.

Option B (Amazon EFS) is a scalable file storage service, but it may not be the most cost-effective option compared to S3, especially for the
anticipated storage size of 900 TB.

Option C (Amazon OpenSearch Service) is a search and analytics service and may not be suitable as the primary storage solution for the
text documents.

In summary, Amazon S3 is the recommended choice as it offers high scalability, cost-effectiveness, and durability for storing the large
repository of text documents required by the web application.
upvoted 2 times
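Since the repository is just objects in S3, the web tier only needs the standard transfer APIs. A small boto3 sketch with placeholder bucket, file, and key names: large documents go up as automatic multipart uploads, and presigned URLs let the application hand out time-limited download links during demand spikes.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart settings for large text documents (thresholds are illustrative).
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

# Upload one document; boto3 switches to multipart automatically above the threshold.
s3.upload_file("corpus/part-0001.txt", "text-repository-bucket",
               "documents/part-0001.txt", Config=config)

# Time-limited link the web application can return to a user (valid for one hour).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "text-repository-bucket", "Key": "documents/part-0001.txt"},
    ExpiresIn=3600,
)
print(url)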

  Jeeva28 4 months, 1 week ago


Selected Answer: D
The 900 TB in the question is there to divert our thinking. When the question has the keyword "least", S3 is the only thing we should look at.
upvoted 1 times

  Abrar2022 4 months, 1 week ago


EFS and S3 meet the requirements but S3 is a better option because it is cheaper.
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: D
MOST cost-effective = S3 (unless explicitly stated in the requirements)
upvoted 2 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: D
S3 is the cheapest and most scalable.
upvoted 1 times

  jdr75 5 months, 4 weeks ago


Selected Answer: C
Now in OpenSearch you can reach 3 PB, so option C is worth considering.
With S3, in an intensive scenario the cost of retrieving the objects could be high.
Yes, OpenSearch is NOT cheap, but this has to be analysed carefully.
So I opt for "C" to broaden the discussion.
With UltraWarm, you can retain up to 3 PB of data on a single Amazon OpenSearch Service cluster, while reducing your cost per GB by
nearly 90% compared to the warm storage tier. You can also easily query and visualize the data in your Kibana interface (version 7.10 and
earlier) or OpenSearch Dashboards. Analyze both your recent (weeks) and historical (months or years) log data without spending hours or
days restoring archived logs.

https://aws.amazon.com/es/opensearch-service/features/
upvoted 2 times
  Dr_Chomp 6 months ago
EFS is a good option but expensive compared with S3, and the customer is concerned about cost; thus S3 (D).
upvoted 2 times

  frenzoid 6 months, 1 week ago


I wonder why people choose S3, yet S3 max capacity is 5TB 🤔.
upvoted 1 times

  frenzoid 6 months, 1 week ago


My bad, the 5TB limit is for individual files. S3 has virtually unlimited storage capacity.
upvoted 5 times

  Help2023 7 months, 2 weeks ago


Selected Answer: D
A. EBS is block storage tied to individual instances, so it is not a fit here.
B. EFS is file storage and costs more than S3.
C. OpenSearch is useful but can only accommodate up to 600 TiB and is mainly for search and analytics.
D. S3 is more cost-effective than all of these and can store the text documents as objects.
upvoted 4 times

  remand 8 months, 2 weeks ago


Selected Answer: D
D. Amazon S3

Amazon S3 is an object storage service that can store and retrieve large amounts of data at any time, from anywhere on the web. It is
designed for high durability, scalability, and cost-effectiveness, making it a suitable choice for storing a large repository of text documents.
With S3, you can store and retrieve any amount of data, at any time, from anywhere on the web, and you can scale your storage up or
down as needed, which will help to meet the demand of the web application. Additionally, S3 allows you to choose between different
storage classes, such as standard, infrequent access, and archive, which will enable you to optimize costs based on your specific use case.
upvoted 1 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: D
The most cost-effective storage solution for a web application that needs to scale to meet high demand and store a large repository of
text documents would be Amazon S3. Amazon S3 is an object storage service that is designed for durability, availability, and scalability. It
can store and retrieve any amount of data from anywhere on the internet, making it a suitable choice for storing a large repository of text
documents. Additionally, Amazon S3 is designed to be highly scalable and can easily handle periods of high demand without requiring any
additional infrastructure or maintenance.
upvoted 2 times

  gustavtd 9 months ago


Selected Answer: D
Is there anything cheaper than S3?
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: D
D. Amazon S3 is the most cost-effective storage solution that meets the requirements described.

Amazon S3 is an object storage service that is designed to store and retrieve large amounts of data from anywhere on the web. It is highly
scalable, highly available, and cost-effective, making it an ideal choice for storing a large repository of text documents that will experience
periods of high demand. S3 is a standalone storage service that can be accessed from anywhere, and it is designed to handle large
numbers of objects, making it well-suited for storing the 900 TB repository of text documents described in the scenario. It is also designed
to handle high levels of demand, making it suitable for handling periods of high demand.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: D
Only EFS and S3 meeting the requirements but S3 is better option because it is cheaper.
upvoted 4 times
  Wpcorgan 10 months, 2 weeks ago
D is correct
upvoted 1 times
Question #119 Topic 1

A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2
Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL
injection and cross-site scripting attacks.
Which solution will meet these requirements with the LEAST amount of administrative effort?

A. Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.

B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.

C. Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.

D. Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.

Correct Answer: A

Community vote distribution: B (75%) A (25%)

  Gil80 Highly Voted  10 months, 3 weeks ago


Selected Answer: B
If you want to use AWS WAF across accounts, accelerate WAF configuration, or automate the protection of new resources, use Firewall Manager with AWS WAF.
upvoted 23 times

  Nigma Highly Voted  10 months, 4 weeks ago


B

Using AWS WAF has several benefits. Additional protection against web attacks using criteria that you specify. You can define criteria using
characteristics of web requests such as the following:
Presence of SQL code that is likely to be malicious (known as SQL injection).
Presence of a script that is likely to be malicious (known as cross-site scripting).

AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of
protections.

https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
upvoted 14 times

  JayBee65 9 months, 2 weeks ago


Q: Can I create security policies across regions?

No, AWS Firewall Manager security policies are region specific. Each Firewall Manager policy can only include resources available in that
specified AWS Region. You can create a new policy for each region where you operate.

So you could not centrally (i.e. in one place) configure policies, you would need to do this is each region
upvoted 2 times

  Valder21 Most Recent  1 month ago


Selected Answer: A
SQL injection, cross-site scripting = WAF
upvoted 1 times

  Hassaoo 1 month ago


A is Right Option
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B is the correct answer.

Using AWS Firewall Manager to centrally configure AWS WAF rules provides the least administrative effort compared to the other options.

Firewall Manager allows centralized administration of AWS WAF rules across multiple accounts and Regions. WAF rules can be defined
once in Firewall Manager and automatically applied to APIs in all the required Regions and accounts.
upvoted 1 times

  ukivanlamlpi 1 month, 2 weeks ago


Selected Answer: A
WAF settings are region-specific
upvoted 1 times
  RajkumarTatipaka 2 months, 2 weeks ago
Selected Answer: B
If you want to manage protection across accounts and resources, use AWS Firewall Manager. AWS WAF protects against web attacks
like SQL injection and cross-site scripting.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
B. By setting up AWS Firewall Manager, you can centrally configure AWS WAF rules, which can be applied to multiple AWS accounts and
Regions. This allows for efficient management and enforcement of security rules across accounts without the need for separate
configuration in each individual Region.

Option A (Setting up AWS WAF with Regional web ACLs) requires setting up and managing AWS WAF in each Region separately, which
increases administrative effort.

Option C (Setting up AWS Shield with Regional web ACLs) primarily focuses on DDoS protection and may not provide the same level of
protection against SQL injection and cross-site scripting attacks as AWS WAF.

Option D (Setting up AWS Shield in one Region) provides DDoS protection but does not directly address protection against SQL injection
and cross-site scripting attacks.

In summary, option B offers the most efficient and centralized approach by leveraging AWS Firewall Manager to configure AWS WAF rules
across multiple Regions, minimizing administrative effort while ensuring protection against SQL injection and cross-site scripting attacks.
upvoted 1 times
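To make the WAF side of this concrete, below is a minimal boto3 (Python) sketch of the building block either option ultimately deploys: a regional web ACL using an AWS managed rule group, associated with an API Gateway stage. All names and ARNs are placeholder assumptions; with option B, Firewall Manager would push an equivalent web ACL to every in-scope account and Region, and XSS coverage would typically come from adding the Core rule set alongside the SQLi rule set.

```python
# Minimal boto3 sketch (assumed names/ARNs): a regional WAFv2 web ACL with the
# AWS managed SQL injection rule group, associated with one API Gateway stage.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="loyalty-api-protection",
    Scope="REGIONAL",                      # API Gateway uses regional web ACLs
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-sqli",
            },
        },
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "loyalty-api-protection",
    },
)

# Attach the web ACL to a specific API stage (placeholder API id / stage name).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```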

  omoakin 4 months, 1 week ago


AAAAAAAAAAA
upvoted 2 times

  HelloTomorrow 5 months, 1 week ago


Crazy community voting !
Correct answer is => A : AWS Firewall Manager security policies are region specific. Each Firewall Manager policy can only include
resources available in that specified AWS Region.
upvoted 3 times

  JummyFash 1 month, 3 weeks ago


You can say that again. I will go with A as well
upvoted 1 times

  JummyFash 1 month, 3 weeks ago


B is the correct answer..
Among the options provided, option B offers the least amount of administrative effort to protect the API Gateway managed REST
APIs from SQL injection and cross-site scripting attacks across multiple accounts.

AWS Firewall Manager allows you to centrally configure and manage AWS WAF rules across multiple accounts and resources. By
setting up AWS Firewall Manager in both the us-east-1 and ap-southeast-2 Regions, you can apply consistent WAF rules to the API
Gateway instances in those regions without the need to individually configure WAF rules for each API Gateway.
upvoted 1 times

  TheAbsoluteTruth 6 months ago


Selected Answer: B
Option A provides protection against SQL injection and cross-site scripting using AWS WAF, which is a web application firewall solution. However, this option requires configuring AWS WAF in each Region individually and associating a web ACL with an API stage. That can mean significant administrative effort when there are several Regions and API stages to protect.

Option B is a centralized solution that uses AWS Firewall Manager to manage AWS WAF rules across multiple Regions. With this option, the AWS WAF rules can be configured in a single place and applied uniformly to all the relevant Regions. This can significantly reduce the administrative effort compared with option A.
upvoted 3 times

  sezer 6 months, 1 week ago


Prerequisites for using AWS Firewall Manager
Your account must be a member of AWS Organizations
Your account must be the AWS Firewall Manager administrator
You must have AWS Config enabled for your accounts and Regions
To manage AWS Network Firewall or Route 53 resolver DNS Firewall, the AWS Organizations management account must enable AWS
Resource Access Manager (AWS RAM).

Can anybody explain to me what counts as the least administrative effort here?

I will go with A.
If I am wrong, please correct me.
upvoted 1 times

  jdr75 5 months, 4 weeks ago


When they said "LEAST amount of administrative effort" they ignore the "transition costs" associated to get the final scenario. Only
takes account the administration effort supposing all the migration task & prerrequisites were done.
So B is probably, BEST.
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/blogs/security/centrally-manage-aws-waf-api-v2-and-aws-managed-rules-at-scale-with-firewall-manager/
upvoted 1 times

  andyto 7 months, 1 week ago


B.
Set up AWS Firewall Manager
https://docs.aws.amazon.com/waf/latest/developerguide/enable-disabled-region.html
Create WAF policies separate for each Region:
https://docs.aws.amazon.com/waf/latest/developerguide/get-started-fms-create-security-policy.html
To protect resources in multiple Regions (other than CloudFront distributions), you must create separate Firewall Manager policies for
each Region.
upvoted 2 times

  JiyuKim 7 months, 3 weeks ago


Selected Answer: A
I' ll go with A.
B is wrong because
To protect resources in multiple Regions (other than CloudFront distributions), you must create separate Firewall Manager policies for
each Region.

https://docs.aws.amazon.com/waf/latest/developerguide/get-started-fms-create-security-policy.html
upvoted 5 times

  Mahadeva 8 months, 4 weeks ago


Though options A and B are both valid, the question is about administrative efficiency. Since only 2 Regions are in consideration, it is arguably easier to provision WAF directly than to set up a central Firewall Manager (plus WAF).

Regarding "to protect API Gateways across multiple accounts": maybe that is extra information. Web ACLs are regional; they filter HTTP messages regardless of the account, i.e. they apply to all accounts.
upvoted 1 times

  Help2023 7 months, 2 weeks ago


A & B are viable options, however because it is two regions instead of creating WAF twice (one for each region) simply create it all at
once in the Central Firewall Manager. Imagine you need to make some changes later and again rather than changing it on each, 1 by 1
simply change it on the Central Firewall Manager once and you can deploy more in the future by just adding regions.
upvoted 2 times

  Mahadeva 8 months, 4 weeks ago


Option A: WAF
upvoted 1 times

  aba2s 9 months ago


Selected Answer: B
Use AWS WAF and set up a managed rule to block request patterns associated with the exploitation of SQL databases, like SQL injection
attacks. Associate it with the Application Load Balancer. Integrate AWS WAF with AWS Firewall Manager to reuse the rules across all the
AWS accounts.
upvoted 1 times
Question #120 Topic 1

A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-
2 Region. Most of the company's users are located in the United States and Europe. The company wants to improve the performance and
availability of the solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as
targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?

A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution.
Use the Route 53 record as the distribution’s origin.

B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as
endpoints for the endpoint groups.

C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the
six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.

D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to
one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.

Correct Answer: A

Community vote distribution


B (71%) A (24%)

  dokaedu Highly Voted  11 months, 1 week ago


B is the correct one for a self-managed DNS solution.
If Route 53 were used instead, ALBs (layer 7) would need to be the endpoints for the 2 regional sets of 3 EC2 instances; in that case the answer would be option D.
upvoted 12 times

  MutiverseAgent 2 months, 2 weeks ago


After reading the discussion I think the right answer is B, as the service they use is DNS it does not make sense using a cloudfront
distribution for this. The scenario would be different if the service were HTTP/HTTPS.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Just to complete my previous comment. If the scenario were that the company uses HTTP/HTTPS service, then the correct answer (as
the original dokaedu message mentions) would be option D)
upvoted 1 times

  LeGloupier Highly Voted  11 months, 2 weeks ago


Selected Answer: B
for me it is B
upvoted 10 times

  Hassaoo Most Recent  1 month ago


B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as
endpoints for the endpoint groups.

Here's why this option is the most suitable:

Global Accelerator: AWS Global Accelerator is designed to improve the availability and performance of applications by using static IP
addresses (Anycast IPs) and routing traffic over the AWS global network infrastructure.

Endpoint Groups: By creating endpoint groups in both the us-west-2 and eu-west-1 Regions, the company can effectively distribute traffic
to the NLBs in both Regions. This improves availability and allows traffic to be directed to the closest Region based on latency.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B is the best solution to route traffic to all the EC2 instances across regions.

The key reasons are:

AWS Global Accelerator allows routing traffic to endpoints in multiple AWS Regions. It uses the AWS global network to optimize availability
and performance.
Creating an accelerator with endpoint groups in us-west-2 and eu-west-1 allows traffic to be distributed across both regions.
Adding the NLBs in each region as endpoints allows the traffic to be routed to the EC2 instances behind them.
This provides improved performance and availability compared to just using Route 53 geolocation routing.
upvoted 3 times
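As a concrete illustration of option B, here is a minimal boto3 (Python) sketch that creates the accelerator, a DNS listener, and one endpoint group per Region. The NLB ARNs are placeholder assumptions; note that the Global Accelerator control-plane API is only served from us-west-2.

```python
# Minimal boto3 sketch (assumed names/ARNs) of option B: one accelerator, a
# listener for DNS traffic, and an endpoint group per Region pointing at its NLB.
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="dns-accelerator", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="UDP",                                  # DNS queries are mostly UDP/53
    PortRanges=[{"FromPort": 53, "ToPort": 53}],
)
listener_arn = listener["Listener"]["ListenerArn"]

# One endpoint group per Region; each points at that Region's NLB (placeholder ARNs).
for region, nlb_arn in [
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/dns-usw2/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/dns-euw1/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```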
  MNotABot 2 months, 2 weeks ago
B
route requests to one of the two NLBs --> hence AD out / Attach Elastic IP addresses --> who will pay for it?
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option B offers a global solution by utilizing Global Accelerator. By creating a standard accelerator and configuring endpoint groups in
both Regions, the company can route traffic to all the EC2 across multiple regions. Adding the two NLBs as endpoints ensures that traffic
is distributed effectively.

Option A does not directly address the requirement of routing traffic to all EC2 instances. It focuses on routing based on geolocation and
using CloudFront as a distribution, which may not achieve the desired outcome.

Option C involves managing Elastic IP addresses and routing based on geolocation. However, it may not provide the same level of
performance and availability as AWS Global Accelerator.

Option D focuses on ALBs and latency-based routing. While it can be a valid solution, it does not utilize AWS Global Accelerator and may
require more configuration and management compared to option B.
upvoted 3 times

  beginnercloud 3 months, 3 weeks ago


Selected Answer: B
Correctly is B.

if it is self-managed DNS, you cannot use Route 53. There can be only 1 DNS service for the domain.
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: B
For self-managed DNS solution:
https://aws.amazon.com/blogs/security/how-to-protect-a-self-managed-dns-service-against-ddos-attacks-using-aws-global-accelerator-
and-aws-shield-advanced/
upvoted 2 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: B
Re-wording the correct explanations here:
if it is self-managed DNS, you cannot use Route 53. There can be only 1 DNS service for the domain. If the question didn't mentioned self-
managed DNS and asked for optimal solution, then D is correct.
upvoted 3 times

  Yadav_Sanjay 5 months ago


The company uses a self-managed DNS solution, and the other three options all rely on Route 53, so B is the only possible answer.
upvoted 1 times

  tonyexim 5 months, 1 week ago


I think both answers A and B are workable solutions.
upvoted 1 times

  EricYu2023 5 months, 2 weeks ago


Selected Answer: B
The first half of option A seems right: "Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs."
However, the second part, "Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin," is useless here.
Route 53 geolocation routing can send requests directly to the NLBs.
upvoted 2 times

  Musti35 5 months, 2 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/global-accelerator/?icmpid=docs_homepage_networking
explanation:
AWS Global Accelerator Documentation
AWS Global Accelerator is a network layer service in which you create accelerators to improve the security, availability, and performance of
your applications for local and global users. Depending on the type of accelerator that you choose, you can gain additional benefits, such
as improving availability or mapping users to specific destination endpoints.
upvoted 1 times

  saransh_001 6 months ago


Selected Answer: B
Option A mentions geolocation routing, which would allow the company to route traffic based on the user's location. However,
the company has already implemented a self-managed DNS solution and wants to use NLBs for load balancing, so it may not be feasible
for them to switch to Route 53 and CloudFront.
upvoted 1 times
  saransh_001 6 months ago
Selected Answer: A
option A although mentions geolocation routing and would allow the company to route traffic based on the location of the user. However,
the company has already implemented a self-managed DNS solution and wants to use NLBs for load balancing, so it may not be feasible
for them to switch to Route 53 and CloudFront.
upvoted 1 times

  TheAbsoluteTruth 6 months ago


Selected Answer: B
Option A is not the optimal solution because, although it can route traffic to one of the two NLBs based on geolocation, it still does not provide a global solution for routing traffic to all the EC2 instances.

Option B is the right solution because it lets the company use AWS Global Accelerator to route traffic to the NLBs in both Regions, so traffic is automatically routed to the EC2 instances in both Regions. AWS Global Accelerator routes traffic optimally over the AWS global network to minimize latency and improve the performance and availability of the solution.
upvoted 3 times

  jcramos 6 months ago


Thanks
upvoted 1 times

  kraken21 6 months ago


Selected Answer: B
?"The company wants to improve the performance and availability of the solution": Geo location might be a good option if the question
stressed on limiting access based on location. Since performance and availability are needed B is the right choice.
upvoted 2 times
Question #121 Topic 1

A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in
a Multi-AZ deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?

A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.

B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB
instance.

C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB
instance.

D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS)
managed keys (SSE-KMS).

Correct Answer: A

Community vote distribution


A (77%) C (22%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
"You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption
to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can
then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 40 times

  JoeGuan 1 month, 3 weeks ago


I agree; there is no reason to copy and encrypt all of the snapshots. You just need one encrypted snapshot, and moving forward
they will all be encrypted. C is close, but there is no reason to copy all the snapshots (plural). There is a wizard to go through to
select the snapshot to encrypt: "In the Amazon RDS console navigation pane, choose Snapshots, and select the DB snapshot you
created. For Actions, choose Copy Snapshot. Provide the destination AWS Region and the name of the DB snapshot copy in the
corresponding fields. Select the Enable Encryption checkbox. For Master Key, specify the KMS key identifier to use to encrypt the DB
snapshot copy. Choose Copy Snapshot. For more information, see Copying a snapshot in the Amazon RDS documentation". What if you
had 30 snapshots? You just need to do it once.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


In simple terms, you would double the effort and the spend by creating unnecessary snapshots, so A is the best
choice.
upvoted 1 times

  Futurebones 4 months, 2 weeks ago


How can A guarantee future encryption?
upvoted 2 times

  Smart 2 months, 1 week ago


Once DB is encrypted, newer snapshots and read replicas will also be encrypted.
upvoted 2 times
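To make the copy-encrypt-restore flow discussed above concrete, here is a minimal boto3 (Python) sketch. The snapshot and instance identifiers and the KMS key alias are placeholder assumptions, not values from the question.

```python
# Minimal boto3 sketch (assumed identifiers) of option A: encrypt a copy of the
# latest snapshot, then restore a new, encrypted DB instance from that copy.
import boto3

rds = boto3.client("rds")

# 1. Copy the latest unencrypted snapshot, enabling encryption with a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="oltp-db-snapshot-latest",
    TargetDBSnapshotIdentifier="oltp-db-snapshot-latest-encrypted",
    KmsKeyId="alias/rds-encryption-key",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="oltp-db-snapshot-latest-encrypted"
)

# 2. Restore a NEW DB instance from the encrypted copy (a snapshot cannot be
#    restored into an existing instance). Future automated snapshots of this
#    instance are encrypted automatically.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="oltp-db-encrypted",
    DBSnapshotIdentifier="oltp-db-snapshot-latest-encrypted",
    MultiAZ=True,
)

# 3. Repoint the application at "oltp-db-encrypted" and retire the old instance.
```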

  cookieMr Most Recent  3 months, 1 week ago


Selected Answer: C
A. Replacing the existing DB instance with an encrypted snapshot can result in downtime and potential data loss during migration.

B. Creating a new encrypted EBS volume for snapshots does not address the encryption of the DB instance itself.

D. Copying snapshots to an encrypted S3 bucket only encrypts the snapshots, but does not address the encryption of the DB instance.

Option C is the most suitable as it involves copying and encrypting the snapshots using AWS KMS, ensuring encryption for both the
database and snapshots.
upvoted 2 times

  Abrar2022 4 months, 1 week ago


If daily snapshots are taken from the daily DB instance. Why create another copy? You just need to encrypt the latest daily DB snapshot
and the restore from the existing encrypted snapshot.
upvoted 2 times
  [Removed] 5 months, 1 week ago
If there is anyone who is willing to share his/her contributor access, then please write to [email protected]
upvoted 1 times

  kruasan 5 months, 1 week ago


Selected Answer: A
You can't restore from a DB snapshot to an existing DB instance; a new DB instance is created when you restore.
upvoted 4 times

  C_M_M 5 months, 2 weeks ago


A and C are almost identical, except that A uses the latest snapshot while C copies all the snapshots.
I don't see any other difference between those two options.
Option A is clearly the correct one, as all you need is the latest snapshot.
upvoted 2 times

  JoeGuan 1 month, 3 weeks ago


I agree, in the wizard you would select ONE SNAPSHOT (singular in A), not all of the SNAPSHOTS (Plural in C)
upvoted 1 times

  rushlav 5 months, 2 weeks ago


A
You can only encrypt an Amazon RDS DB instance when you create it, not after the DB instance is created.
However, because you can encrypt a copy of an unencrypted snapshot, you can effectively add encryption to an unencrypted DB instance.
That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB
instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
upvoted 1 times

  Abhineet9148232 6 months, 1 week ago


Selected Answer: C
Encryption is enabled during the Copy process itself.
https://repost.aws/knowledge-center/encrypt-rds-snapshots
upvoted 1 times

  Bang3R 6 months, 1 week ago


Selected Answer: C
C is the more complete answer as you need KMS to encrypt the snapshot copy prior to restoring it to the Database instance.
upvoted 1 times

  jdr75 5 months, 4 weeks ago


BUT you can't restore an encrypted snapshot to an existing DB instance, only to a NEW one. The procedure described this way:
"(...) you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."

refers to creating a NEW, encrypted DB instance, never restoring into an existing one.
The RDS engine treats restoring from an encrypted snapshot as creating an encrypted NEW database.
upvoted 2 times

  TungPham 6 months, 3 weeks ago


Selected Answer: C
A does not cover data created in the future.
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created.
C handles this; see the pattern in the linked guide, which goes from a source architecture (an unencrypted RDS DB instance) to a target architecture (an encrypted RDS DB instance):

The destination RDS DB instance is created by restoring the DB snapshot copy of the source RDS DB instance.

An AWS KMS key is used for encryption while restoring the snapshot.

An AWS DMS replication task is used to migrate the data.

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 1 times

  jaswantn 6 months, 1 week ago


Option A seems correct.
With option A we already have DB snapshots; just encrypt the latest available snapshot. Why copy the snapshots once again,
as option C suggests?
upvoted 1 times
  jkmaws 7 months, 2 weeks ago
A
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption
to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can
then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance. If your project allows for
downtime (at least for write transactions) during this activity, this is all you need to do. When the new, encrypted copy of the DB instance
becomes available, you can point your applications to the new database.
upvoted 1 times

  CaoMengde09 7 months, 4 weeks ago


It's A for the following reasons :
--> To restore an Encrypted DB Instance from an encrypted snapshot we'll need to replace the old one - as we cannot enable encryption on
an existing DB Instance
--> We have both Snap/Db Instance encrypted moving forward since all the daily Backups on an already encrypted DB Instance would be
encrypted
upvoted 1 times

  sassy2023 8 months, 1 week ago


Selected Answer: C
C is right
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption
to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can
then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance.

Tools used to enable encryption:

AWS KMS key for encryption – When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed
key for Amazon RDS to encrypt your DB instance. If you don't specify the key identifier for a customer managed key, Amazon RDS uses the
AWS managed key for your new DB instance. Amazon RDS creates an AWS managed key for Amazon RDS for your AWS account.

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 2 times

  bullrem 8 months, 1 week ago


The correct answer is C,

Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS)
Restore encrypted snapshot to an existing DB instance.
This is the correct approach as it allows you to encrypt the existing snapshots and the existing DB instance using AWS KMS. This way, you
can ensure that all data stored in the DB instance and the snapshots are encrypted at rest, providing an additional layer of security.
upvoted 1 times

  jdr75 5 months, 4 weeks ago


BUT you can't restore an encrypted snapshot to an existing DB instance, only to a NEW one. The procedure described this way:
"(...) you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."

refers to creating a NEW, encrypted DB instance, never restoring into an existing one.
The RDS engine treats restoring from an encrypted snapshot as creating an encrypted NEW database.
upvoted 1 times

  remand 8 months, 2 weeks ago


Selected Answer: D
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS
KMS) managed keys (SSE-KMS).

This option ensures that the database snapshots are encrypted at rest by copying them to an S3 bucket that is encrypted using SSE-KMS.
This option also provides the flexibility to restore the snapshots to a new RDS DB instance in the future, which will also be encrypted by
default.
upvoted 1 times

  goodmail 8 months, 2 weeks ago


Selected Answer: A
If C means encrypting while taking the snapshot, then it is incorrect: you cannot make an encrypted snapshot directly from an unencrypted
RDS instance. But it would be correct if it means enabling KMS encryption when restoring the DB instance. The wording is ambiguous.
upvoted 1 times

  imisioluwa 8 months, 3 weeks ago


The correct answer is A. Check this link " https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-
rds-for-postgresql-db-instance.html "
" However, you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an
encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your
original DB instance".
upvoted 1 times
Question #122 Topic 1

A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?

A. Use multi-factor authentication (MFA) to protect the encryption keys.

B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys.

C. Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.

D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.

Correct Answer: B

Community vote distribution


B (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
If you are a developer who needs to digitally sign or verify data using asymmetric keys, you should use the service to create and manage
the private keys you’ll need. If you’re looking for a scalable key management infrastructure to support your developers and their growing
number of applications, you should use it to reduce your licensing costs and operational burden...
https://aws.amazon.com/kms/faqs/#:~:text=If%20you%20are%20a%20developer%20who%20needs%20to%20digitally,a%20broad%20set
%20of%20industry%20and%20regional%20compliance%20regimes.
upvoted 17 times

  ocbn3wby 10 months, 1 week ago


Most documented answers. Thank you, 123jhl0.
upvoted 3 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: B
The main reasons are:

AWS KMS handles the encryption key management, rotation, and auditing. This removes the undifferentiated heavy lifting for developers.
KMS integrates natively with many AWS services like S3, EBS, RDS for encryption. This makes it easy to encrypt data.
KMS scales automatically as key usage increases. Developers don't have to worry about provisioning key infrastructure.
Fine-grained access controls are available via IAM policies and grants. KMS is secure by default.
Features like envelope encryption make compliance easier for regulated workloads.
AWS handles the hardware security modules (HSMs) for cryptographic key storage
upvoted 2 times
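As a small illustration of why KMS removes the heavy lifting for developers, here is a minimal boto3 (Python) envelope-encryption sketch. The key alias is a placeholder assumption.

```python
# Minimal boto3 sketch of envelope encryption with AWS KMS (assumed key alias):
# KMS issues a data key; the plaintext key encrypts the data locally and is then
# discarded, while the encrypted copy of the key is stored next to the ciphertext.
import boto3

kms = boto3.client("kms")

# Ask KMS for a one-time data key protected by a customer managed key.
data_key = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]        # use locally, never persist
encrypted_key = data_key["CiphertextBlob"]   # store alongside the encrypted data

# ... encrypt the payload locally with plaintext_key (e.g. via the AWS Encryption
# SDK or a library such as cryptography), then drop plaintext_key from memory.

# Later, recover the plaintext data key by asking KMS to decrypt the stored copy.
restored_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
```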

  cookieMr 3 months, 1 week ago


Selected Answer: B
By utilizing AWS KMS, the company can offload the operational responsibilities of key management, including key generation, rotation,
and protection. AWS KMS provides a scalable and secure infrastructure for managing encryption keys, allowing developers to easily
integrate encryption into their applications without the need to manage the underlying key infrastructure.

Option A (MFA), option C (ACM), and option D (IAM policy) are not directly related to reducing the operational burden of key management.
While these options may provide additional security measures or access controls, they do not specifically address the scalability and
management aspects of a key management infrastructure. AWS KMS is designed to simplify the key management process and is the most
suitable option for reducing the operational burden in this scenario.
upvoted 2 times

  cheese929 5 months ago


Selected Answer: B
B is correct.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
The correct answer is Option B. To reduce the operational burden, the solutions architect should use AWS Key Management Service (AWS
KMS) to protect the encryption keys.

AWS KMS is a fully managed service that makes it easy to create and manage encryption keys. It allows developers to easily encrypt and
decrypt data in their applications, and it automatically handles the underlying key management tasks, such as key generation, key
rotation, and key deletion. This can help to reduce the operational burden associated with key management.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: B
If you are responsible for securing your data across AWS services, you should use it to centrally manage the encryption keys that control
access to your data. If you are a developer who needs to encrypt data in your applications, you should use the AWS Encryption SDK with
AWS KMS to easily generate, use and protect symmetric encryption keys in your code.
upvoted 2 times
Question #123 Topic 1

A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is on each
instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute
capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application's performance?

A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.

B. Create an Amazon S3 bucket Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL
termination.

C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the
existing EC2 instances.

D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the
SSL certificate from ACM.

Correct Answer: D

Community vote distribution


D (94%) 6%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
This issue is solved by SSL offloading, i.e. by moving the SSL termination task to the ALB.
https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/
upvoted 14 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: D
The correct answer is D. To increase the application's performance, the solutions architect should import the SSL certificate into AWS
Certificate Manager (ACM) and create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.

An Application Load Balancer (ALB) can offload the SSL termination process from the EC2 instances, which can help to increase the
compute capacity available for the web application. By creating an ALB with an HTTPS listener and using the SSL certificate from ACM, the
ALB can handle the SSL termination process, leaving the EC2 instances free to focus on running the web application.
upvoted 8 times
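To make option D concrete, here is a minimal boto3 (Python) sketch of importing the company's existing certificate into ACM and terminating TLS on an ALB HTTPS listener. The file paths and ARNs are placeholder assumptions.

```python
# Minimal boto3 sketch (assumed ARNs/paths) of option D: import the company's own
# certificate into ACM, then terminate TLS on an ALB HTTPS listener.
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# 1. Import the existing certificate and private key into ACM.
with open("company-cert.pem", "rb") as cert, open("company-key.pem", "rb") as key:
    imported = acm.import_certificate(Certificate=cert.read(), PrivateKey=key.read())
cert_arn = imported["CertificateArn"]

# 2. Create an HTTPS listener on the ALB that uses the imported certificate and
#    forwards decrypted traffic to the EC2 target group (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/def",
    }],
)
```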

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: D
The key reasons are:

Using an Application Load Balancer with an HTTPS listener allows SSL termination to happen at the load balancer layer.
The EC2 instances behind the load balancer receive only unencrypted traffic, reducing load on them.
Importing the custom SSL certificate into ACM allows the ALB to use it for HTTPS listeners.
This removes the need to install and manage SSL certificates on each EC2 instance.
ALB handles the SSL overhead and scales automatically. The EC2 fleet focuses on app logic.
Options A, B, C don't offload SSL overhead from the EC2 instances themselves.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
By using ACM to manage the SSL certificate and configuring an ALB with HTTPS listener, the SSL termination will be handled by the load
balancer instead of the web servers. This offloading of SSL processing to the ALB reduces the compute capacity burden on the web
servers and improves their performance by allowing them to focus on serving the dynamic web application.

Option A suggests creating a new SSL certificate using ACM, but it does not address the SSL termination offloading and load balancing
capabilities provided by an ALB.

Option B suggests migrating the SSL certificate to an S3 bucket, but this approach does not provide the necessary SSL termination and
load balancing functionalities.

Option C suggests creating another EC2 instance as a proxy server, but this adds unnecessary complexity and management overhead
without leveraging the benefits of ALB's built-in load balancing and SSL termination capabilities.

Therefore, option D is the most suitable choice to increase the application's performance in this scenario.
upvoted 1 times
  dejung 7 months, 3 weeks ago
Selected Answer: A
Why is A wrong?
upvoted 2 times

  Yadav_Sanjay 5 months ago


Company uses its own SSL certificate. Option A says.. Create a SSL certificate in ACM
upvoted 2 times

  remand 8 months, 2 weeks ago


Selected Answer: D
SSL termination is the process of ending an SSL/TLS connection. This is typically done by a device, such as a load balancer or a reverse
proxy, that is positioned in front of one or more web servers. The device decrypts incoming SSL/TLS traffic and then forwards the
unencrypted request to the web server. This allows the web server to process the request without the overhead of decrypting and
encrypting the traffic. The device then re-encrypts the response from the web server and sends it back to the client. This allows the device
to offload the SSL/TLS processing from the web servers and also allows for features such as SSL offloading, SSL bridging, and SSL
acceleration.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D to offload the SSL encryption workload
upvoted 1 times

  Aamee 9 months, 4 weeks ago


Selected Answer: D
Due to this statement particularly: "The company has its own SSL certificate" as it's not created from AWS ACM itself.
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


D is correct
upvoted 1 times

  Six_Fingered_Jose 11 months, 1 week ago


Selected Answer: D
agree with D
upvoted 1 times
Question #124 Topic 1

A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be
started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has
asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?

A. Implement EC2 Spot Instances.

B. Purchase EC2 Reserved Instances.

C. Implement EC2 On-Demand Instances.

D. Implement the processing on AWS Lambda.

Correct Answer: A

Community vote distribution


A (100%)

  Kapello10 Highly Voted  10 months, 1 week ago


Selected Answer: A
Can't be implemented on Lambda because the Lambda timeout is 15 minutes and the job takes 60 minutes to complete.

Answer >> A
upvoted 11 times

  Evangelia Highly Voted  11 months, 2 weeks ago


spot instances
upvoted 5 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: A
The key reasons are:

Spot can provide significant cost savings (up to 90%) compared to On-Demand.
Since the job is stateless and can be stopped/restarted anytime, the intermittent availability of Spot is not an issue.
Spot supports the same instance types as On-Demand, so optimal instance types can be chosen.
For a 60+ minute batch job, the chance of Spot interruption is low. But if it happens, the job can just be restarted.
Reserved Instances don't offer any advantage for a highly dynamic job like this.
Lambda is not a good fit given the long runtime requirement.
upvoted 3 times
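For illustration, here is a minimal boto3 (Python) sketch of requesting the batch workers as Spot Instances. The AMI ID and instance type are placeholder assumptions; in practice an Auto Scaling group with a Spot-based mixed instances policy would usually handle replacement of interrupted capacity.

```python
# Minimal boto3 sketch (assumed AMI/type) of option A: launching the batch workers
# as Spot Instances that simply terminate on interruption and can be relaunched.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder batch-worker AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=10,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```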

  cookieMr 3 months, 1 week ago


Selected Answer: A
Spot Instances provide significant cost savings for flexible start and stop batch jobs.
Purchasing Reserved Instances (B) is better for stable workloads, not dynamic ones.
On-Demand Instances (C) are costly and lack potential cost savings like Spot Instances.
AWS Lambda (D) is not suitable for long-running batch jobs.
upvoted 1 times

  beginnercloud 3 months, 3 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  alexiscloud 6 months ago


Answer A:
typically takes upwards of 60 minutes total to complete.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
The correct answer is Option A. To design a scalable and cost-effective solution for the batch processing job, the solutions architect should
recommend implementing EC2 Spot Instances.

EC2 Spot Instances allow users to bid on spare Amazon EC2 computing capacity and can be a cost-effective solution for stateless,
interruptible workloads that can be started and stopped at any time. Since the batch processing job is stateless, can be started and
stopped at any time, and typically takes upwards of 60 minutes to complete, EC2 Spot Instances would be a good fit for this workload.
upvoted 2 times
  k1kavi1 9 months, 1 week ago
Selected Answer: A
Spot Instances should be good enough and cost effective because the job can be started and stopped at any given time with no negative
impact.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A is correct
upvoted 1 times

  SimonPark 11 months, 1 week ago


Selected Answer: A
A is the answer
upvoted 1 times
Question #125 Topic 1

A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The
database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The
EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly
available.
Which combination of configuration options will meet these requirements? (Choose two.)

A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.

B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the
private subnets.

C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance
in private subnets.

D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnets.

Correct Answer: CE

Community vote distribution


AD (52%) A (22%) AB (22%)

  mabotega Highly Voted  10 months, 3 weeks ago


Selected Answer: AD
Answer A for: The EC2 instances and the RDS DB instance should not be exposed to the public internet. Answer D for: The EC2 instances
require internet access to complete payment processing of orders through a third-party web service. Answer A for: The application must
be highly available.
upvoted 23 times

  oguzbeliren 2 months ago


D allows public internet access which is not desired. The answer is not d.
The most accurate answers are AB
upvoted 1 times

  smd_ 5 months ago


why not option B.The EC2 instances can be launched in private subnets across two Availability Zones, and the Application Load Balancer
can be deployed in the private subnets. NAT gateways can be configured in each private subnet to provide internet access for the EC2
instances to communicate with the third-party web service.
upvoted 1 times

  ruqui 4 months, 1 week ago


B option wrong! NAT gateways must be created in public subnets!!
upvoted 2 times

  x33 3 weeks, 6 days ago


I think you are wrong on this. In fact, NAT gateways are typically created in private subnets.
upvoted 1 times

  AbhiJo 10 months, 2 weeks ago


We will require 2 private subnets, D does mention 1 subnet
upvoted 3 times

  HayLLlHuK Highly Voted  9 months ago


A and E!
Application has to be highly available while the instance and database should not be exposed to the public internet, but the instances still
requires access to the internet. NAT gateway has to be deployed in public subnets in this case while instances and database remain in
private subnets in the VPC, therefore answer is (A) and (E).
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

If the instances did not require access to the internet, then the answer could have been
(B) to use a private NAT gateway and keep it in the private subnets to communicate only to the VPCs.

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
upvoted 15 times
  darn 5 months, 2 weeks ago
Your link is right but your voting is wrong; it should be AD, although that still doesn't explain why 2 NAT gateways are needed.
upvoted 3 times
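On the "why 2 NAT gateways" point: NAT gateways are zonal, so for high availability you place one in the public subnet of each AZ and point each AZ's private route table at its local NAT gateway. Below is a minimal boto3 (Python) sketch for one AZ, with placeholder subnet and route table IDs.

```python
# Minimal boto3 sketch (assumed subnet/route-table IDs) of the A + E layout:
# a NAT gateway lives in each public subnet, and each private subnet's route
# table sends internet-bound traffic (the payment calls) through its NAT gateway.
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a PUBLIC subnet (AZ a).
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",    # placeholder public subnet ID
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Default route for the PRIVATE subnet in the same AZ points at the NAT gateway,
# so the EC2 instances can reach the third-party payment service but accept no
# inbound connections from the internet. Repeat for the second AZ.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",   # placeholder private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```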

  tungnguyenduy Most Recent  1 month, 3 weeks ago


Selected Answer: AB
AB. should not be exposed to the public internet => private subnet
upvoted 1 times

  ayrus1992 2 months, 2 weeks ago


Selected Answer: C
CE
Highly Available and Secure
upvoted 1 times

  bahaa_shaker 1 month ago


Read the question again; it asks for the EC2 instances and RDS to be in private subnets.
Do not mislead others if you are not sure of your answer.
C is the wrong answer, 1000000%.
It's A and D.
upvoted 1 times

  omerap12 3 months, 1 week ago


Selected Answer: AD
Answer A for: The EC2 instances and the RDS DB instance should not be exposed to the public internet. Answer D for: The EC2 instances
require internet access to complete payment processing of orders through a third-party web service. Answer A for: The application must
be highly available.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AD
Option D configures a VPC with a public subnet for the web tier, allowing customers to access the website. The private subnet provides a
secure environment for the EC2 instances and the RDS DB instance. NAT gateways are used to provide internet access to the EC2 instances
in the private subnet for payment processing.

Option A uses an Auto Scaling group to launch the EC2 instances in private subnets, ensuring they are not directly accessible from the
public internet. The RDS Multi-AZ DB instance is also placed in private subnets, maintaining security.
upvoted 1 times

  beginnercloud 3 months, 2 weeks ago


Selected Answer: AD
Second D so like E.
upvoted 1 times

  fishy_resolver 3 months, 3 weeks ago


Selected Answer: CD
I had it as AD, but for me the question asked for high availability, and A doesn't specify across availability zones. So, A is more secure but
not highly available. C is less secure but highly available
upvoted 1 times

  antropaws 4 months ago


Selected Answer: AD
AD because 2 NAT gateways in 2 public subnets in 2 AZs.
upvoted 2 times

  bgsanata 4 months, 2 weeks ago


Selected Answer: CD
C - provides the required HA
E - best answer for the access requirements. The NAT gateway is required for the EC2 instances to reach the third-party web service, and it
does not expose them to inbound connections from the internet.
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: AD
A & the 2nd D. You have to put each NAT gateway in each public subnet
upvoted 2 times

  cheese929 5 months ago


Selected Answer: AD
A and the second D are the correct choices. ALB in the public subnet for access from the internet. NAT gateways and the EC2s in the
private subnet over 2 AZs to meet the requirements.
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an
Application Load Balancer in the public subnets.
upvoted 1 times
  kruasan 5 months, 1 week ago
Selected Answer: AD
AE
Option B is not a valid solution as it only includes private subnets, and both the NAT gateway and Application Load Balancer require public
subnets.
upvoted 2 times

  kruasan 5 months, 1 week ago


Selected Answer: AB
In option B, an Application Load Balancer (ALB) is deployed in the private subnets, and two NAT gateways are configured across two
Availability Zones to provide internet access to the instances in the private subnets. This allows the web tier to be accessed publicly
through the ALB while still keeping the instances in private subnets. The NAT gateways act as a proxy between the instances and the
internet, allowing only necessary traffic to pass through while blocking all other inbound traffic. This configuration provides additional
security to the application by keeping the instances in private subnets and minimizing the exposure of the infrastructure to the public
internet
upvoted 2 times

  darn 5 months, 2 weeks ago


Selected Answer: AB
private subnets, meaning C D E are not
upvoted 2 times

  darn 5 months, 2 weeks ago


my bad, only RDS are private
upvoted 1 times

  Manjunathkb 5 months, 2 weeks ago


None of the answers guarantees internet connectivity on its own. A NAT gateway alone doesn't provide internet access; it needs an internet gateway.
Also, once you have the NAT gateway and IGW, you still need to edit the route tables before you get internet access.
upvoted 2 times

  alexiscloud 6 months ago


Answer AE:
upvoted 2 times
Question #126 Topic 1

A solutions architect needs to implement a solution to reduce a company's storage costs. All the company's data is in the Amazon S3 Standard
storage class. The company must keep all data for at least 25 years. Data from the most recent 2 years must be highly available and immediately
retrievable.
Which solution will meet these requirements?

A. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive immediately.

B. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.

C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.

D. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately and to S3 Glacier Deep
Archive after 2 years.

Correct Answer: B

Community vote distribution


B (76%) C (18%) 5%

  rjam Highly Voted  10 months, 2 weeks ago


Selected Answer: B
Why not C? Because with Intelligent-Tiering the objects are moved between tiers automatically.
The question says the data from the most recent 2 years must be highly available and immediately retrievable. With Intelligent-Tiering, if you activate the archiving option (as option C specifies), objects are moved to the archive tiers (Archive Access through Deep Archive Access) after 90 to 730 days. Those archive tiers perform like S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, which means files could not be retrieved immediately within the first 2 years.

We have a hard requirement that the data must be immediately retrievable for 2 years, which cannot be guaranteed with Intelligent-Tiering. So B is the correct option, IMHO.

For that reason, keep the data in S3 Standard and configure a lifecycle rule to move it to S3 Glacier Deep Archive after 2 years.
upvoted 10 times

  Abdou1604 1 month, 2 weeks ago


But S3 Intelligent-Tiering will move the object to the Infrequent Access tier, which is a single-AZ tier, and then the HA requirement
will not be respected.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Hmm... You can enable Intelligent-Tiering and take advantage of the Infrequent Access tier, thus reducing costs. To avoid moving
objects to the deep archive tier before the two years are up, it would be enough to enable ONLY the "Deep Archive Access tier" check box and set
the days to 730 (two years, which is curiously the maximum value), and keep the "Archive Access tier" check box disabled so that
Intelligent-Tiering does not move objects to a non-instant-retrieval tier. That would work; of course this specific configuration is not mentioned in
the question, which leaves some doubt about which option is correct.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Just to clarify, my previous comment is about how answer B) might be correct and the MOST cheapest option under the correct
configuration.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Sorry, I meant answer C) might be correct
upvoted 1 times

  TelaO Highly Voted  10 months, 2 weeks ago


Selected Answer: B
B is the only right answer. C does not indicate archiving after 2 years. If it did specify 2 years, then C would also be an option.
upvoted 7 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: B
I would not opt for C simply because S3IT was specifically designed for scenarios where the access patterns are unknown.
This scenario has clearly known access patterns making option B the best.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A is incorrect because immediately transitioning objects to S3 Glacier Deep Archive would not fulfill the requirement of keeping the
most recent 2 years of data highly available and immediately retrievable.

Option C is also incorrect because using S3 Intelligent-Tiering with archiving option would not meet the requirement of immediately
retrievable data for the most recent 2 years.

Option D is not the best choice because transitioning objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) and then to S3 Glacier
Deep Archive would not satisfy the requirement of immediately retrievable data for the most recent 2 years.

Option B is the correct solution. By setting up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years, the
company can keep all data for at least 25 years while ensuring that data from the most recent 2 years remains highly available and
immediately retrievable in the Amazon S3 Standard storage class. This solution optimizes storage costs by leveraging the Glacier Deep
Archive for long-term storage.
upvoted 1 times

  kambarami 3 weeks ago


This makes sense; the question is a bit tricky. I now understand that all the data is already in S3 Standard, meaning the most recent
data remains highly available and immediately retrievable.
upvoted 1 times
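To make option B concrete, here is a minimal boto3 (Python) sketch of the lifecycle rule; the bucket name is a placeholder assumption.

```python
# Minimal boto3 sketch of option B (assumed bucket name): a single lifecycle rule
# that keeps objects in S3 Standard for 2 years (730 days) and then moves them to
# S3 Glacier Deep Archive for the remainder of the 25-year retention period.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-records-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-2-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to every object in the bucket
                "Transitions": [
                    {"Days": 730, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```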

  Yadav_Sanjay 3 months, 2 weeks ago


Why not D
upvoted 2 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: B
B is the only one possible.
upvoted 1 times

  rushlav 5 months, 2 weeks ago


C would not work: within Intelligent-Tiering the archive tiers are called Archive Access and Deep Archive Access, so since option C
mentions Glacier, I think B is the correct answer.
upvoted 1 times

  CaoMengde09 7 months, 4 weeks ago


It's pretty straightforward.

S3 Standard covers the high availability / immediate retrieval requirement for 2 years. S3 Intelligent-Tiering would just incur additional monitoring cost
when the company already knows it requires immediate retrieval at any moment without any risk to availability. So a capital B.
upvoted 2 times

  G3 8 months ago
C appears to be appropriate - good case for intelligent tiering
upvoted 1 times

  Robrobtutu 5 months, 2 weeks ago


The option just says Intelligent-Tiering; it doesn't specify when it would transition the data to Deep Archive, so how do we know it
would do it at the correct time? It has to be A.
upvoted 1 times

  Sdraju 7 months ago


Intelligent tiering appears to be best suited for unknown usage pattern.. but with a known usage pattern Life cycle policy may be
optimal.
upvoted 1 times

  DaveNL 8 months, 3 weeks ago


Selected Answer: C
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.

S3 Intelligent Tiering supports changing the default archival time to 730 days (2 years) from the default 90 or 180 days. Other levels of
tiering are instant access tiers.
upvoted 2 times
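For reference, the configurable archive tier mentioned above is set per bucket with an Intelligent-Tiering configuration. A minimal boto3 sketch, assuming a hypothetical bucket name and configuration ID:

import boto3

s3 = boto3.client("s3")

# Opt objects that have not been accessed for 730 days into the
# Deep Archive Access tier of S3 Intelligent-Tiering.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-reports-bucket",            # hypothetical
    Id="deep-archive-after-2-years",            # hypothetical
    IntelligentTieringConfiguration={
        "Id": "deep-archive-after-2-years",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 730, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
        ],
    },
)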

  Zerotn3 9 months ago


Selected Answer: D
Option D is the correct solution for this scenario.

S3 Lifecycle policies allow you to automatically transition objects to different storage classes based on the age of the object or other
specific criteria. In this case, the company needs to keep all data for at least 25 years, and the data from the most recent 2 years must be
highly available and immediately retrievable.
upvoted 2 times

  lfrad 8 months, 3 weeks ago


If the option for D was Infrequent Access it would be good, but here it is One Zone-IA which is not highly available. Then it must be B
upvoted 5 times
  Zerotn3 9 months ago
Option A is not a good solution because it would transition all objects to S3 Glacier Deep Archive immediately, making the data from
the most recent 2 years not immediately retrievable. Option B is not a good solution because it would not make the data from the most
recent 2 years immediately retrievable.

Option C is not a good solution because S3 Intelligent-Tiering is designed to automatically move objects between two storage classes
(Standard and Infrequent Access) based on object access patterns. It does not provide a way to transition objects to S3 Glacier Deep
Archive, which is required for long-term storage.

Option D is the correct solution because it would transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately,
making the data from the most recent 2 years immediately retrievable. After 2 years, the objects would be transitioned to S3 Glacier
Deep Archive for long-term storage. This solution meets the requirements of the company to keep all data for at least 25 years and
make the data from the most recent 2 years immediately retrievable.
upvoted 1 times

  Ello2023 8 months, 2 weeks ago


With B, the data is immediately retrievable and highly available, and using the lifecycle policy you can transition it to Deep Archive after the 2-year period.
upvoted 1 times

  hahahumble 8 months, 2 weeks ago


S3 One Zone-IA is not highly available compared with S3 standard
https://aws.amazon.com/about-aws/whats-new/2018/04/announcing-s3-one-zone-infrequent-access-a-new-amazon-s3-storage-
class/?nc1=h_ls
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: B
B looks correct
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  lapaki 9 months, 3 weeks ago


Selected Answer: B
B. Most correct
upvoted 2 times

  Cizzla7049 10 months, 1 week ago


Selected Answer: C
https://aws.amazon.com/blogs/aws/s3-intelligent-tiering-adds-archive-access-tiers/
upvoted 1 times

  JayBee65 9 months, 2 weeks ago


From your link: "We added S3 Intelligent-Tiering to Amazon S3 to solve the problem of using the right storage class and optimizing costs when access patterns are irregular." But the access patterns are not irregular, they are clearly stated in the question, so this is not required.
upvoted 3 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  Jtic 10 months, 3 weeks ago


Selected Answer: C
C - S3 Intelligent-Tiering
Customers saving on storage with S3 Intelligent-Tiering

S3 Intelligent-Tiering automatically stores objects in three access tiers: one tier optimized for frequent access, a lower-cost tier optimized
for infrequent access, and a very-low-cost tier optimized for rarely accessed data. For a small monthly object monitoring and automation
charge, S3 Intelligent-Tiering moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier for savings
of 40%; and after 90 days of no access, they’re

There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller than
128 KB are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be charged at the Frequent Access tier
rates and don’t incur the monitoring and automation charge
upvoted 1 times

  JayBee65 9 months, 2 weeks ago


"moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier..." This is not required, they should
remain where they are for 2 years.
upvoted 1 times
  JayBee65 9 months, 2 weeks ago
Once you have activated one or both of the archive access tiers, S3 Intelligent-Tiering will automatically move objects that haven’t
been accessed for 90 days to the Archive Access tier, ...Objects in the archive access tiers are retrieved in 3-5 hours!
Yet the requirements are "Data from the most recent 2 years must be highly available and immediately retrievable". Not C!
upvoted 1 times
Question #127 Topic 1

A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the
maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet
requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?

A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage

C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage

D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

Correct Answer: A

Community vote distribution


D (72%) A (28%)

  Sauran Highly Voted  11 months, 2 weeks ago


Selected Answer: D
The max instance store possible at this time is 30 TB of NVMe, which has higher I/O than EBS.

is4gen.8xlarge 4 x 7,500 GB (30 TB) NVMe SSD

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
upvoted 22 times

  michellemeloc 5 months ago


Update: i3en.metal and i3en.24xlarge = 8 x 7500 GB (60TB)
upvoted 2 times

  ishitamodi4 9 months, 2 weeks ago


For an instance store volume used as the root volume, the size varies by AMI, but the maximum size is 10 GB.
upvoted 1 times

  JayBee65 9 months, 2 weeks ago


This link shows a max capacity of 30TB, so what is the problem?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
upvoted 1 times

  JayBee65 9 months, 2 weeks ago


Only the following instance types support an instance store volume as the root device: C3, D2, G2, I2, M3, and R3, and we're using
an I3, so an instance store volume is irrelevant.
upvoted 2 times

  antropaws 4 months ago


THE CORRECT ANSWER IS A.

The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 1 times

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: D
Agree with D. Since it is only used for video processing, instance store should be the fastest here (being ephemeral shouldn't be an issue because they move the data to S3 after processing).
upvoted 7 times

  BrijMohan08 Most Recent  2 weeks, 1 day ago


Selected Answer: D
10 TB is good enough for EC2 instance store; the 10 TB is required only for processing, i.e., temporary storage.

For durable storage, S3 is a perfect fit in this scenario.


upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The best set of services to meet the storage requirements are:

D) Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival
storage

The rationale is:

EC2 instance store provides the highest performance storage for I/O intensive video processing.
S3 provides durable, scalable object storage for the media content library.
Glacier provides the lowest cost archival storage for media no longer in active use.
EBS volumes don't offer the IOPS needed for video processing.
EFS file storage isn't as durable or cost effective for large media libraries as S3.
By matching each storage need with the optimal storage service - EC2, S3, Glacier - this combination meets the performance, durability,
and cost requirements for each storage use case.
upvoted 2 times

  JummyFash 1 month, 3 weeks ago


Option B suggests using Amazon EFS for durable data storage. While Amazon EFS is a managed file storage service, it may not provide the
same level of performance and cost-effectiveness as Amazon EBS for maximum I/O performance.

Options C and D suggest using Amazon EC2 instance store, which is ephemeral storage that is directly attached to an EC2 instance. While
it can provide high I/O performance, it is not as durable as Amazon EBS or Amazon S3 and does not meet the durability requirements for
long-term data storage.

Therefore, option A is the most suitable recommendation to meet the specified storage requirements for the media company.
upvoted 1 times

  vikashverma93 2 months, 2 weeks ago


A, because we need at least 10 TB of persistent storage with maximum I/O. Instance store is not persistent, which is why it is out of the picture; otherwise the answer would be D.
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


D
I will go for D as here we need max I/O:
Amazon EC2 Instance Store is suited for temporary storage needs where high performance and low latency are critical. Amazon EBS, on
the other hand, is ideal for long-term data storage with better durability and accessibility features.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Option D is the recommended solution. Amazon EC2 instance store provides maximum performance for video processing, offering local,
high-speed storage that is directly attached to the EC2 instances. Amazon S3 is suitable for durable data storage, providing the required
capacity of 300 TB for storing media content. Amazon S3 Glacier serves as a cost-effective solution for archival storage, meeting the
requirement of 900 TB of archival media storage.

Option A suggests using Amazon EBS for maximum performance, but it may not deliver the same level of performance as instance store
for I/O-intensive workloads.

Option B recommends Amazon EFS for durable data storage, but it may not provide the required performance for video processing.

Option C suggests using Amazon EC2 instance store for maximum performance and Amazon EFS for durable data storage, but instance
store may not offer the durability and scalability required for the storage needs of the media company.
upvoted 2 times

  antropaws 4 months ago


Selected Answer: A
THE CORRECT ANSWER IS A.

The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 2 times

  manuelemg2007 2 months, 1 week ago


The instance i4g has capacity 15TB

https://aws.amazon.com/es/ec2/instance-types/
upvoted 1 times

  mell1222 5 months ago


Selected Answer: D
In terms of speed, instance store can generally offer higher I/O performance and lower latency than EBS, due to the fact that it is
physically attached to the host. However, the performance of EBS can be optimized based on the specific use case, by selecting the
appropriate volume type, size, and configuration.
upvoted 2 times
  Ankit_EC_ran 5 months, 1 week ago
Selected Answer: D
Instance store gives the best I/O performance.
upvoted 1 times

  C_M_M 5 months, 2 weeks ago


The keyword here is "maximum possible I/O performance".
EBS and EC2 instance store are both good options, but EC2 instance store beats EBS in terms of I/O performance, so "maximum possible" clearly means EC2 instance store.
There are some concerns about the 10 TB needed; however, storage-optimized EC2 instance stores can go up to 24 x 13,980 GB (about 335 TB).
So option D is the winner here.
upvoted 2 times

  channn 5 months, 3 weeks ago


Selected Answer: D
D of course
upvoted 1 times

  jdr75 5 months, 4 weeks ago


Selected Answer: D
Instance store is block storage directly attached to the EC2 instance (with options for acceleration via the fast NVMe (Non-Volatile Memory Express) interface) and is FASTER than EBS.
There are also instance types that reach a total of 30 TB.
upvoted 1 times

  TheAbsoluteTruth 6 months ago


Selected Answer: A
Option A is the most suitable for meeting the media company's requirements. Amazon EBS offers the maximum possible I/O performance and is a good fit for video processing, while Amazon S3 is the durable data storage solution that can handle 300 TB of media content. Amazon S3 Glacier is a suitable option for storing archival media that is no longer in use, and its cost is lower than that of Amazon S3. Therefore, option A provides the most suitable storage solution for the media company, combining high performance, durability, and cost effectiveness.
upvoted 2 times

  jaswantn 6 months ago


Instance-store-backed instances can't be upgraded, meaning volumes can be added only at launch time. If the instance is accidentally terminated or stopped, all the data is lost. To prevent that to some extent, we need to back up data from instance store volumes to persistent storage on a regular basis. So if we are spending more money on instance store volumes and still have the additional responsibility of backing them up regularly, it is not worth it. We can use an EBS volume type that provides higher I/O performance.
upvoted 1 times

  Erbug 6 months, 2 weeks ago


When you want to compare S3 storage and EBS as durable storage types by maximum IOPS, you will see that S3 is better than EBS based on storage-optimized values.
For example: whereas EBS has a max of 40,000 IOPS in a storage-optimized setup, EC2 provides a better option with a max of 2,146,664 random read and 1,073,336 write IOPS.
To get further information, you can visit the below links:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html#compute-ssd-perf
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html

So my answer is D
upvoted 2 times
Question #128 Topic 1

A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the
underlying infrastructure. The company needs a solution that minimizes cost and operational overhead.
What should a solutions architect do to meet these requirements?

A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.

B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.

C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.

D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.

Correct Answer: A

Community vote distribution


B (76%) A (21%)

  bgsanata Highly Voted  4 months, 2 weeks ago


Selected Answer: A
The requirement is "minimizes cost and operational overhead".
A is a better option than B, as EKS adds additional cost and operational overhead.
upvoted 8 times

  MutiverseAgent 2 months, 2 weeks ago


In my opinion, option A) seems reasonable at first, because setting up AWS EKS might be seen as operational overhead compared with running the containers inside EC2 using Docker, just as we do on our own machines. However, installing Docker on multiple EC2 instances and manually managing the Docker containers and images will end up in chaos, so, in conclusion, the operational cost of setting up AWS EKS is worth the effort.

  Lalo 4 months ago


USING SPOT INSTANCES WITH EKS
https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks.html
upvoted 1 times

  ruqui 4 months, 1 week ago


option A is the worst option in terms of operational overhead ... you have to install your own kubernetes cluster!!! B is a more suitable
option
upvoted 3 times

  MutiverseAgent 2 months, 2 weeks ago


You do not necessarily need to install Kubernetes; for plain containers you can run them using Docker, just as you do on your own machine.
upvoted 1 times

  GalileoEC2 Highly Voted  6 months, 4 weeks ago


Answer is A:
Amazon ECS: ECS itself is free, you pay only for Amazon EC2 resources you use.
Amazon EKS: The EKS management layer incurs an additional cost of $144 per month per cluster.
Advantages of Amazon ECS include: Spot instances: Because containers are immutable, you can run many workloads using Amazon EC2
Spot Instances (which can be shut down with no advance notice) and save 90% on on-demand instance costs.
upvoted 6 times

  Modulopi Most Recent  3 days, 10 hours ago


Selected Answer: A
Answer A.
upvoted 1 times

  TariqKipkemei 1 month ago


Selected Answer: B
Minimize costs = Spot instances
Minimize operational overhead = Amazon EKS is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS and
on-premises.

https://aws.amazon.com/pm/eks/
I would not try to overthink this.
upvoted 1 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: B
The key reasons are:

Using Spot Instances reduces EC2 costs significantly compared to On-Demand.


EKS managed node groups simplify running and scaling containerized applications vs self-managed Kubernetes.
Since the applications are stateless and fault-tolerant, intermittent Spot interruptions are acceptable.
The combination of Spot + EKS provides the most cost-efficient infrastructure with minimal operational overhead.
Options A, C and D either use On-Demand instances or self-managed infrastructure, which increases costs and overhead.
upvoted 2 times
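As a concrete illustration of the Spot plus EKS combination argued for above, a minimal boto3 sketch of a managed node group backed by Spot capacity; the cluster name, node group name, instance types, subnet IDs, and role ARN are all hypothetical placeholders.

import boto3

eks = boto3.client("eks")

# Managed node group that runs on Spot capacity; EKS manages the
# underlying Auto Scaling group and drains nodes on Spot interruption.
eks.create_nodegroup(
    clusterName="example-cluster",                         # hypothetical
    nodegroupName="spot-workers",                          # hypothetical
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],   # diversify Spot pools
    scalingConfig={"minSize": 1, "maxSize": 6, "desiredSize": 2},
    subnets=["subnet-0abc", "subnet-0def"],                # hypothetical
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole", # hypothetical
)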

  aadityaravi8 3 months ago


To run the application at minimum cost, use Spot Instances, and to reduce operational overhead, run it on EKS.
Hence B should be the right answer.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option B is the recommended solution. Using Spot Instances within an Amazon EKS managed node group allows you to run containers in
a managed Kubernetes environment while taking advantage of the cost savings offered by Spot Instances. Spot Instances provide access
to spare EC2 capacity at significantly lower prices than On-Demand Instances. By utilizing Spot Instances in an EKS managed node group,
you can reduce costs while maintaining high availability for your stateless applications.

Option A suggests using Spot Instances in an EC2 Auto Scaling group, which is a valid approach. However, utilizing Amazon EKS provides a
more streamlined and managed environment for running containers.

Options C and D suggest using On-Demand Instances, which would provide stable capacity but may not be the most cost-effective
solution for minimizing costs, as On-Demand Instances typically have higher prices compared to Spot Instances.
upvoted 2 times

  Abrar2022 4 months, 1 week ago


There are no additional costs to use Amazon EKS managed node groups. You only pay for the AWS resources that you provision.
upvoted 2 times

  TheAbsoluteTruth 6 months ago


Selected Answer: B
Option B is the best way to meet the requirements of minimizing cost and operational overhead while running containers in the AWS Cloud. Amazon EKS is a highly scalable, highly available container orchestration service that takes care of automatically managing and scaling the underlying container nodes. Using Spot Instances in an Amazon EKS managed node group will help reduce costs compared with On-Demand Instances, since Spot Instances are EC2 instances available at significantly lower prices but can be interrupted with little notice. By taking advantage of unused EC2 capacity at a reduced price, the company can save money on infrastructure costs without compromising the fault tolerance or scalability of its containerized applications.
upvoted 2 times

  alexiscloud 6 months ago


B: Spot instances save cost
upvoted 1 times

  bgsanata 6 months, 3 weeks ago


Selected Answer: D
The answer should be D. A Spot Instance is not a good option at all. The question says "...can tolerate disruptions"; this doesn't mean the application can run at random time intervals.
upvoted 1 times

  Lalo 4 months ago


USING SPOT INSTANCES WITH EKS
https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks.html
upvoted 1 times

  Robrobtutu 5 months, 2 weeks ago


Spot instances are the correct option for this case.
upvoted 1 times

  Sdraju 7 months ago


Selected Answer: B
Spot instances for cost optimisation and Kubernetes for container management
upvoted 1 times

  Zerotn3 9 months ago


Selected Answer: B
Both A and B would work, but the requirements mention "operational overhead". EKS would allow the company to use a managed service to run and manage the containerized applications.
upvoted 4 times
  Buruguduystunstugudunstuy 9 months, 1 week ago
Selected Answer: B
The correct answer is B. To minimize cost and operational overhead, the solutions architect should use Spot Instances in an Amazon
Elastic Kubernetes Service (Amazon EKS) managed node group to run the application containers.

Amazon EKS is a fully managed service that makes it easy to run Kubernetes on AWS. By using a managed node group, the company can
take advantage of the operational benefits of Amazon EKS while minimizing the operational overhead of managing the Kubernetes
infrastructure. Spot Instances provide a cost-effective way to run stateless, fault-tolerant applications in containers, making them a good
fit for the company's requirements.
upvoted 5 times

  JayBee65 9 months, 2 weeks ago


Running your Kubernetes and containerized workloads on Amazon EC2 Spot Instances is a great way to save costs. ... AWS makes it easy
to run Kubernetes with Amazon Elastic Kubernetes Service (EKS) a managed Kubernetes service to run production-grade workloads on
AWS. To cost optimize these workloads, run them on Spot Instances. https://aws.amazon.com/blogs/compute/cost-optimization-and-
resilience-eks-with-spot-instances/
upvoted 5 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Qjb8m9h 9 months, 3 weeks ago


B. Use Spot Instances - Supports Disruption ( stop and start at anytime)
Elastic Kubernetes Service (Amazon EKS) managed node group - Supports containerized application.
upvoted 1 times
Question #129 Topic 1

A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts
connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning
is limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)

A. Migrate the PostgreSQL database to Amazon Aurora.

B. Migrate the web application to be hosted on Amazon EC2 instances.

C. Set up an Amazon CloudFront distribution for the web application content.

D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.

E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).

Correct Answer: AE

Community vote distribution


AE (95%) 5%

  ArielSchivo Highly Voted  10 months, 3 weeks ago


Selected Answer: AE
I would say A and E since Aurora and Fargate are serverless (less operational overhead).
upvoted 8 times

  baba365 2 weeks, 6 days ago


There’s a difference between Amazon Aurora and Amazon Aurora Serverless
upvoted 1 times

  TariqKipkemei Most Recent  1 month ago


Selected Answer: AE
Requirement is to reduce operational overhead,
Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region
replication.
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AE
The reasons are:

Migrating the database to Amazon Aurora provides a high performance, scalable PostgreSQL-compatible database with minimal
overhead.
Migrating the containerized web app to Fargate removes the need to provision and manage EC2 instances. Fargate auto-scales.
Together, Aurora and Fargate reduce operational overhead and complexity for the data and application tiers.
upvoted 1 times
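To illustrate the Fargate side of this answer, a minimal boto3 sketch of running the containerized web tier as an ECS service on Fargate; the cluster, service name, task definition, subnets, and security group are hypothetical placeholders.

import boto3

ecs = boto3.client("ecs")

# Run the existing container image as a Fargate service so there are
# no EC2 hosts to patch or capacity-plan.
ecs.create_service(
    cluster="example-cluster",              # hypothetical
    serviceName="web-app",                  # hypothetical
    taskDefinition="web-app:1",             # hypothetical task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc", "subnet-0def"],  # hypothetical
            "securityGroups": ["sg-0123"],              # hypothetical
            "assignPublicIp": "ENABLED",
        }
    },
)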

  cookieMr 3 months, 1 week ago


Selected Answer: AE
A is the correct answer because migrating the database to Amazon Aurora reduces operational overhead and offers scalability and
automated backups.

E is the correct answer because migrating the web application to AWS Fargate with Amazon ECS eliminates the need for infrastructure
management, simplifies deployment, and improves resource utilization.

B. Migrating the web application to Amazon EC2 instances would not directly address the operational overhead and capacity planning
concerns mentioned in the scenario.

C. Setting up an Amazon CloudFront distribution improves content delivery but does not directly address the operational overhead or
capacity planning limitations.

D. Configuring Amazon ElastiCache improves performance but does not directly address the operational overhead or capacity planning
challenges mentioned.

Therefore, the correct answers are A and E as they address the requirements, while the incorrect answers (B, C, D) do not provide the
desired solutions.
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: AE
Improve the application's infrastructure = Modernize Infrastructure = Least Operational Overhead = Serverless
upvoted 1 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: AE
A and E are the best options.
upvoted 1 times

  bgsanata 6 months, 3 weeks ago


Selected Answer: AE
A and E
upvoted 1 times

  rapatajones 8 months, 1 week ago


Selected Answer: AE
A and E.
upvoted 1 times

  goodmail 8 months, 2 weeks ago


One should note that Aurora is not serverless; Aurora Serverless and Aurora are two different Amazon services. I prefer C; however, the question does not mention any frontend requirements.
upvoted 1 times

  aba2s 9 months ago


Selected Answer: AE
Yes, go for A and E since these two resources are serverless.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: AE
The correct answers are A and E. To improve the application's infrastructure, the solutions architect should migrate the PostgreSQL
database to Amazon Aurora and migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon
ECS).

Amazon Aurora is a fully managed, scalable, and highly available relational database service that is compatible with PostgreSQL. Migrating
the database to Amazon Aurora would reduce the operational overhead of maintaining the database infrastructure and allow the
company to focus on building and scaling the application.

AWS Fargate is a fully managed container orchestration service that enables users to run containers without the need to manage the
underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can improve
the scalability and efficiency of the web application and reduce the operational overhead of maintaining the underlying infrastructure.
upvoted 1 times

  techhb 9 months, 1 week ago


A and E are obvious choices.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AE
Option A and E
upvoted 1 times

  SilentMilli 9 months, 2 weeks ago


Selected Answer: AE
A and E
upvoted 1 times

  333666999 9 months, 3 weeks ago


Selected Answer: CE
C not A. and E
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


A and E
upvoted 1 times

  Nigma 10 months, 3 weeks ago


https://www.examtopics.com/discussions/amazon/view/46457-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Nigma 10 months, 3 weeks ago


A and E

Aurora and serverless


upvoted 1 times
Question #130 Topic 1

An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind
an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?

A. Use a simple scaling policy to dynamically scale the Auto Scaling group.

B. Use a target tracking policy to dynamically scale the Auto Scaling group.

C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.

D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.

Correct Answer: B

Community vote distribution


B (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
The correct answer is B. To maintain the desired performance across all instances in the Amazon EC2 Auto Scaling group, the solutions
architect should use a target tracking policy to dynamically scale the Auto Scaling group.

A target tracking policy allows the Auto Scaling group to automatically adjust the number of EC2 instances in the group based on a target
value for a metric. In this case, the target value for the CPU utilization metric could be set to 40% to maintain the desired performance of
the application. The Auto Scaling group would then automatically scale the number of instances up or down as needed to maintain the
target value for the metric.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
upvoted 7 times
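A target tracking policy at 40% average CPU is a single API call. A minimal boto3 sketch, assuming a hypothetical Auto Scaling group name:

import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling adds or removes instances as needed to keep the group's
# average CPU utilization at (or near) the 40% target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-web-asg",   # hypothetical
    PolicyName="keep-cpu-at-40-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)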

  TariqKipkemei Most Recent  1 month ago


Selected Answer: B
The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
Target tracking will maintain CPU utilization at 40%. When CloudWatch detects that the average CPU utilization is beyond 40%, it will
trigger the target tracking policy to scale out the auto scaling group to meet this target utilization. Once everything is settled and the
average CPU utilization has gone below 40%, another scale in action will kick in and reduce the number of auto scaling instances in the
auto scaling group.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
The key reasons are:

A target tracking policy allows defining a specific target metric value to maintain, in this case 40% CPU utilization.
Auto Scaling will automatically add or remove instances to keep utilization at the target level, without manual intervention.
This will dynamically scale the group to maintain performance as load changes.
A simple scaling policy only responds to breaching thresholds, not maintaining a target.
Scheduled actions and Lambda would require manual calculation and updates to track utilization.
Target tracking policies are the native Auto Scaling feature designed to maintain a metric at a target value.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Target tracking policy is the most appropriate choice. This policy allows ASG to automatically adjust the desired capacity based on a target
metric, such as CPU utilization. By setting the target metric to 40%, ASG will scale the number of instances up or down as needed to
maintain the desired CPU utilization level. This ensures that the application's performance remains optimal.

A suggests using a simple scaling policy, which allows for scaling based on a fixed metric or threshold. However, it may not be as effective
as a target tracking policy in dynamically adjusting the capacity to maintain a specific CPU utilization level.

C suggests using an Lambda to update the desired capacity. While this can be done programmatically, it would require custom scripting
and may not provide the same level of automation and responsiveness as a target tracking policy.

D suggests using scheduled scaling actions to scale up and down ASG at predefined times. This approach is not suitable for maintaining
the desired performance in real-time based on actual CPU utilization.
upvoted 2 times

  Robrobtutu 5 months, 2 weeks ago


Selected Answer: B
B of course.
upvoted 1 times
  aba2s 9 months ago
Selected Answer: B
B seem to the correct response.

With a target tracking scaling policy, you can increase or decrease the current capacity of the group based on a target value for a specific
metric. This policy will help resolve the over-provisioning of your resources. The scaling policy adds or removes capacity as required to
keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking
scaling policy also adjusts to changes in the metric due to a changing load pattern.
upvoted 3 times

  orionizzie 9 months, 1 week ago


Selected Answer: B
target tracking - CPU at 40%
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Wpcorgan 10 months, 2 weeks ago


B is correct
upvoted 1 times

  ArielSchivo 10 months, 3 weeks ago


Selected Answer: B
Option B. Target tracking policy.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
upvoted 4 times

  Nigma 10 months, 3 weeks ago


B

CPU utilization = target tracking


upvoted 2 times

  SimonPark 11 months, 1 week ago


Selected Answer: B
B is the answer
upvoted 1 times
Question #131 Topic 1

A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files
through an Amazon CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL.
What should a solutions architect do to meet these requirements?

A. Write individual policies for each S3 bucket to grant read permission for only CloudFront access.

B. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.

C. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon
Resource Name (ARN).

D. Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the
OAI has read permission.

Correct Answer: D

Community vote distribution


D (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
I want to restrict access to my Amazon Simple Storage Service (Amazon S3) bucket so that objects can be accessed only through my
Amazon CloudFront distribution. How can I do that?
Create a CloudFront origin access identity (OAI)
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
upvoted 25 times

  SimonPark 11 months, 1 week ago


Thanks it convinces me
upvoted 1 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: D
The key reasons are:

An OAI provides secure access between CloudFront and S3 without exposing the S3 bucket publicly.
The OAI is associated with the CloudFront distribution.
The S3 bucket policy limits access only to that OAI.
This ensures only CloudFront can access the objects, not direct S3 access.
Option A is complex to manage individual bucket policies.
Option B exposes credentials that aren't needed.
Option C works but OAI is the preferred method.
So using an origin access identity provides the most secure way to serve private S3 content through CloudFront. The OAI prevents direct
public access to the S3 bucket.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
To meet the requirements of serving files through CloudFront while restricting direct access to the S3 bucket URL, the recommended
approach is to use an origin access identity (OAI). By creating an OAI and assigning it to the CloudFront distribution, you can control
access to the S3 bucket.
This setup ensures that the files stored in the S3 bucket are only accessible through CloudFront and not directly through the S3 bucket
URL. Requests made directly to the S3 URL will be blocked.

Option A suggests writing individual policies for each S3 bucket, which can be cumbersome and difficult to manage, especially if there are
multiple buckets involved.

Option B suggests creating an IAM user and assigning it to CloudFront, but this does not address restricting direct access to the S3 bucket
URL.

Option C suggests writing an S3 bucket policy with CloudFront distribution ID as the Principal, but this alone does not provide the
necessary restrictions to prevent direct access to the S3 bucket URL.
upvoted 2 times
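The bucket-policy side of the OAI setup described above looks roughly like this boto3 sketch; the bucket name and OAI ID are hypothetical placeholders, and newer distributions would typically use origin access control (OAC) instead.

import json
import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront origin access identity to read objects,
# so direct S3 URL access is denied for everyone else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"  # hypothetical OAI ID
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-file-share-bucket/*",  # hypothetical bucket
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-file-share-bucket",
    Policy=json.dumps(policy),
)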

  antropaws 4 months ago


DECEMBER 2022 UPDATE:

Restricting access to an Amazon S3 origin:


CloudFront provides two ways to send authenticated requests to an Amazon S3 origin: origin access control (OAC) and origin access
identity (OAI). We recommend using OAC because it supports:

All Amazon S3 buckets in all AWS Regions, including opt-in Regions launched after December 2022
Amazon S3 server-side encryption with AWS KMS (SSE-KMS)
Dynamic requests (PUT and DELETE) to Amazon S3

OAI doesn't work for the scenarios in the preceding list, or it requires extra workarounds in those scenarios.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 1 week ago
Selected Answer: D
The correct answer is D. To meet the requirements, the solutions architect should create an origin access identity (OAI) and assign it to the
CloudFront distribution. The S3 bucket permissions should be configured so that only the OAI has read permission.

An OAI is a special CloudFront user that is associated with a CloudFront distribution and is used to give CloudFront access to the files in an
S3 bucket. By using an OAI, the company can serve the files through the CloudFront distribution while preventing direct access to the S3
bucket.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
D is the right answer
upvoted 1 times

  gloritown 9 months, 3 weeks ago


Selected Answer: D
D is correct but instead of OAI using OAC would be better since OAI is legacy
upvoted 3 times

  Robrobtutu 5 months, 2 weeks ago


Thanks, I didn't know about OAC.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


D is correct
upvoted 1 times
Question #132 Topic 1

A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the
company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the
fastest possible response time.
Which combination should a solutions architect recommend to meet these requirements?

A. Amazon CloudFront and Amazon S3

B. AWS Lambda and Amazon DynamoDB

C. Application Load Balancer with Amazon EC2 Auto Scaling

D. Amazon Route 53 with internal Application Load Balancers

Correct Answer: A

Community vote distribution


A (93%) 4%

  G3 Highly Voted  8 months ago


Selected Answer: A
Historical reports = Static content = S3
upvoted 13 times

  dokaedu Highly Voted  11 months, 1 week ago


A is the correct answer
The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time.
upvoted 9 times

  TariqKipkemei Most Recent  4 weeks ago


Selected Answer: A
Global, cost-effective, serverless, low latency = CloudFront with S3
Static content = S3
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
Historical reports = Static content = S3
upvoted 2 times

  cookieMr 3 months, 1 week ago


By using CloudFront, the website can leverage the global network of edge locations to cache and deliver the performance reports to users
from the nearest edge location, reducing latency and providing fast response times. Amazon S3 serves as the origin for the files, where
the reports are stored.

Option B is incorrect because AWS Lambda and Amazon DynamoDB are not the most suitable services for serving downloadable files and
meeting the website demands globally.

Option C is incorrect because using an Application Load Balancer with Amazon EC2 Auto Scaling may require more infrastructure
provisioning and management compared to the CloudFront and S3 combination. Additionally, it may not provide the same level of global
scalability and fast response times as CloudFront.

Option D is incorrect because while Amazon Route 53 is a global DNS service, it alone does not provide the caching and content delivery
capabilities required for serving the downloadable reports. Internal Application Load Balancers do not address the global scalability and
caching requirements specified in the scenario.
upvoted 4 times

  Bmarodi 2 months, 3 weeks ago


Very good explanations!
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
The correct answer is Option A. To meet the requirements, the solutions architect should recommend using Amazon CloudFront and
Amazon S3.

By combining Amazon CloudFront and Amazon S3, the solutions architect can provide a scalable and cost-effective solution that limits the
provisioning of infrastructure resources and provides the fastest possible response time.

https://aws.amazon.com/cloudfront/
https://aws.amazon.com/s3/
upvoted 3 times
  techhb 9 months, 1 week ago
A is correct
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
A is the best and most cost-effective option if the only requirement is downloading static, pre-created reports (no data processing before downloading).
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times

  sdasdawa 10 months, 3 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  Nirmal3331 10 months, 3 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  samplunk 10 months, 3 weeks ago


Selected Answer: A
See this discussion:
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  manu427 10 months, 3 weeks ago


Selected Answer: C
load balancing + scalability + cost effective
upvoted 1 times

  MyNameIsJulien 10 months, 3 weeks ago


Selected Answer: B
I think the answer is B
upvoted 1 times
Question #133 Topic 1

A company runs an Oracle database on premises. As part of the company’s migration to AWS, the company wants to upgrade the database to the
most recent available version. The company also wants to set up disaster recovery (DR) for the database. The company needs to minimize the
operational overhead for normal operations and DR setup. The company also needs to maintain access to the database's underlying operating
system.
Which solution will meet these requirements?

A. Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.

B. Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to replicate the snapshots to another
AWS Region.

C. Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.

D. Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another Availability Zone.

Correct Answer: D

Community vote distribution


C (52%) A (39%) 9%

  ArielSchivo Highly Voted  10 months, 3 weeks ago


Option C since RDS Custom has access to the underlying OS and it provides less operational overhead. Also, a read replica in another
Region can be used for DR activities.

https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
upvoted 21 times

  KalarAzar 3 months, 2 weeks ago


You can't create cross-Region replicas in RDS Custom for Oracle: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-
rr.html#custom-rr.limitations
upvoted 8 times

  brushek Highly Voted  11 months, 3 weeks ago


Selected Answer: C
It should be C:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html
and
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
upvoted 15 times

  bhgt 1 day, 21 hours ago


How is it C when the read replica is not meant for DR?
upvoted 1 times

  clark777 Most Recent  4 days ago


Selected Answer: A
1.maintain access to the database's underlying operating system.
2.can't create cross-Region replicas in RDS Custom
upvoted 2 times

  BrijMohan08 2 weeks, 1 day ago


Selected Answer: A
EC2 - to maintain the underlying OS
upvoted 1 times

  TariqKipkemei 4 weeks ago


Selected Answer: C
Technically both A and C would work. But:
Amazon RDS Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying
OS and DB environment. It was specifically designed to handle this kind of scenario.
First create Oracle replicas for the RDS Custom for Oracle DB instances, then manually change the mode of the mounted replicas to read-only.
upvoted 1 times
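For what it's worth, creating an RDS read replica is a single API call; the boto3 sketch below uses hypothetical identifiers, and (as other comments point out) cross-Region replicas are documented as unsupported for RDS Custom for Oracle, so treat this as illustrative only.

import boto3

rds = boto3.client("rds")  # client in the Region where the replica should live

# Create a replica of the source Oracle DB instance for DR / read offload.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="oracle-dr-replica",  # hypothetical replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:oracle-prod",  # hypothetical source ARN
    DBInstanceClass="db.m5.large",
)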

  Nava702 1 month, 1 week ago


Selected Answer: A
C is wrong because RDS custom for Oracle does not support read replicas or cross-regional replication.
upvoted 3 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: C
Option C is the best solution to meet the requirements:

Migrate the Oracle database to Amazon RDS Custom for Oracle.


Create a read replica for the database in another AWS Region for disaster recovery.
The reasons are:

RDS Custom provides a fully managed Oracle database instance. This reduces operational overhead compared to EC2.
RDS Custom allows accessing the underlying OS which is required.
Creating a read replica in another Region provides a simple DR solution.
RDS Automated Backups are within a single region. Cross-region DR requires replication.
RDS standby in the same AZ doesn't provide geographic diversity for DR.
So RDS Custom meets the managed database, OS access, and simple DR needs. The cross-region read replica provides geographic
diversity for DR. This is the right fit based on the requirements.
upvoted 2 times

  DannyKang5649 1 month, 2 weeks ago


Selected Answer: A
The company also needs to maintain access to the database's underlying operating system.
-> OS is needed.
upvoted 1 times

  ERHANKORKUT16 1 month, 3 weeks ago


Selected Answer: A
General limitations for RDS Custom for Oracle replication
RDS Custom for Oracle replicas have the following limitations:

You can't create RDS Custom for Oracle replicas in read-only mode. However, you can manually change the mode of mounted replicas to
read-only, and from read-only to mounted. For more information, see the documentation for the create-db-instance-read-replica AWS CLI
command.

You can't create cross-Region RDS Custom for Oracle replicas.

You can't change the value of the Oracle Data Guard CommunicationTimeout parameter. This parameter is set to 15 seconds for RDS
Custom for Oracle DB instances.
upvoted 2 times

  RupeC 2 months, 1 week ago


Selected Answer: A
Initially, I thought C, but as shown in the links below, C fails to support the latest version and replicas in another region. Hence A is the
only possible answer.
upvoted 1 times

  jaydesai8 2 months, 3 weeks ago


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html

This link clearly mentions that you can't create cross-Region replicas with RDS Custom for Oracle.
upvoted 2 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: A
For option C: you can use RDS Custom for Oracle, but you can't create cross-Region replicas in RDS Custom for Oracle:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
Hence, answer is option A
upvoted 1 times

  live_reply_developers 3 months ago


Selected Answer: A
"You can't create cross-Region RDS Custom for Oracle replicas."

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
upvoted 1 times

  abdelbz01 3 months ago


Selected Answer: A
Option C is wrong: Migrating the Oracle database to Amazon RDS Custom for Oracle, and creating a read replica for the database in
another AWS Region would not meet the requirement of upgrading the database to the most recent available version. Amazon RDS
Custom for Oracle is a managed service that enables you to access and customize your database environment and operating system.
However, Amazon RDS Custom for Oracle does not support all versions and editions of Oracle Database. The latest version supported by
Amazon RDS Custom for Oracle is 19c . A read replica is a feature of Amazon RDS that creates a copy of your source DB instance in the
same or different AWS Region. A read replica can be used for read-heavy workloads or disaster recovery purposes. However, a read replica
cannot be upgraded independently from its source DB instance .
upvoted 5 times
  Mia2009687 3 months ago
Selected Answer: A
It requires access to the underlying OS, so B and D are out. And you can't create cross-Region RDS Custom for Oracle replicas, so C is out.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By choosing Option C, the company can upgrade the Oracle database, leverage the benefits of Amazon RDS, and have a disaster recovery
solution with minimal operational overhead.

Option A suggests migrating the Oracle database to an Amazon EC2 instance and setting up database replication to a different AWS
Region. This approach requires more operational overhead and management compared to using a managed service like Amazon RDS.

Option B suggests migrating the Oracle database to Amazon RDS for Oracle and activating Cross-Region automated backups. While this
provides backups in another AWS Region, it does not provide the same level of disaster recovery and failover capabilities as a read replica
in another Region.

Option D suggests migrating the Oracle database to Amazon RDS for Oracle and creating a standby database in another Availability Zone.
However, this solution only provides availability within the same Region and does not meet the requirement of having disaster recovery
across AWS Regions.
upvoted 1 times

  KalarAzar 3 months, 2 weeks ago


Selected Answer: A
You can't create cross-Region read replicas for RDS Custom for Oracle. Please do not select C, despite it having the highest community
rating on here.

Official article that states this here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations

So, as access to the OS is needed and RDS Custom is ruled out (which DOES give you access), the answer is clearly A.
upvoted 3 times
Question #134 Topic 1

A company wants to move its application to a serverless solution. The serverless solution needs to analyze existing and new data by using SQL.
The company stores the data in an Amazon S3 bucket. The data requires encryption and must be replicated to a different AWS Region.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon Athena to query the data.

B. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon RDS to query the data.

C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon Athena to query the data.

D. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon RDS to query the data.

Correct Answer: A

Community vote distribution


A (50%) C (50%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: C
SSE-KMS vs SSE-S3: the latter seems to have less overhead, as the keys are automatically generated by S3 and applied to data at upload, and don't require further actions. KMS provides more flexibility, but in turn involves a different service, which is ultimately more "complex" than managing just one (S3). So A and B are excluded. If you are in doubt, note that A and B use two buckets, while C and D keep just one.
https://s3browser.com/server-side-encryption-types.aspx
Deciding between C and D means deciding between Athena and RDS. RDS is a relational database, and we have documents on S3, which is the use case for Athena. Athena is also serverless, which eliminates the need to manage the underlying infrastructure and capacity. So C is the answer.
https://aws.amazon.com/athena/
upvoted 46 times
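To show the serverless SQL piece discussed above, a minimal boto3 sketch of running an Athena query against data catalogued over the S3 bucket; the database, table, and results bucket names are hypothetical placeholders.

import boto3

athena = boto3.client("athena")

# Athena reads straight from S3; there is no cluster or database
# server to provision for the query itself.
response = athena.start_query_execution(
    QueryString="SELECT * FROM example_table LIMIT 10",           # hypothetical table
    QueryExecutionContext={"Database": "analytics"},              # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical results bucket
)
print(response["QueryExecutionId"])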

  MutiverseAgent 2 months, 2 weeks ago


It's A, since replication works for new objects but not for existing ones, unless you use batch replication, which is not the case here.
upvoted 1 times

  markw92 3 months, 2 weeks ago


See comment from Nicknameinvalid below. You get your answer.
upvoted 1 times

  dokaedu Highly Voted  11 months, 1 week ago


Answer is A:
Amazon S3 Bucket Keys reduce the cost of Amazon S3 server-side encryption using AWS Key Management Service (SSE-KMS). This new
bucket-level key for SSE can reduce AWS KMS request costs by up to 99 percent by decreasing the request traffic from Amazon S3 to AWS
KMS. With a few clicks in the AWS Management Console, and without any changes to your client applications, you can configure your
bucket to use an S3 Bucket Key for AWS KMS-based encryption on new objects.
The existing S3 bucket might have unencrypted data; encryption will apply only to new data received after encryption is applied on the new bucket.
upvoted 20 times

  AKBM7829 1 month ago


But with server-side encryption, multi-Region keys are not possible, which leaves option C.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Both answers A & C can be possible from the certificate perspective because in both regions will be certificates to encrypt/decrypt, SSE-
KMS and SSE-S3 respectively. But the difference is that replication works for new objects and not existing ones, so that leaves answer A
as the only right option.
upvoted 1 times

  ruqui 4 months, 1 week ago


If you want to use the cost argument: SSE-S3 is free so it's cheaper than any other encryption solution (all of the others have a cost), so
the answer should be C
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Replication does not work for existing objects, only for new ones.
upvoted 1 times
  s50600822 4 months, 3 weeks ago
Don't know what "kays" are, could they be a trap?
upvoted 1 times

  Bmarodi 3 months, 3 weeks ago


Kays = keys, mistype i think.
upvoted 1 times

  DamyanG Most Recent  12 hours, 53 minutes ago


Selected Answer: C
Answer C I think
upvoted 1 times

  JKevin778 1 week, 1 day ago


Selected Answer: C
Athena to query from S3.
SSE-S3 is least operation overhead than SSE-KMS
so, C.
upvoted 1 times

  hieulam 1 week, 5 days ago


Selected Answer: A
The question should be A.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-what-is-isnot-
replicated.html#:~:text=Objects%20created%20after%20you%20add%20a%20replication%20configuration.
upvoted 1 times

  XCheng 2 weeks, 4 days ago


C
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/bucket-encryption.html#bucket-encryption-replication
upvoted 1 times

  frankie270299 3 weeks, 3 days ago


Selected Answer: C
i think C is correct answer,i asked chatgpt
upvoted 1 times

  TariqKipkemei 4 weeks ago


Selected Answer: A
Technically both A and C will work, but there is a requirement for 'LEAST operational overhead'.
Multi-Region keys are a flexible and powerful solution for many common data security scenarios such as this:
Global data management
Businesses that operate globally need globally distributed data that is available consistently across AWS Regions. You can create multi-
Region keys in all Regions where your data resides, then use the keys as though they were a single-Region key without the latency of a
cross-Region call or the cost of re-encrypting data under a different key in each Region.
upvoted 2 times
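For anyone who wants to see the multi-Region key setup behind option A, here is a minimal boto3 sketch (Region names and the description are hypothetical; the replica key shares the same key material and key ID as the primary, so objects replicated to the second Region can be decrypted there without a cross-Region KMS call):

import boto3

# Create the primary multi-Region key in the source Region.
kms_primary = boto3.client("kms", region_name="us-east-1")
key = kms_primary.create_key(
    Description="Primary multi-Region key for S3 SSE-KMS",
    MultiRegion=True,
)
primary_key_id = key["KeyMetadata"]["KeyId"]

# Replicate it into the Region that will hold the replica S3 bucket.
kms_primary.replicate_key(
    KeyId=primary_key_id,
    ReplicaRegion="eu-west-1",
)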

  Jeyaluxshan 1 month ago


If you use an S3 managed encryption key, it will apply to newly uploaded objects, not to existing objects. C & D are wrong, since they state to use the existing bucket.
Athena is for querying in S3, so there is no need for RDS. B is wrong.
Correct Answer is A - use KMS key
upvoted 1 times

  AKBM7829 1 month ago


C is right Answer
upvoted 1 times

  sohailn 1 month, 3 weeks ago


C is the best answer because encrypted S3 replication is not that simple:
if the data is unencrypted or encrypted with SSE-S3, it will replicate by default.
if it is encrypted with SSE-C (client-provided keys), it will not replicate at all, because S3 would need access to the key all the time.
if it is encrypted with SSE-KMS, it will not replicate from source to target by default; you will need to perform additional steps, and we can't just rely on a KMS multi-Region key because Amazon S3 still considers it an independent key, so the data must first be decrypted in the source bucket and re-encrypted in the target bucket. This is as explained by Stephen, the Udemy instructor.
upvoted 1 times

  GC2023 1 month, 2 weeks ago


Please remember that enabling encryption on a bucket does not retroactively encrypt existing objects. You would need to perform a
copy operation to re-upload existing objects with encryption enabled if you want to ensure that all objects are encrypted(from chatGPT)
upvoted 1 times

  Fielies23 1 month, 3 weeks ago


As a side note, if a bucket already exists and you enable replication, you CAN actually now also replicate the existing object in the bucket
with "Amazon S3 Batch Replication".
https://aws.amazon.com/blogs/aws/new-replicate-existing-objects-with-amazon-s3-batch-
replication/#:~:text=S3%20Replication%20is%20a%20fully,or%20to%20multiple%20destination%20buckets.
upvoted 2 times

  RupeC 2 months, 1 week ago


Selected Answer: C
A and C are valid, but C has less overhead and the key management is also serverless.
upvoted 2 times

  fuzzycr 2 months, 2 weeks ago


Selected Answer: A
without any changes to your client applications
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


A
KMS will give least operational overhead as it needs key rotation in 3 years which is 1 year in S3-SSE
upvoted 1 times

  sosda 2 months, 3 weeks ago


Selected Answer: C
SSE-S3 less operational overhead
upvoted 1 times

  jaydesai8 2 months, 3 weeks ago


Selected Answer: A
A, since we can use multi-keys in another region with aws kms keys
upvoted 1 times
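To tie the discussion above together, here is a rough boto3 sketch of the replication piece of option A (bucket names, the IAM role ARN, and the KMS key ARN are placeholders; both buckets are assumed to already exist with versioning enabled):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-data-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-encrypted-objects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                # Include objects that were encrypted with SSE-KMS in the source bucket.
                "SourceSelectionCriteria": {
                    "SseKmsEncryptedObjects": {"Status": "Enabled"}
                },
                "Destination": {
                    "Bucket": "arn:aws:s3:::analytics-replica-bucket",
                    # Re-encrypt replicas with the key available in the destination Region
                    # (for example, the replica of a multi-Region key).
                    "EncryptionConfiguration": {
                        "ReplicaKmsKeyID": "arn:aws:kms:eu-west-1:123456789012:key/mrk-example"
                    },
                },
            }
        ],
    },
)

Amazon Athena can then be pointed at either bucket to run the SQL queries mentioned in the question.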
Question #135 Topic 1

A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's
VPC. According to the company’s security team, the connectivity must be private and must be restricted to the target service. The connection
must be initiated only from the company’s VPC.
Which solution will meet these requirements?

A. Create a VPC peering connection between the company's VPC and the provider's VPC. Update the route table to connect to the target
service.

B. Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the target service.

C. Create a NAT gateway in a public subnet of the company’s VPC. Update the route table to connect to the target service.

D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.

Correct Answer: D

Community vote distribution


D (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: D
**AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your
traffic to the public internet**. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly
simplify your network architecture.
Interface **VPC endpoints**, powered by AWS PrivateLink, connect you to services hosted by AWS Partners and supported solutions
available in AWS Marketplace.
https://aws.amazon.com/privatelink/
upvoted 24 times

  remand Highly Voted  8 months, 2 weeks ago


Selected Answer: D
The solution that meets these requirements best is option D.

By asking the provider to create a VPC endpoint for the target service, the company can use AWS PrivateLink to connect to the target
service. This enables the company to access the service privately and securely over an Amazon VPC endpoint, without requiring a NAT
gateway, VPN, or AWS Direct Connect. Additionally, this will restrict the connectivity only to the target service, as required by the
company's security team.

Option A VPC peering connection may not meet security requirement as it can allow communication between all resources in both VPCs.
Option B, asking the provider to create a virtual private gateway in its VPC and use AWS PrivateLink to connect to the target service is not
the optimal solution because it may require the provider to make changes and also you may face security issues.
Option C, creating a NAT gateway in a public subnet of the company’s VPC can expose the target service to the internet, which would not
meet the security requirements.
upvoted 6 times

  TariqKipkemei Most Recent  4 weeks ago


Selected Answer: D
option D is correct
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The best solution to meet the requirements is option D:

Ask the provider to create a VPC endpoint for the target service
Use AWS PrivateLink to connect to the target service
The reasons are:

PrivateLink provides private connectivity between VPCs without using public internet.
The provider creates a VPC endpoint in their VPC for the target service.
The company uses PrivateLink to securely access the endpoint from their VPC.
Connectivity is restricted only to the target service.
The connection is initiated only from the company's VPC.
Options A, B, C would expose the connection to the public internet or require infrastructure changes in the provider's VPC.

PrivateLink enables private, restricted connectivity to the target service without VPC peering or public exposure.
upvoted 1 times
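As a reference for option D, here is a minimal boto3 sketch of the consumer side of AWS PrivateLink: the company creates an interface VPC endpoint in its own VPC that points at the endpoint service name exposed by the provider (all IDs and the service name below are hypothetical):

import boto3

ec2 = boto3.client("ec2")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    # Endpoint service name shared by the external provider.
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,
)
print(response["VpcEndpoint"]["VpcEndpointId"])

Traffic can only be initiated from the company's VPC toward the endpoint, and only the exposed service is reachable, which matches the security team's requirement.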

  cookieMr 3 months, 1 week ago


Selected Answer: D
Option D meets the requirements of establishing a private and restricted connection to the service hosted in the provider's VPC. By asking
the provider to create a VPC endpoint for the target service, you can establish a direct and private connection from your company's VPC to
the target service. AWS PrivateLink ensures that the connectivity remains within the AWS network and does not require internet access.
This ensures both privacy and restriction to the target service, as the connection can only be initiated from your company's VPC.

A. VPC peering does not restrict access only to the target service.
B. PrivateLink is typically used for accessing AWS services, not external services in a provider's VPC.
C. NAT gateway does not provide a private and restricted connection to the target service.

Option D is the correct choice as it uses AWS PrivateLink and VPC endpoint to establish a private and restricted connection from the
company's VPC to the target service in the provider's VPC.
upvoted 2 times

  Abrar2022 4 months ago


VPC Endpoint (Target Service) - for specific services (not accessing whole vpc)
VPC Peering - (accessing whole VPC)
upvoted 3 times

  Abrar2022 4 months, 1 week ago


VPC Peering Connection:
All resources in a VPC, such as ECSs and load balancers, can be accessed.

VPC Endpoint:
Allows access to a specific service or application. Only the ECSs and load balancers in the VPC for which VPC endpoint services are created
can be accessed.
upvoted 1 times

  eugene_stalker 4 months, 1 week ago


Selected Answer: D
Option D, but seems that it is vise versa. Customer needs to create Privatelink and and you VPC endpoint to connect to Privatelink
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


AWS PrivateLink / VPC Endpoint Services:
• Connect services privately from your service VPC to customers VPC
• Doesn’t need VPC Peering, public Internet, NAT Gateway, Route Tables
• Must be used with Network Load Balancer & ENI
upvoted 2 times

  Help2023 7 months, 2 weeks ago


Selected Answer: D
D. Here you are the one initiating the connection
upvoted 1 times

  devonwho 8 months ago


Selected Answer: D
PrivateLink is a more generalized technology for linking VPCs to other services. This can include multiple potential endpoints: AWS
services, such as Lambda or EC2; Services hosted in other VPCs; Application endpoints hosted on-premises.

https://www.tinystacks.com/blog-post/aws-vpc-peering-vs-privatelink-which-to-use-and-when/
upvoted 1 times

  devonwho 8 months ago


Selected Answer: D
While VPC peering enables you to privately connect VPCs, AWS PrivateLink enables you to configure applications or services in VPCs as
endpoints that your VPC peering connections can connect to.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
The solution that meets these requirements is Option D:

* Ask the provider to create a VPC endpoint for the target service.
* Use AWS PrivateLink to connect to the target service.

Option D involves asking the provider to create a VPC endpoint for the target service, which is a private connection to the service that is
hosted in the provider's VPC. This ensures that the connection is private and restricted to the target service, as required by the company's
security team. The company can then use AWS PrivateLink to connect to the target service over the VPC endpoint. AWS PrivateLink is a
fully managed service that enables you to privately access services hosted on AWS, on-premises, or in other VPCs. It provides secure and
private connectivity to services by using private IP addresses, which ensures that traffic stays within the Amazon network and does not
traverse the public internet.

Therefore, Option D is the solution that meets the requirements.


upvoted 2 times
  Buruguduystunstugudunstuy 9 months, 1 week ago
AWS PrivateLink documentation: https://docs.aws.amazon.com/privatelink/latest/userguide/what-is-privatelink.html
upvoted 1 times

  techhb 9 months, 1 week ago


D is right,if requirement was to be ok with public internet then option C was ok.
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: D
D (VPC endpoint) looks correct. Below are the differences between VPC Peering & VPC endpoints.

https://support.huaweicloud.com/intl/en-
us/vpcep_faq/vpcep_04_0004.html#:~:text=You%20can%20create%20a%20VPC%20endpoint%20to%20connect%20your%20local,connectio
n%20over%20an%20internal%20network.&text=VPC%20Peering%20supports%20only%20communications%20between%20two%20VPCs
%20in%20the%20same%20region.&text=You%20can%20use%20Cloud%20Connect,between%20VPCs%20in%20different%20regions.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
D is the right answer
upvoted 1 times

  Sahilbhai 9 months, 3 weeks ago


answer is D
upvoted 1 times
Question #136 Topic 1

A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and
accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)

A. Create an ongoing replication task.

B. Create a database backup of the on-premises database.

C. Create an AWS Database Migration Service (AWS DMS) replication server.

D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).

E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.

Correct Answer: CD

Community vote distribution


AC (88%) 12%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: AC
AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully
operational during the migration, minimizing downtime to applications that rely on the database.
... With AWS Database Migration Service, you can also continuously replicate data with low latency from any supported source to any
supported target.
https://aws.amazon.com/dms/
upvoted 21 times

  TariqKipkemei Most Recent  3 weeks, 5 days ago


Selected Answer: AC
Create an AWS Database Migration Service (AWS DMS) replication server then create an ongoing replication task
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AC
A) Create an ongoing replication task

C) Create an AWS Database Migration Service (AWS DMS) replication server

The key reasons are:

An ongoing DMS replication task keeps the source and target databases synchronized during the migration.
The DMS replication server manages and executes the replication tasks.
Together, these will continuously replicate changes from on-prem to Aurora to keep them in sync.
A database backup alone wouldn't maintain synchronization.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Selected Answer: AC
https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.Replication.html
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
These two actions (AC) will help meet the requirements of migrating the on-premises PostgreSQL database to Amazon Aurora PostgreSQL
while keeping the on-premises database accessible and synchronized with the Aurora database. The ongoing replication task will ensure
continuous data replication between the on-premises database and Aurora. The AWS DMS replication server will facilitate the migration
process and handle the data replication.

B. Creating a database backup does not ensure ongoing synchronization.


D. Converting the database schema does not address the requirement of synchronization.
E. Creating an EventBridge rule only monitors synchronization, but doesn't handle migration.
The correct combination is A and C.
upvoted 3 times
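For options A and C together, here is a minimal boto3 sketch of the ongoing replication task (the endpoint and replication instance ARNs are placeholders and are assumed to have been created beforehand):

import boto3
import json

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="postgres-to-aurora-ongoing",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    # Full load of existing data, then ongoing change data capture (CDC)
    # keeps Aurora synchronized while the on-premises database stays online.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)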

  Nandha707 3 months, 3 weeks ago


Answer is CD. Postgresql to Aurora Postgresql needed SCT.
https://aws.amazon.com/ko/dms/schema-conversion-tool/
upvoted 1 times
  Bmarodi 3 months, 3 weeks ago
Selected Answer: AC
Option A & C are the right answer.
upvoted 1 times

  kruasan 5 months, 1 week ago


Selected Answer: AC
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-postgresql-database-to-aurora-
postgresql.html
upvoted 1 times

  osmk 6 months, 1 week ago


A->https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.oracle2rds.replication.html
C->https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
upvoted 2 times

  Erbug 6 months, 1 week ago


Selected Answer: AC
This question is giving us two conditions to solve it. One of them is on-premise database must remain online and accessible during the
migration and the second one is Aurora database must remain synchronized with the on-premises database. So to meet them all A and C
will be the correct options for us.

PS: if the question was just asking us something related to the DB migration process alone, all options would be correct.
upvoted 2 times

  G3 8 months ago
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-postgresql-database-to-aurora-
postgresql.html

This link talks about using DMS . I saw the other link pointing to SCT - not sure which one is correct
upvoted 1 times

  aba2s 9 months ago


Selected Answer: CD
DMS for database migration
SCT for having the same scheme
upvoted 3 times

  Help2023 7 months, 2 weeks ago


The source and destination are both PostgreSQL, so schema conversion is not needed.
upvoted 2 times

  SilentMilli 9 months ago


Selected Answer: AC
AWS Database Migration Service (AWS DMS)
upvoted 1 times

  gustavtd 9 months ago


Selected Answer: AC
AC, here it is clearly shown https://docs.aws.amazon.com/zh_cn/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql.html
upvoted 3 times

  LuckyAro 8 months, 2 weeks ago


You nailed it !
upvoted 1 times

  bamishr 9 months, 1 week ago


A. Create an ongoing replication task: An ongoing replication task can be used to continuously replicate data from the on-premises
database to the Aurora database. This will ensure that the Aurora database remains in sync with the on-premises database.

D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT): The AWS SCT can be used to convert the schema of
the on-premises database to a format that is compatible with Aurora. This will ensure that the data can be properly migrated and that the
Aurora database can be used with the same applications and queries as the on-premises database.
upvoted 2 times

  Help2023 7 months, 2 weeks ago


The source and destination are both PostgreSQL, so schema conversion is not needed.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: AC
To meet the requirements of maintaining an online and accessible on-premises database while migrating to Amazon Aurora PostgreSQL
and keeping the databases synchronized, a solutions architect should take the following actions:
Option A. Create an ongoing replication task. This will allow the architect to continuously replicate data from the on-premises database to
the Aurora database.

Option C. Create an AWS Database Migration Service (AWS DMS) replication server. This will allow the architect to use AWS DMS to migrate
data from the on-premises database to the Aurora database. AWS DMS can also be used to continuously replicate data between the two
databases to keep them synchronized.
upvoted 3 times
  techhb 9 months, 1 week ago
Selected Answer: CD
C&D ,SCT is required,its a mandate not an option.
upvoted 2 times
Question #137 Topic 1

A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit's account
independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The
company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?

A. Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to
all users in the organization.

B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts.
Configure AWS account alternate contacts in the AWS Organizations console or programmatically.

C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and
forwarding those alerts to the appropriate groups.

D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account
alternate contacts in the AWS Organizations console or programmatically.

Correct Answer: D

Community vote distribution


B (86%) 14%

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
Use a group email address for the management account's root user
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-
address
upvoted 23 times

  cookieMr Highly Voted  3 months, 1 week ago


Selected Answer: B
Option B ensures that all future notifications are not missed by configuring the AWS account root user email addresses as distribution lists
that are monitored by a few administrators. By setting up alternate contacts in the AWS Organizations console or programmatically, the
notifications can be sent to the appropriate administrators responsible for monitoring and responding to alerts. This solution allows for
centralized management of notifications and ensures they are limited to account administrators.

A. Floods all users with notifications, lacks granularity.


C. Manual forwarding introduces delays, centralizes responsibility.
D. No flexibility for specific account administrators, limits customization.
upvoted 5 times
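For the alternate-contacts part of option B, here is a small boto3 sketch using the Account Management API (the account ID, addresses, and phone number are placeholders; setting contacts for member accounts by AccountId assumes the call runs from the organization's management account or a delegated admin with trusted access enabled):

import boto3

account = boto3.client("account")

for contact_type, email in [
    ("OPERATIONS", "aws-ops-admins@example.com"),
    ("BILLING", "aws-billing-admins@example.com"),
    ("SECURITY", "aws-security-admins@example.com"),
]:
    account.put_alternate_contact(
        AccountId="111122223333",           # member account in the organization
        AlternateContactType=contact_type,
        EmailAddress=email,                 # distribution list monitored by admins
        Name="Account Administrators",
        PhoneNumber="+1-555-0100",
        Title="AWS Administrators",
    )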

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: B
The reasons are:

Alternate contacts allow defining other users to receive root emails.


Distribution lists ensure multiple admins get notified.
Limits notifications to account admins rather than all users.
Using the same root email address for all accounts (Option D) is not recommended.
Relying on one admin or external forwarding (Options A, C) introduces delays or single points of failure.
upvoted 1 times

  Itsume 3 months, 2 weeks ago


All admins need access or else some won't get the right mails and can't do their job; sending it only to a few would disrupt the workflow, so it is D.
upvoted 1 times

  fishy_resolver 3 months, 3 weeks ago


Selected Answer: D
From the links provided below there are no mention of having a distribution list capability within AWS:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-
address

As per link for best practices:


Use a group email address for the management account's root user!
upvoted 1 times
  Abrar2022 4 months ago
The clue is in the pudding!!

Question: account "administrators"


Answer: Configure all AWS account root user email addresses as distribution lists that go to a few "administrators"
upvoted 1 times

  Rainchild 5 months, 1 week ago


Selected Answer: B
Option A: wrong - sends email to everybody
Option B: correct (but sub-optimal because distribution lists aren't all that secure)
Option C: wrong - single point of failure on the new administrator
Option D: wrong - each root email address must be unique, you can't change them all to the same one
upvoted 1 times

  jdr75 5 months, 3 weeks ago


Selected Answer: B
The more aligned answer to this article:

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-
address

is B.

D would be best if it'd said that the email you configure as "root user email address" will be a distribution list.
The phrase "all future notifications are not missed" points to D, cos' it said:
".. and all newly created accounts to use the same root user email address"
so the future account that will be created will be covered with the business policy.

It's not 100% clear, but I'll choose B.


upvoted 2 times

  TheAbsoluteTruth 6 months ago


A question: if people keep voting on the questions, why don't the administrators change the correct answer? Is it just down to interpretation?
upvoted 1 times

  jdr75 5 months, 3 weeks ago


The examtopics administrator completely ignores marking the correct answer, and it is obvious that many answers it marks as "correct" are not. It says very little for the service they provide.
upvoted 1 times

  jaswantn 6 months, 1 week ago


Using the method of crossing out the option that does not fit....
Option A: address to all users of organization (wrong)
Option B: go to a few administrators who can respond to alerts (the question says to send notifications to administrators, not a selected few)
Option C: send to one administrator and giving him responsibility (wrong)
Option D: correct (as this is the one option left after checking all others).
upvoted 1 times

  Zerotn3 9 months ago


Selected Answer: D
Option B does not meet the requirements because it would require configuring all AWS account root user email addresses as distribution
lists, which is not necessary to meet the requirements.
upvoted 2 times

  mp165 9 months ago


Unless I am reading this wrong from AWS, it seems D is proper as it says to use a single account and then set to forward to other emails.

Use an email address that forwards received messages directly to a list of senior business managers. In the event that AWS needs to
contact the owner of the account, for example, to confirm access, the email is distributed to multiple parties. This approach helps to
reduce the risk of delays in responding, even if individuals are on vacation, out sick, or leave the business.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
To meet the requirements of ensuring that all future notifications are not missed and are limited to account administrators, the company
should take the following action:

Option D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS
account alternate contacts in the AWS Organizations console or programmatically.

By configuring all AWS accounts to use the same root user email address and setting up AWS account alternate contacts, the company can
ensure that all notifications are sent to a single email address that is monitored by one or more administrators. This will allow the
company to ensure that all notifications are received and responded to promptly, without the risk of notifications being missed.
upvoted 3 times
  bullrem 8 months, 1 week ago
Option D would not meet the requirement of limiting the notifications to account administrators. Instead, it is better to use option B,
which is to configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to
alerts. This way, the company can ensure that the notifications are received by the appropriate people and that they are not missed.
Additionally, AWS account alternate contacts can be configured in the AWS Organizations console or programmatically, which allows
the company to have more granular control over who receives the notifications.
upvoted 4 times

  techhb 9 months, 1 week ago


B makes more sense
upvoted 1 times

  Sahilbhai 9 months, 2 weeks ago


Answer B makes more sense.
upvoted 1 times

  PS_R 10 months, 4 weeks ago


Selected Answer: B
B makes more sense and is a best practise
upvoted 1 times

  Chunsli 11 months, 2 weeks ago


Selected Answer: B
B makes better sense in the context
upvoted 3 times
Question #138 Topic 1

A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2
instance in a single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This
application stores the details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?

A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.

B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.

C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.

D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database

Correct Answer: B

Community vote distribution


B (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
Migrating to Amazon MQ reduces the overhead on the queue management. C and D are dismissed.
Deciding between A and B means deciding to go for an AutoScaling group for EC2 or an RDS for Postgress (both multi- AZ). The RDS
option has less operational impact, as provide as a service the tools and software required. Consider for instance, the effort to add an
additional node like a read replica, to the DB.
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
https://aws.amazon.com/rds/postgresql/
upvoted 17 times

  EKA_CloudGod 10 months ago


This also helps anyone in doubt; https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-
deployment.html
upvoted 1 times

  UWSFish 11 months, 1 week ago


Yes but active/standby is fault tolerance, not HA. I would concede after thinking about it that B is probably the answer that will be
marked correct but its not a great question.
upvoted 2 times

  chandu7024 Most Recent  1 week, 3 days ago


Agree with B
upvoted 1 times

  TariqKipkemei 3 weeks, 5 days ago


Selected Answer: B
B offers high availability and low operational overheads.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
Option B is the best solution to meet the high availability and low overhead requirements:

Migrate the queue to redundant Amazon MQ


Use Auto Scaling groups across AZs for the application
Migrate the database to Multi-AZ RDS PostgreSQL
The reasons are:

Amazon MQ provides a managed, highly available RabbitMQ cluster


Multi-AZ Auto Scaling distributes the application across AZs
RDS PostgreSQL is managed, multi-AZ capable database
Together this architecture removes single points of failure
RDS and MQ reduce operational overhead over self-managed
upvoted 3 times
  MNotABot 2 months, 2 weeks ago
B
least operational overhead (Amazon RDS for PostgreSQL --> hence AD out / C says EC2 so out --> Hence B)
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option B provides the highest availability with the least operational overhead. By migrating the queue to a redundant pair of RabbitMQ
instances on Amazon MQ, the messaging system becomes highly available. Creating a Multi-AZ Auto Scaling group for EC2 instances
hosting the application ensures that it can automatically scale and maintain availability across multiple Availability Zones. Migrating the
database to a Multi-AZ deployment of Amazon RDS for PostgreSQL provides automatic failover and data replication across multiple
Availability Zones, enhancing availability and reducing operational overhead.

A. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and the PostgreSQL database.

C. Incorrect because it does not provide redundancy for the RabbitMQ queue and does not address the high availability requirement for
the PostgreSQL database.

D. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and does not provide redundancy for
the application instances.
upvoted 2 times
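For the database part of option B, here is a minimal boto3 sketch of a Multi-AZ Amazon RDS for PostgreSQL instance (identifier, sizing, and credentials are placeholders; in practice the password would come from a secrets store):

import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby in a second Availability Zone
# with automatic failover, removing the single-AZ database as a failure point.
rds.create_db_instance(
    DBInstanceIdentifier="orders-postgres",
    Engine="postgres",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="appadmin",
    MasterUserPassword="change-me-please-123",
    MultiAZ=True,
)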

  Gary_Phillips_2007 7 months ago


Selected Answer: B
B for me.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
To meet the requirements of providing the highest availability with the least operational overhead, the solutions architect should take the
following actions:

* By migrating the queue to Amazon MQ, the architect can take advantage of the built-in high availability and failover capabilities of the
service, which will help ensure that messages are delivered reliably and without interruption.

* By creating a Multi-AZ Auto Scaling group for the EC2 instances that host the application, the architect can ensure that the application is
highly available and able to handle increased traffic without the need for manual intervention.

* By migrating the database to a Multi-AZ deployment of Amazon RDS for PostgreSQL, the architect can take advantage of the built-in
high availability and failover capabilities of the service, which will help ensure that the database is always available and able to handle
increased traffic.

Therefore, the correct answer is Option B.


upvoted 4 times

  techhb 9 months, 1 week ago


Selected Answer: B
B is right all explanations below are correct
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B is right answer
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B for me
upvoted 1 times
Question #139 Topic 1

A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3
bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in
larger sizes to the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also wants
to use AWS Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a
pipeline in Amazon SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?

A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification for the analysis S3 bucket. Configure
Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.

B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket to send event notifications to
Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda
and SageMaker Pipelines as targets for the rule.

C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and
SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.

D. Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge
(Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker
Pipelines as targets for the rule.

Correct Answer: A

Community vote distribution


D (71%) B (22%) 4%

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: D
i go for D here
A and B says you are copying the file to another bucket using lambda,
C an D just uses S3 replication to copy the files,

They are doing exactly the same thing while C and D do not require setting up of lambda, which should be more efficient

The question says the team is manually copying the files, automatically replicating the files should be the most efficient method vs
manually copying or copying with lambda.
upvoted 20 times

  vipyodha 3 months, 2 weeks ago


Yes, D, because of least operational overhead, and also because S3 event notifications can only be sent to SNS, SQS, and Lambda, not to SageMaker. EventBridge can send to SageMaker.
upvoted 6 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: B
C and D aren't answers as replicating the S3 bucket isn't efficient, as other teams are starting to use it to store larger docs not related to
the reporting, making replication not useful.
As Amazon SageMaker Pipelines, ..., is now supported as a target for routing events in Amazon EventBridge, means the answer is B
https://aws.amazon.com/about-aws/whats-new/2021/04/new-options-trigger-amazon-sagemaker-pipeline-executions/
upvoted 18 times

  vipyodha 3 months, 2 weeks ago


but B is not least operational overhead , D is least operational overhead
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Nowhere in the question did they mention that other files were unrelated to reporting ....
"The reporting team wants to move the files automatically to analysis S3 bucket as the files enter the initial S3 bucket" where did it say
they were unrelated files ? except for conjecture.
upvoted 6 times

  jdr75 5 months, 3 weeks ago


You misinterpret it ... the reporting team is overloaded, because more teams request their services by uploading more data to the bucket. That's the reason the reporting team needs to automate the process. So ALL the bucket objects need to be copied to the other bucket, and replication is better and cheaper than using Lambda. So the answer is D.
upvoted 2 times
  JayBee65 9 months, 1 week ago
I think you are mis-interpreting the question. I think you need to use all files, including the ones provided by other teams, otherwise
how can you tell what files to copy? I think the point of this statement is to show that more files are in use, and being copied at
different times, rather than suggesting you need to differentiate between the two sources of files.
upvoted 5 times

  vijaykamal Most Recent  4 days, 18 hours ago


Selected Answer: D
Creating a Lambda function for replication is overhead. This dismisses A and B.
S3 event notification cannot be directed to Sagemaker directly. This dismisses C
Correct Answer: D
upvoted 1 times

  TariqKipkemei 3 weeks, 5 days ago


Selected Answer: D
D provide the least operational overhead
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
Option D is the solution with the least operational overhead:

Use S3 replication between buckets


Send S3 events to EventBridge
Add Lambda and SageMaker as EventBridge rule targets
The reasons this has the least overhead:

S3 replication automatically copies new objects to analysis bucket


EventBridge allows easily adding multiple targets for events
No custom Lambda function needed for copying objects
Leverages managed services for event processing
upvoted 2 times

  MutiverseAgent 2 months, 2 weeks ago


Selected Answer: D
Correct: D
B & D are the only possible options, as SageMaker is not supported as a target for S3 event notifications. Using bucket replication as D mentions is more efficient than using a Lambda as B mentions.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
Option D is correct because it combines S3 replication, event notifications, and Amazon EventBridge to automate the copying of files from
the initial S3 bucket to the analysis S3 bucket. It also allows for the execution of Lambda functions and integration with SageMaker
Pipelines.
Option A is incorrect because it suggests manually copying the files using a Lambda function and event notifications, but it does not utilize
S3 replication or EventBridge for automation.
Option B is incorrect because it suggests using S3 event notifications directly with EventBridge, but it does not involve S3 replication or
utilize Lambda for copying the files.
Option C is incorrect because it only involves S3 replication and event notifications without utilizing EventBridge or Lambda functions for
further processing.
upvoted 2 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-
destinations
S3 can NOT send event notification to SageMaker. This rules out C. you have to send to • Amazon EventBridge 1st then to SageMaker
upvoted 5 times
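Building on the point above that S3 event notifications cannot target SageMaker directly, here is a rough boto3 sketch of the EventBridge wiring in option D (the bucket name, ARNs, and target role are placeholders; the Lambda function also needs a resource-based permission so EventBridge can invoke it):

import boto3
import json

s3 = boto3.client("s3")
events = boto3.client("events")

# 1. Turn on EventBridge notifications for the analysis bucket.
s3.put_bucket_notification_configuration(
    Bucket="analysis-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# 2. Rule that matches every object created in that bucket.
events.put_rule(
    Name="analysis-bucket-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["analysis-bucket"]}},
    }),
)

# 3. Fan out to the pattern-matching Lambda function and the SageMaker pipeline.
events.put_targets(
    Rule="analysis-bucket-object-created",
    Targets=[
        {
            "Id": "pattern-matcher",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:pattern-matcher",
        },
        {
            "Id": "sagemaker-pipeline",
            "Arn": "arn:aws:sagemaker:us-east-1:123456789012:pipeline/reporting-pipeline",
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-sagemaker-role",
        },
    ],
)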

  eendee 5 months, 3 weeks ago


Selected Answer: D
Why I believe it is not C? The key here is in the s3:ObjectCreated:"Put". The replication will not fire the s3:ObjectCreated:Put. event. See link
here: https://aws.amazon.com/blogs/aws/s3-event-notification/
upvoted 4 times

  kraken21 6 months ago


Selected Answer: D
D takes care of automated moving and lambda for pattern matching are covered efficiently in D.
upvoted 1 times

  SuketuKohli 6 months, 2 weeks ago


only one destination type can be specified for each event notification in S3 event notifications
upvoted 1 times
  gmehra 6 months, 3 weeks ago
Selected Answer: A
Answer is A
The statement says move the file. Replication won't move the file, it will just create a copy, so obviously C and D are out. When you have event notification and Lambda, why do we need EventBridge as one more service? So the answer is A.
upvoted 2 times

  markw92 3 months, 2 weeks ago


I searched the S3 documentation and couldn't find where S3 event notifications can trigger SageMaker pipelines. They can trigger SNS, SQS, and Lambda. I am not sure A is the right choice.
upvoted 1 times

  Kaireny54 6 months ago


A and B also say: create a Lambda function to COPY. Then, following your idea, A and B are out too... ;)
Obviously the "move" argument isn't accurate in this question.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
Using lambda is one of the requirements. Sns, sqs, lambda, and event bridge are the only s3 notification destinations
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html.
upvoted 1 times

  bullrem 8 months, 1 week ago


both A and D options can meet the requirements with the least operational overhead as they both use automatic event-driven
mechanisms (S3 event notifications and EventBridge rules) to trigger the Lambda function and copy the files to the analysis S3 bucket. The
Lambda function can then run the pattern-matching code, and the files can be sent to the SageMaker pipeline.
Option A, directly copying the files to the analysis S3 bucket using a Lambda function, is more straight forward, option D using S3
replication and EventBridge rules is more flexible and can be more powerful as it allows you to use more complex event-driven flows.
upvoted 2 times

  AHUI 8 months, 3 weeks ago


Ans : D

S3 event notifications can only be sent to SQS, SNS, and Lambda, but NOT to SageMaker.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
upvoted 8 times

  RBKumaran 8 months, 3 weeks ago


Selected Answer: D
A and B are ruled out as it requires an extra Lambda job to do the copy while S3 replication will take care of it with little to no overhead.
C is incorrect because, S3 notifcations are not supported on Sagemake pipeline
(https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-
notification-destinations)
upvoted 6 times

  Mahadeva 8 months, 4 weeks ago


Selected Answer: C
Since we are working already on S3 buckets, configuring S3 event notification (with evet type: s3:ObjectCreated:Put) is much easier than
doing the same through EventBridge (which is an additional service in this case). Less operational overhead.
upvoted 4 times
Question #140 Topic 1

A solutions architect needs to help a company optimize the cost of running an application on AWS. The application will use Amazon EC2
instances, AWS Fargate, and AWS Lambda for compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2
instances can be interrupted at any time. The application front end will run on Fargate, and Lambda will serve the API layer. The front-end
utilization and API layer utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this application? (Choose two.)

A. Use Spot Instances for the data ingestion layer

B. Use On-Demand Instances for the data ingestion layer

C. Purchase a 1-year Compute Savings Plan for the front end and API layer.

D. Purchase 1-year All Upfront Reserved instances for the data ingestion layer.

E. Purchase a 1-year EC2 instance Savings Plan for the front end and API layer.

Correct Answer: AC

Community vote distribution


AC (100%)

  SimonPark Highly Voted  11 months, 1 week ago


Selected Answer: AC
EC2 instance Savings Plan saves 72% while Compute Savings Plans saves 66%. But according to link, it says "Compute Savings Plans
provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2 instance usage regardless
of instance family, size, AZ, region, OS or tenancy, and also apply to Fargate and Lambda usage." EC2 instance Savings Plans are not
applied to Fargate or Lambda
upvoted 15 times

  aba2s Highly Voted  9 months ago


Selected Answer: AC
Compute Savings Plans can be used for EC2 instances and Fargate. Whereas EC2 Savings Plans support EC2 only.
upvoted 5 times

  TariqKipkemei Most Recent  3 weeks, 5 days ago


Selected Answer: AC
Compute Savings Plans can also apply to Fargate and Lambda Usage.
upvoted 1 times

  AKBM7829 1 month ago


BC is the answer.
Data ingestion = Spot Instance, but the keyword "usage unpredictable" points to On-Demand,
and for the API layer it's a Compute Savings Plan.


upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AC
The two most cost-effective purchasing options for this architecture are:

A) Use Spot Instances for the data ingestion layer

C) Purchase a 1-year Compute Savings Plan for the front end and API layer

The reasons are:

Spot Instances provide the greatest savings for flexible, interruptible EC2 workloads like data ingestion.
Savings Plans offer significant discounts for predictable usage like the front end and API layer.
All Upfront and partial/no Upfront RI's don't align well with the sporadic EC2 usage.
On-Demand is more expensive than Spot for flexible EC2 workloads.
By matching purchasing options to the workload patterns, Spot for unpredictable EC2 and Savings Plans for steady-state usage, the
solutions architect optimizes cost efficiency.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
Using Spot Instances for the data ingestion layer will provide the most cost-effective option for sporadic and unpredictable workloads, as
Spot Instances offer significant cost savings compared to On-Demand Instances (Option A).

Purchasing a 1-year Compute Savings Plan for the front end and API layer will provide cost savings for predictable utilization over the
course of a year (Option C).

Option B is less cost-effective as it suggests using On-Demand Instances for the data ingestion layer, which does not take advantage of
cost-saving opportunities.

Option D suggests purchasing 1-year All Upfront Reserved instances for the data ingestion layer, which may not be optimal for sporadic
and unpredictable workloads.

Option E suggests purchasing a 1-year EC2 instance Savings Plan for the front end and API layer, but Compute Savings Plans are typically
more suitable for predictable workloads.
upvoted 3 times
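For the data ingestion layer in option A, here is a minimal boto3 sketch of launching interruption-tolerant Spot capacity (the AMI ID and instance type are placeholders; in practice this would usually sit behind an Auto Scaling group with a mixed-instances policy):

import boto3

ec2 = boto3.client("ec2")

# Spot pricing fits because the ingestion workload tolerates interruption at any time.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=4,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)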
  Abrar2022 4 months ago
Spot instances for data injection because the task can be terminated at anytime and tolerate disruption. Compute Saving Plan is cheaper
than EC2 instance Savings plan.
upvoted 1 times

  Abrar2022 4 months, 1 week ago


EC2 instance Savings Plans are not applied to Fargate or Lambda
upvoted 1 times

  Noviiiice 6 months, 2 weeks ago


Why not B?
upvoted 1 times

  SkyZeroZx 6 months ago


because onDemand is more expensive than spot additionally that the workload has no problem with being interrupted at any time
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: AC
To optimize the cost of running this application on AWS, you should consider the following options:

A. Use Spot Instances for the data ingestion layer


C. Purchase a 1-year Compute Savings Plan for the front-end and API layer

Therefore, the most cost-effective solution for hosting this application would be to use Spot Instances for the data ingestion layer and to
purchase either a 1-year Compute Savings Plan or a 1-year EC2 instance Savings Plan for the front-end and API layer.
upvoted 1 times

  AKBM7829 1 month ago


Yes, but the question also states that it is 'unpredictable', so On-Demand is suitable over Spot Instances, right? Which makes BC the answer.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: AC
Too obvious answer.
upvoted 1 times

  berks 9 months, 1 week ago


Selected Answer: AC
AC
can be interrupted at any time => spot
upvoted 2 times

  TECHNOWARRIOR 9 months, 1 week ago


A,E::
Savings Plan — EC2
Savings Plan offers almost the same savings from a cost as RIs and adds additional Automation around how the savings are being applied.
One way to understand is to say that EC2 Savings Plan are Standard Reserved Instances with automatic switching depending on Instance
types being used within the same instance family and additionally applied to ECS Fargate and Lambda.

Savings Plan — Compute


Savings Plan offers almost the same savings from a cost as RIs and adds additional Automation around how the savings are being applied.
For example, they provide flexibility around instance types and regions so that you don’t have to monitor new instance types that are
being launched. It is also applied to Lambda and ECS Fargate workloads. One way to understand is to say that Compute Savings Plan are
Convertible Reserved Instances with automatic switching depending on Instance types being used.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AC
A and C
upvoted 1 times
  rjam 10 months, 2 weeks ago
its A and C . https://www.densify.com/finops/aws-savings-plan
upvoted 1 times

  bunnychip 11 months, 1 week ago


Selected Answer: AC
api is not EC2.need to use compute savings plan
upvoted 4 times

  Chunsli 11 months, 2 weeks ago


E makes more sense than C. See https://aws.amazon.com/savingsplans/faq/, EC2 instance Savings Plan (up to 72% saving) costs less than
Compute Savings Plan (up to 66% saving)
upvoted 4 times

  Yadav_Sanjay 5 months ago


I Agree
upvoted 1 times

  capepenguin 11 months, 1 week ago


Isn't the EC2 Instance Savings Plan not applicable to Fargate and Lambda?
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 6 times
Question #141 Topic 1

A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each
user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an
Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the
world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?

A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB
as an origin.

B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the
closest Region.

C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content
directly from the ALB.

D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in
the closest Region.

Correct Answer: B

Community vote distribution


A (74%) B (24%)

  huiy Highly Voted  11 months, 3 weeks ago


Selected Answer: A
Answer is A.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content
https://www.examtopics.com/discussions/amazon/view/81081-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 23 times

  MutiverseAgent 2 months, 2 weeks ago


Also, option B does not use CloudFront which means all the traffic will go through the internet; So, despite deploying resources in two
regions and using the lowest latency point, that public internet connection might probably be slower than a connection through a
private aws network as Cloudfront can use.
upvoted 1 times

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: B
Answer should be B,

CloudFront reduces latency if its only static content, which is not the case here.
For Dynamic content, CF cant cache the content so it sends the traffic through the AWS Network which does reduces latency, but it still has
to travel through another region.

For the case with 2 region and Route 53 latency routing, Route 53 detects the nearest resouce (with lowest latency) and routes the traffic
there. Because the traffic does not have to travel to resources far away, it should have the least latency in this case here.
upvoted 8 times

  Mahadeva 8 months, 3 weeks ago


CloudFront does not cache dynamic content. But Latency can be still low for dynamic content because the traffic is on the AWS global
network which is faster than the internet.
upvoted 5 times

  Joxtat 8 months, 2 weeks ago


Amazon CloudFront speeds up distribution of your static and dynamic web content, such as .html, .css, .php, image, and media files.
When users request your content, CloudFront delivers it through a worldwide network of edge locations that provide low latency
and high performance.
upvoted 4 times

  Onimole 11 months ago


Cf works for both static and dynamic content
upvoted 8 times

  Aamee 10 months ago


Can you pls. provide a ref. link from where this info. got extracted?
upvoted 1 times
  manuelemg2007 2 months, 1 week ago
this is link https://aws.amazon.com/es/blogs/aws-spanish/cloudfront-para-la-distribucion-de-contenido-estatico-y-dinamico/
upvoted 1 times

  BrijMohan08 Most Recent  2 weeks, 1 day ago


Selected Answer: D
Option D is the most suitable choice for minimizing latency for all users. It leverages the use of multiple AWS regions, geolocation routing,
and the ALB to ensure that users are directed to the closest region, reducing latency for both static and dynamic content. This approach
provides a high level of availability and performance for global users.
upvoted 1 times

  TariqKipkemei 3 weeks, 4 days ago


Selected Answer: A
CloudFront to the rescue....whoosh
upvoted 2 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
The solution that will ensure the LEAST amount of latency for all users is:

A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the
ALB as an origin.

Here's why:

Option A (Single AWS Region, Amazon CloudFront for both static and dynamic content):

Deploying the application stack in a single AWS Region helps reduce complexity and potential data synchronization issues that might arise
from using multiple regions
upvoted 1 times

  MM_Korvinus 1 month, 3 weeks ago


Selected Answer: B
I think CloudFront does not improve latency in this case, because CF works as a kind of data cache. Caching works fine for static data,
but here each user can have their own dynamically created data, so every user will need to go to the origin. In that case CF can make the
latency worse. On the other hand, Route 53 with latency routing to ALBs in different regions may actually increase the average user latency.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Selected Answer: A
It's A, according this page (https://aws.amazon.com/cloudfront/dynamic-content/) CloudFront is commonly used for "News, sports, local,
weather" as this is content mostly bounded to a region.
upvoted 2 times

  MutiverseAgent 2 months, 2 weeks ago


Also, option B does not use CloudFront, which means all the traffic will go over the public internet. So, despite deploying resources in two
Regions and routing to the lowest-latency one, that public internet connection will likely be slower than a connection through the
private AWS network that CloudFront uses.
upvoted 1 times

  ayeah 3 months, 1 week ago


Selected Answer: A
CloudFront is a CDN that is well adapted for dynamic content.
News, sports, local, weather
Web applications of this type often have a geographic focus with customized content for end users. Content can be cached at edge
locations for varying lengths of time depending on type of content. For example, hourly updates can be cached for up to an hour, while
urgent alerts may only be cached for a few seconds so end users always have the most up to date information available to them. A
content delivery network is a great platform for serving common types of experiences for news and weather such as articles, dynamic
map tiles, overlays, forecasts, breaking news or alert tickers, and video.

https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 1 times

  smartegnine 4 months ago


I would definitely go with C.
If you are serving dynamic content such as web applications or APIs directly from an Amazon Elastic Load Balancer (ELB) or Amazon EC2
instances to end users on the internet, you can improve the performance, availability, and security of your content by using Amazon
CloudFront as your content delivery network.

https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 1 times

  antropaws 4 months ago


Selected Answer: A
A is correct. CF distributes the content globally. Why not deploy the application in 4 or 5 Regions instead of 2? It's an arbitrary choice, and that's
one of the reasons why B and D are not solid solutions.
upvoted 1 times
  Bmarodi 4 months, 1 week ago
Selected Answer: A
I go for option A. CF uses edge locations to speed up content delivery, both static and dynamic, hence A is the right answer.
upvoted 1 times

  bgsanata 4 months, 1 week ago


Selected Answer: B
I would say B.
2 Regions are always better if you aim for better distribution of the traffic. This will split the number of requests sent to the single EC2
instance by half => indirectly improving latency.
It's true that CloudFront improves latency, but it's hard to say if this will be true for ALL users. Having a second Region will definitely improve
the performance for the users with less latency at the moment.
upvoted 1 times

  cheese929 5 months ago


Selected Answer: A
A is correct. Cloudfront can serve both static and dynamic content fast.

https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 2 times

  kevinkn 5 months ago


Selected Answer: B
The lowest latency (option B) is not always equal to the closest resource (option D). And the requirement asks for the lowest latency.
upvoted 1 times

  Shrestwt 5 months, 2 weeks ago


A.
If you are serving dynamic content such as web applications or APIs directly from an Amazon Elastic Load Balancer (ELB) or Amazon EC2
instances to end users on the internet, you can improve the performance, availability, and security of your content by using Amazon
CloudFront as your content delivery network.

https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 2 times

  C_M_M 5 months, 2 weeks ago


CloudFront caches the static content. It also accepts requests for dynamic content and forwards them to the ALB via the AWS backbone (very fast).
upvoted 1 times

  TECHNOWARRIOR 5 months, 3 weeks ago


ANSWER -B :To achieve the least amount of latency for all users, the best approach would be to deploy the application stack in two AWS
Regions and use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest region. This approach will
ensure that users are directed to the lowest latency endpoint available based on their location, which can significantly reduce latency and
improve the performance of the application.
While using Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin can also improve the
performance of the application, it may not be the best approach to achieve the least amount of latency for all users. This is because
CloudFront may not always direct users to the closest endpoint based on their location, which can result in higher latency for some users.
Therefore, using an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest region is the best approach to
achieve the least amount of latency for all users
upvoted 2 times
Question #142 Topic 1

A gaming company is designing a highly available architecture. The application runs on a modified Linux kernel and supports only UDP-based
traffic. The company needs the front-end tier to provide the best possible user experience. That tier must have low latency, route traffic to the
nearest edge location, and provide static IP addresses for entry into the application endpoints.
What should a solutions architect do to meet these requirements?

A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for the application in AWS Application
Auto Scaling.

B. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for the application in an AWS Application
Auto Scaling group.

C. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.

D. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.

Correct Answer: C

Community vote distribution


C (100%)

  dokaedu Highly Voted  11 months ago


Correct Answer: C
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the
world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API
acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by
proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases,
such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or
deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
upvoted 54 times

  stepman 9 months, 3 weeks ago


On top of this, lambda would not be able to run application that is running on a modified Linux kernel. The answer is C .
upvoted 4 times

  praveenas400 9 months ago


Explained very well. ty
upvoted 2 times

  iCcma 10 months, 1 week ago


Thank you, your explanation helped me to better understand even the answer of question 29
upvoted 3 times
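
To make the accepted answer concrete, below is a minimal boto3 sketch of option C: a Global Accelerator with a UDP listener in front of an existing Network Load Balancer. The NLB ARN, accelerator name, and game port are hypothetical placeholders.

import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")
nlb_arn = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
           "loadbalancer/net/game-nlb/abc123")  # hypothetical NLB ARN

accelerator = ga.create_accelerator(
    Name="game-frontend",
    IpAddressType="IPV4",   # two static anycast IP addresses are allocated
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 27015, "ToPort": 27015}],  # hypothetical game port
)["Listener"]

ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
)

# These are the static entry points the question asks for.
print([ip for ip_set in accelerator["IpSets"] for ip in ip_set["IpAddresses"]])

The two static anycast addresses stay fixed for the life of the accelerator, while client packets enter at the nearest edge location and traverse the AWS backbone to the NLB, which is exactly the low-latency, static-IP, UDP combination the question describes.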

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: C
The correct answer is Option C. To meet the requirements;

* AWS Global Accelerator is a service that routes traffic to the nearest edge location, providing low latency and static IP addresses for the
front-end tier. It supports UDP-based traffic, which is required by the application.

* A Network Load Balancer is a layer 4 load balancer that can handle UDP traffic and provide static IP addresses for the application
endpoints.

* An EC2 Auto Scaling group ensures that the required number of Amazon EC2 instances is available to meet the demand of the
application. This will help the front-end tier to provide the best possible user experience.

Option A is not a valid solution because Amazon Route 53 does not support UDP traffic.
Option B is not a valid solution because Amazon CloudFront does not support UDP traffic.
Option D is not a valid solution because Amazon API Gateway does not support UDP traffic.
upvoted 5 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


My mistake, a correction on Option A: it is the Application Load Balancer that does not support UDP traffic. ALBs are designed to load
balance HTTP and HTTPS traffic, and they do not support other protocols such as UDP.
upvoted 1 times

  TariqKipkemei Most Recent  3 weeks, 4 days ago


Selected Answer: C
UDP, static IP = Global Accelerator and Network Load Balancer
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
AWS Global Accelerator provides static IP addresses that serve as a fixed entry point to application endpoints. This allows optimal routing
to the nearest edge location.
Using a Network Load Balancer (NLB) allows support for UDP traffic, as NLBs can handle TCP and UDP protocols.
The application runs on a modified Linux kernel, so using Amazon EC2 instances directly will provide the needed customization and low
latency.
The EC2 instances can be auto scaled based on demand to provide high availability.
API Gateway and Application Load Balancer are more suited for HTTP/HTTPS and REST API type workloads. For a UDP gaming workload,
Global Accelerator + NLB + EC2 is a better architectural fit.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
AWS Global Accelerator is designed to improve the availability and performance of applications by routing traffic through the AWS global
network to the nearest edge locations, reducing latency. By configuring AWS Global Accelerator to forward requests to a Network Load
Balancer, UDP-based traffic can be efficiently distributed across multiple EC2 instances in an Auto Scaling group. Using Amazon EC2
instances for the application allows for customization of the Linux kernel and support for UDP-based traffic. This solution provides static IP
addresses for entry into the application endpoints, ensuring consistent access for users.

Option A suggests using AWS Lambda for the application, but Lambda is not suitable for long-running UDP-based applications and may
not provide the required low latency.
Option B suggests using CloudFront, which is primarily designed for HTTP/HTTPS traffic and does not have native support for UDP-based
traffic.
Option D suggests using API Gateway, which is primarily used for RESTful APIs and does not support UDP-based traffic.
upvoted 2 times

  Abrar2022 4 months ago


AWS Global Accelerator provides static IP addresses.
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
My choice is option C, due to the following: AWS Global Accelerator routes the traffic to the nearest edge locations, it supports UDP-based
traffic, and it provides static IP addresses as well, hence C is the right answer.
upvoted 1 times

  bakamon 6 months ago


Answer : C
CloudFront : Doesn't support static IP addresses
ALB : Doesn't support UDP
upvoted 1 times

  Devsin2000 6 months, 3 weeks ago


C - https://aws.amazon.com/global-accelerator/
upvoted 1 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: C
To meet the requirements of providing low latency, routing traffic to the nearest edge location, and providing static IP addresses for entry
into the application endpoints, the best solution would be to use AWS Global Accelerator. This service routes traffic to the nearest edge
location and provides static IP addresses for the application endpoints. The front-end tier should be configured with a Network Load
Balancer, which can handle UDP-based traffic and provide high availability. Option C, "Configure AWS Global Accelerator to forward
requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an EC2 Auto Scaling group," is the correct answer.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: C
C is obvious choice here.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
C as Global Accelerator is the best choice for UDP based traffic needing static IP address.
upvoted 1 times

  Certified101 9 months, 3 weeks ago


Selected Answer: C
c correct
upvoted 1 times
  Qjb8m9h 9 months, 3 weeks ago
CloudFront is designed to handle the HTTP protocol, while Global Accelerator is best used for both HTTP and non-HTTP protocols such as
TCP and UDP. HENCE C is the ANSWER!
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


C is correct
upvoted 1 times

  PS_R 10 months, 4 weeks ago


Selected Answer: C
CloudFront supports both static and dynamic content, and Global Accelerator means low latency over UDP.
upvoted 1 times
Question #143 Topic 1

A company wants to migrate its existing on-premises monolithic application to AWS. The company wants to keep as much of the front-end code
and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage
each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?

A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.

B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.

C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as
targets.

D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the
target.

Correct Answer: D

Community vote distribution


D (82%) Other

  Ken701 Highly Voted  11 months, 1 week ago


I think the answer here is "D" because usually when you see terms like "monolithic" the answer will likely refer to microservices.
upvoted 26 times

  Bevemo Highly Voted  10 months, 3 weeks ago


Selected Answer: D
D is the organic pattern: lift and shift, decompose into containers, first making the most use of existing code, while new features can be
added over time with Lambda + API Gateway later.
A is the leapfrog pattern, requiring all code to be refactored up front.
upvoted 13 times

  TariqKipkemei Most Recent  3 weeks, 4 days ago


Selected Answer: D
'Non-monolithic', 'smaller applications', 'minimized operational overhead' all screaming 'microservices'.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The reasons are:

ECS allows running Docker containers, so the existing monolithic app can be containerized and run on ECS with minimal code changes.
The app can be broken into smaller microservices by containerizing different components and managing them separately.
ECS provides auto scaling capabilities to scale each microservice independently.
Using an Application Load Balancer with ECS enables distributing traffic across containers and auto scaling.
ECS has minimal operational overhead compared to managing EC2 instances directly.
Serverless options like Lambda and API Gateway would require significant code refactoring which is not ideal for migrating an existing
app.
upvoted 2 times

  MM_Korvinus 1 month, 3 weeks ago


Selected Answer: B
Honestly, from my experience, the minimal operational overhead is with Amplify and API Gateway with Lambdas. Both services have neat
release features, and you do not need to fiddle around with ECS configurations since everything is serverless, which is also highly scalable.
Even though it is much harder to refactor a monolithic app to this setup, it is definitely easier to operate. Not to mention the complexities
around the ALB.
upvoted 5 times

  Fielies23 1 month, 3 weeks ago


I actually agree with this, they have a monolithic application that contains the Front-end and Back-end. They clearly state they want
different teams managing different applications. This is telling me they want a team to manage the front-end and a team to manage
the back-end. A,C and D suggest simply running copies of the monolith application (containing front and back end). So how will
different teams manage different applications?? B is the only one that actually splits front and back end
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
ECS provides a highly scalable and managed environment for running containerized applications, reducing operational overhead. By
setting up an ALB with ECS as the target, traffic can be distributed across multiple instances of the application for scalability and
availability. This solution enables different teams to manage each application independently, promoting team autonomy and efficient
development.

A is more suitable for event-driven and serverless workloads. It may not be the ideal choice for migrating a monolithic application and
maintaining the existing codebase.

B integrates with Lambda and API Gateway, it may not provide the required flexibility for breaking the application into smaller applications
and managing them independently.

C would involve managing the infrastructure and scaling manually. It may result in higher operational overhead compared to using a
container service like ECS.
upvoted 2 times
  antropaws 4 months ago
Selected Answer: D
I was confused about this, but actually Amazon ECS service can be configured to use Elastic Load Balancing to distribute traffic evenly
across the tasks in your service.

https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-application-load-balancer.html
upvoted 1 times
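
As a rough illustration of option D (not an official reference), the sketch below registers one of the smaller services as an ECS task definition and creates a service behind an ALB target group. It assumes Fargate as the launch type, and every name, image, and ARN is a hypothetical placeholder.

import boto3

ecs = boto3.client("ecs")

task_def = ecs.register_task_definition(
    family="orders-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[{
        "name": "orders",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/orders:latest",  # hypothetical
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)["taskDefinition"]

ecs.create_service(
    cluster="portal-cluster",                 # hypothetical cluster name
    serviceName="orders-service",
    taskDefinition=task_def["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0abc", "subnet-0def"],   # hypothetical subnets
        "securityGroups": ["sg-0123"],               # hypothetical security group
        "assignPublicIp": "DISABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                           "targetgroup/orders/abc"),  # hypothetical target group
        "containerName": "orders",
        "containerPort": 8080,
    }],
)

Each team can own its own task definition and service like this one, reusing most of the existing container images, while the shared ALB routes by path or host to the right target group.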

  studynoplay 4 months, 3 weeks ago


Selected Answer: D
monolithic = microservices = ECS
upvoted 3 times

  C_M_M 5 months, 2 weeks ago


I thought the ALB is about distributing load. How do we want to use it to connect decoupled applications that need to call each other? I am
kind of confused why most people are going with D.
I think I will go with A.
upvoted 2 times

  Devsin2000 6 months, 3 weeks ago


I think the answer is A
B is wrong because the requirement is not for the backend. C and D are not suitable because the ALB is not best suited for middle-tier
applications.
upvoted 2 times

  aws4myself 8 months, 1 week ago


I will go with A because of less operational overhead and high availability (Lambda has both).

If it is ECS, there is operational overhead, and it can only be scaled up to the EC2 capacity assigned under it.
upvoted 2 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: D
To meet the requirement of breaking the application into smaller applications that can be managed by different teams, while minimizing
operational overhead and providing high scalability, the best solution would be to host the applications on Amazon Elastic Container
Service (Amazon ECS). Amazon ECS is a fully managed container orchestration service that makes it easy to run, scale, and maintain
containerized applications. Additionally, setting up an Application Load Balancer with Amazon ECS as the target will allow the company to
easily scale the application as needed. Option D, "Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an
Application Load Balancer with Amazon ECS as the target," is the correct answer.
upvoted 1 times

  Zerotn3 9 months ago


Selected Answer: D
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the
target.

Hosting the application on Amazon ECS would allow the company to break the monolithic application into smaller, more manageable
applications that can be managed by different teams. Amazon ECS is a fully managed container orchestration service that makes it easy to
deploy, run, and scale containerized applications. By setting up an Application Load Balancer with Amazon ECS as the target, the company
can ensure that the solution is highly scalable and minimizes operational overhead.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
The correct answer is Option D. To meet the requirements, the company should host the application on Amazon Elastic Container Service
(Amazon ECS) and set up an Application Load Balancer with Amazon ECS as the target.

Option A is not a valid solution because AWS Lambda is not suitable for hosting long-running applications.

Option B is not a valid solution because AWS Amplify is a framework for building, deploying, and managing web applications, not a
hosting solution.
Option C is not a valid solution because Amazon EC2 instances are not fully managed container orchestration services. The company will
need to manage the EC2 instances, which will increase operational overhead.
upvoted 3 times
  career360guru 9 months, 2 weeks ago
Selected Answer: D
It can be C or D depending on how easy it would be to containerize the application. If application needs persistent local data store then C
would be a better choice.
Also, from the use-case description it is not clear whether the application is HTTP-based or not, though all options use an ALB, so
we can safely assume that this is an HTTP-based application.
upvoted 2 times

  career360guru 9 months, 1 week ago


After reading this question again, A will have the minimum operational overhead.
D has higher operational overhead, as it requires scaling EC2 servers up/down for running the ECS containers.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


D is correct
upvoted 1 times

  backbencher2022 11 months ago


Selected Answer: D
I think D is the right choice as they want application to be managed by different people which could be enabled by breaking it into
different containers
upvoted 1 times
Question #144 Topic 1

A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers
report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the
ReadIOPS and CPUUtilization metrics are spiking when monthly reports run.
What is the MOST cost-effective solution?

A. Migrate the monthly reporting to Amazon Redshift.

B. Migrate the monthly reporting to an Aurora Replica.

C. Migrate the Aurora database to a larger instance class.

D. Increase the Provisioned IOPS on the Aurora instance.

Correct Answer: B

Community vote distribution


B (100%)

  TariqKipkemei 3 weeks, 4 days ago


Selected Answer: B
Migrate the monthly reporting to an Aurora Replica
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
Aurora Replicas utilize the same storage as the primary instance so there is no additional storage cost.
Replicas can be created and destroyed easily to match reporting needs.
The primary Aurora instance size does not need to be changed, avoiding additional cost.
Workload is offloaded from the primary instance, improving its performance.
No major software/configuration changes needed compared to options like Redshift.
upvoted 1 times

  cd93 1 month, 2 weeks ago


I don't understand why doubling everything (instances, network cost, maintenance effort, and especially storage) can be considered "cost-
saving" for a simple monthly report...
An instance upgrade can very well be much cheaper. This question is very vague and does not provide enough information.
upvoted 1 times

  cd93 1 month, 2 weeks ago


Silly me, I thought upgrading instance type includes storage upgrade (increase read iops) lol. The question pointed out that hard drive
is also a limiting factor, so correct answer is B.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
B is correct because migrating the monthly reporting to an Aurora Replica can offload the reporting workload from the primary Aurora
instance, reducing the impact on its performance during large reports. Using an Aurora Replica provides scalability and allows the replica
to handle the read-intensive reporting queries, improving the overall performance of the ecommerce application.

A is wrong because migrating to Amazon Redshift introduces additional costs and complexity, and it may not be necessary to switch to a
separate data warehousing service for this specific issue.

C is wrong because simply increasing the instance class of the Aurora database may not be the most cost-effective solution if the
performance issue can be resolved by offloading the reporting workload to an Aurora Replica.

D is wrong because increasing the Provisioned IOPS alone may not address the issue of spikes in CPUUtilization during large reports, as it
primarily focuses on storage performance rather than overall database performance.
upvoted 3 times
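
A minimal boto3 sketch of option B, assuming an existing Aurora MySQL cluster (the cluster and instance identifiers are hypothetical): add a reader instance and point the monthly reporting jobs at the cluster's reader endpoint instead of the writer.

import boto3

rds = boto3.client("rds")

# Add an Aurora Replica to the existing cluster; it shares the cluster storage,
# so only the instance itself adds cost.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-reporting-replica",  # hypothetical name
    DBClusterIdentifier="ecommerce-cluster",             # hypothetical cluster
    DBInstanceClass="db.r6g.large",                      # hypothetical size
    Engine="aurora-mysql",
    PromotionTier=15,  # lowest failover priority, so reporting never becomes the writer first
)

# Reporting jobs should connect here; reads are spread across the replicas.
cluster = rds.describe_db_clusters(DBClusterIdentifier="ecommerce-cluster")["DBClusters"][0]
print("Reader endpoint for reports:", cluster["ReaderEndpoint"])

Pointing the report queries at the reader endpoint keeps the ReadIOPS and CPU spikes off the writer, which is what was degrading the ecommerce application.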

  Abrar2022 4 months, 1 week ago


By using an Aurora Replica for running large reports, the primary database will be relieved of the additional read load, improving
performance for the ecommerce application.
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: B
Option B is right answer.
upvoted 1 times
  studynoplay 4 months, 3 weeks ago
Finally a question where there are no controversies
upvoted 2 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: B
The most cost-effective solution for addressing high ReadIOPS and CPU utilization when running large reports would be to migrate the
monthly reporting to an Aurora Replica. An Aurora Replica is a read-only copy of an Aurora database that is updated in real-time with the
primary database. By using an Aurora Replica for running large reports, the primary database will be relieved of the additional read load,
improving performance for the ecommerce application. Option B, "Migrate the monthly reporting to an Aurora Replica," is the correct
answer.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


Selected Answer: B
Option B: Migrating the monthly reporting to an Aurora Replica may be the most cost-effective solution because it involves creating a
read-only copy of the database that can be used specifically for running large reports without impacting the performance of the primary
database. This solution allows the company to scale the read capacity of the database without incurring additional hardware or I/O costs.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 2 weeks ago


The incorrect solutions are:

Option A: Migrating the monthly reporting to Amazon Redshift may not be cost-effective because it involves creating a new data store
and potentially significant data migration and ETL costs.

Option C: Migrating the Aurora database to a larger instance class may not be cost-effective because it involves changing the
underlying hardware of the database and potentially incurring additional costs for the larger instance.

Option D: Increasing the Provisioned IOPS on the Aurora instance may not be cost-effective because it involves paying for additional
I/O capacity that may not be necessary for other workloads on the database.
upvoted 5 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
B is the best option
upvoted 2 times

  sanket1990 9 months, 3 weeks ago


Selected Answer: B
B is correct
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  backbencher2022 11 months ago


Selected Answer: B
ReadIOPS issue inclining towards Read Replica as the most cost effective solution here
upvoted 4 times

  rjam 11 months ago


Answer B
upvoted 2 times
Question #145 Topic 1

A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses
a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The
application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the
application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?

A. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second
EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.

B. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2
On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.

C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the
instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.

D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template.
Create an Auto Scaling group with the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer
to the Auto Scaling group.

Correct Answer: D

Community vote distribution


D (72%) A (22%) 6%

  BrijMohan08 2 weeks, 1 day ago


Selected Answer: B
Option B is a cost-effective choice that combines the benefits of database migration to RDS, horizontal scaling with EC2 instances, and
control over traffic distribution with Route 53 weighted routing, making it the best solution for the given requirements.
upvoted 2 times

  TariqKipkemei 3 weeks, 4 days ago


Selected Answer: D
Scale seamlessly = Autoscaling group, Amazon Aurora MySQL DB instance
Cost effective = Spot Fleet
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The key reasons are:

Migrating the database to Amazon Aurora MySQL provides a scalable, high performance database to support the application.
Creating an AMI of the web application and using it in an Auto Scaling group with Spot instances allows cheap and efficient scaling of the
web tier.
The Application Load Balancer distributes traffic across the Auto Scaling group.
Spot instances in an Auto Scaling group allow cost-optimized automatic scaling based on demand.
This approach provides high availability and seamless scaling without manual intervention.
upvoted 2 times

  cookieMr 3 months, 1 week ago


D is correct because migrating the database to Amazon Aurora provides better scalability and performance compared to Amazon RDS for
MySQL. Creating an AMI of the web application allows for easy replication of the application on multiple instances. Using a launch
template and Auto Scaling group with Spot Fleet provides cost optimization by leveraging spot instances. Adding an Application Load
Balancer ensures the load is distributed across the instances for seamless scaling.

A is incorrect because using an Application Load Balancer with multiple EC2 instances is a better approach for scalability compared to
relying on a single instance.

B is incorrect because weighted routing in Amazon Route 53 distributes traffic based on fixed weights, which may not dynamically adjust
to the changing load.

C is incorrect because using AWS Lambda to stop and change the instance type based on CPU utilization is not an efficient way to handle
scaling for a web application. Auto Scaling is a better approach for dynamic scaling.
upvoted 2 times
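
As an illustration of option D, here is a rough boto3 sketch that bakes the application AMI into a launch template and attaches an Auto Scaling group to an ALB target group. It uses an Auto Scaling mixed-instances policy weighted toward Spot capacity rather than a literal Spot Fleet request, and every ID and ARN below is a hypothetical placeholder.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

lt = ec2.create_launch_template(
    LaunchTemplateName="analytics-web",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI baked from the PHP web tier
        "InstanceType": "t3.large",
        "SecurityGroupIds": ["sg-0123"],      # hypothetical security group
    },
)["LaunchTemplate"]

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="analytics-web-asg",
    MinSize=1,
    MaxSize=6,
    VPCZoneIdentifier="subnet-0abc,subnet-0def",  # hypothetical subnets
    TargetGroupARNs=[("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                      "targetgroup/web/abc")],    # hypothetical ALB target group
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": lt["LaunchTemplateId"],
                "Version": "$Latest",
            },
        },
        # Mostly Spot capacity for cost, with a capacity-aware allocation strategy.
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)

Because the web tier is now just an AMI behind a load balancer and the database has moved to Aurora, replaced Spot instances simply rejoin the target group, which is what makes the scaling "seamless".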

  Konb 5 months ago


Selected Answer: D
I was tempted to pick A but then I realized there are two key requirements:
- scale seamlessly
- cost-effectively

None of A-C gives seamless scalability. A and B are about adding a second instance (which I assume does not match "scale seamlessly"). C
is about changing the instance type.

Therefore D is the only workable solution to the scalability requirement.


upvoted 4 times

  pbpally 4 months, 3 weeks ago


Yup. Got me too. I picked A, saw D, and then reread the "scale seamlessly" part. D is correct.
upvoted 3 times

  genny 5 months, 3 weeks ago


Selected Answer: A
I wouldn't run my website on Spot Instances. Spot Instances might be terminated at any time, and since I need to run an analytics application,
it's not an option for me. And using Route 53 for load balancing of 2 instances is overkill. I go with A.
upvoted 4 times

  jdr75 5 months, 3 weeks ago


Selected Answer: D
the options that said "launch a second EC2", have no sense ... why 2?, why not 3 or 4 or 5?
so options A and B drop.
C is no sense (Lambda doing this like a Scaling Group?, absurd)
Has to be D. Little extrange cos' Aurora is a very good solution, but NOT CHEAP (remember: cost-effectively).
To be honest, the most cost-effectively is B je je
upvoted 2 times

  SuketuKohli 6 months, 2 weeks ago


A Spot Fleet is a set of Spot Instances and optionally On-Demand Instances that is launched based on criteria that you specify. The Spot
Fleet selects the Spot capacity pools that meet your needs and launches Spot Instances to meet the target capacity for the fleet. By
default, Spot Fleets are set to maintain target capacity by launching replacement instances after Spot Instances in the fleet are terminated.
You can submit a Spot Fleet as a one-time request, which does not persist after the instances have been terminated. You can include On-
Demand Instance requests in a Spot Fleet request.
upvoted 2 times

  KZM 7 months, 3 weeks ago


Ans: D
Both Amazon RDS for MySQL and Amazon Aurora MySQL are designed to provide customers with fully managed relational database
services, but Amazon Aurora MySQL is designed to provide better performance, scalability, and reliability, making it a better option for
customers who need high-performance database services.
upvoted 1 times

  bullrem 8 months, 1 week ago


Selected Answer: D
Using an Auto Scaling group with a launch template and a Spot Fleet allows the company to scale the application seamlessly and cost-
effectively, by automatically adding or removing instances based on the demand, and using Spot instances which are spare compute
capacity available in the AWS region at a lower price than On-Demand instances. And also by migrating the database to Amazon Aurora
MySQL DB instance, it provides higher scalability, availability, and performance than traditional MySQL databases.
upvoted 2 times

  BakedBacon 8 months, 2 weeks ago


Selected Answer: D
The answer is D:
Migrate the database to Amazon Aurora MySQL - this will let the DB scale on its own; it'll scale automatically without needing adjustment.
Create an AMI of the web app and use a launch template - this will make the creation of any future instances of the app seamless. They can
then be added to the Auto Scaling group, which will save money as it scales up and down based on demand.
Using a Spot Fleet to launch instances - this solves the "MOST cost-effective" portion of the question, as Spot Instances come at a huge
discount at the cost of being terminated whenever Amazon deems fit. I think this is why there's a bit of disagreement on this. While it's
the most cost-effective, it would be a terrible choice if Amazon were to terminate that Spot instance during a busy period.
upvoted 1 times

  gustavtd 9 months ago


But I have a question:
for Spot Instances, is it possible that at some point there are no Spot resources available at all? Because availability is not guaranteed, right?
upvoted 4 times

  Rupak10 7 months, 3 weeks ago


A Spot Fleet, not a Spot Instance, is mentioned there. Spot Fleet = Spot Instances + On-Demand Instances. If we cannot get a Spot
Instance, then we can use an On-Demand Instance.
upvoted 4 times

  RupeC 2 months, 1 week ago


Super bit of info. Thanks
upvoted 1 times
  Zerotn3 9 months ago
Selected Answer: D
Option D is the most cost-effective solution that meets the requirements.

Migrating the database to Amazon Aurora MySQL will allow the database to scale automatically, so it can handle an increase in traffic
without manual intervention. Creating an AMI of the web application and using a launch template will allow the company to quickly and
easily launch new instances of the application, which can then be added to an Auto Scaling group. This will allow the application to
automatically scale up and down based on demand, ensuring that there are enough resources to handle busy times without incurring the
cost of running idle resources.

Using a Spot Fleet to launch the instances will allow the company to take advantage of Amazon's spare capacity and get a discount on
their EC2 instances. Attaching an Application Load Balancer to the Auto Scaling group will allow the load to be distributed across all of the
available instances, improving the performance and reliability of the application.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
Option D is the most cost-effective solution because;

* it uses an Auto Scaling group with a launch template and a Spot Fleet to automatically scale the number of EC2 instances based on the
workload.

* using a Spot Fleet allows the company to take advantage of the lower prices of Spot Instances while still providing the required
performance and availability for the application.

* using an Aurora MySQL database instance allows the company to take advantage of the scalability and performance of Aurora.
upvoted 2 times

  techhb 9 months, 1 week ago


D ,as only this has auto scaling
upvoted 1 times

  Sahilbhai 9 months, 1 week ago


ANSWER IS D
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
D is the right option. A is possible but it will have high cost due to on-demand instances. It is not mentioned that 24x7 application
availability is high priority goal.
upvoted 1 times
Question #146 Topic 1

A company runs a stateless web application in production on a group of Amazon EC2 On-Demand Instances behind an Application Load Balancer.
The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight.
Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?

A. Use Spot Instances for the entire workload.

B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capacity that the application needs.

C. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.

D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional capacity that the application needs.

Correct Answer: B

Community vote distribution


B (67%) C (17%) D (15%)

  rob74 Highly Voted  11 months ago


Selected Answer: B
The question mentions that it currently has On-Demand Instances... so I think the cheaper combination is Reserved plus Spot.
upvoted 13 times

  Qjb8m9h Highly Voted  9 months, 3 weeks ago


Answer is B: Reserved is cheaper than the On-Demand the company currently has, and it meets the availability (HA) requirement, unlike Spot
Instances, which can be disrupted at any time.
PRICING BELOW.
On-Demand: 0% discount. There's no commitment from you. You pay the most with this option.
Reserved: 40%-60% discount. 1-year or 3-year commitment from you. You save money from that commitment.
Spot: 50%-90% discount. Ridiculously inexpensive because there's no commitment from the AWS side.
upvoted 7 times

  Modulopi Most Recent  3 days, 5 hours ago


Selected Answer: C
For 8 hours/day on demand works best
upvoted 1 times

  TariqKipkemei 3 weeks, 4 days ago


Selected Answer: B
Main concern here is cost and availability. Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to
On-Demand Instance pricing. Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available
at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible
applications.
upvoted 2 times

  Valder21 4 weeks ago


Selected Answer: D
The application has a STEADY workload in the non-peak hours; therefore it cannot be Spot Instances.
upvoted 2 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
The reasons are:

On-Demand Instances provide stable, reliable baseline capacity for the normal workload.
Spot Instances can provide the additional capacity needed during peak periods at a much lower hourly rate compared to On-Demand.
The stateless nature of the application allows taking advantage of Spot without affecting availability. If Spot is interrupted, the baseline
On-Demand capacity remains available.
Reserved Instances require upfront commitment and may not match the variable workload.
Dedicated Instances are more expensive than On-Demand for baseline capacity.
Using only Spot Instances risks interruption during peak times if capacity is not available.
upvoted 2 times

  toussyn 2 months ago


Selected Answer: C
On demand for baseline:
let's say it costs $100 per hour; then the cost of running it for a day would be $100 * 8 = $800. Times 8 because we'll only be running it for 8
hours in a day.

With a Reserved Instance, on the other hand, we are locked in for a year, but at a 60% discount. That means we'll be paying $40 per hour.
Running it for a day: $40 * 24 = $960
upvoted 3 times
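
To put the comparison the comments are debating into numbers, here is a short back-of-the-envelope sketch. All prices and instance counts are hypothetical placeholders, not real AWS rates; the point is only the shape of the calculation behind option B (Reserved for the 24x7 baseline, Spot for the business-day peak).

# Illustrative only -- hypothetical hourly prices, not real AWS rates.
on_demand = 0.10   # $/instance-hour (hypothetical)
reserved = 0.06    # ~40% discount on the steady baseline (hypothetical)
spot = 0.03        # ~70% discount on burst capacity (hypothetical)

baseline_instances = 4        # needed around the clock
peak_extra_instances = 8      # only during the 8-hour business-day peak
weekdays, hours_per_day, peak_hours = 22, 24, 8

all_on_demand = on_demand * (baseline_instances * hours_per_day * 30
                             + peak_extra_instances * peak_hours * weekdays)
reserved_plus_spot = (reserved * baseline_instances * hours_per_day * 30
                      + spot * peak_extra_instances * peak_hours * weekdays)

print(f"All On-Demand:   ${all_on_demand:,.2f}/month")
print(f"Reserved + Spot: ${reserved_plus_spot:,.2f}/month")

Under these made-up rates the mixed approach roughly halves the bill while the baseline availability never depends on Spot capacity, which is the trade-off the accepted answer B relies on.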
  cookieMr 3 months, 1 week ago
Selected Answer: B
B is correct because it combines the use of Reserved Instances and Spot Instances to minimize EC2 costs while ensuring availability.
Reserved Instances provide cost savings for the baseline level of usage during the heavy usage period, while Spot Instances are utilized
for any additional capacity needed during peak times, taking advantage of their cost-effectiveness.

A is incorrect because relying solely on Spot Instances for the entire workload can result in potential interruptions and instability during
peak usage periods.

C is incorrect because using On-Demand Instances for the baseline level of usage does not provide the cost savings and long-term
commitment benefits that Reserved Instances offer.

D is incorrect because using Dedicated Instances for the baseline level of usage incurs additional costs without significant benefits for this
scenario. Dedicated Instances are typically used for compliance or regulatory requirements rather than cost optimization.
upvoted 2 times

  Bmarodi 3 months, 3 weeks ago


Selected Answer: B
A company runs a stateless web application in production. This means that the application can be stopped and restarted again.
upvoted 1 times

  justhereforccna 4 months, 3 weeks ago


Selected Answer: D
Answer is D, you cannot guarantee availability with spot instances
upvoted 3 times

  KalarAzar 3 months, 2 weeks ago


The application is stateless.
upvoted 1 times


  arjundevops 5 months, 2 weeks ago


Selected Answer: D
To make the application scale seamlessly, Option D is more suitable because it involves using Auto Scaling with Spot Fleet. Auto Scaling
allows you to automatically adjust the number of EC2 instances in response to changes in demand, while Spot Fleet allows you to request
and maintain a fleet of Spot Instances at a lower cost compared to On-Demand Instances.
upvoted 2 times

  bakamon 6 months ago


strange, it wants a solution without affecting availability but has not given the right option.. spot instances cannot guarantee availability
even at night... or whatever...
upvoted 2 times

  KalarAzar 3 months, 2 weeks ago


The application is stateless.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
Option B is the most cost-effective solution that meets the requirements.

* Using Reserved Instances for the baseline level of usage will provide a discount on the EC2 costs for steady overnight and weekend
usage.

* Using Spot Instances for any additional capacity that the application needs during peak usage times will allow the company to take
advantage of spare capacity in the region at a lower cost than On-Demand Instances.
upvoted 4 times

  techhb 9 months, 1 week ago


Selected Answer: B
B is correct
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B is most cost effective without compromising the availability for baseline load.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B IS CORRECT
upvoted 1 times
Question #147 Topic 1

A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past
month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?

A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.

B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.

C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.

D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep
Archive.

Correct Answer: B

Community vote distribution


B (100%)

  rjam Highly Voted  10 months, 2 weeks ago


Selected Answer: B
Why not AWS Backup? Glacier Deep Archive is not supported by AWS Backup:

https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html
AWS Backup allows you to backup your S3 data stored in the following S3 Storage Classes:
• S3 Standard
• S3 Standard - Infrequently Access (IA)
• S3 One Zone-IA
• S3 Glacier Instant Retrieval
• S3 Intelligent-Tiering (S3 INT)
upvoted 7 times

  tdkcumberland 10 months, 1 week ago


AWS Backup costs something; setting up an S3 lifecycle policy doesn't.
upvoted 3 times

  TariqKipkemei Most Recent  3 weeks, 4 days ago


Selected Answer: B
S3 Lifecycle policies to the rescue
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
B is the most cost-effective solution. Storing the logs in S3 and using S3 Lifecycle policies to transition logs older than 1 month to S3
Glacier Deep Archive allows for cost optimization based on data access patterns. Since logs older than 1 month are rarely accessed,
moving them to S3 Glacier Deep Archive helps minimize storage costs while still retaining the logs for the required 10-year period.

A is incorrect because using AWS Backup to move logs to S3 Glacier Deep Archive can incur additional costs and complexity compared to
using S3 Lifecycle policies directly.

C adds unnecessary complexity and costs by involving CloudWatch Logs and AWS Backup when direct management through S3 is
sufficient.

D is incorrect because using S3 Lifecycle policies to move logs from CloudWatch Logs to S3 Glacier Deep Archive is not a valid option.
CloudWatch Logs and S3 have separate storage mechanisms, and S3 Lifecycle policies cannot be applied directly to CloudWatch Logs.
upvoted 2 times
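
A minimal boto3 sketch of the lifecycle rule behind option B; the bucket name and prefix are hypothetical placeholders, and the expiration matches the 10-year retention requirement.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="critical-app-logs",          # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # hypothetical key prefix
            # Keep the last month in S3 Standard for troubleshooting,
            # then move older logs to the cheapest archive tier.
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            # Delete after roughly 10 years of retention.
            "Expiration": {"Days": 3650},
        }],
    },
)

Once the rule is in place the transitions and deletions happen automatically, so there is no recurring job or extra service to pay for, which is why this beats the AWS Backup and CloudWatch Logs options on cost.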

  Mamiololo 8 months, 2 weeks ago


B is correct..
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
Option B (Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1-month-old to S3 Glacier Deep Archive) would
meet these requirements in the most cost-effective manner.

This solution would allow the application team to quickly access the logs from the past month for troubleshooting, while also providing a
cost-effective storage solution for the logs that are rarely accessed and need to be retained for 10 years.
upvoted 1 times
  career360guru 9 months, 2 weeks ago
Selected Answer: B
Option B is most cost-effective. Moving logs to CloudWatch Logs may incur additional cost.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  ArielSchivo 10 months, 3 weeks ago


Selected Answer: B
S3 + Glacier is the most cost effective.
upvoted 3 times

  Bevemo 10 months, 3 weeks ago


Selected Answer: B
D works, archive cloudwatch logs to S3 .... but is an additional service to pay for over B.
upvoted 1 times

  Aamee 10 months ago


CloudWatch logs can't store around 10 TB of data per month I believe so both C and D options are ruled out already.
upvoted 1 times

  masetromain 11 months ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/80772-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #148 Topic 1

A company has a data ingestion workflow that includes the following components:
An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries
An AWS Lambda function that processes and stores the data
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested
unless the company manually reruns the job.
What should a solutions architect do to ensure that all notifications are eventually processed?

A. Configure the Lambda function for deployment across multiple Availability Zones.

B. Modify the Lambda function's configuration to increase the CPU and memory allocations for the function.

C. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time between retries.

D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process
messages in the queue.

Correct Answer: D

Community vote distribution


D (85%) C (15%)

  bunnychip Highly Voted  11 months, 1 week ago


Selected Answer: D
*ensure that all notifications are eventually processed*
upvoted 11 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: D
Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process
messages in the queue.
upvoted 1 times

  Help2023 7 months, 2 weeks ago


Selected Answer: D
This is why https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html
upvoted 3 times

  CaoMengde09 7 months, 3 weeks ago


C is not the right answer since after several retries SNS discards the message, which doesn't align with the requirement. D is the right answer.
upvoted 3 times

  CaoMengde09 7 months, 3 weeks ago


Best solution to process failed SNS notifications is using sns-dead-letter-queues (SQS Queue for reprocessing)
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 3 times

  SilentMilli 8 months, 3 weeks ago


Selected Answer: D
To ensure that all notifications are eventually processed, the best solution would be to configure an Amazon Simple Queue Service (SQS)
queue as the on-failure destination for the SNS topic. This will allow the notifications to be retried until they are successfully processed.
The Lambda function can then be modified to process messages in the queue, ensuring that all notifications are eventually processed.
Option D, "Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to
process messages in the queue," is the correct answer.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
I choose Option D as the correct answer.

To ensure that all notifications are eventually processed, the solutions architect can set up an Amazon SQS queue as the on-failure
destination for the Amazon SNS topic. This way, when the Lambda function fails due to network connectivity issues, the notification will be
sent to the queue instead of being lost. The Lambda function can then be modified to process messages in the queue, ensuring that all
notifications are eventually processed.
upvoted 3 times
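
One way to realize option D, assuming the Lambda function is subscribed to the SNS topic and therefore invoked asynchronously: route failed invocations to an SQS queue as an on-failure destination, then drain that queue with a re-processing function. This is only a sketch; the function names and queue ARN below are hypothetical placeholders.

import boto3

lambda_client = boto3.client("lambda")
queue_arn = "arn:aws:sqs:us-east-1:111122223333:ingest-failures"  # hypothetical queue

# Failed async invocations (e.g. during network issues) land in the queue
# instead of being dropped after the built-in retries.
lambda_client.put_function_event_invoke_config(
    FunctionName="ingest-data",            # hypothetical existing function
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {"Destination": queue_arn},
    },
)

# Automatically re-drive the queued events into a processing function.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="ingest-data-redrive",    # hypothetical re-processing function
    BatchSize=10,
)

The combination of retries plus a durable failure queue gives the outcome the question asks for: no notification is lost, and every failed delivery is eventually processed without a manual rerun.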

  techhb 9 months, 1 week ago


Selected Answer: D
Option D to ensure that all notifications are eventually processed you need to use SQS.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C is the right option.
SNS does not have any "On Failure" delivery destination. One needs to configure a dead-letter queue and configure SQS to read from there.
So given this, option D is incorrect.
upvoted 2 times

  JayBee65 9 months, 1 week ago


I don't think that's right
"A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to
subscribers successfully. Messages that can't be delivered due to client errors or server errors are held in the dead-letter queue for
further analysis or reprocessing" from https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html.
This is pretty much what is being described in D.
Plus C will only retry message processing, and network problems could still prevent the message from being processed, but the
question states "ensure that all notifications are eventually processed". So C does not meet the requirements but D does look to do
this.
upvoted 4 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: D
Is correct.
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


If you want to ensure that all notifications are eventually processed you need to use SQS.
upvoted 1 times

  Wajif 10 months ago


Selected Answer: D
C isn't specific. Hence D.
upvoted 1 times

  LeGloupier 10 months, 1 week ago


Selected Answer: C
"on-failure destination" doesn't exist, only dead letter queue exist.
that's why I am leaning for C
upvoted 1 times

  Wajif 10 months ago


Dead-letter queue doesn't exist in SNS. They are specifically saying a new queue will be configured for failures from SNS. Hence D.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


D is correct
upvoted 1 times

  ds0321 10 months, 2 weeks ago


Selected Answer: D
D is the answer
upvoted 1 times

  ArielSchivo 10 months, 3 weeks ago


Selected Answer: D
Option C could work, but the maximum retry window is 23 days. After that, messages are deleted. And you do not want that to happen! So,
Option D.
upvoted 4 times

  SimonPark 11 months ago


Selected Answer: D
imho, D is the answer
upvoted 1 times
Question #149 Topic 1

A company has a service that produces event data. The company wants to use AWS to process the event data as it is received. The data is written
in a specific order that must be maintained throughout processing. The company wants to implement a solution that minimizes operational
overhead.
How should a solutions architect accomplish this?

A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process
messages from the queue.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
AWS Lambda function as a subscriber.

C. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process
messages from the queue independently.

D. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.

Correct Answer: A

Community vote distribution


A (100%)

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process
messages from the queue.
upvoted 1 times

  cookieMr 3 months, 1 week ago


A is the correct solution. By creating an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages and setting up an AWS
Lambda function to process messages from the queue, the company can ensure that the order of the event data is maintained
throughout processing. SQS FIFO queues guarantee the order of messages and are suitable for scenarios where strict message ordering
is required.

B is incorrect because Amazon Simple Notification Service (Amazon SNS) topics are not designed to preserve message order. SNS is a
publish-subscribe messaging service and does not guarantee the order of message delivery.

C is incorrect because using an SQS standard queue does not guarantee the order of message processing. SQS standard queues provide
high throughput and scale, but they do not guarantee strict message ordering.

D is incorrect because configuring an SQS queue as a subscriber to an SNS topic does not ensure message ordering. SNS topics distribute
messages to subscribers independently, and the order of message processing is not guaranteed.
upvoted 2 times
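
As a rough illustration of option A, a minimal boto3 sketch of a FIFO queue that preserves ordering per message group; the queue name, group ID, and sample events are hypothetical, and the Lambda event source mapping that consumes the queue is assumed to be configured separately.

import boto3

sqs = boto3.client("sqs")

# FIFO queues must have a name ending in ".fifo".
queue_url = sqs.create_queue(
    QueueName="event-data.fifo",
    Attributes={
        "FifoQueue": "true",
        # Deduplicate identical messages sent within 5 minutes by content hash.
        "ContentBasedDeduplication": "true",
    },
)["QueueUrl"]

# Messages with the same MessageGroupId are delivered strictly in order,
# which is what preserves the required processing order for the Lambda consumer.
for event in ["created", "updated", "closed"]:
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=event,
        MessageGroupId="device-42",  # hypothetical group key
    )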

  cheese929 5 months ago


Selected Answer: A
A is correct. Use FIFO to process in the specific order required
upvoted 2 times

  WherecanIstart 7 months ago


Selected Answer: A
Option A is correct...data is processed in the correct order
upvoted 1 times

  Buruguduystunstugudunstuy 9 months ago


Selected Answer: A
The correct solution is Option A. Creating an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages and setting up an
AWS Lambda function to process messages from the queue will ensure that the event data is processed in the correct order and minimize
operational overhead.

Option B is incorrect because using Amazon Simple Notification Service (Amazon SNS) does not guarantee the order in which messages
are delivered.

Option C is incorrect because using an Amazon SQS standard queue does not guarantee the order in which messages are processed.

Option D is incorrect because using an Amazon SQS queue as a subscriber to an Amazon SNS topic does not guarantee the order in which
messages are processed.
upvoted 3 times
  techhb 9 months, 1 week ago
Only A is the right option here.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A is the best option.
upvoted 2 times

  alect096 9 months, 2 weeks ago


Selected Answer: A
"The data is written in a specific order that must be maintained throughout processing" --> FIFO
upvoted 4 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: A
specific order = FIFO
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  david76x 9 months, 3 weeks ago


Selected Answer: A
Definitely A
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times

  ArielSchivo 10 months, 3 weeks ago


Selected Answer: A
FIFO means order, so Option A.
upvoted 4 times

  rjam 11 months ago


Order means FIFO, so option A.
upvoted 3 times
Question #150 Topic 1

A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a
solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more
than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time,
the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?

A. Create Amazon CloudWatch composite alarms where possible.

B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.

C. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.

D. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.

Correct Answer: A

Community vote distribution


A (100%)

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
Composite alarms determine their states by monitoring the states of other alarms. You can **use composite alarms to reduce alarm
noise**. For example, you can create a composite alarm where the underlying metric alarms go into ALARM when they meet specific
conditions. You then can set up your composite alarm to go into ALARM and send you notifications when the underlying metric alarms go
into ALARM by configuring the underlying metric alarms never to take actions. Currently, composite alarms can take the following actions:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
upvoted 21 times
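
For reference, a minimal boto3 sketch of the composite alarm described above; the child alarm names and the SNS topic ARN are hypothetical and are assumed to exist already.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Assumes two metric alarms already exist, e.g. "high-cpu" on CPUUtilization > 50%
# and "high-read-iops" on disk read IOPS, each created with no actions of their own.
cloudwatch.put_composite_alarm(
    AlarmName="cpu-and-read-iops-high",
    # The composite alarm goes into ALARM only when both children are in ALARM,
    # which filters out short CPU bursts and reduces false alarms.
    AlarmRule='ALARM("high-cpu") AND ALARM("high-read-iops")',
    ActionsEnabled=True,
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # hypothetical topic
)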

  Modulopi Most Recent  3 days, 5 hours ago


Selected Answer: A
A: Composite alarms determine their states by monitoring the states of other alarms. You can use composite alarms to reduce alarm
noise. For example, you can create a composite alarm where the underlying metric alarms go into ALARM when they meet specific
conditions.
upvoted 1 times

  TariqKipkemei 3 weeks, 4 days ago


Selected Answer: A
Composite alarms was designed to handle this scenario.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
The key reasons are:

Composite alarms allow defining alarms with multiple metrics and conditions, like high CPU AND high read IOPS in this case.
Composite alarms can avoid false positives triggered by a single metric spike.
Dashboards help visualize but won't take automated action. Synthetics tests application availability but doesn't address the metrics.
Single metric alarms with multiple thresholds can't correlate across metrics and may still trigger false positives.
Composite alarms allow acting quickly when both CPU and IOPS are high, per the stated need.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
By creating composite alarms in CloudWatch, the solutions architect can combine multiple metrics, such as CPU utilization and read IOPS,
into a single alarm. This allows the company to take action only when both conditions are met, reducing false alarms and focusing on
meaningful alerts.

B can help in monitoring the overall health and performance of the application. However, it does not directly address the specific
requirement of triggering an action when CPU utilization and read IOPS exceed certain thresholds simultaneously.

C. Creating CloudWatch Synthetics canaries is useful for actively monitoring the application's behavior and availability. However, it does
not directly address the specific requirement of monitoring CPU utilization and read IOPS to trigger an action.

D. Creating single CloudWatch metric alarms with multiple metric thresholds where possible can be an option, but it does not address the
requirement of triggering an action only when both CPU utilization and read IOPS exceed their respective thresholds simultaneously.
upvoted 3 times

  Abrar2022 4 months ago


The composite alarm goes into ALARM state only if all conditions of the rule are met.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
Option A, creating Amazon CloudWatch composite alarms, is correct because it allows the solutions architect to create an alarm that is
triggered only when both CPU utilization is above 50% and read IOPS on the disk are high at the same time. This meets the requirement to
act as soon as possible if both conditions are met, while also reducing the number of false alarms by ensuring that the alarm is triggered
only when both conditions are met.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


The incorrect solutions are:

In contrast, Option B, creating Amazon CloudWatch dashboards, would not directly address the requirement to trigger an alarm when
both CPU utilization is high and read IOPS on the disk are high at the same time. Dashboards can be useful for visualizing metric data
and identifying trends, but they do not have the capability to trigger alarms based on multiple metric thresholds.

Option C, using Amazon CloudWatch Synthetics canaries, may not be the best choice for this scenario, as canaries are used for
synthetic testing rather than for monitoring live traffic. Canaries can be useful for monitoring the availability and performance of an
application, but they may not be the most effective way to monitor the specific metric thresholds and conditions described in this
scenario.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option D, creating single Amazon CloudWatch metric alarms with multiple metric thresholds, would not allow the solutions architect
to create an alarm that is triggered only when both CPU utilization and read IOPS on the disk are high at the same time. Instead, the
alarm would be triggered whenever any of the specified metric thresholds are exceeded, which may result in a higher number of
false alarms.
upvoted 3 times

  BENICE 9 months, 2 weeks ago


A is correct answer
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  Qjb8m9h 9 months, 3 weeks ago


The AWS::CloudWatch::CompositeAlarm type creates or updates a composite alarm. When you create a composite alarm, you specify a
rule expression for the alarm that takes into account the alarm states of other alarms that you have created. The composite alarm goes
into ALARM state only if all conditions of the rule are met.

The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms.Using composite
alarms can reduce alarm noise.
upvoted 3 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times
Question #151 Topic 1

A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use
only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)

A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-
northeast-3.

B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.

C. Use AWS Organizations to configure service control policies (SCPS) that prevent VPCs from gaining internet access. Deny access to all
AWS Regions except ap-northeast-3.

D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the
use of any AWS Region other than ap-northeast-3.

E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed
outside of ap-northeast-3.

Correct Answer: AC

Community vote distribution


AC (65%) CE (16%) Other

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: AC
agree with A and C
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_vpc.html#example_vpc_2
upvoted 15 times

  rjam Highly Voted  10 months, 2 weeks ago


https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-
requirements/
*Disallow internet access for an Amazon VPC instance managed by a customer
upvoted 9 times

  rjam 10 months, 2 weeks ago


Option A and C
upvoted 2 times

  rjam 10 months, 2 weeks ago


*You can use data-residency guardrails to control resources in any AWS Region.
upvoted 1 times

  BrijMohan08 Most Recent  2 weeks, 1 day ago


Selected Answer: AC
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-
northeast-3.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all
AWS Regions except ap-northeast-3.
upvoted 1 times

  TariqKipkemei 3 weeks, 3 days ago


Selected Answer: AC
Use Control Tower to implement data residency guardrails and Service Control Policies (SCPS) to prevent VPCs from gaining internet
access.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AC
AWS Control Tower guardrails and AWS Organizations SCPs provide centralized, automated mechanisms to enforce no internet
connectivity for VPCs and restrict Region access to only ap-northeast-3.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: AC
A. By using Control Tower, the company can enforce data residency guardrails that restrict internet access for VPCs and deny access to all Regions except the required ap-northeast-3 Region.
C. With Organizations, the company can configure SCPs to prevent VPCs from gaining internet access. By denying access to all Regions
except ap-northeast-3, the company ensures that VPCs can only be deployed in the specified Region.

Option B is incorrect because using rules in AWS WAF alone does not address the requirement of denying access to all AWS Regions except
ap-northeast-3.

Option D is incorrect because configuring outbound rules in network ACLs and IAM policies for users can help restrict traffic and access,
but it does not enforce the company's requirement of denying access to all Regions except ap-northeast-3.

Option E is incorrect because using AWS Config and managed rules can help detect and alert for specific resources and configurations, but
it does not directly enforce the restriction of internet access or deny access to specific Regions.
upvoted 3 times
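
As a sketch of what option C could look like in practice (not an official policy), the following boto3 snippet creates and attaches an SCP that denies non-ap-northeast-3 Regions and internet gateway actions; the OU ID and the policy contents are illustrative only.

import json
import boto3

org = boto3.client("organizations")

# Deny everything outside ap-northeast-3 and deny creating/attaching internet
# gateways, so VPCs in member accounts cannot reach the internet.
# (A production region-deny SCP would normally exempt global services such as IAM
# via NotAction; that is omitted here to keep the sketch short.)
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOtherRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}},
        },
        {
            "Sid": "DenyInternetGateways",
            "Effect": "Deny",
            "Action": ["ec2:CreateInternetGateway", "ec2:AttachInternetGateway"],
            "Resource": "*",
        },
    ],
}

policy = org.create_policy(
    Name="region-and-internet-restrictions",
    Description="Deny Regions other than ap-northeast-3 and deny internet gateways",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-12345678",  # hypothetical OU
)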
  Abrar2022 3 months, 3 weeks ago
Didn't know that SCPs (Service Control Policies) could be used to deny users internet access. Good to know. I always thought they were about controlling who can and can't access AWS services.
upvoted 1 times

  hicham0101 5 months, 1 week ago


Agree with A and C
https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-
requirements/
upvoted 1 times

  yallahool 5 months, 3 weeks ago


I choose C and D.
For Control Tower, it can't be A because ap-northeast-3 doesn't support it!
Also, in the case of E, it only detects and warns, so it does not actually prevent an internet connection (although the wording is a little unclear).
upvoted 1 times

  michellemeloc 5 months ago


I just checked; now it's supported!!!
upvoted 1 times

  notacert 5 months, 3 weeks ago


Selected Answer: AC
A and C
upvoted 1 times

  datz 5 months, 3 weeks ago


Selected Answer: CD
C/D

A - CANNOT BE!!! AWS Control Tower is not available in ap-northeast-3! Check your console.
B - For sure no.
C - SCPs (Service Control Policies) - for sure.
D - A deny outbound rule to be placed in prod, and also an IAM policy to deny users from creating services outside ap-northeast-3.
E - It creates an alert, which means the event still happens but an alert is triggered, so I think it's not good either.
upvoted 2 times

  darn 5 months, 2 weeks ago


False, Control Tower is in Osaka NorthEast 3
https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html
upvoted 1 times

  Kaireny54 6 months ago


Selected Answer: CD
Control Tower isn't available in ap-northeast-3 (only available in ap-northeast-1 and 2: https://www.aws-services.info/controltower.html).
For answer E, it creates an alert, which means the event still happens but an alert is triggered, so I think it's not good either.
That's why I would go for C and D.
upvoted 2 times

  Bmarodi 4 months, 1 week ago


It's available now on the same link you pasted earlier: ap-northeast-3 Asia Pacific (Osaka) 2023-04-20.
upvoted 1 times

  darn 5 months, 2 weeks ago


same page you posted:
ap-northeast-3 Asia Pacific (Osaka) 2023-04-20 https://aws.amazon.com/controltower
upvoted 1 times

  darn 5 months, 2 weeks ago


False, Control Tower is in Osaka NorthEast 3
https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html
upvoted 1 times
  WherecanIstart 6 months, 1 week ago
Selected Answer: CE
AWS Control tower is not available in ap-northeast-3!

https://www.aws-services.info/controltower.html
upvoted 1 times

  warioverde 6 months, 1 week ago


What's wrong with B?
upvoted 2 times

  AlessandraSAA 6 months, 2 weeks ago


Selected Answer: CE
A - CANNOT BE!!! AWS Control Tower is not available in ap-northeast-3! Check your console.
upvoted 4 times

  moaaz86 7 months, 1 week ago


From ChatGPT :)

Control Tower: Can


Yes, AWS Control Tower can implement data residency guardrails to deny internet access and restrict access to AWS Regions except for
one.
To restrict access to AWS regions, you can create a guardrail using AWS Organizations to deny access to all AWS regions except for the one
that you want to allow. This can be done by creating an organizational policy that restricts access to specific AWS services and resources
based on region.

Config: Can(not).
Yes, AWS Config can help you enforce restrictions on internet access and control access to specific AWS Regions using AWS Config Rules.
It's worth noting that AWS Config is a monitoring service that provides continuous assessment of your AWS resources against desired
configurations. While AWS Config can alert you when a configuration change occurs, it cannot directly restrict access to resources or
enforce specific policies. For that, you may need to use other AWS services such as AWS Identity and Access Management (IAM), AWS
Firewall Manager, or AWS Organizations.
upvoted 3 times

  KZM 7 months, 3 weeks ago


Option A uses AWS Control Tower to implement data residency guardrails, but it does not prevent internet access by itself. It only denies
access to all AWS Regions except ap-northeast-3. The requirement states that administrators are not permitted to connect VPCs to the
internet, so Option A does not meet this requirement.
upvoted 2 times
Question #152 Topic 1

A company uses a three-tier web application to provide training to new employees. The application is accessed for only 12 hours every day. The
company is using an Amazon RDS for MySQL DB instance to store information and wants to minimize costs.
What should a solutions architect do to meet these requirements?

A. Configure an IAM policy for AWS Systems Manager Session Manager. Create an IAM role for the policy. Update the trust relationship of the
role. Set up automatic start and stop for the DB instance.

B. Create an Amazon ElastiCache for Redis cache cluster that gives users the ability to access the data from the cache when the DB instance
is stopped. Invalidate the cache after the DB instance is started.

C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS. Attach the role to the EC2 instance. Configure a
cron job to start and stop the EC2 instance on the desired schedule.

D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon CloudWatch Events) scheduled rules
to invoke the Lambda functions. Configure the Lambda functions as event targets for the rules.

Correct Answer: D

Community vote distribution


D (79%) A (21%)

  study_aws1 Highly Voted  11 months ago


https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/

It is option D. Option A could have been applicable had it been AWS Systems Manager State Manager & not AWS Systems Manager
Session Manager
upvoted 26 times

  123jhl0 Highly Voted  11 months, 2 weeks ago


Selected Answer: A
A is true for sure. "Schedule Amazon RDS stop and start using AWS Systems Manager" Steps in the documentation:
1. Configure an AWS Identity and Access Management (IAM) policy for State Manager.
2. Create an IAM role for the new policy.
3. Update the trust relationship of the role so Systems Manager can use it.
4. Set up the automatic stop with State Manager.
5. Set up the automatic start with State Manager.
https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-systems-manager/
upvoted 8 times

  ArielSchivo 10 months, 3 weeks ago


Option A refers to Session Manager, not State Manager as you pointed, so it is wrong. Option D is valid.
upvoted 7 times

  Bevemo 10 months, 3 weeks ago


Agree with A; State Manager is free to use within limits, and you don't need to code or manage a Lambda function.
upvoted 1 times

  Kien048 11 months, 1 week ago


Looks like State Manager and Session Manager are used for different purposes, even though both are in the same dashboard console.
upvoted 1 times

  Kien048 11 months, 1 week ago


And of course, D works, so if A were also right, the question would be wrong.
upvoted 3 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: D
You can use AWS Lambda and Amazon EventBridge to schedule a Lambda function to stop and start the idle databases with specific tags
to save on compute costs.

https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-
lambda/#:~:text=you%20to%20schedule%20a-,Lambda%20function,-to%20stop%20and%20start
upvoted 1 times

  lemur88 1 month, 1 week ago


Selected Answer: D
Here is the recommended solutions which describes choice D - https://aws.amazon.com/blogs/database/save-costs-by-automating-the-
start-and-stop-of-amazon-rds-instances-with-aws-lambda-and-amazon-eventbridge/
upvoted 1 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: D
AWS Lambda functions can be used to start and stop RDS instances programmatically.
EventBridge scheduled rules can trigger the Lambda functions at specified times daily.
This allows fully automating the starting and stopping of RDS on a schedule to match usage patterns.
RDS billing is per hour when instance is running, so stopping when not in use significantly reduces costs.
Using Lambda and EventBridge is simpler and more robust than cron jobs on EC2.
ElastiCache and Systems Manager Session Manager are useful tools but do not directly address scheduled RDS start/stop.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
By using AWS Lambda functions triggered by Amazon EventBridge scheduled rules, the company can automate the start and stop actions
for the Amazon RDS for MySQL DB instance based on the 12-hour access period. This allows them to minimize costs by only running the
DB instance when it is needed.

Option A is not the most suitable solution because it refers to IAM policies for AWS Systems Manager Session Manager, which is primarily
used for interactive shell access to EC2 instances and does not directly address the requirement of starting and stopping the DB instance.

Option B is not the most suitable solution because it suggests using Amazon ElastiCache for Redis as a cache cluster, which may not
provide the desired cost optimization for the DB instance.

Option C is not the most suitable solution because launching an EC2 instance and configuring cron jobs to start and stop it does not
directly address the requirement of minimizing costs for the Amazon RDS DB instance.
upvoted 2 times
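
For illustration, a minimal sketch of the option D approach: one Lambda handler that starts or stops the DB instance depending on which EventBridge scheduled rule invoked it. The instance identifier, the event payload shape, and the cron expressions are assumptions, not part of the question.

import boto3

rds = boto3.client("rds")

DB_INSTANCE_ID = "training-app-mysql"  # hypothetical identifier


def handler(event, context):
    # Two EventBridge scheduled rules invoke this function, for example
    #   cron(0 8 ? * MON-FRI *)  -> constant input {"action": "start"}
    #   cron(0 20 ? * MON-FRI *) -> constant input {"action": "stop"}
    # so the same function covers both ends of the 12-hour window.
    action = event.get("action")
    if action == "start":
        rds.start_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)
    elif action == "stop":
        rds.stop_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID)
    return {"db_instance": DB_INSTANCE_ID, "action": action}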

  Siva007 4 months, 1 week ago


Selected Answer: D
I got this question in real exam!
upvoted 2 times

  srijrao 3 months, 1 week ago


Why do we need more than one Lambda function to start and stop the DB instance? BTW, how many questions came from this site?
upvoted 1 times

  ccmc 4 months, 3 weeks ago


State Manager, a capability of AWS Systems Manager
upvoted 1 times

  Ankit_EC_ran 5 months, 1 week ago


Selected Answer: D
Option D is correct
upvoted 2 times

  Musti35 5 months, 2 weeks ago


Selected Answer: D
In a typical development environment, dev and test databases are mostly utilized for 8 hours a day and sit idle when not in use. However,
the databases are billed for the compute and storage costs during this idle time. To reduce the overall cost, Amazon RDS allows instances
to be stopped temporarily. While the instance is stopped, you’re charged for storage and backups, but not for the DB instance hours.
Please note that a stopped instance will automatically be started after 7 days.

This post presents a solution using AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and
start the idle databases with specific tags to save on compute costs. The second post presents a solution that accomplishes stop and start
of the idle Amazon RDS databases using AWS Systems Manager.
upvoted 2 times

  test_devops_aws 6 months, 2 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-ref-rds.html
upvoted 1 times

  aba2s 9 months ago


Selected Answer: D
AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and start the idle databases with specific
tags to save on compute costs. https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/
upvoted 2 times

  Zerotn3 9 months ago


Selected Answer: D
The correct answer is D. Creating AWS Lambda functions to start and stop the DB instance and using Amazon EventBridge (Amazon
CloudWatch Events) scheduled rules to invoke the Lambda functions is the most cost-effective way to meet the requirements. The Lambda
functions can be configured as event targets for the scheduled rules, which will allow the DB instance to be started and stopped on the
desired schedule.
upvoted 4 times
  jupa 9 months, 2 weeks ago
Selected Answer: D
Its D. confirmed via others exam test pages
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D is the best option. Session Manager access cannot be used to start and stop DB instances; it is used for browser-based SSH access to instances.
upvoted 2 times

  ArielSchivo 10 months, 3 weeks ago


Selected Answer: D
Option D is the one. Option A could be as well if it referred to State Manager instead of Session Manager.
upvoted 5 times

  rob74 11 months ago


Selected Answer: D
I think A or D, but D is cheaper (minimize costs) because you pay for Lambda only if you use it.
upvoted 1 times
Question #153 Topic 1

A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at
least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to
save money on storage while keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?

A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.

B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.

C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

Correct Answer: D

Community vote distribution


D (62%) B (38%)

  rjam Highly Voted  10 months, 2 weeks ago


Selected Answer: D
Answer D.
Why option D?
The question says downloads are infrequent for files older than 90 days, which means files less than 90 days old are accessed frequently.
S3 Standard-Infrequent Access (S3 Standard-IA) has a 30-day minimum storage duration and charges for retrievals, so it costs more if objects are accessed frequently.
So for frequently accessed files you need S3 Standard. After 90 days you can move them to S3 Standard-IA, as they are going to be accessed less frequently.
upvoted 31 times

  MutiverseAgent 2 months, 2 weeks ago


I do not agree. The cheapest option is B, because by choosing:
D) Files older than 90 days will live forever in the S3 Infrequent Access tier at $0.0125/GB.
B) Using Intelligent-Tiering, files older than 90 days can be moved DIRECTLY to the "Archive access tier" (Glacier Instant Retrieval) at $0.004/GB, avoiding/skipping the S3 Infrequent Access tier. The question also seems to follow this assumption, as it says "and configure it to move objects to a less expensive storage tier after 90 days".

https://aws.amazon.com/s3/pricing/?nc=sn&loc=4
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


I am taking back my answer; the right answer is D), as the "Archive access tier" option in the "Intelligent-Tiering Archive configurations" is for "S3 Glacier Flexible Retrieval", which is not instant retrieval.
upvoted 1 times

  zeronine75 Highly Voted  10 months, 1 week ago


Selected Answer: B
B/D seem like possible answers, but I'll go with "B".
In the following table, S3 Intelligent-Tiering does not seem more expensive than S3 Standard.
https://aws.amazon.com/s3/pricing/?nc1=h_ls
And in the question, the "128 KB" size hints at S3 Intelligent-Tiering.
upvoted 11 times

  ruqui 4 months, 1 week ago


have you tried to implement B? how do you configure Intelligent Tiering to move objects to a less expensive storage tier after 90 days?
and which storage tier is this 'less expensive' ? the answer is clearly wrong ... correct answer is D
upvoted 1 times

  Wajif 10 months ago


S3 Intelligent-Tiering is used when the access frequency is not known. I think 128 KB is a distractor.
upvoted 6 times

  FNJ1111 9 months ago


also, there are probably several ringtones which aren't popular/used. Why keep them in S3 standard? The company would save money
if s3 intelligent-tiering moves the unpopular ringtones to a more cost-effective tier than s3 standard.
upvoted 1 times

  Wilson_S 10 months, 1 week ago


This link also has me going with “B.” Specifying 128 KB in size is not a coincidence. https://aws.amazon.com/s3/storage-
classes/intelligent-tiering/
upvoted 3 times

  javitech83 9 months, 4 weeks ago


Because of that link, it is D.
There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller
than 128 KB are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be charged at the Frequent
Access tier
upvoted 1 times

  javitech83 9 months, 4 weeks ago


oh sorry it states objects are bigger than 128 KB. B is correct
upvoted 1 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: D
Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90
days.

I would not try to overthink this.


upvoted 1 times

  Valder21 4 weeks ago


Selected Answer: D
Not B, because Intelligent-Tiering = unknown patterns.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
The key reasons:

S3 Lifecycle policies can automatically transition objects from S3 Standard to S3 Standard-IA after 90 days.
S3 Standard provides high performance for frequently accessed newer files.
S3 Standard-IA costs 20-30% less than S3 Standard for infrequently accessed files.
This matches access patterns - high performance for new files, cost savings for older files.
S3 Intelligent Tiering has higher request costs and complexity for this simple access pattern.
S3 Inventory lists objects and their properties but does not directly transition objects.
Lifecycle policies provide automated transitions without manual intervention.
upvoted 1 times
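
For reference, a minimal boto3 sketch of the lifecycle rule that option D describes; the bucket name is hypothetical.

import boto3

s3 = boto3.client("s3")

# Transition objects from S3 Standard to S3 Standard-IA 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="ringtone-clips",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-ia-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)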

  Smart 2 months, 1 week ago


Selected Answer: D
As per AWS Best Practices, S3 Intelligent Tier is designed for [unknown & changing] access patterns. Alternatively, if you do know the
access pattern, use lifecycle policies.
upvoted 2 times

  MutiverseAgent 2 months, 2 weeks ago


Selected Answer: B
The cheapest option is B, because by choosing:
D) Files older than 90 days will live forever in the S3 Infrequent Access tier at $0.0125/GB.
B) Using Intelligent-Tiering, files older than 90 days can be moved DIRECTLY to the "Archive access tier" (Glacier Instant Retrieval) at $0.004/GB, avoiding/skipping the S3 Infrequent Access tier. The question also seems to follow this assumption, as it says "and configure it to move objects to a less expensive storage tier after 90 days".

https://aws.amazon.com/s3/pricing/?nc=sn&loc=4
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


I am taking back my answer; the right answer is D), as the "Archive access tier" option in the "Intelligent-Tiering Archive configurations" is for "S3 Glacier Flexible Retrieval", which is not instant retrieval.
upvoted 1 times

  vini15 2 months, 2 weeks ago


should be D
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
By using S3 IT, the company can take advantage of automatic cost optimization. IT moves objects between two access tiers: frequent
access and infrequent access. In this case, since downloads for ringtones older than 90 days are infrequent, IT will automatically move
those objects to the less expensive infrequent access tier, reducing storage costs while keeping the most accessed files readily available.

A is not the most cost-effective solution because it doesn't consider the requirement of keeping the most accessed files readily available.
S3 Standard-IA is designed for data that is accessed less frequently, but it still incurs higher costs compared to IT.

C is not the most suitable solution for reducing storage costs. S3 inventory provides a list of objects and their metadata, but it does not
offer direct cost optimization features.
D is not the most cost-effective solution because it only moves objects from S3 Standard to S3 Standard-IA after 90 days. It doesn't take
advantage of the benefits of IT, which automatically optimizes costs based on access patterns.
upvoted 3 times
  kelvintoys93 3 months, 4 weeks ago
Selected Answer: D
128 KB is just a trap.
It cannot be B because:
1. Intelligent-Tiering requires no configuration for class transitions; your only choice is whether to opt into the Archive/Deep Archive Access tiers, which does not make sense for the requirement. Those two classes are cheapest in terms of storage but charge a lot for retrieval.
2. Nowhere is it mentioned that the access pattern is unpredictable. If we really have to assume, I would rather assume that new songs have higher access frequency. In that case, you don't really benefit from the auto-transition feature that Intelligent-Tiering provides. You will be paying the same rate as the S3 Standard class plus an additional fee for using Intelligent-Tiering. Since the requirement is the most cost-efficient solution, D is the answer.
upvoted 1 times

  kelvintoys93 3 months, 4 weeks ago


To add to my point above, for Intelligent-Tiering to move a file from:
Frequent tier > Infrequent tier - requires the object not to be accessed for 30 consecutive days.
Infrequent tier > Archive/Deep Archive - requires the object not to be accessed for 90 days or more.
Can one guarantee that a new song will not be downloaded for 30 consecutive days in order to take advantage of Intelligent-Tiering's automated storage class transition? Even if that's the case, there is nothing the user needs to "configure". B would only be a valid solution if the configuration part were taken out.
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 1 times

  Deansylla 4 months ago


Selected Answer: B
S3 Intelligent-Tiering is designed to optimize costs by automatically moving objects between two access tiers: frequent access and
infrequent access. By moving the files to S3 Intelligent-Tiering, the company can take advantage of the automatic tiering feature to save
costs on storage. Initially, the files will be stored in the frequent access tier for quick and easy access. However, since downloads for
ringtones older than 90 days are infrequent, after that period, the objects will automatically be moved to the infrequent access tier, which
offers a lower storage cost compared to the frequent access tier
upvoted 1 times

  Abrar2022 4 months ago


The question mentions that the files are stored in S3 Standard. So you need to transition them from S3 Standard using an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
upvoted 1 times

  ccmc 4 months, 3 weeks ago


Selected Answer: B
The files are at least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while keeping the most accessed files readily available for its users. This means some of the most accessed files can be more than 90 days old, so we should go with Intelligent-Tiering as the access patterns are unpredictable.
upvoted 1 times

  cheese929 5 months ago


Selected Answer: B
Answer should be B.
S3 Standard and S3 Intelligent - Tiering are both $0.023 per GB per month.
However S3 Standard - Infrequent Access is $0.0125 per GB while S3 Intelligent - Tiering Archive Access Tier is $0.0036 per GB. S3
Intelligent - Tiering Deep Archive Access Tier is even cheaper at $0.00099 per GB. Thus the answer is B.
upvoted 1 times

  sitro95 5 months ago


Selected Answer: D
I vote for option D
B says that it will move the data to less expensive storage, which could also be Glacier, but this does not fulfill the requirements of the question.
upvoted 2 times

  kruasan 5 months, 1 week ago


Selected Answer: D
D is more cost effective and pattern is known
upvoted 1 times

  darn 5 months, 2 weeks ago


Selected Answer: B
S3 Intelligent Tiering
upvoted 1 times
Question #154 Topic 1

A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files
and must restrict all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company
must keep every file in the repository for a minimum of 1 year after its creation date.
Which solution will meet these requirements?

A. Use S3 Object Lock in governance mode with a legal hold of 1 year.

B. Use S3 Object Lock in compliance mode with a retention period of 365 days.

C. Use an IAM role to restrict all users from deleting or changing objects in the S3 bucket. Use an S3 bucket policy to only allow the IAM role.

D. Configure the S3 bucket to invoke an AWS Lambda function every time an object is added. Configure the function to track the hash of the
saved object so that modified objects can be marked accordingly.

Correct Answer: B

Community vote distribution


B (85%) A (15%)

  Qjb8m9h Highly Voted  10 months, 3 weeks ago


Answer : B
Reason: Compliance Mode. The key difference between Compliance Mode and Governance Mode is that there are NO users that can
override the retention periods set or delete an object, and that also includes your AWS root account which has the highest privileges.
upvoted 18 times

  abhishek2021 4 months, 1 week ago


Compliance mode controls the object life span after creation.
How does this option restrict all scientists from adding new files? Please explain.
upvoted 2 times

  Zerotn3 9 months ago


How about: The repository must allow a few scientists to add new files
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


Adding is not the same as changing :)
upvoted 7 times

  elmogy Highly Voted  4 months, 1 week ago


Selected Answer: B
B,
The key is "No users can have the ability to modify or delete any files" and compliance mode supports that.
I remember it this way: ( governance is like government, they set the rules but they can allow some people to break it :D )
upvoted 16 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: B
Compliance Mode best suits this scenario because once an object is locked in compliance mode, its retention mode can't be changed, and
its retention period can't be shortened.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B) seems to be the right option. Both options A) and B) allow:
- Scientists to add new files while other users have read-only access.
- Files to be kept for a minimum of 1 year.
Only option B):
- Removes from all users the ability to modify or delete any file.
If A) were the correct option, some scientists would be able to modify files, since whoever is in charge of placing an object lock would hold the same permissions needed to remove the lock and consequently delete the file.
upvoted 1 times

  MutiverseAgent 2 months, 2 weeks ago


Selected Answer: B
B) seems to be the right option. Both options A) and B) allow:
- Scientists to add new files while other users have read-only access.
- Files to be kept for a minimum of 1 year.
Only option B):
- Removes from all users the ability to modify or delete any file.
If A) were the correct option, some scientists would be able to modify files, since whoever is in charge of placing an object lock would hold the same permissions needed to remove the lock and consequently delete the file.
upvoted 1 times
  cookieMr 3 months, 1 week ago
Selected Answer: B
S3 Object Lock provides the necessary features to enforce immutability and retention of objects in an S3 bucket. Compliance mode ensures that
the locked objects cannot be deleted or modified by any user, including those with write access. By setting a retention period of 365 days,
the company can ensure that every file in the repository is kept for a minimum of 1 year after its creation date.

A does not provide the same level of protection as compliance mode. In governance mode, there is a possibility for authorized users to
remove the legal hold, potentially allowing objects to be modified or deleted.

C can restrict users from deleting or changing objects, but it does not enforce the retention period requirement. It also does not provide
the same level of immutability and protection against accidental or malicious modifications.

D does not address the requirement of preventing users from modifying or deleting files. It provides a mechanism for tracking changes
but does not enforce the desired access restrictions or retention period.
upvoted 3 times

  norris81 4 months, 1 week ago


Am I the only one to worry about leap years ?
upvoted 1 times

  cheese929 5 months ago


Selected Answer: B
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened.
Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.
In governance mode, users can't overwrite or delete an object version or alter its lock settings unless they have special permissions. With
governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the
retention settings or delete the object if necessary.
In Governance mode, Objects can be deleted by some users with special permissions, this is against the requirement.
upvoted 2 times
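
For reference, a minimal boto3 sketch of option B: a bucket created with Object Lock enabled and a default 365-day compliance-mode retention. The bucket name is hypothetical, and the read/write split between the scientists and other users would still be handled separately with bucket or IAM policies.

import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created.
# (Add CreateBucketConfiguration for Regions other than us-east-1.)
s3.create_bucket(
    Bucket="medical-trial-results",  # hypothetical bucket
    ObjectLockEnabledForBucket=True,
)

# Every new object version gets a 365-day COMPLIANCE retention: no user,
# including the root user, can overwrite or delete it before that date.
s3.put_object_lock_configuration(
    Bucket="medical-trial-results",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)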

  darn 5 months, 2 weeks ago


Selected Answer: B
It's B; a legal hold has no retention period.
upvoted 3 times

  Shrestwt 5 months, 2 weeks ago


https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 1 times

  jaswantn 6 months ago


Both Compliance and Governance mode protect objects against being deleted or changed, but in Governance mode some people can have special permissions. In this question, no user can delete or modify files, so the answer is Compliance mode only. Neither of these modes restricts users from adding new files.
upvoted 2 times

  ProfXsamson 8 months ago


B. Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.
upvoted 1 times

  aba2s 9 months ago


Selected Answer: B
users can have the ability to modify or delete any files in the repository ==> Compliance Mode
upvoted 1 times

  aba2s 8 months, 3 weeks ago


users cannot have the ability to modify or delete any files in the repository ==> Compliance Mode
upvoted 2 times

  Zerotn3 9 months ago


Selected Answer: A
B would also meet the requirement to keep every file in the repository for at least 1 year after its creation date, as you can specify a
retention period of 365 days. However, it would not meet the requirement to restrict all users except a few scientists to read-only access.
S3 Object Lock in compliance mode only allows you to specify retention periods and does not have any options for controlling access to
objects in the bucket.

To meet all the requirements, you should use S3 Object Lock in governance mode and use IAM policies to control access to the objects in
the bucket. This would allow you to specify a legal hold with a retention period of at least 1 year and to restrict all users except a few
scientists to read-only access.
upvoted 3 times
  notacert 5 months, 3 weeks ago
Legal hold needs to be removed manually.

"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal
hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period
and remains in effect until removed. "
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: B
No users can have the ability to modify or delete any files in the repository. hence it must be compliance mode.
upvoted 2 times

  lazyyoung 9 months, 2 weeks ago


Selected Answer: B
Answer is B
Compliance:
- Object versions can't be overwritten or deleted by any user, including the root user
- Objects retention modes can't be changed, and retention periods can't be shortened

Governance:
- Most users can't overwrite or delete an object version or alter its lock settings
- Some users have special permissions to change the retention or delete the object
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
B is the best answer, but I feel none of the answers covers the requirement that only a few users (the scientists) are able to upload (create) files in the bucket while all other users have read-only access.
upvoted 3 times
Question #155 Topic 1

A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the
world will have reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless
of where the requests originate geographically.
Which solution will meet these requirements?

A. Use AWS DataSync to connect the S3 buckets to the web application.

B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application.

C. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.

D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.

Correct Answer: C

Community vote distribution


C (100%)

  rjam Highly Voted  11 months ago


key :caching
Option C
upvoted 11 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: C
Amazon CloudFront to the rescue
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
The reasons are:

Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world.
Connecting the S3 buckets containing the media files to CloudFront will cache the content at global edge locations.
This provides fast reliable access to users everywhere by serving content from the nearest edge location.
CloudFront integrates tightly with S3 for secure, durable storage.
Global Accelerator improves availability and performance for TCP/UDP traffic, not HTTP-based content delivery.
DataSync and SQS are not technologies for a global CDN like CloudFront.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
CloudFront is a content delivery network (CDN) service provided by AWS. It caches content at edge locations worldwide, allowing users to
access the content quickly regardless of their geographic location. By connecting the S3 to CloudFront, the media files can be cached at
edge locations, ensuring reliable and fast delivery to users.

A. is a data transfer service that is not designed for caching or content delivery. It is used for transferring data between on-premises
storage systems and AWS services.

B. is a service that improves the performance and availability of applications for global users. While it can provide fast and reliable access,
it is not specifically designed for caching media files or connecting directly to S3.

D. is a message queue service that is not suitable for caching or content delivery. It is used for decoupling and coordinating message-
based communication between different components of an application.

Therefore, the correct solution is option C, deploying CloudFront to connect the S3 to CloudFront edge servers.
upvoted 2 times
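
As a rough sketch of option C (not a complete setup), the following boto3 snippet creates a CloudFront distribution with the S3 bucket as its origin; the bucket domain is hypothetical, the managed cache policy ID should be verified in your account, and for confidential media you would additionally lock the origin down with OAC/OAI and serve signed URLs.

import time
import boto3

cloudfront = boto3.client("cloudfront")

BUCKET_DOMAIN = "media-files.s3.amazonaws.com"  # hypothetical bucket domain

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Cache media files at CloudFront edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "media-s3-origin",
                    "DomainName": BUCKET_DOMAIN,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "media-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed "CachingOptimized" cache policy ID -- verify before use.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)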

  jackky3123213 3 months, 2 weeks ago


Global Accelerator does not support Edge Caching
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
Option C is correct answer.
upvoted 1 times

  warioverde 6 months, 1 week ago


As far as I understand, Global Accelerator does not have caching features, so CloudFront would be the recommended service for that
purpose
upvoted 1 times

  Americo32 7 months, 2 weeks ago


Selected Answer: C
C is correct.
upvoted 1 times

  ProfXsamson 8 months ago


C, Caching == Edge location == CloudFront
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
C right answer
upvoted 2 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: C
Agreed
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


C is correct
upvoted 1 times

  MyNameIsJulien 10 months, 3 weeks ago


Selected Answer: C
Answer is C
upvoted 1 times
Question #156 Topic 1

A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and
application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the
incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business
intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)

A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.

B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.

C. Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.

D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon
OpenSearch Service (Amazon Elasticsearch Service) clusters.

E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract
the data, and load the data into Amazon S3 in Apache Parquet format.

Correct Answer: AC

Community vote distribution


AE (83%) Other

  Wazhija Highly Voted  11 months, 2 weeks ago


Selected Answer: AE
I believe AE makes the most sense
upvoted 10 times

  Six_Fingered_Jose Highly Voted  11 months, 1 week ago


Selected Answer: AE
Yeah, AE makes sense; only E works with S3 here, and the question wants the data to be in S3.
upvoted 8 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: AE
The reasons are:

AWS Lake Formation and Glue provide automated data lake creation with minimal coding. Glue crawlers identify sources and ETL jobs load
to S3.
Athena allows ad-hoc queries directly on S3 data with no infrastructure to manage.
QuickSight provides easy cloud BI for dashboards.
Options C and D require significant custom coding for ETL and queries.
Redshift and OpenSearch would require additional setup and management overhead.
upvoted 3 times
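
For reference, a minimal boto3 sketch of the one-time-query half of option A, assuming a Glue crawler has already cataloged the staged S3 data; the database, table, and results bucket names are hypothetical.

import boto3

athena = boto3.client("athena")

# Assumes AWS Glue has already crawled the S3 staging buckets and registered
# a table (e.g. "sensor_events") in a Glue database (e.g. "analytics_lake").
response = athena.start_query_execution(
    QueryString="SELECT sensor_id, AVG(reading) AS avg_reading "
                "FROM sensor_events GROUP BY sensor_id",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},
)
print("Query execution id:", response["QueryExecutionId"])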

  Mia2009687 2 months, 4 weeks ago


Selected Answer: AE
It combines data from databases and stream data, so a data lake needs to be used.
And it wants to run one-time queries, so Athena is better.
upvoted 1 times

  TTaws 3 months, 1 week ago


@Golcha once the data comes from different sources then you use GLUE
upvoted 1 times

  Jeeva28 4 months, 1 week ago


Selected Answer: AC
Less overhead with option AC. No need to manage.
upvoted 1 times

  Golcha 5 months, 2 weeks ago


Selected Answer: AC
No specific use case for GLUE
upvoted 1 times

  TTaws 3 months, 1 week ago


once the data comes from different sources then you use GLUE
upvoted 1 times

  TECHNOWARRIOR 5 months, 3 weeks ago


The Apache Parquet format is a performance-oriented, column-based data format designed for storage and retrieval. It is generally faster
for reads than writes because of its columnar storage layout and a pre-computed schema that is written with the data into the files. AWS
Glue’s Parquet writer offers fast write performance and flexibility to handle evolving datasets. You can use AWS Glue to read Parquet files
from Amazon S3 and from streaming sources as well as write Parquet files to Amazon S3. When using AWS Glue to build a data lake
foundation, it automatically crawls your Amazon S3 data, identifies data formats, and then suggests schemas for use with other AWS
analytic services[1][2][3][4].
upvoted 2 times

  TECHNOWARRIOR 5 months, 3 weeks ago


ANSWER - AE: Amazon Athena is the best choice for running one-time queries on streaming data. Although Amazon Kinesis Data Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time, it is designed for continuous queries rather than one-time queries[1]. On the other hand, Amazon Athena is a serverless interactive query service that allows querying data in Amazon S3 using SQL. It is optimized for ad-hoc querying and is ideal for running one-time queries[2]. AWS Lake Formation is used as a central place to hold all your data for analytics purposes (E). Athena integrates perfectly with S3 and can run the queries (A).
upvoted 2 times

  jcramos 5 months, 3 weeks ago


Selected Answer: AE
AWS Lake Formation is used as a central place to hold all your data for analytics purposes (E). Athena integrates perfectly with S3 and can run queries (A).
upvoted 2 times

  jcramos 5 months, 3 weeks ago


Why S3 in Apache Parquet? https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-s3-announces-parquet-output-format-
for-inventory/
upvoted 1 times

  JiyuKim 7 months, 3 weeks ago


Can anyone please explain to me why B cannot be an answer?
upvoted 3 times

  Shrestwt 5 months, 2 weeks ago


Kinesis Data Analytics is designed for continuous queries rather than one-time queries.
upvoted 3 times

  ashishvineetlko 8 months, 1 week ago


Can anyone help me with the question below?
36. A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as large as 50 MB.
Which solution will meet these requirements with the FEWEST changes to the code?
a) Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
b) Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.
c) Change the limit in Amazon SQS to handle messages that are larger than 256 KB.
d) Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location in the messages.
upvoted 1 times

  skondey 7 months, 1 week ago


I will do "A" as well.
upvoted 1 times

  ProfXsamson 8 months ago


A would probably be the best answer. The SQS Extended Client Library is for Java apps.
upvoted 1 times
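
For readers unfamiliar with option (a): the SQS Extended Client Library is a Java library, but the pattern it implements (the "claim check": park oversized payloads in S3 and push only a small pointer through SQS) can be sketched with plain boto3. The bucket and queue values below are placeholders.

    import json
    import uuid
    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    BUCKET = "example-large-payloads"   # placeholder bucket for oversized payloads
    QUEUE_URL = "<your-queue-url>"      # placeholder queue URL
    SQS_LIMIT = 256 * 1024              # SQS message body limit (256 KB)

    def send_payload(payload: str) -> None:
        if len(payload.encode("utf-8")) <= SQS_LIMIT:
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=payload)
            return
        # Too big for SQS: store the payload in S3 and send a small reference message.
        key = f"payloads/{uuid.uuid4()}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=payload.encode("utf-8"))
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
        )

The consumer reads the reference, fetches the object from S3, and deletes both once processing succeeds; the Java extended client does all of this transparently, which is why it needs the fewest code changes.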

  bullrem 8 months, 1 week ago


Selected Answer: DE
I believe DE makes the most sense
upvoted 1 times

  ShinobiGrappler 8 months, 2 weeks ago


Selected Answer: AE
stored in S3 -> data lake -> Athena (query the Parquet data with SQL) -> visualize with QuickSight
upvoted 4 times

  Zerotn3 9 months ago


Selected Answer: BE
While Amazon Athena is a fully managed service that makes it easy to analyze data stored in Amazon S3 using SQL, it is primarily designed
for running ad-hoc queries on data stored in Amazon S3. It may not be the best choice for running one-time queries on streaming data, as
it is not designed to process data in real-time.
Additionally, using Amazon Athena for one-time queries on streaming data could potentially lead to higher operational overhead, as you
would need to set up and maintain the necessary infrastructure to stream the data into Amazon S3, and then query the data using
Athena.

Using Amazon Kinesis Data Analytics, as mentioned in option B, would be a better choice for running one-time queries on streaming data,
as it is specifically designed to process data in real-time and can automatically scale to match the incoming data rate.
upvoted 2 times

  JayBee65 8 months, 4 weeks ago


"Company needs to consolidate all the data into one place" -> S3 bucket, which is happening in E, which means Athena would not have
an issue, so A is ok.
upvoted 2 times

  jainparag1 8 months, 1 week ago


Absolutely, querying data is after staging and so Athena fits perfectly.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: AE
C could work, but has additional overhead.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AE
A and E
upvoted 2 times
Question #157 Topic 1

A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data
after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has
automated backups configured for Aurora.

Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Take a manual snapshot of the DB cluster.

B. Create a lifecycle policy for the automated backups.

C. Configure automated backup retention for 5 years.

D. Configure an Amazon CloudWatch Logs export for the DB cluster.

E. Use AWS Backup to take the backups and to keep the backups for 5 years.

Correct Answer: BE

Community vote distribution


DE (80%) AD (17%)

  JayBee65 Highly Voted  8 months, 4 weeks ago


I tend to agree D and E...

A - Manual task that can be automated, so why make life difficult?


B - The maximum retention period is 35 days, so would not help
C - The maximum retention period is 35 days, so would not help
D - Only option that deals with logs, so makes sense
E - Partially manual but only option that achieves the 5 year goal
upvoted 21 times

  aadityaravi8 2 months, 4 weeks ago


100% agree
upvoted 3 times

  kmaneith Highly Voted  10 months, 1 week ago


Selected Answer: DE
dude trust me
upvoted 16 times

  Priyanshugpt486 1 week, 3 days ago


hehe... hehe
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


No, please show your reasoning, you may be wrong. Remember, no one thinks they are wrong, but some always are :)
upvoted 13 times

  kambarami Most Recent  2 weeks, 5 days ago


D AND E- makes more sense as we automate backups in Aurora DB
- Export data to CloudWatch to capture all log events and configure CloudWatch to retain logs indefinitely.
upvoted 1 times

  TariqKipkemei 3 weeks, 3 days ago


Selected Answer: DE
DE makes more sense
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: CD
The reasons are:

Configuring the automated backups for the Aurora PostgreSQL DB cluster to retain backups for 5 years will meet the requirement to store
all data for that duration.
Exporting the database logs to CloudWatch Logs will capture the audit logs of actions performed in the database. CloudWatch Logs
retention can be configured to store logs indefinitely.
This meets the need to keep audit logs available beyond the 5 year data retention period.
Additional manual snapshots or using AWS Backup for backups is not necessary since automated backups are already enabled.
A lifecycle policy is useful for transitioning storage classes but does not apply here for a set 5 year retention.
upvoted 2 times
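
As a rough sketch of the log-export half of option D (the cluster identifier is a placeholder, and the log-group name follows the usual /aws/rds/cluster/<id>/postgresql pattern, which only exists after the first export):

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    CLUSTER_ID = "example-aurora-cluster"   # placeholder cluster identifier

    # Turn on export of the PostgreSQL log to CloudWatch Logs for the cluster.
    rds.modify_db_cluster(
        DBClusterIdentifier=CLUSTER_ID,
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
        ApplyImmediately=True,
    )

    # Removing any retention policy on the log group keeps the audit trail indefinitely.
    logs.delete_retention_policy(logGroupName=f"/aws/rds/cluster/{CLUSTER_ID}/postgresql")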
  neverdie 6 months, 1 week ago
Selected Answer: AD
Automated backup is limited to 35 days
upvoted 1 times

  Training4aBetterLife 8 months, 1 week ago


Selected Answer: DE
Previously, you had to create custom scripts to automate backup scheduling, enforce retention policies, or consolidate backup activity for
manual Aurora cluster snapshots, especially when coordinating backups across AWS services. With AWS Backup, you gain a fully managed,
policy-based backup solution with snapshot scheduling and snapshot retention management. You can now create, manage, and restore
Aurora backups directly from the AWS Backup console for both PostgreSQL-compatible and MySQL-compatible versions of Aurora.
To get started, select an Amazon Aurora cluster from the AWS Backup console and take an on-demand backup or simply assign the cluster
to a backup plan.
upvoted 4 times

  Training4aBetterLife 8 months, 1 week ago


https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls
upvoted 2 times
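
To illustrate the AWS Backup half (option E), here is a minimal boto3 sketch of a backup plan with a 5-year lifecycle assigned to an Aurora cluster; the vault name, IAM role ARN, and cluster ARN are placeholders.

    import boto3

    backup = boto3.client("backup")

    # Daily backups that are expired automatically after roughly 5 years.
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "aurora-5-year-retention",
            "Rules": [
                {
                    "RuleName": "daily-keep-5-years",
                    "TargetBackupVaultName": "Default",             # placeholder vault
                    "ScheduleExpression": "cron(0 3 * * ? *)",      # daily at 03:00 UTC
                    "Lifecycle": {"DeleteAfterDays": 5 * 365},
                }
            ],
        }
    )

    # Assign the Aurora cluster (placeholder ARN) to the plan.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "aurora-cluster",
            "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
            "Resources": ["arn:aws:rds:us-east-1:123456789012:cluster:example-aurora-cluster"],
        },
    )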

  Zerotn3 9 months ago


Selected Answer: DE
A is not a valid option for meeting the requirements. A manual snapshot of the DB cluster is a point-in-time copy of the data in the cluster.
While taking manual snapshots can be useful for creating backups of the data, it is not a reliable or efficient way to meet the requirement
of storing all the data for 5 years and deleting it after 5 years. It would be difficult to ensure that manual snapshots are taken regularly
and retained for the required period of time. It is recommended to use a fully managed backup service like AWS Backup, which can
automate and centralize the process of taking and retaining backups.
upvoted 3 times

  Zerotn3 9 months ago


Sorry, B and E are correct
B. Create a lifecycle policy for the automated backups.
This would ensure that the backups taken using AWS Backup are retained for the desired period of time.
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


I think a lifecycle policy would only keep backups for 35 days
upvoted 3 times

  techhb 9 months, 1 week ago


Selected Answer: DE
D and E only
upvoted 2 times

  Chirantan 9 months, 1 week ago


AD
is correct, as you can keep the snapshot backup indefinitely.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: DE
D and E
upvoted 2 times

  Qjb8m9h 9 months, 3 weeks ago


Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period. No
performance impact or interruption of database service occurs as backup data is being written. You can specify a backup retention period,
from 1 to 35 days, when you create or modify a DB cluster.

If you want to retain a backup beyond the backup retention period, you can also take a snapshot of the data in your cluster volume.
Because Aurora retains incremental restore data for the entire backup retention period, you only need to create a snapshot for data that
you want to retain beyond the backup retention period. You can create a new DB cluster from the snapshot.
upvoted 3 times

  Marge_Simpson 9 months, 3 weeks ago


Selected Answer: DE
D is the only one that resolves the logging situation
"automated backup" = AWS Backup
https://aws.amazon.com/backup/faqs/?nc=sn&loc=6
AWS Backup provides a centralized console, automated backup scheduling, backup retention management, and backup monitoring and
alerting. AWS Backup offers advanced features such as lifecycle policies to transition backups to a low-cost storage tier. It also includes
backup storage and encryption independent from its source data, audit and compliance reporting capabilities with AWS Backup Audit
Manager, and delete protection with AWS Backup Vault Lock.
upvoted 2 times
  Qjb8m9h 9 months, 3 weeks ago
AD
Reason: When creating an Aurora backup, you need to specify a retention period between 1 and 35 days. This does not meet the
5-year retention requirement in this case.
Hence taking a manual snapshot is the best solution.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 2 times

  Heyang 9 months, 4 weeks ago


Selected Answer: AD
no more than 35 days
upvoted 4 times

  kmliuy73 9 months, 3 weeks ago


https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls AWS
Backup
upvoted 3 times

  VicBucket1996 9 months, 4 weeks ago


We all agree with D, but based on this documentation I think A could be the other correct answer:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html

But if I'm wrong, let me know, please :)


upvoted 3 times

  kmliuy73 9 months, 3 weeks ago


https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls AWS
Backup
upvoted 1 times

  DivaLight 10 months, 1 week ago


Selected Answer: DE
DE Option
upvoted 3 times
Question #158 Topic 1

A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then
will be available on demand. The event is expected to attract a global online audience.

Which service will improve the performance of both the real-time and on-demand streaming?

A. Amazon CloudFront

B. AWS Global Accelerator

C. Amazon Route 53

D. Amazon S3 Transfer Acceleration

Correct Answer: A

Community vote distribution


A (60%) B (40%)

  Nigma Highly Voted  10 months, 2 weeks ago


A is right

You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin

Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases
that specifically require static IP addresses
upvoted 23 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: A
Amazon CloudFront is a content delivery network (CDN) service that helps you distribute your static and dynamic content quickly and
reliably with high speed.
upvoted 1 times

  Chiquitabandita 3 weeks, 6 days ago


chatgpt went with cloudfront on this question, so answer A
upvoted 2 times

  coolkidsclubvip 1 month ago


Selected Answer: B
https://aws.amazon.com/cn/blogs/networking-and-content-delivery/how-flowplayer-improved-live-video-ingest-with-aws-global-
accelerator/
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


The reasons are:

CloudFront is a content delivery network (CDN) that caches content at edge locations around the world.
Caching the video content globally brings it closer to viewers, reducing latency.
This improves performance for both live streaming and on-demand playback for the global audience.
Route 53 provides DNS resolution but does not cache content locally.
Global Accelerator improves TCP traffic routing performance but is not a caching CDN.
S3 Transfer Acceleration optimizes uploads to S3 over long distances but does not help with content delivery.
upvoted 1 times

  Chan1010 2 months, 1 week ago


Selected Answer: B
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP
upvoted 1 times

  cookieMr 3 months, 1 week ago


Amazon CloudFront is a content delivery network (CDN) that can deliver both real-time and on-demand streaming. It caches content at
edge locations worldwide, providing low-latency delivery to a global audience.

B. AWS Global Accelerator: Global Accelerator is more suitable for non-HTTP use cases or when static IP addresses are required.
C. Amazon Route 53: Route 53 is a DNS service and not designed specifically for streaming video.
D. Amazon S3 Transfer Acceleration: S3 Transfer Acceleration improves upload speeds to Amazon S3 but does not directly enhance
streaming performance.
upvoted 2 times
  Jeeva28 4 months, 1 week ago
Selected Answer: A
Serve video on demand or live streaming video
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.

For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft
Smooth Streaming, and CMAF, to any device.

For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the
fragments in the right order can be combined, to reduce the load on your origin server.
upvoted 1 times

  Kumaran1508 4 months, 1 week ago


Selected Answer: B
I vote for B. Global Accelerator.
CloudFront Video on Demand is specifically designed for delivering on-demand video content, meaning pre-recorded videos that can be
streamed or downloaded. It is not suitable for streaming real-time videos or live video broadcasts.
Global Accelerator help in reducing network hops between the user and AWS making real-time streams smoother.
upvoted 3 times

  eugene_stalker 4 months, 1 week ago


Selected Answer: B
To get the benefit of CloudFront, the video needs to be cached, so requests should be frequent. For on-demand video - I vote for B
upvoted 1 times

  warioverde 6 months, 1 week ago


How can Cloudfront help with real-time use case?
upvoted 2 times

  Mamiololo 8 months, 2 weeks ago


Amazon CloudFront
upvoted 1 times

  aba2s 9 months ago


Selected Answer: A
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html#IntroductionUseCasesStreaming
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
A Cloudfront
upvoted 1 times

  Baba_Eni 9 months, 3 weeks ago


Selected Answer: A
Cloudfront is used for live streaming and video on-demand

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html
upvoted 2 times

  leonnnn 10 months, 1 week ago


Selected Answer: A
I thought real-time streaming used the RTSP protocol, for which B would be better.
But I realize now that real-time streaming can also be delivered over HTTP (HLS, etc.).
So the answer should be A.
upvoted 2 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times
Question #159 Topic 1

A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application’s traffic
recently spiked due to fraudulent requests from botnets.

Which steps should a solutions architect take to block requests from unauthorized users? (Choose two.)

A. Create a usage plan with an API key that is shared with genuine users only.

B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.

C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.

D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.

E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.

Correct Answer: CD

Community vote distribution


AC (71%) CE (29%)

  jdr75 Highly Voted  5 months, 3 weeks ago


Selected Answer: CE
C) WAF has bot identification and remedial tools, so it's CORRECT.

A) remember the question: "...block requests from unauthorized users?" -- an API key is involved in an authorization process. It's not the
most secure process, but it's better than a totally anonymous one. If you don't know the key, you can't authenticate. So the bots, at
least for the first days/weeks, could not access the service (eventually they will, because the key will be spread informally). So it's CORRECT.

B) Implementing logic in the Lambda to detect fraudulent IPs is almost impossible, because it's a dynamic, changing pattern that you
cannot handle easily.

D) creating a role is not going to make you more protected from unauthorized requests, because a role is a "principal"; it's not involved in the
authorization process.
upvoted 5 times

  TariqKipkemei Most Recent  3 weeks, 3 days ago


Selected Answer: AC
AWS WAF rule to target and filter out malicious requests and API key to authorize users.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AC
The reasons are:

An API key with a usage plan limits access to only authorized apps and users. This prevents general public access.
WAF rules can identify and block malicious bot traffic through pattern matching and IP reputation lists.
Together, the API key and WAF provide preventative and detective controls against unauthorized requests.
The other options add complexity or are reactive. IAM roles per user is not feasible for a public API.
Ignoring requests in Lambda and changing DNS are response actions after an attack.
upvoted 2 times
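
For anyone who wants to see what A and C look like in practice, here is a minimal boto3 sketch: a usage plan tied to an API key, plus attaching an existing WAF web ACL to the stage. The API ID, stage, and web ACL ARN are placeholders, and the sketch assumes the API methods already have "API key required" enabled.

    import boto3

    apigw = boto3.client("apigateway")
    wafv2 = boto3.client("wafv2")

    REST_API_ID = "abc123"   # placeholder REST API id
    STAGE = "prod"

    # A: API key + usage plan so only registered callers (with throttling limits) can invoke the API.
    key = apigw.create_api_key(name="genuine-user-key", enabled=True)
    plan = apigw.create_usage_plan(
        name="genuine-users",
        apiStages=[{"apiId": REST_API_ID, "stage": STAGE}],
        throttle={"rateLimit": 50.0, "burstLimit": 100},
    )
    apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")

    # C: attach an existing WAF web ACL (e.g. with rate-based and bot-control rules) to the stage.
    wafv2.associate_web_acl(
        WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/example/1111",  # placeholder
        ResourceArn=f"arn:aws:apigateway:us-east-1::/restapis/{REST_API_ID}/stages/{STAGE}",
    )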

  zjcorpuz 2 months, 1 week ago


AC

it's essential to note that while API keys are commonly associated with private APIs, they can also be used in conjunction with public APIs.
In some cases, even public APIs may require API keys to control usage and monitor how the API is being utilized. The API provider might
enforce usage limits, track API usage, or monitor for potential misuse, all of which can be managed effectively using API keys.

In summary, API keys are not exclusive to private APIs and can be used for both private and public APIs, depending on the specific
requirements and use case of the API provider.
upvoted 1 times

  MutiverseAgent 2 months, 1 week ago


Selected Answer: AC
Why option C) vs option E)
- It's simpler
- We want to protect general access to the API and not granular method/user access. The API is already public so If a user API key is in
several usage plans that is not a problem (The API is currently public). The objective is to protect API from abuse from malicious internet
users and to NOT protect granular method/user access from users that are using the API in the correct way.
upvoted 2 times
  Mia2009687 2 months, 4 weeks ago
Selected Answer: CE
Important
Don't use API keys for authentication or authorization for your APIs. If you have multiple APIs in a usage plan, a user with a valid API key
for one API in that usage plan can access all APIs in that usage plan. Instead, use an IAM role, a Lambda authorizer, or an Amazon Cognito
user pool.
upvoted 2 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: AC
If you're wondering why A. It's because you can configure usage plans and API keys to allow customers to access selected APIs, and begin
throttling requests to those APIs based on defined limits and quotas. As for C. It's because AWS WAF has bot detection capabilities.
upvoted 1 times

  sachin 7 months ago


It should be A and C
But API Key alone can not help

API keys are alphanumeric string values that you distribute to application developer customers to grant access to your API. You can use
API keys together with Lambda authorizers, IAM roles, or Amazon Cognito to control access to your APIs.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: CE
Here https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html it says this:

Don't use API keys for authentication or authorization for your APIs. If you have multiple APIs in a usage plan, a user with a valid API key
for one API in that usage plan can access all APIs in that usage plan. Instead, use an IAM role, a Lambda authorizer, or an Amazon Cognito
user pool.

API keys are intended for software developers wanting to access an API from their application. This link then goes on to say an IAM role
should be used instead.
upvoted 1 times

  Steve_4542636 7 months ago


Nevermind my answer. I switch it to A/C because the question states the application is *using* the API Gateway so A will make sense
upvoted 1 times

  simplimarvelous 8 months, 2 weeks ago


Selected Answer: AC
A/C for security to prevent anonymous access
upvoted 3 times

  JayBee65 8 months, 4 weeks ago


I'm thinking A and C
A - the API is publicly accessible but there is nothing to stop the company requiring users to register for access.
B - you can do this with Lambda, AWS Network Firewall and Amazon GuardDuty, see
https://aws.amazon.com/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/, but
these components are not mentioned
C - a WAF is the logical choice with its bot detection capabilities
D - a private API is only accessible within a VPC, so this would not work
E - would be even more work than A
upvoted 3 times

  HayLLlHuK 9 months ago


Selected Answer: AC
https://www.examtopics.com/discussions/amazon/view/61082-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  techhb 9 months, 1 week ago


Selected Answer: AC
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 2 times

  SoluAWS 9 months, 1 week ago


I do not agree with A as it mentioned the application is publicly accessible. "A company is running a publicly accessible serverless
application that uses Amazon API Gateway and AWS Lambda". If this is public, how can we ensure the user is genuine?

I will go with CD
upvoted 3 times

  techhb 9 months, 1 week ago


Selected Answer: AC
A and C ,C is obivious ,however A is the only other which seems to put quota API keys are alphanumeric string values that you distribute to
application developer customers to grant access to your API. You can use API keys together with Lambda authorizers, IAM roles, or
Amazon Cognito to control access to your APIs
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AC
A and C
upvoted 1 times

  Phinx 10 months, 1 week ago


Selected Answer: AC
A and C are the correct choices.
upvoted 1 times
Question #160 Topic 1

An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data
is stored in JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds
if it is needed, and the data must be kept for 30 days.

Which solution meets these requirements MOST cost-effectively?

A. Amazon OpenSearch Service (Amazon Elasticsearch Service)

B. Amazon S3 Glacier

C. Amazon S3 Standard

D. Amazon RDS for PostgreSQL

Correct Answer: C

Community vote distribution


C (91%) 9%

  babaxoxo Highly Voted  10 months, 2 weeks ago


Selected Answer: C
Ans C:
Cost-effective solution with milliseconds of retrieval -> it should be s3 standard
upvoted 8 times

  Its_SaKar Most Recent  2 weeks, 6 days ago


Selected Answer: C
Answer is not B because S3 Glacier and S3 Glacier Instant Retrieval are two different storage classes. So, the answer here is C: S3 Standard
upvoted 1 times

  TariqKipkemei 3 weeks, 3 days ago


Selected Answer: C
Data must be accessible in milliseconds and must be kept for 30 days = Amazon S3 Standard
upvoted 1 times

  chanchal133 1 month ago


Selected Answer: C
ANS - C
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
The reasons are:

S3 Standard provides high durability and availability for storage


It allows millisecond access to retrieve objects
Objects can be stored for any duration, meeting the 30 day retention need
Storage costs are low, around $0.023 per GB/month
OpenSearch and RDS require running and managing a cluster for DR storage
Glacier has lower cost but retrieval time is too high at 3-5 hours
S3 Standard's simplicity, high speed access, and low cost make it optimal for this small DR dataset that needs to be accessed quickly
upvoted 2 times

  Nazmul123 2 months, 1 week ago


Selected Answer: C
https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
upvoted 1 times

  cookieMr 3 months, 1 week ago


S3 Standard is a highly durable and scalable storage option suitable for backup and disaster recovery purposes. It offers millisecond
access to data when needed and provides durability guarantees. It is also cost-effective compared to other storage options like
OpenSearch Service, S3 Glacier, and RDS for PostgreSQL, which may have higher costs or longer access times for retrieving the data.

A. OpenSearch Service (Elasticsearch Service): While it offers fast data retrieval, it may incur higher costs compared to storing data directly
in S3, especially considering the amount of data being generated.

B. S3 Glacier: While it provides long-term archival storage at a lower cost, it does not meet the requirement of immediate access in
milliseconds. Retrieving data from Glacier typically takes several hours.

D. RDS for PostgreSQL: While it can be used for data storage, it may be overkill and more expensive for a backup and disaster recovery
solution compared to S3 Standard, which is more suitable and cost-effective for storing and retrieving data.
upvoted 2 times
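
The 30-day deletion requirement pairs naturally with an S3 lifecycle rule on the S3 Standard bucket; a minimal boto3 sketch (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-analytics-backup"   # placeholder backup bucket

    # Keep objects in S3 Standard (millisecond access) and expire them after 30 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},      # apply to the whole bucket
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )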
  joehong 3 months, 3 weeks ago
Selected Answer: B
https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
upvoted 2 times

  KZM 7 months, 3 weeks ago


A. Incorrect
Amazon OpenSearch Service (Amazon Elasticsearch Service) is designed for full-text search and analytics, but it may not be the most cost-
effective solution for this use case
B. Incorrect
S3 Glacier is a cold storage solution that is designed for long-term data retention and infrequent access.
C. Correct
S3 standard is cost-effective and meets the requirement. S3 Standard allows for data retention for a specific number of days.

D. PostgreSQL is a relational database service and may not be the most cost-effective solution.
upvoted 3 times

  ngochieu276 8 months, 4 weeks ago


Selected Answer: B
S3 Glacier Instant Retrieval – Use for archiving data that is rarely accessed and requires milliseconds retrieval.
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C
upvoted 1 times

  lapaki 9 months, 3 weeks ago


Selected Answer: C
JSON is object notation. S3 stores objects.
upvoted 1 times

  hpipit 10 months ago


Selected Answer: C
c IS correct
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


C is correct
upvoted 1 times

  sdasdawa 10 months, 2 weeks ago


Selected Answer: C
IMHO
Normally ElasticSearch would be ideal here, however as question states "Most cost-effective"
S3 is the best choice in this case
upvoted 3 times

  Aamee 10 months ago


ElasticSearch is a search service; the question is about the backup service required for the DR scenario.
upvoted 2 times
Question #161 Topic 1

A company has a small Python application that processes JSON documents and outputs the results to an on-premises SQL database. The
application runs thousands of times each day. The company wants to move the application to the AWS Cloud. The company needs a highly
available solution that maximizes scalability and minimizes operational overhead.

Which solution will meet these requirements?

A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2 instances to process the documents.
Store the results in an Amazon Aurora DB cluster.

B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents
as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.

C. Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume
to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS
DB instance.

D. Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages. Deploy the Python code as a container
on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to
process the SQS messages. Store the results on an Amazon RDS DB instance.

Correct Answer: D

Community vote distribution


B (94%) 6%

  babaxoxo Highly Voted  10 months, 2 weeks ago


Selected Answer: B
solution should remove operation overhead -> s3 -> lambda -> aurora
upvoted 11 times

  markw92 3 months, 2 weeks ago


Aurora supports mysql and postgresql but question has database sql server. So, that eliminates B. So, the other logical answer is D.
IMHO. Btw, i also thought the answer is B and started re-reading question carefully.
upvoted 3 times

  JIJIJIXI 2 days, 22 hours ago


sql database, not sql server
upvoted 1 times

  Zerotn3 Highly Voted  9 months ago


Selected Answer: B
By placing the JSON documents in an S3 bucket, the documents will be stored in a highly durable and scalable object storage service. The
use of AWS Lambda allows the company to run their Python code to process the documents as they arrive in the S3 bucket without having
to worry about the underlying infrastructure. This also allows for horizontal scalability, as AWS Lambda will automatically scale the
number of instances of the function based on the incoming rate of requests. The results can be stored in an Amazon Aurora DB cluster,
which is a fully-managed, high-performance database service that is compatible with MySQL and PostgreSQL. This will provide the
necessary durability and scalability for the results of the processing.
upvoted 8 times
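
A minimal sketch of the Lambda side of option B, invoked by S3 ObjectCreated events. The Aurora write is left as a stub, since the question does not say which driver (psycopg2/pymysql packaged with the function, or the RDS Data API) would be used.

    import json
    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    def store_result(result: dict) -> None:
        # Stub: a real deployment would INSERT the result into the Aurora DB cluster here.
        print("storing", result)

    def handler(event, context):
        # One invocation per S3 ObjectCreated event; Lambda scales out automatically with load.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            document = json.loads(body)
            store_result({"source_key": key, "field_count": len(document)})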

  Mandar15 Most Recent  5 days, 20 hours ago


Selected Answer: B
B is correct
upvoted 1 times

  TariqKipkemei 3 weeks ago


Selected Answer: B
Main requirement is: 'scalability and minimized operational overhead' = serverless = Amazon S3 bucket, AWS Lambda function, Amazon
Aurora DB cluster
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
- Using Lambda functions triggered by S3 events allows the Python code to automatically scale up and down based on the number of
incoming JSON documents. This provides high availability and maximizes scalability.
- Storing the results in an Amazon Aurora DB cluster provides a managed, scalable, and highly available database.
- This serverless approach minimizes operational overhead since Lambda and Aurora handle provisioning infrastructure, deploying code,
monitoring, patching, etc.
upvoted 2 times
  aadityaravi8 2 months, 4 weeks ago
The answer is B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to
process the documents as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.
This solution is highly available because Lambda functions are automatically scaled up or down based on the number of requests they
receive. It is also scalable because you can easily add more Lambda functions to process more documents. Finally, it minimizes
operational overhead because you do not need to manage any EC2 instances.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Using Lambda eliminates the need to manage and provision servers, ensuring scalability and minimizing operational overhead. S3
provides durable and highly available storage for the JSON documents. Lambda can be triggered automatically whenever new documents
are added to the S3 bucket, allowing for real-time processing. Storing the results in an Aurora DB cluster ensures high availability and
scalability for the processed data. This solution leverages serverless architecture, allowing for automatic scaling and high availability
without the need for managing infrastructure, making it the most suitable choice.

A. This option requires manual management and scaling of EC2 instances, resulting in higher operational overhead and complexity.

C. This approach still involves manual management and scaling of EC2 instances, increasing operational complexity and overhead.

D. This solution requires managing and scaling an ECS cluster, adding operational overhead and complexity. Utilizing SQS adds complexity
to the system, requiring custom handling of message consumption and processing in the Python code.
upvoted 2 times

  Bmarodi 4 months, 1 week ago


Selected Answer: B
Keywords here are "maximizes scalability and minimizes operational overhead", hence option B is the correct answer.
upvoted 1 times

  channn 5 months, 3 weeks ago


Selected Answer: D
I vote for D, as an 'on-premises SQL database' is not necessarily MySQL/PostgreSQL, which is what Aurora can replace
upvoted 2 times

  perception 7 months, 1 week ago


does somebody had contributor access and want to share. i would really appreciate it.
here's my email
[email protected]
Thanks
upvoted 1 times

  kerin 7 months, 1 week ago


B is the best option. https://aws.amazon.com/rds/aurora/
upvoted 1 times

  mp165 9 months ago


Selected Answer: B
agree...B is the best option S3, Lambda , Aurora.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: B
Choosing B as "The company needs a highly available solution that maximizes scalability and minimizes operational overhead"
upvoted 1 times

  studis 9 months, 2 weeks ago


B is tempting but this sentence "runs thousands of times each day." If we use lambda as in B, won't this incur a high bill at the end?
upvoted 1 times

  techhb 9 months, 1 week ago


Agree, but the question doesn't have cost as a criterion to choose the solution. The criterion is "The company needs a highly available solution that
maximizes scalability and minimizes operational overhead". Hence B
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Phinx 10 months, 1 week ago


Selected Answer: B
D is incorrect because using ECS entails a lot of admin overhead. so B is the correct one.
upvoted 1 times
  Wpcorgan 10 months, 1 week ago
B is correct
upvoted 1 times
Question #162 Topic 1

A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company’s HPC workloads run
on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are
ultimately stored in persistent storage for analytics and long-term future use.

The company seeks a cloud storage solution that permits the copying of on-premises data to long-term persistent storage to make data available
for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read
and write datasets and output files.

Which combination of AWS services meets these requirements?

A. Amazon FSx for Lustre integrated with Amazon S3

B. Amazon FSx for Windows File Server integrated with Amazon S3

C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)

D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume

Correct Answer: A

Community vote distribution


A (100%)

  Marge_Simpson Highly Voted  9 months, 3 weeks ago


Selected Answer: A
If you see HPC and Linux both in the question.. Pick Amazon FSx for Lustre
upvoted 18 times

  HayLLlHuK 9 months ago


yeap, you’re right!
upvoted 1 times

  aba2s Highly Voted  9 months ago


Selected Answer: A
Additional keywords: make data available for processing by all EC2 instances ==> FSx

In absence of EFS, it should be FSx. Amazon FSx For Lustre provides a high-performance, parallel file system for hot data
upvoted 5 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: A
HPC workloads running on Linux = Amazon FSx for Lustre
upvoted 1 times

  Jeyaluxshan 1 month ago


High performance - Lustre
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
The reasons are:

Amazon FSx for Lustre provides a high-performance, scalable file system optimized for compute-intensive workloads like HPC. It has
native integration with Amazon S3.
Data can be copied from on-premises to an S3 bucket, acting as persistent long-term storage.
The FSx for Lustre file system can then access the S3 data for high speed processing of datasets and output files.
FSx for Lustre is designed for the Linux environments used in this HPC workload.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
FSx for Lustre is a high-performance file system optimized for compute-intensive workloads. It provides scalable, parallel access to data
and is suitable for HPC applications.
By integrating FSx for Lustre with S3, you can easily copy on-premises data to long-term persistent storage in S3, making it available for
processing by EC2 instances.
S3 serves as the durable and highly scalable object storage for storing the output files, allowing for analytics and long-term future use.
Option B, FSx for Windows File Server, is not suitable because the workloads run on Linux, and this option is designed for Windows file
sharing.

Option C, S3 Glacier integrated with EBS, is not the best choice as it is a low-cost archival storage service and not optimized for high-
performance file system requirements.

Option D, using an S3 bucket with a VPC endpoint integrated with an Amazon EBS General Purpose SSD (gp2) volume, does not provide
the required high-performance file system capabilities for HPC workloads.
upvoted 2 times
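
A rough boto3 sketch of option A: a scratch Lustre file system linked to an S3 bucket, so input data is lazy-loaded from S3 and output files can be exported back. The subnet ID and bucket paths are placeholders; persistent (PERSISTENT_2) deployments would use data repository associations instead of ImportPath/ExportPath.

    import boto3

    fsx = boto3.client("fsx")

    response = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,                      # GiB; minimum size for SCRATCH_2
        SubnetIds=["subnet-0123456789abcdef0"],    # placeholder subnet
        LustreConfiguration={
            "DeploymentType": "SCRATCH_2",         # short-lived, high-throughput HPC runs
            "ImportPath": "s3://example-hpc-datasets",           # read datasets from S3
            "ExportPath": "s3://example-hpc-datasets/results",   # write output files back to S3
        },
    )
    print(response["FileSystem"]["FileSystemId"])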
  Bmarodi 4 months, 1 week ago
Selected Answer: A
Option A is right answer.
upvoted 1 times

  kerin 7 months, 1 week ago


FSx for Lustre makes it easy and cost-effective to launch and run the popular, high-performance Lustre file system. You use Lustre for
workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.
Amazon Fsx for Lustre is integrated with Amazon S3.
upvoted 2 times

  SilentMilli 9 months ago


Selected Answer: A
Amazon FSx for Lustre integrated with Amazon S3
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: A
A is right choice here.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A is the best high performance storage with integration to S3
upvoted 1 times

  wly_al 9 months, 3 weeks ago


Selected Answer: A
The requirement is a file system and the workload runs on Linux, so S3 alone and FSx for Windows are not options
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


A
The Amazon FSx for Lustre service is a fully managed, high-performance file system that makes it easy to move and process large
amounts of data quickly and cost-effectively. It provides a fully managed, cloud-native file system with low operational overhead, designed
for massively parallel processing and high-performance workloads. The Lustre file system is a popular, open source parallel file system
that is well-suited for a variety of applications such as HPC, image processing, AI/ML, media processing, data analytics, and financial
modeling, among others. With Amazon FSx for Lustre, you can quickly create and configure new file systems in minutes, and easily scale
the size of your file system up or down
upvoted 2 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times

  BENICE 10 months, 2 weeks ago


A - for HPC "Amazon FSx for Lustre" and long-term persistence "S3"
upvoted 1 times

  rjam 10 months, 2 weeks ago


Amazon FSx for Lustre:
• HPC optimized distributed file system, millions of IOPS
• Backed by S3
upvoted 3 times

  rjam 10 months, 2 weeks ago


Answer A
upvoted 1 times

  babaxoxo 10 months, 2 weeks ago


Selected Answer: A
FSx for Lustre integrated with S3
upvoted 1 times
Question #163 Topic 1

A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands
of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy
the containerized application in a highly available architecture that minimizes operational overhead.

Which solution will meet these requirements?

A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.

B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.

C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across
multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.

D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group
across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold
is breached.

Correct Answer: C

Community vote distribution


A (100%)

  goatbernard Highly Voted  10 months, 2 weeks ago


Selected Answer: A
AWS Fargate
upvoted 10 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: A
Highly available architecture that minimizes operational overhead = Serverless = Elastic Container Registry, Amazon Elastic Container
Service with AWS Fargate launch type
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
Using ECR provides a fully managed container image registry.
ECS with Fargate launch type allows running containers without managing servers or clusters. Fargate will handle scaling and
optimization.
Target tracking autoscaling will allow automatically adjusting capacity based on demand.
The serverless approach with Fargate minimizes operational overhead.
upvoted 1 times

  MikeDu 1 month, 2 weeks ago


Selected Answer: A
AWS Fargate should be the best choice
upvoted 1 times

  aadityaravi8 2 months, 4 weeks ago


A is the right answer, undoubtedly.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
ECR provides a secure and scalable repository to store and manage container images. ECS with the Fargate launch type allows you to run
containers without managing the underlying infrastructure, providing a serverless experience. Target tracking in ECS can automatically
scale the number of tasks or services based on a target value such as CPU or memory utilization, ensuring that the application can handle
increasing demand without manual intervention.

Option B is not the best choice because using the EC2 launch type requires managing and scaling EC2 instances, which increases
operational overhead.

Option C is not the optimal solution as it involves managing the container repository on an EC2 instance and manually launching EC2
instances, which adds complexity and operational overhead.
Option D also requires managing EC2 instances, configuring ASGs, and setting up manual scaling rules based on CloudWatch alarms,
which is not as efficient or scalable as using Fargate in combination with ECS.
upvoted 4 times

  Bmarodi 2 months, 3 weeks ago


Nice explanations!
upvoted 1 times
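
To show what "target tracking to scale automatically based on demand" looks like for a Fargate service, here is a minimal Application Auto Scaling sketch; the cluster/service names and the CPU target are placeholder choices.

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    RESOURCE_ID = "service/example-cluster/example-fargate-service"   # placeholder cluster/service

    # Allow the Fargate service to run between 2 and 50 tasks.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=RESOURCE_ID,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=50,
    )

    # Target tracking: add or remove tasks to keep average CPU around 70%.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=RESOURCE_ID,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )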

  Bmarodi 4 months, 1 week ago


Selected Answer: A
ECS + Fargate satisfy requirements, hence option A is the best solution.
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: A
minimize operational overhead = Serverless
Fargate is Serverless
upvoted 1 times

  NoinNothing 5 months, 3 weeks ago


Selected Answer: A
Correct is "A"
upvoted 1 times

  jaswantn 5 months, 4 weeks ago


You can place Fargate tasks all in one AZ, or across multiple AZs. But option A does not explicitly address the high-availability requirement of the
question. With option C we have multi-AZ.
upvoted 2 times

  SkyZeroZx 6 months ago


Selected Answer: A
A
Why ?
Because Fargate provisions resources on demand
upvoted 2 times

  CheckpointMaster 9 months, 1 week ago


Option A

AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon
EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This
removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  alect096 9 months, 2 weeks ago


Selected Answer: A
"minimizes operational overhead" --> Fargate is serverless
upvoted 2 times

  Shasha1 9 months, 3 weeks ago


A
AWS Fargate is a serverless experience for user applications, allowing the user to concentrate on building applications instead of
configuring and managing servers. Fargate also automates resource management, allowing users to easily scale their applications in
response to demand.
upvoted 1 times

  Phinx 10 months, 1 week ago


Selected Answer: A
Fargate is the only serverless option.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


A is correct
upvoted 1 times
Question #164 Topic 1

A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended
to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The
sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to
process, they must be retained so that they do not impact the processing of any remaining messages.

Which solution meets these requirements and is the MOST operationally efficient?

A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the
messages, respectively.

B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the
Kinesis Client Library (KCL).

C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue
to collect the messages that failed to process.

D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process.
Integrate the sender application to write to the SNS topic.

Correct Answer: C

Community vote distribution


C (87%) 13%

  aba2s Highly Voted  9 months ago


Selected Answer: C
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed
(consumed) successfully.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 7 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: C
Implement an AWS service to handle messages between the two applications = Amazon Simple Queue Service
If the messages fail to process, they must be retained = a dead-letter queue
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
SQS provides a fully managed message queuing service that meets all the requirements:

SQS can handle the sending and processing of 1,000 messages per hour
Messages can be retained for up to 14 days to allow the full 2 days for processing
Using a dead-letter queue will retain failed messages without impacting other processing
SQS requires minimal operational overhead compared to running your own message queue server
upvoted 2 times

  MutiverseAgent 2 months, 1 week ago


Selected Answer: B
Answer is B), the reason is:
- Because messages might take up to 2 days to be processed. The visibility timeout of SQS is 12 hours, so after 12 hours another consumer might
take a message from the queue which is currently being processed.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By integrating both the sender and processor applications with an SQS, messages can be reliably sent from the sender to the processor
application for processing. SQS provides at-least-once delivery, ensuring that messages are not lost in transit. If a message fails to
process, it can be retained in the queue and retried without impacting the processing of other messages. Configuring a DLQ allows for the
collection of messages that repeatedly fail to process, providing visibility into failed messages for troubleshooting and analysis.

A is not the optimal choice as it involves managing and configuring an EC2 instance running a Redis, which adds operational overhead
and maintenance requirements.

B is not the most operationally efficient solution as it introduces additional complexity by using Amazon Kinesis data streams and
integrating with the Kinesis Client Library for message processing.

D, using SNS, is not the best fit for the scenario as it is more suitable for pub/sub messaging and broadcasting notifications rather than
the specific requirement of message processing between two applications.
upvoted 3 times

  Bmarodi 2 months, 3 weeks ago


Nice explanations always, thanks a lot!
upvoted 1 times

  Bmarodi 1 month, 2 weeks ago


Nice explanations always, thanks a lot
upvoted 1 times
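
A minimal boto3 sketch of option C: a main queue whose retention comfortably covers the 2-day processing window, plus a redrive policy that moves repeatedly failed messages to a dead-letter queue. Queue names and the receive count are arbitrary placeholder choices.

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Dead-letter queue first; failed messages land here without blocking the rest.
    dlq_url = sqs.create_queue(QueueName="payload-processing-dlq")["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Main queue: retain messages for 7 days (covers the 2-day processing window) and
    # move a message to the DLQ after 5 failed receive attempts.
    sqs.create_queue(
        QueueName="payload-processing",
        Attributes={
            "MessageRetentionPeriod": str(7 * 24 * 3600),
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            ),
        },
    )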

  Jeeva28 4 months, 1 week ago


Selected Answer: C
Answer C. In the question, if the keywords mention failed processing >> SQS
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: C
solution that meets these requirements and is the MOST operationally efficient will be option C. SQS is buffer between 2 APPs.
upvoted 1 times

  norris81 4 months, 1 week ago


The visibility timeout must not be more than 12 hours. ( For SQS )

Jobs may take 2 days to process


upvoted 2 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: C
operationally efficient = Serverless
SQS is serverless
upvoted 1 times

  studynoplay 4 months, 3 weeks ago


SNS too is serverless, but it is obvious that it is not the correct answer in this case
upvoted 1 times

  apchandana 5 months, 1 week ago


Selected Answer: C
more realistic option is C.

The only problem with this is that the visibility timeout limit is 12 hours max. As the second application takes 2 days to process, there may be
duplicate processing of messages in the queue. This might complicate things.
upvoted 2 times

  nilandd44gg 2 months, 2 weeks ago


Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. The
default message retention period is 4 days. However, you can set the message retention period to a value from 60 seconds to 1,209,600
seconds (14 days) using the SetQueueAttributes action.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-architecture.html
upvoted 1 times

  vherman 7 months ago


SQS has a 12-hour limit for the visibility timeout
upvoted 1 times

  bullrem 8 months, 1 week ago


Selected Answer: B
Option C, using Amazon SQS, is a valid solution that meets the requirements of the company. However, it may not be the most
operationally efficient solution because SQS is a managed message queue service that requires additional operational overhead to handle
the retention of messages that failed to process. Option B, using Amazon Kinesis Data Streams, is more operationally efficient for this use
case because it can handle the retention of messages that failed to process automatically and provides the ability to process and analyze
streaming data in real-time.
upvoted 1 times

  UnluckyDucky 7 months ago


Kinesis streams save data for up to 24 hours, which doesn't meet the 2-day requirement.
Kinesis streams don't have a fail-safe for failed processing, unlike SQS.
The correct answer is C - SQS.
upvoted 3 times

  apchandana 5 months, 1 week ago


this is not a correct statement.
A data stream is a logical grouping of shards. There are no bounds on the number of shards within a data stream (request a limit
increase if you need more). A data stream will retain data for 24 hours by default, or optionally up to 365 days.
Shard
https://aws.amazon.com/kinesis/data-streams/getting-started/
upvoted 1 times

  LuckyAro 8 months ago


There's no way for kinesis to know whether the message processing failed.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C.
upvoted 1 times

  ocbn3wby 10 months ago


Selected Answer: C
This matches mostly the job of Dead Letter Q:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
vs
https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 4 times

  Kapello10 10 months, 1 week ago


Selected Answer: C
Option C is the correct ans
upvoted 1 times

  Gabs90 10 months, 1 week ago


Selected Answer: C
C is correct. B is wrong because the question asks for a way to let the two applications communicate; the processing itself is already handled
upvoted 1 times

  TelaO 10 months, 1 week ago


Selected Answer: B
Please explain why "B" is incorrect. How does SQS process data?

"KCL helps you consume and process data from a Kinesis data stream by taking care of many of the complex tasks associated with
distributed computing."

https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 2 times

  HayLLlHuK 9 months ago


As per question, the processing application will take messages.
"The company wants to implement an AWS service to handle messages between the two applications."
upvoted 1 times

  ocbn3wby 10 months ago


The processing is done at the 2nd application level.

This seems to be the job of Dead Letter Q


upvoted 1 times

  KADSM 10 months, 1 week ago


Kinesis may not have message retry - there is no way for Kinesis to know whether the message processing failed. Messages only stay
there until their retention period ends.
upvoted 4 times
Question #165 Topic 1

A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company’s
security policy requires that all website traffic be inspected by AWS WAF.

How should the solutions architect comply with these requirements?

A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.

B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.

C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.

D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF
on the distribution.

Correct Answer: D

Community vote distribution


D (55%) B (45%)

  Nigma Highly Voted  10 months, 2 weeks ago


Answer D. Use an OAI to lockdown CloudFront to S3 origin & enable WAF on CF distribution
upvoted 20 times

  FNJ1111 9 months ago


https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/ confirms use of OAI (and option D).
upvoted 7 times

  javiems Most Recent  3 days, 15 hours ago


Selected Answer: D
Answer D. By configuring an OAI, you restrict direct access to your S3 bucket, ensuring that only CloudFront can access the content in the
bucket. This enhances security by preventing direct access to the S3 origin. Enabling AWS WAF on the CloudFront distribution allows you to
inspect all incoming traffic through CloudFront before it reaches the S3 origin. This ensures that all website traffic is inspected for security
threats as required by the company's security policy.
upvoted 1 times

  vijaykamal 4 days, 15 hours ago


Answer is D. B option doesn't involve S3 or the use of an origin access identity (OAI) to restrict access to the S3 bucket. It's important to
ensure that unauthorized users cannot access S3 objects directly.
upvoted 1 times

  JKevin778 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html
D
upvoted 1 times

  BrijMohan08 2 weeks, 1 day ago


Selected Answer: D
Using an Origin Access Identity (OAI) allows you to restrict direct access to the S3 bucket and ensure all traffic comes through CloudFront.

AWS WAF can then be enabled on the CloudFront distribution to inspect all incoming traffic.

The correct answer is D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3
bucket. Enable AWS WAF on the distribution.
upvoted 1 times

  TariqKipkemei 2 weeks, 6 days ago


Selected Answer: D
Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on
the distribution.
upvoted 1 times

  mtmayer 1 month, 1 week ago


Selected Answer: D
D for me.
upvoted 2 times
  Guru4Cloud 1 month, 2 weeks ago
Selected Answer: B
This option meets the requirements by:

Using CloudFront with an S3 origin to store the static website


Configuring CloudFront to forward requests to AWS WAF first for inspection before fetching content from S3
This allows AWS WAF to inspect all traffic to the website per the security policy
upvoted 1 times

  Willnotsin 2 months ago


Answer D
upvoted 2 times

  Nazmul123 2 months, 1 week ago


D

CloudFront's Origin Access Identity (OAI) is a special CloudFront user that you can associate with your distribution. By applying an OAI to
your S3 bucket, you're able to securely lock down all direct access to your S3 files and require all requests to come through CloudFront.

Amazon Web Application Firewall (WAF) is a security feature that helps protect your resources against common exploits. You can configure
AWS WAF directly on your CloudFront distribution to inspect incoming requests to your web application.
upvoted 2 times
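As a rough sketch of the setup answer D describes, the snippet below attaches an existing WAFv2 web ACL to an existing CloudFront distribution with boto3; the distribution ID and web ACL ARN are placeholders. For CloudFront-scoped web ACLs, the DistributionConfig WebACLId field takes the WAFv2 ARN, and the OAI/OAC on the S3 origin is assumed to be configured separately.

```python
import boto3

cloudfront = boto3.client("cloudfront")

distribution_id = "E1EXAMPLE"  # placeholder distribution ID
web_acl_arn = (
    "arn:aws:wafv2:us-east-1:123456789012:global/webacl/site-acl/abcd1234"  # placeholder
)

# Read the current distribution config, set the web ACL, and push the update.
resp = cloudfront.get_distribution_config(Id=distribution_id)
config = resp["DistributionConfig"]
config["WebACLId"] = web_acl_arn

cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=resp["ETag"],  # the ETag from the read is required for the update
)
```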

  sosda 2 months, 3 weeks ago


Selected Answer: D
WAF is associated with CloudFront when the distribution is created or updated. There is no need to forward requests to WAF.
upvoted 2 times

  Dhaysindhu 3 months ago


Selected Answer: B
I vote for B!
Option D is not correct, OAI in CloudFront and restricting access to the S3 bucket does not ensure that all website traffic is inspected by
AWS WAF.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
By configuring CloudFront to forward all incoming requests to AWS WAF, the traffic will be inspected by AWS WAF before reaching the S3
origin, complying with the security policy requirement. This approach ensures that all website traffic is inspected by AWS WAF, providing
an additional layer of security before accessing the content stored in the S3 origin.

Option A is not the correct choice as configuring an S3 bucket policy to accept requests from the AWS WAF ARN only would bypass the
inspection of traffic by AWS WAF. It does not ensure that all website traffic is inspected.

Option C is not the optimal solution as it focuses on controlling access to S3 using a security group. Although it associates AWS WAF with
CloudFront, it does not guarantee that all incoming requests are inspected by AWS WAF.

Option D is not the recommended solution as configuring an OAI in CloudFront and restricting access to the S3 bucket does not ensure
that all website traffic is inspected by AWS WAF. The OAI is used for restricting direct access to S3 content, but the traffic should still pass
through AWS WAF for inspection.
upvoted 4 times

  baba365 3 months, 2 weeks ago


Answer B:
If your origin is an Amazon S3 bucket configured as a website endpoint, you must set it up with CloudFront as a custom origin. That
means you can't use OAC (or OAI). However, you can restrict access to a custom origin by setting up custom headers and configuring the
origin to require them. For more information, see Restricting access to files on custom origins.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 1 times

  srijrao 3 months, 1 week ago


wtf dot com?
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: D
The solution that complies with these requirements is option D.
CloudFront + S3 with OAI/OAC is the best approach.
upvoted 1 times

  Ankit_EC_ran 5 months, 1 week ago


Selected Answer: D
Use an OAI to have access only from CloudFront to S3 origin & enable WAF on CF distribution
upvoted 1 times
  Robrobtutu 5 months, 2 weeks ago
Selected Answer: B
I'm voting B because the traffic flows from the user to CloudFront, then from CloudFront to AWS WAF, and then back to CloudFront before
being sent to the S3 origin.

Regarding answer D, from what I can tell when you use OAI (or OAC) you don't use WAF, and the question specifically asks for us to use
WAF.
upvoted 1 times

  apchandana 5 months ago


Actually, you are able to enable WAF on CloudFront directly; there is nothing called forwarding.
upvoted 2 times
Question #166 Topic 1

Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from
users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective
solution.

Which action should the solutions architect take to accomplish this?

A. Generate presigned URLs for the files.

B. Use cross-Region replication to all Regions.

C. Use the geoproximity feature of Amazon Route 53.

D. Use Amazon CloudFront with the S3 bucket as its origin.

Correct Answer: D

Community vote distribution


D (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: D
The most effective and efficient solution would be Option D (Use Amazon CloudFront with the S3 bucket as its origin.)

Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML
pages, images, and videos. By using CloudFront, the HTML pages will be served to users from the edge location that is closest to them,
resulting in faster delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests
expected for the global event, ensuring that the HTML pages are available and accessible to users around the world.
upvoted 7 times
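For illustration only, a minimal boto3 sketch of putting a CloudFront distribution in front of the S3 bucket; the bucket domain name, caller reference, and the managed CachingOptimized cache policy ID are assumptions for the example, and a production setup would also restrict direct bucket access with an OAI/OAC as discussed in the previous question.

```python
import boto3

cloudfront = boto3.client("cloudfront")

bucket_domain = "daily-reports-example-bucket.s3.amazonaws.com"  # placeholder bucket

# Minimal distribution: cache the static report pages at edge locations
# worldwide, with the S3 bucket as the origin.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "daily-reports-2023",
        "Comment": "Static daily report pages",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "reports-s3-origin",
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "reports-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed "CachingOptimized" cache policy ID (assumed here).
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```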

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: D
Global users = Amazon CloudFront
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
CloudFront is the best solution for this use case because:

CloudFront is a content delivery network (CDN) that caches content at edge locations around the world. This brings content closer to users
for fast performance.
For high traffic global events with millions of viewers, a CDN is necessary for effective distribution.
Using the S3 bucket as the origin, CloudFront can fetch the files once and cache them globally.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
CloudFront is well-suited for efficiently serving static HTML pages to users around the world. By using it with S3 as its origin, the static
HTML pages can be cached and distributed globally to edge locations, reducing latency and improving performance for users accessing
the pages from different regions. This solution ensures efficient and effective delivery of the daily reports to millions of users worldwide,
providing a scalable and high-performance solution for the global event.

A would allow temporary access to the files, but it does not address the scalability and performance requirements of serving millions of
views globally.

B is not necessary for this scenario as the goal is to distribute the static HTML pages efficiently to users worldwide, not replicate the files
across multiple Regions.

C is primarily used for routing DNS traffic based on the geographic location of users, but it does not provide the caching and content
delivery capabilities required for this use case.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: D
Agreed
upvoted 1 times
  Sahilbhai 9 months, 3 weeks ago
answer is D agree with Shasha1
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


D
CloudFront is a content delivery network (CDN) offered by Amazon Web Services (AWS). It functions as a reverse proxy service that caches
web content across AWS's global data centers, improving loading speeds and reducing the strain on origin servers. CloudFront can be
used to efficiently deliver large amounts of static or dynamic content anywhere in the world.
upvoted 2 times

  Wpcorgan 10 months, 1 week ago


D is correct
upvoted 2 times

  Nigma 10 months, 2 weeks ago


D

Static content on S3 and hence Cloudfront is the best way


upvoted 2 times

  Pamban 10 months, 2 weeks ago


Selected Answer: D
D is the correct answer
upvoted 2 times
Question #167 Topic 1

A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and
processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually
process messages without any downtime.

Which solution meets these requirements MOST cost-effectively?

A. Use Spot Instances exclusively to handle the maximum capacity required.

B. Use Reserved Instances exclusively to handle the maximum capacity required.

C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.

D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.

Correct Answer: C

Community vote distribution


D (51%) C (48%)

  taer Highly Voted  10 months, 2 weeks ago


Selected Answer: D
D is the correct answer
upvoted 17 times

  Drayen25 7 months, 3 weeks ago


C is correct, read for cost effectiveness
upvoted 3 times

  diabloexodia 2 months, 2 weeks ago


AWS has stopped issuing spot instances so i think C
upvoted 1 times

  diabloexodia 2 months, 2 weeks ago


so i think C is incorrect*. the Correct ans is D.
upvoted 1 times

  sezer 6 months ago


If you cannot find enough Spot Instances you will have downtime; you cannot always find Spot Instances.
upvoted 9 times

  Kumaran1508 4 months, 1 week ago


Why downtime when there are baseline reserved instances?
upvoted 2 times

  Sutariya 4 weeks ago


Baseline Reserved Instances plus Spot Instances for additional capacity is the cost saver.
upvoted 1 times

  HayLLlHuK Highly Voted  9 months ago


Selected Answer: C
"without any downtime" - Reserved Instances for the baseline capacity
"MOST cost-effectively" - Spot Instances to handle additional capacity
upvoted 16 times

  kraken21 6 months ago


How can you have baseline capacity when your message volume is unpredictable and often has intermittent traffic?
upvoted 2 times

  MutiverseAgent 2 months, 1 week ago


For this reason I think correct answer is A
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Dude, read the question, cost consideration was not mentioned in the question.
upvoted 1 times
  ShinobiGrappler 8 months, 2 weeks ago
Dude, read the question, "Which solution meets these requirements MOST cost-effectively?"
upvoted 16 times

  MrSaint 5 months ago


Cost-effective means the cheapest solution (cost) that achieves all the requirements (effectively). It is not cost-effective if it is just the
cheapest solution that fails to address all the requirements, in this case "This application should continually process messages
without any downtime," no matter the volume, since it is unpredictable. B, for example, addresses the requirement but is not the
cheapest solution that achieves it. D is the cheaper choice that addresses the requirement (without any downtime), and C is cheaper
than D but does not guarantee that you won't have downtime since it uses Spot Instances.
upvoted 3 times

  kraken21 6 months ago


I am leaning towards C because the idea of having a queue is to decouple the processing. If an instance (Spot) goes down while
processing, won't the message show up again after the visibility timeout? So using Spot meets the cost-effective objective.
upvoted 5 times

  Sutariya 4 weeks ago


Intermediate data is stored in the SQS queue, so once an instance is free it takes the data and processes it.
upvoted 1 times

  mildewCake Most Recent  1 day, 22 hours ago


Selected Answer: D
D: On-demand instances would always be available, whereas Spot (C) would not.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: D
without any downtime = NO Spot Instances
upvoted 1 times

  daniel33 2 weeks, 5 days ago


Selected Answer: D
D is the answer - the question states "should continually process messages without any downtime".
So, spot instances can offer up to 90% discount but quickly get interrupted.
upvoted 1 times

  TariqKipkemei 2 weeks, 6 days ago


Selected Answer: D
'This application should continually process messages without any downtime'.
With spot instances there are chances of downtime.
On-demand will handle the peak times with no downtime.
upvoted 1 times

  Valder21 3 weeks, 6 days ago


Selected Answer: D
without any downtime=no spot instances
upvoted 1 times

  coolkidsclubvip 1 month ago


Selected Answer: C
C has no downtime either
upvoted 1 times

  AKBM7829 1 month ago


D is the correct answer.
Keyword: "unpredictable and often has intermittent traffic" points to On-Demand.
upvoted 1 times

  samiizduma 1 month, 1 week ago


Selected Answer: C
Processing stays continuous with the Reserved Instances, and Spot Instances are more cost-effective for the additional processing.
upvoted 1 times

  040AG 1 month, 2 weeks ago


Selected Answer: D
D is CORRECT (NOT C)
The question states "... application should continually process messages without any downtime." it then asks which solution meets the
requirements MOST cost-effectively. Hence, GIVEN that there can be no downtime, which is most cost-effective?
This therefore eliminates option C as it is vulnerable to "Spot Instance Interruptions", i.e. downtime. In addition to this, from the
perspective of scaling out for increased application demand, you are at the mercy of spot instance availability which at that given time
might be scarce / limited and thus there is wait time before capacity is freed up and a new spot instance can be added to the target group,
i.e. downtime.
upvoted 1 times
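To see how a baseline-plus-burst fleet is usually expressed, here is a hedged boto3 sketch of an Auto Scaling group with a MixedInstancesPolicy; the group name, subnets, launch template, and the 2-instance baseline are placeholders. Setting OnDemandPercentageAboveBaseCapacity to 0 gives the Spot flavour debated as option C, while setting it to 100 gives the On-Demand flavour of option D; Reserved Instances or Savings Plans would then be purchased to cover the baseline.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sqs-workers",          # placeholder name
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "sqs-worker-template",  # assumed to exist
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            # The first 2 instances always run On-Demand (cover them with RIs).
            "OnDemandBaseCapacity": 2,
            # Capacity above the baseline: 0 = all Spot, 100 = all On-Demand.
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```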
  miniranda 1 month, 2 weeks ago
Selected Answer: D
D is the correct answer.
upvoted 1 times

  Sat897 1 month, 2 weeks ago


Selected Answer: D
D is correct without any doubt. Even though cost-effectiveness is required, the traffic is unpredictable and often intermittent, so On-Demand fits.
upvoted 1 times

  Moss2011 1 month, 3 weeks ago


Selected Answer: D
The EC2 instance fleet does not mention any external storage. If you are processing data, where are you saving it? I will choose D just to
have the flexibility to add more instances from a snapshot if necessary
upvoted 1 times

  n43u435b543ht2b 1 month, 3 weeks ago


Selected Answer: C
C for cost-effectiveness.
Reserved instances will cover the baseline compute, and spot instances can be utilized for the unpredictable workloads. Even if a spot
instance is unavailable when required, the base reserved compute can handle the workload until a spot instance becomes available.

While using on-demand for the unpredictable workloads would mean that they are always available when required, this question asks for
the most cost effective solution, and does not specify for the most operationally efficient solution.
upvoted 1 times

  oguzbeliren 2 months ago


There is common confusion between C and D. Both sound suitable, but using Spot Instances for additional capacity is more cost-effective than
using On-Demand Instances. The question doesn't place any restriction against instances being interrupted. The answer is C
upvoted 2 times

  Nazmul123 2 months, 1 week ago


With reserved instances your service will never have any downtime. Given that you have spot-instances, the quality of the service may
degrade but you will not get any downtime

C is correct.
upvoted 1 times
Question #168 Topic 1

A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization
in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained.

What should a solutions architect do to accomplish this?

A. Create an ACL to provide access to the services or actions.

B. Create a security group to allow accounts and attach it to user groups.

C. Create cross-account roles in each account to deny access to the services or actions.

D. Create a service control policy in the root organizational unit to deny access to the services or actions.

Correct Answer: D

Community vote distribution


D (100%)

  Nigma Highly Voted  10 months, 2 weeks ago


D. Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the
maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your
organization's access control guidelines. See
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html.
upvoted 13 times
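A minimal sketch of option D with boto3 and AWS Organizations, run from the management account; the policy name and the services being denied are illustrative assumptions.

```python
import json
import boto3

org = boto3.client("organizations")

# Example SCP that denies a couple of services organization-wide.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["dynamodb:*", "rds:*"],  # placeholder services to block
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-unapproved-services",
    Description="Blocks services the security team has not approved",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP at the organization root so every account inherits it.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```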

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: D
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer
central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay
within your organization’s access control guidelines.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
D. Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the
maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your
organization's access control guidelines. See
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html.
upvoted 1 times

  cookieMr 3 months, 1 week ago


By creating an SCP in the root organizational unit, the security team can define and enforce fine-grained permissions that limit access to
specific services or actions across all member accounts. The SCP acts as a guardrail, denying access to specified services or actions,
ensuring that the permissions are consistent and applied uniformly across the organization. SCPs are scalable and provide a single point
of control for managing permissions, allowing the security team to centrally manage access restrictions without needing to modify
individual account settings.

Option A and option B are not suitable for controlling access across multiple accounts in AWS Organizations. ACLs and security groups are
typically used for managing network traffic and access within a single account or a specific resource.

Option C is not the recommended approach. Cross-account roles are used for granting access, and denying access through cross-account
roles can be complex and less manageable compared to using SCPs.
upvoted 4 times

  Bmarodi 4 months, 1 week ago


Selected Answer: D
I vote for option D. Creating a service control policy (SCP) in the root organizational unit to deny access to the services or actions meets
the requirements.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
To limit access to specific services or actions in all of the team's AWS accounts and maintain a single point where permissions can be
managed, the solutions architect should create a service control policy (SCP) in the root organizational unit to deny access to the services
or actions (Option D).

Service control policies (SCPs) are policies that you can use to set fine-grained permissions for your AWS accounts within your
organization. SCPs are attached to the root of the organizational unit (OU) or to individual accounts, and they specify the permissions that
are allowed or denied for the accounts within the scope of the policy. By creating an SCP in the root organizational unit, the security team
can set permissions for all of the accounts in the organization from a single location, ensuring that the permissions are consistently
applied across all accounts.
upvoted 4 times
  career360guru 9 months, 2 weeks ago
Selected Answer: D
Option D
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


D is correct
upvoted 1 times

  babaxoxo 10 months, 2 weeks ago


It is an organization and requires a single place to manage permissions.
upvoted 2 times

  goatbernard 10 months, 2 weeks ago


Selected Answer: D
SCP for organization
upvoted 3 times
Question #169 Topic 1

A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load
Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application.

What should the solutions architect do to meet this requirement?

A. Add an Amazon Inspector agent to the ALB.

B. Configure Amazon Macie to prevent attacks.

C. Enable AWS Shield Advanced to prevent attacks.

D. Configure Amazon GuardDuty to monitor the ALB.

Correct Answer: C

Community vote distribution


C (100%)

  TariqKipkemei 2 weeks, 6 days ago


Selected Answer: C
AWS Shield is a managed DDoS protection service that safeguards applications running on AWS.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
Enable AWS Shield Advanced to prevent attacks.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By enabling Shield Advanced, the web application benefits from automatic protection against common and sophisticated DDoS attacks. It
utilizes advanced detection and mitigation techniques, including ML algorithms and traffic analysis, to provide effective DDoS protection.
It also includes features like real-time monitoring, attack notifications, and detailed attack reports.

A is not related to DDoS protection. Amazon Inspector is a security assessment service that helps identify vulnerabilities and security
issues in applications and EC2.

B is also not the appropriate solution. Macie is a service that uses machine learning to discover, classify, and protect sensitive data stored
in AWS. It focuses on data security and protection, not specifically on DDoS prevention.

D is not the most effective solution. GuardDuty is a threat detection service that analyzes events and network traffic to identify potential
security threats and anomalies. While it can provide insights into potential DDoS attacks, it does not actively prevent or mitigate them.
upvoted 2 times

  studynoplay 4 months, 3 weeks ago


What's going on, suddenly the questions are so easy
upvoted 4 times

  Sutariya 2 months, 1 week ago


Its due to confidence level going up after experience.
upvoted 1 times

  techhb 9 months, 1 week ago


Explained in details here https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
To reduce the risk of DDoS attacks against the application, the solutions architect should enable AWS Shield Advanced (Option C).

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that helps protect web applications running on AWS from
DDoS attacks. AWS Shield Advanced is an additional layer of protection that provides enhanced DDoS protection capabilities, including
proactive monitoring and automatic inline mitigations, to help protect against even the largest and most sophisticated DDoS attacks. By
enabling AWS Shield Advanced, the solutions architect can help protect the application from DDoS attacks and reduce the risk of
disruption to the application.
upvoted 4 times
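For reference, enabling Shield Advanced protection for the ALB boils down to two API calls, sketched below with boto3; the ALB ARN is a placeholder, and keep in mind that the Shield Advanced subscription carries a significant monthly commitment.

```python
import boto3

shield = boto3.client("shield")

# Shield Advanced needs an active subscription on the account first.
shield.create_subscription()

# Register the ALB as a protected resource (placeholder ARN).
shield.create_protection(
    Name="public-web-alb",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/web/abc123"
    ),
)
```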

  career360guru 9 months, 2 weeks ago


Selected Answer: C
C is right answer
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


C is correct
upvoted 1 times

  goatbernard 10 months, 2 weeks ago


Selected Answer: C
AWS Shield Advanced
upvoted 3 times

  Nigma 10 months, 2 weeks ago


DDOS = AWS Shield
upvoted 4 times
Question #170 Topic 1

A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy,
which now requires the application to be accessed from one specific country only.

Which configuration will meet this requirement?

A. Configure the security group for the EC2 instances.

B. Configure the security group on the Application Load Balancer.

C. Configure AWS WAF on the Application Load Balancer in a VPC.

D. Configure the network ACL for the subnet that contains the EC2 instances.

Correct Answer: C

Community vote distribution


C (100%)

  handyplazt Highly Voted  10 months, 2 weeks ago


Selected Answer: C
Geographic (Geo) Match Conditions in AWS WAF. This new condition type allows you to use AWS WAF to restrict application access based
on the geographic location of your viewers. With geo match conditions you can choose the countries from which AWS WAF should allow
access.
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
upvoted 16 times
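A hedged boto3 sketch of that geo match setup: a WAFv2 web ACL that allows traffic from a single country (DE is an arbitrary example) and blocks everything else, then associates the ACL with the ALB. The names, priority, metric names, and ALB ARN are placeholders.

```python
import boto3

# A web ACL for an ALB uses the REGIONAL scope in the ALB's Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="allow-one-country",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},  # block anything not explicitly allowed
    Rules=[
        {
            "Name": "allow-from-DE",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-from-DE",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-one-country",
    },
)

# Associate the web ACL with the ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/web/abc123"
    ),
)
```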

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: C
C. Configure AWS WAF on the Application Load Balancer in a VPC
upvoted 1 times

  Sutariya 2 months, 1 week ago


We can use AWS WAF to configure an access control rule that allows access only from a specific location.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By configuring AWS WAF on the ALB in a VPC, you can apply access control rules based on the geographic location of the incoming
requests. AWS WAF allows you to create rules that include conditions based on the IP addresses' country of origin. You can specify the
desired country and deny access to requests originating from any other country by leveraging AWS WAF's Geo Match feature.

Option A and option B focus on network-level access control and do not provide country-specific filtering capabilities.

Option D is not the ideal solution for restricting access based on country. Network ACLs primarily control traffic at the subnet level based
on IP addresses and port numbers, but they do not have built-in capabilities for country-based filtering.
upvoted 3 times

  Abrar2022 4 months ago


Configure AWS WAF for Geo Match Policy
upvoted 1 times

  aba2s 8 months, 3 weeks ago


Selected Answer: C
Source from an AWS link
Geographic (Geo) Match Conditions in AWS WAF. This condition type allows you to use AWS WAF to restrict application access based on the
geographic location of your viewers.
With geo match conditions you can choose the countries from which AWS WAF should allow access.
upvoted 2 times

  techhb 9 months, 1 week ago


Selected Answer: C
WAF/Shield Advanced for DDoS. GuardDuty is a continuous monitoring service that alerts you to potential threats, while Inspector is a one-time
assessment service that provides a report of vulnerabilities and deviations from best practices.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
To meet the requirement of allowing the web application to be accessed from one specific country only, the company should configure
AWS WAF (Web Application Firewall) on the Application Load Balancer in a VPC (Option C).

AWS WAF is a web application firewall service that helps protect web applications from common web exploits that could affect application
availability, compromise security, or consume excessive resources. AWS WAF allows you to create rules that block or allow traffic based on
the values of specific request parameters, such as IP address, HTTP header, or query string value. By configuring AWS WAF on the
Application Load Balancer and creating rules that allow traffic from a specific country, the company can ensure that the web application is
only accessible from that country.
upvoted 4 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
OptionC. Configure WAF for Geo Match Policy
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


C is correct
upvoted 1 times

  mricee9 10 months, 2 weeks ago


Selected Answer: C
C
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
upvoted 2 times

  Nigma 10 months, 2 weeks ago


C. WAF with ALB is the right option
upvoted 1 times
Question #171 Topic 1

A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger
number of inquiries during the holiday season only, which causes slower response times. A solutions architect needs to design a solution that is
scalable and elastic.

What should the solutions architect do to accomplish this?

A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.

B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax
computations.

C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received
item names.

D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and
passes the item names to the EC2 instance for tax computations.

Correct Answer: D

Community vote distribution


B (95%) 5%

  bullrem Highly Voted  8 months, 1 week ago


Selected Answer: B
Option D is similar to option B in that it uses Amazon API Gateway to handle the API requests, but it also includes an EC2 instance to
perform the tax computations. However, using an EC2 instance in this way is less scalable and less elastic than using AWS Lambda to
perform the computations. An EC2 instance is a fixed resource and requires manual scaling and management, while Lambda is an event-
driven, serverless compute service that automatically scales with the number of requests, making it more suitable for handling variable
workloads and reducing response times during high traffic periods. Additionally, Lambda is more cost-efficient than EC2 instances, as you
only pay for the compute time consumed by your functions, making it a more cost-effective solution.
upvoted 16 times
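To illustrate the API Gateway + Lambda pattern behind option B, here is a small hypothetical Lambda handler for a proxy integration; the tax table, request fields, and default rate are invented for the example and are not part of the question.

```python
import json

# Hypothetical tax rates; a real implementation would look these up elsewhere.
TAX_RATES = {"book": 0.0, "electronics": 0.19}


def handler(event, context):
    """Handle a request forwarded by an API Gateway proxy integration.

    API Gateway passes the HTTP request as `event`; returning a dict with
    statusCode/body produces the HTTP response. Lambda scales out
    automatically with concurrent requests, which covers the holiday spike.
    """
    body = json.loads(event.get("body") or "{}")
    item = body.get("item", "")
    price = float(body.get("price", 0))
    tax = price * TAX_RATES.get(item, 0.10)  # assumed default rate of 10%
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"item": item, "tax": round(tax, 2)}),
    }
```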

  vijaykamal Most Recent  4 days, 15 hours ago


Selected Answer: B
Options A, C, and D involve EC2 instances, which are not as inherently scalable and elastic as serverless AWS Lambda functions, and they
would require more manual management and operational overhead. Therefore, option B is the most appropriate choice for a scalable and
elastic API solution.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: B
A REST API using Amazon API Gateway integrated with AWS Lambda (option B) is the recommended approach to achieve a scalable and elastic
solution for the company's API during the holiday season.
EC2 is not a good fit in this case: using an EC2 instance in this way is less scalable and less elastic than using AWS Lambda to perform the
computations.
upvoted 1 times

  TariqKipkemei 2 weeks, 6 days ago


Selected Answer: B
scalable and elastic = serverless = API gateway and AWS Lambda
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


B) Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax
computations.

This option provides the most scalable and elastic solution:

API Gateway handles creating the REST API frontend to receive requests
Lambda functions scale automatically to handle spikes in traffic during peak seasons
No servers to manage for the computations, providing high scalability
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A (hosting an API on an Amazon EC2 instance) would require manual management and scaling of the EC2 instances, making it less
scalable and elastic compared to a serverless solution.

Option C (creating an Application Load Balancer with EC2 instances for tax computations) also involves manual management of the
instances and does not offer the same level of scalability and elasticity as a serverless solution.

Option D (designing a REST API using API Gateway and connecting it with an API hosted on an EC2 instance) adds unnecessary complexity
and management overhead. It is more efficient to directly integrate API Gateway with AWS Lambda for tax computations.

Therefore, designing a REST API using Amazon API Gateway and integrating it with AWS Lambda (option B) is the recommended approach
to achieve a scalable and elastic solution for the company's API during the holiday season.
upvoted 2 times
  Bmarodi 4 months, 1 week ago
Selected Answer: B
Option B is the solution that is scalable and elastic, hence this meets requirements.
upvoted 1 times

  jayce5 5 months, 1 week ago


Selected Answer: B
I also prefer B over D. However, it is quite vague since the question doesn't provide the processing time. The maximum processing time for
AWS Lambda is 15 minutes.
upvoted 1 times

  ProfXsamson 8 months ago


B. Serverless option wins over EC2
upvoted 4 times

  sona21 9 months, 1 week ago


Lambda is serverless and scalable, so the answer should be B.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
To design a scalable and elastic solution for providing an API for tax computations, the solutions architect should design a REST API using
Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance (Option D).

API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. By designing
a REST API using API Gateway, the solutions architect can create an API that is scalable, flexible, and easy to use. The API Gateway can
accept and pass the item names to the EC2 instance for tax computations, and the EC2 instance can perform the required computations
when the API request is made.
upvoted 2 times

  markw92 3 months, 2 weeks ago


You only explained the "front" part of scalable; unless you have an end-to-end scalable solution, it doesn't matter how scalable your
front end is. Here D ONLY covers the API front end, but the constraint is the EC2 instance, which is a single instance and not in a scalable
mode. I think B is more suitable given how little information is provided.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A (providing an API hosted on an EC2 instance) would not be a suitable solution as it may not be scalable or elastic enough to
handle the increased demand during the holiday season.

Option B (designing a REST API using API Gateway that passes item names to Lambda for tax computations) would not be a suitable
solution as it may not be suitable for computations that require a larger amount of resources or longer execution times.

Option C (creating an Application Load Balancer with two EC2 instances behind it) would not be a suitable solution as it may not
provide the necessary scalability and elasticity. Additionally, it would not provide the benefits of using API Gateway, such as API
management and monitoring capabilities.
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


But Option D is not scalable. The requirements state "A solutions architect needs to design a solution that is scalable and elastic". D
fails to meet these requirements. C on the other hand is scalable. There is nothing in the question to suggest that a longer execution
than lambda can handle happens. Therefore D is wrong, and C is possible.
upvoted 2 times

  JayBee65 8 months, 4 weeks ago


Sorry, it should say "Therefore D is wrong, and B is possible."
upvoted 2 times

  BENICE 9 months, 2 weeks ago


B is the option
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B. Though D is also possible B is more scalable as Lambda will autoscale to meet the dynamic load.
upvoted 4 times

  Gil80 10 months ago


Selected Answer: B
B. Lambda scales much better
upvoted 2 times

  Kapello10 10 months, 1 week ago


B is the correct ans
upvoted 1 times

  Gabs90 10 months, 1 week ago


Selected Answer: B
B is correct, lamba is a better choice
upvoted 1 times

  VISHNUKANDH 10 months, 1 week ago


B is the right answer
upvoted 2 times
Question #172 Topic 1

A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is
sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire
application stack, and access to the information should be restricted to certain applications.

Which action should the solutions architect take?

A. Configure a CloudFront signed URL.

B. Configure a CloudFront signed cookie.

C. Configure a CloudFront field-level encryption profile.

D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.

Correct Answer: A

Community vote distribution


C (76%) B (24%)

  Bobbybash Highly Voted  10 months, 1 week ago


CCCCCCCCC
Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive
information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application
stack. This encryption ensures that only applications that need the data—and have the credentials to decrypt it—are able to do so.
upvoted 31 times
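As a rough illustration of option C, the sketch below creates a field-level encryption profile with boto3; it assumes an RSA public key has already been uploaded with create_public_key, and the key ID, provider ID, and field pattern are placeholders. The resulting profile is then referenced from a field-level encryption configuration attached to the distribution's cache behavior.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fields matching the pattern are encrypted at the edge with the public key
# and stay encrypted until a component holding the private key decrypts them.
cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-form-fields",
        "CallerReference": "sensitive-form-fields-v1",
        "Comment": "Encrypt user-submitted sensitive fields at the edge",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": "K2EXAMPLEPUBLICKEY",  # placeholder key ID
                    "ProviderId": "payments-service",     # placeholder provider
                    "FieldPatterns": {
                        "Quantity": 1,
                        "Items": ["CreditCardNumber"],    # placeholder field name
                    },
                }
            ],
        },
    }
)
```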

  vijaykamal Most Recent  4 days, 15 hours ago


Selected Answer: C
Options A and B (signed URL and signed cookie) are used for controlling access to specific resources and are typically used for restricting
access based on URLs or cookies. They do not provide field-level encryption for sensitive data within HTTP requests.

Option D (configuring CloudFront with the Origin Protocol Policy set to HTTPS Only for the Viewer Protocol Policy) is related to enforcing
HTTPS communication between CloudFront and the viewer (end-user). While important for security, it doesn't address the specific
requirement of protecting sensitive data within the application stack.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
C) Configure a CloudFront field-level encryption profile.

Field-level encryption allows you to encrypt sensitive information at the edge before distributing content through CloudFront. It provides
an additional layer of security for sensitive user-submitted data.

The other options would not provide field-level encryption


upvoted 1 times

  mr_D3v1n3 2 months ago


Would the HTTPS imply that the cert was signed by a CA
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
Option A and Option B are used for controlling access to specific resources or content based on signed URLs or cookies. While they
provide security and access control, they do not provide field-level encryption for sensitive data within the requests.

Option D ensures that communication between the viewer and CloudFront is encrypted with HTTPS. However, it does not specifically
address the protection and encryption of sensitive information within the application stack.

Therefore, the most appropriate action to protect sensitive information throughout the entire application stack and restrict access to
certain applications is to configure a CloudFront field-level encryption profile (Option C).
upvoted 2 times

  Jeeva28 4 months, 1 week ago


Selected Answer: C
With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an
additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it.
upvoted 1 times
  WherecanIstart 6 months, 4 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html

"Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive
information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application
stack".
upvoted 2 times

  bdp123 8 months ago


Selected Answer: C
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
"With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using
HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data
throughout system processing so that only certain applications can see it."
upvoted 3 times

  ProfXsamson 8 months ago


C, field-level encryption should be used when necessary to protect sensitive data.
upvoted 1 times

  ayanshbhaiji 8 months, 3 weeks ago


It should be C
upvoted 2 times

  HayLLlHuK 9 months ago


Selected Answer: C
C!
CloudFront’s field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys (which you supply)
before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain
components or services in your application stack.
https://aws.amazon.com/about-aws/whats-new/2017/12/introducing-field-level-encryption-on-amazon-cloudfront/
upvoted 3 times

  kbaruu 9 months ago


Selected Answer: C
Field-Level Encryption allows you to securely upload user-submitted sensitive information to your web servers. A signed cookie, by contrast,
provides access to download multiple private files (from Tutorial Dojo).
upvoted 1 times

  Mindvision 9 months ago


C = Answer

I concur. why? CloudFront's field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys
(which you supply) before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed
by certain components or services in your application stack.
upvoted 2 times

  Zerotn3 9 months ago


Selected Answer: B
The correct answer is B. Configure a CloudFront signed cookie.

CloudFront signed cookies can be used to protect sensitive information by requiring users to authenticate with a signed cookie before
they can access content that is served through CloudFront. This can be used to restrict access to certain applications and ensure that the
sensitive information is protected throughout the entire application stack.

Option A, Configure a CloudFront signed URL, would also provide an additional layer of security by requiring users to authenticate with a
signed URL before they can access content served through CloudFront. However, this option would not protect the sensitive information
throughout the entire application stack.
upvoted 2 times

  Zerotn3 9 months ago


Option C, Configure a CloudFront field-level encryption profile, can be used to protect sensitive information that is stored in Amazon S3
and served through CloudFront. However, this option would not provide an additional layer of security for the entire application stack.
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


CloudFront signed cookies are used to control user access to sensitive documents, but that is not what is required. "Some of the
information submitted by users is sensitive": this is what you are looking to protect while it's in the system (not when users are
trying to access it, which is not mentioned in the question).
Field-level encryption encrypts sensitive data ... This ensures sensitive data can only be decrypted and viewed by certain components
or services. (The question states "access to the information should be restricted to certain applications."), so C is a perfect match
upvoted 1 times

  muhtoy 9 months, 1 week ago


Selected Answer: B
Configuring a CloudFront signed cookie is a better solution for protecting sensitive information and restricting access to certain
applications throughout the entire application stack. This will allow them to restrict access to content based on the viewer's identity and
ensure that the sensitive information is protected throughout the entire application stack.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: C
Option B, "Configure a CloudFront signed cookie," is not a suitable solution for this scenario because signed cookies are used to grant
temporary access to specific content in your CloudFront distribution. They do not provide an additional layer of security for the sensitive
information submitted by users, nor do they allow you to restrict access to certain applications.
upvoted 1 times

  NV305 9 months, 1 week ago


Selected Answer: B
Field-level encryption profiles, which you create in CloudFront, define the fields that you want to be encrypted.
upvoted 1 times
Question #173 Topic 1

A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that
are stored in Amazon S3. This content is the same for all users.

The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files
to the users while reducing the load on the origin.

Which solution meets these requirements MOST cost-effectively?

A. Deploy an AWS Global Accelerator accelerator in front of the web servers.

B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket.

C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers.

D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.

Correct Answer: B

Community vote distribution


B (93%) 7%

  Nigma Highly Voted  10 months, 2 weeks ago


B. Cloud front is best for content delivery. Global Accelerator is best for non-HTTP (TCP/UDP) cases and supports HTTP cases as well but
with static IP (elastic IP) or anycast IP address only.
upvoted 17 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: B
Deploy an Amazon CloudFront web distribution in front of the S3 bucket
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B) Deploy an Amazon CloudFront web distribution in front of the S3 bucket.

CloudFront is the most cost-effective solution for this use case because:

CloudFront can cache static assets like videos and images at edge locations closer to users. This improves performance.
Serving files from the CloudFront cache reduces load on the S3 origin.
CloudFront pricing is very low for data transfer and requests.
upvoted 1 times

  Kiki_Pass 2 months, 1 week ago


Selected Answer: B
ElastiCache is for database caching (RDS), not for S3.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A is not the most cost-effective solution for this scenario. While Global Accelerator can improve global application performance, it
is primarily used for accelerating TCP and UDP traffic, such as gaming and real-time applications, rather than serving static media files.

Options C and D are used for caching frequently accessed data in-memory to improve application performance. However, they are not
specifically designed for caching and serving media files like CloudFront, and therefore, may not provide the same cost-effectiveness and
scalability for this use case.

Hence, deploying an CloudFront web distribution in front of the S3 is the most cost-effective solution for delivering media files to millions
of users worldwide while reducing the load on the origin.
upvoted 3 times

  kruasan 5 months, 1 week ago


Selected Answer: B
ElastiCache, enhances the performance of web applications by quickly retrieving information from fully-managed in-memory data stores.
It utilizes Memcached and Redis, and manages to considerably reduce the time your applications would, otherwise, take to read data from
disk-based databases.

Amazon CloudFront supports dynamic content from HTTP and WebSocket protocols, which are based on the Transmission Control
Protocol (TCP) protocol. Common use cases include dynamic API calls, web pages and web applications, as well as an application's static
files such as audio and images. It also supports on-demand media streaming over HTTP.

AWS Global Accelerator supports both User Datagram Protocol (UDP) and TCP-based protocols. It is commonly used for non-HTTP use
cases, such as gaming, IoT and voice over IP. It is also good for HTTP use cases that need static IP addresses or fast regional failover
upvoted 2 times
  LuckyAro 8 months, 2 weeks ago
Selected Answer: C
The company wants to provide the files to the users while reducing the load on the origin.
Cloudfront speeds-up content delivery but I'm not sure it reduces the load on the origin.
Some form of caching would cache content and deliver to users without going to the origin for each request.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
To provide media files to users while reducing the load on the origin and meeting the requirements cost-effectively, the gaming company
should deploy an Amazon CloudFront web distribution in front of the S3 bucket (Option B).

CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as images and videos,
to users. By using CloudFront, the media files will be served to users from the edge location that is closest to them, resulting in faster
delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests expected from the
millions of users, ensuring that the media files are available and accessible to users around the world.
upvoted 3 times

  techhb 9 months, 1 week ago


Please don't post ChatGPT answers here; ChatGPT keeps changing its answers, and copy-pasting them is not the right way. Thanks.
upvoted 2 times

  Bofi 7 months ago


why not? if the answers are correct and offer best possible explanation for the wrong options, I see no reason why it shouldn't be
posted here. Also, most of his answers were right, although reasons for the wrong options were sometimes lacking, but all in all, his
responses were very good.
upvoted 1 times

  ocbn3wby 8 months ago


Woaaaa! I always wondered where this kind of logic and explanation came from in this guy's answers. Nice catch TECHHB!
upvoted 2 times

  ocbn3wby 8 months ago


Answers are mostly correct. Only a small percentage were wrong
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: B
Agreed
upvoted 1 times

  rewdboy 10 months, 1 week ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times
Question #174 Topic 1

A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone
behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the
application.

Which architecture should the solutions architect choose that provides high availability?

A. Create an Auto Scaling group that uses three instances across each of two Regions.

B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.

C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.

D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.

Correct Answer: B

Community vote distribution


B (100%)

  Nigma Highly Voted  10 months, 2 weeks ago


B. Auto Scaling groups cannot span multiple Regions.
upvoted 22 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: B
Modify the Auto Scaling group to use three instances across each of two Availability Zones
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
Option B. Modify the Auto Scaling group to use three instances across each of the two Availability Zones.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A (creating an Auto Scaling group across two Regions) introduces additional complexity and potential replication challenges, which
may not be necessary for achieving high availability within a single Region.

Option C (creating an Auto Scaling template for another Region) suggests multi-region redundancy, which may not be the most
straightforward solution for achieving high availability without modifying the application.

Option D (changing the ALB to a round-robin configuration) does not provide the desired high availability. Round-robin configuration
alone does not ensure fault tolerance and does not leverage multiple Availability Zones for resilience.

Hence, modifying the Auto Scaling group to use three instances across each of two Availability Zones is the appropriate choice to provide
high availability for the multi-tier application.
upvoted 4 times

  techhb 9 months, 1 week ago


B. auto scaling groups cannot span multi region
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
Option B. Modify the Auto Scaling group to use three instances across each of the two Availability Zones.

This option would provide high availability by distributing the front-end web servers across multiple Availability Zones. If there is an issue
with one Availability Zone, the other Availability Zone would still be available to serve traffic. This would ensure that the application
remains available and highly available even if there is a failure in one of the Availability Zones.
upvoted 4 times
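Option B comes down to a single call against the existing group; a minimal boto3 sketch follows, with a placeholder group name and subnet IDs (one subnet in each Availability Zone). The ALB would likewise need both subnets attached.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the six instances across subnets in two Availability Zones.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",                    # placeholder name
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-az-a-1111,subnet-az-b-2222",  # placeholder subnets
)
```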

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: B
Agreed
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


B
option B This architecture provides high availability by having multiple Availability Zones hosting the same application. This allows for
redundancy in case one Availability Zone experiences downtime, as traffic can be served by the other Availability Zone. This solution also
increases scalability and performance by allowing traffic to be spread across two Availability Zones.
upvoted 1 times

  mricee9 10 months, 1 week ago


Selected Answer: B
B is right.
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  xua81376 10 months, 2 weeks ago


B. Auto Scaling in multiple AZs.
upvoted 1 times
Question #175 Topic 1

An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application
stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some
customers experienced timeouts, and the application did not process the orders of those customers.

A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open
connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.

Which solution will meet these requirements?

A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.

B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the
database endpoint.

C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read
replica.

D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda
function to use the DynamoDB table.

Correct Answer: B

Community vote distribution


B (100%)

  handyplazt Highly Voted  10 months, 2 weeks ago


Selected Answer: B
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the
database server and may open and close database connections at a high rate, exhausting database memory and compute resources.
Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and
application scalability.
https://aws.amazon.com/id/rds/proxy/
upvoted 24 times
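To illustrate the "least possible changes" point, a rough sketch of the Lambda side follows, assuming the psycopg2 driver is bundled with the function and that the proxy endpoint and credentials arrive via environment variables (the variable names and table are made up for the example); the only change versus talking to the database directly is the host value:

    import os
    import psycopg2  # assumed to be packaged with the deployment or provided via a layer

    # Created outside the handler so warm invocations reuse the connection;
    # RDS Proxy pools and multiplexes these onto the database.
    conn = psycopg2.connect(
        host=os.environ["RDS_PROXY_ENDPOINT"],  # proxy endpoint instead of the DB endpoint
        port=5432,
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        connect_timeout=5,
    )

    def handler(event, context):
        with conn.cursor() as cur:
            cur.execute("INSERT INTO orders (payload) VALUES (%s)", (str(event),))
        conn.commit()
        return {"status": "accepted"}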

  babaxoxo Highly Voted  10 months, 2 weeks ago


Selected Answer: B
The issue relates to opening many connections, and the solution requires the least code changes, so B satisfies the conditions.
upvoted 6 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: B
Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the
database endpoint.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
using Amazon RDS Proxy and modifying the Lambda function to use the RDS Proxy endpoint is the recommended solution to prevent
timeout errors and reduce the impact on the database during peak loads.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option A (configuring provisioned concurrency and creating a global database) does not directly address the high connection utilization
issue on the database, and creating a global database may introduce additional complexity without immediate benefit to solving the
timeout errors.

Option C (creating a read replica in a different AWS Region) introduces additional data replication and management complexity, which
may not be necessary to address the timeout errors.

Option D (migrating to Amazon DynamoDB) involves a significant change in the data storage technology and requires modifying the
application to use DynamoDB instead of Aurora PostgreSQL. This may not be the most suitable solution when the goal is to make minimal
changes to the application.

Therefore, using Amazon RDS Proxy and modifying the Lambda function to use the RDS Proxy endpoint is the recommended solution to
prevent timeout errors and reduce the impact on the database during peak loads.
upvoted 3 times
  obifranky 6 months ago
upvoted 1 times

  sairam 8 months, 2 weeks ago


I also think the answer is B. However can RDS Proxy be used with Amazon Aurora PostgreSQL database?
upvoted 1 times

  everfly 7 months ago


RDS Proxy can be used with Aurora
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 3 times

  gustavtd 9 months ago


Selected Answer: B
I expected an answer with a database replica, but there is none, so B is the most suitable.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
Option B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead
of the database endpoint.

Using Amazon RDS Proxy can help reduce the number of connections to the database and improve the performance of the application.
RDS Proxy establishes a connection pool to the database and routes connections to the available connections in the pool. This can help
reduce the number of open connections to the database and improve the performance of the application. The Lambda function can be
modified to use the RDS Proxy endpoint instead of the database endpoint to take advantage of this improvement.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A is not a valid solution because configuring provisioned concurrency for the Lambda function does not address the issue of
high CPU utilization and memory utilization on the database.

Option C is not a valid solution because creating a read replica in a different Region does not address the issue of high CPU utilization
and memory utilization on the database.

Option D is not a valid solution because migrating the data from Aurora PostgreSQL to DynamoDB would require significant changes to
the application and may not be the best solution for this particular problem.
upvoted 2 times

  BENICE 9 months, 2 weeks ago


Option --- B
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
As it is mentioned that the issue was due to high CPU and memory caused by many open connections to the DB, B is the right answer.
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


B
Using Amazon RDS Proxy will allow the application to handle more connections and higher loads without timeouts, while making the least
possible changes to the application. The RDS Proxy will enable connection pooling, allowing multiple connections from the Lambda
function to be served from a single proxy connection. This will reduce the number of open connections on the database, which is causing
high CPU and memory utilization
upvoted 3 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  xua81376 10 months, 2 weeks ago


B - Proxy to manage connections
upvoted 2 times

  Nigma 10 months, 2 weeks ago


Correct B
upvoted 1 times
Question #176 Topic 1

An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table.

What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?

A. Use a VPC endpoint for DynamoDB.

B. Use a NAT gateway in a public subnet.

C. Use a NAT instance in a private subnet.

D. Use the internet gateway attached to the VPC.

Correct Answer: D

Community vote distribution


A (100%)

  mabotega Highly Voted  10 months, 2 weeks ago


Selected Answer: A
VPC endpoints for service in private subnets
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
upvoted 9 times
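As a rough illustration of option A (the IDs below are placeholders), a gateway endpoint only needs the VPC ID, the DynamoDB service name for the Region, and the route tables of the private subnets:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint: DynamoDB traffic from the private subnets is routed
    # inside the AWS network, with no internet gateway or NAT device involved.
    response = ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",                # placeholder
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],      # private subnet route table(s)
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])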

  vijaykamal Most Recent  4 days, 15 hours ago


Selected Answer: A
Using an internet gateway (Option D) is used for enabling outbound internet connectivity from resources in your VPC. It's not the
appropriate choice for securely accessing DynamoDB within your VPC.
upvoted 1 times

  Ramdi1 2 weeks, 3 days ago


Selected Answer: A
A gateway VPC Endpoint is designed for supported AWS service such as dynamo db or s3 in this case i assume the endpoint is still the
valid option
upvoted 1 times

  TariqKipkemei 2 weeks, 6 days ago


Selected Answer: A
Use a VPC endpoint for DynamoDB. A VPC endpoint enables customers to privately connect to supported AWS services: Amazon
DynamoDB or Amazon Simple Storage Service (Amazon S3).
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
A VPC endpoint enables private connectivity between VPCs and AWS services without requiring an internet gateway, NAT device, VPN
connection, or AWS Direct Connect. Traffic remains within the AWS network.
upvoted 1 times

  MikeDu 1 month, 2 weeks ago


Selected Answer: A
VPC endpoints for service in private subnets
upvoted 1 times

  RashiJaiswal 2 months, 3 weeks ago


Selected Answer: A
VPC endpoint for dynamodb and S3
upvoted 1 times

  cookieMr 3 months, 1 week ago


Option B (using a NAT gateway in a public subnet) and option C (using a NAT instance in a private subnet) are not the most secure options
because they involve routing traffic through a network address translation (NAT) device, which requires an internet gateway and traverses
the public internet.

Option D (using the internet gateway attached to the VPC) would require routing traffic through the internet gateway, which would result
in the traffic leaving the AWS network.

Therefore, the recommended and most secure approach is to use a VPC endpoint for DynamoDB to ensure private and secure access to
the DynamoDB table from your EC2 instances in private subnets, without the need to traverse the internet or leave the AWS network.
upvoted 4 times
  markw92 3 months, 2 weeks ago
VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to
use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP
addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to
control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
upvoted 1 times

  dmt6263 4 months, 3 weeks ago


AAAAAAAAA
upvoted 1 times

  gx2222 5 months, 4 weeks ago


Selected Answer: A
Option A: Use a VPC endpoint for DynamoDB - This is the correct option. A VPC endpoint for DynamoDB allows communication between
resources in your VPC and Amazon DynamoDB without traversing the internet or a NAT instance, which is more secure.
upvoted 2 times

  GalileoEC2 6 months, 3 weeks ago


A
The most secure way to access an Amazon DynamoDB table from Amazon EC2 instances in private subnets while ensuring that the traffic
does not leave the AWS network is to use Amazon VPC Endpoints for DynamoDB.

Amazon VPC Endpoints enable private communication between Amazon EC2 instances in a VPC and Amazon services such as DynamoDB,
without the need for an internet gateway, NAT device, or VPN connection. When you create a VPC endpoint for DynamoDB, traffic from the
EC2 instances to the DynamoDB table remains within the AWS network and does not traverse the public internet.
upvoted 1 times

  AllGOD 7 months, 2 weeks ago


private...backend Answer A
upvoted 1 times

  bdp123 8 months ago


Selected Answer: A
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
upvoted 2 times

  ProfXsamson 8 months ago


ExamTopics.com should be sued for this answer tagged as Correct answer.
upvoted 4 times

  mp165 9 months ago


Selected Answer: A
A is correct. VPC endpoint. D is exposed to the internet.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: A
The most secure way to access the DynamoDB table while ensuring that the traffic does not leave the AWS network is Option A (Use a VPC
endpoint for DynamoDB.)

A VPC endpoint for DynamoDB allows you to privately connect your VPC to the DynamoDB service without requiring an Internet Gateway,
VPN connection, or AWS Direct Connect connection. This ensures that the traffic between the application and the DynamoDB table stays
within the AWS network and is not exposed to the public Internet.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option B, using a NAT gateway in a public subnet, would allow the traffic to leave the AWS network and traverse the public Internet,
which is less secure.

Option C, using a NAT instance in a private subnet, would also allow the traffic to leave the AWS network but would require you to
manage the NAT instance yourself.

Option D, using the internet gateway attached to the VPC, would also expose the traffic to the public Internet.
upvoted 2 times
Question #177 Topic 1

An entertainment company is using Amazon DynamoDB to store media metadata. The application is read intensive and experiencing delays. The
company does not have staff to handle additional operational overhead and needs to improve the performance efficiency of DynamoDB without
reconfiguring the application.

What should a solutions architect recommend to meet this requirement?

A. Use Amazon ElastiCache for Redis.

B. Use Amazon DynamoDB Accelerator (DAX).

C. Replicate data by using DynamoDB global tables.

D. Use Amazon ElastiCache for Memcached with Auto Discovery enabled.

Correct Answer: B

Community vote distribution


B (100%)

  techhb Highly Voted  9 months, 1 week ago


Selected Answer: B
DAX stands for DynamoDB Accelerator, and it's like a turbo boost for your DynamoDB tables. It's a fully managed, in-memory cache that
speeds up the read and write performance of your DynamoDB tables, so you can get your data faster than ever before.
upvoted 14 times
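For context, a hedged sketch of standing up a DAX cluster with boto3 follows (the cluster name, role ARN, subnet group, and security group are placeholders); afterwards the application points its DynamoDB reads at the cluster's discovery endpoint rather than at the table:

    import boto3

    dax = boto3.client("dax", region_name="us-east-1")

    # Three nodes give an in-memory read cache with high availability; reads
    # that hit the cache never reach the DynamoDB table.
    response = dax.create_cluster(
        ClusterName="media-metadata-cache",                               # placeholder
        NodeType="dax.r4.large",
        ReplicationFactor=3,
        IamRoleArn="arn:aws:iam::123456789012:role/DaxToDynamoDBRole",    # placeholder
        SubnetGroupName="private-subnet-group",                           # placeholder
        SecurityGroupIds=["sg-0123456789abcdef0"],                        # placeholder
    )
    print(response["Cluster"]["ClusterDiscoveryEndpoint"])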

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: B
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for Amazon DynamoDB. DAX delivers up to
a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

https://aws.amazon.com/dynamodb/dax/#:~:text=Amazon%20DynamoDB%20Accelerator%20(-,DAX),-is%20a%20fully
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
A. Using Amazon ElastiCache for Redis would require modifying the application code and is not specifically designed to enhance
DynamoDB performance.

C. Replicating data with DynamoDB global tables would require additional configuration and operational overhead.

D. Using Amazon ElastiCache for Memcached with Auto Discovery enabled would also require application code modifications and is not
specifically designed for improving DynamoDB performance.

In contrast, option B, using Amazon DynamoDB Accelerator (DAX), is the recommended solution as it is purpose-built for enhancing
DynamoDB performance without the need for application reconfiguration. DAX provides a managed caching layer that significantly
reduces read latency and offloads traffic from DynamoDB tables.
upvoted 4 times

  Abrar2022 4 months ago


Selected Answer: B
improve the performance efficiency of DynamoDB
upvoted 1 times

  gx2222 5 months, 4 weeks ago


Selected Answer: B
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that helps improve the read
performance of DynamoDB tables. DAX provides a caching layer between the application and DynamoDB, reducing the number of read
requests made directly to DynamoDB. This can significantly reduce read latencies and improve overall application performance.
upvoted 2 times

  osmk 6 months, 1 week ago


B --> Applications that are read-intensive ===> https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html#DAX.use-cases
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: B
DynamoDB Accelerator, less overhead.
upvoted 2 times
  wmp7039 8 months, 2 weeks ago
Option B is incorrect as the constraint in the question is not to recode the application. DAX requires application to be reconfigured and
point to DAX instead of DynamoDB
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.client.modify-your-app.html
Answer should be A
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
To improve the performance efficiency of DynamoDB without reconfiguring the application, a solutions architect should recommend using
Amazon DynamoDB Accelerator (DAX) which is Option B as the correct answer.

DAX is a fully managed, in-memory cache that can be used to improve the performance of read-intensive workloads on DynamoDB. DAX
stores frequently accessed data in memory, allowing the application to retrieve data from the cache rather than making a request to
DynamoDB. This can significantly reduce the number of read requests made to DynamoDB, improving the performance and reducing the
latency of the application.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, using Amazon ElastiCache for Redis, would not be a good fit because it is not specifically designed for use with DynamoDB
and would require reconfiguring the application to use it.

Option C, replicating data using DynamoDB global tables, would not directly improve the performance of reading requests and would
require additional operational overhead to maintain the replication.

Option D, using Amazon ElastiCache for Memcached with Auto Discovery enabled, would also not be a good fit because it is not
specifically designed for use with DynamoDB and would require reconfiguring the application to use it.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 2 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: B
Agreed
upvoted 2 times

  Shasha1 9 months, 3 weeks ago


B
DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers lightning-fast performance and consistent low-
latency responses. It provides fast performance without requiring any application reconfiguration
upvoted 3 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  goatbernard 10 months, 2 weeks ago


Selected Answer: B
DAX is the cache for this
upvoted 1 times

  nhlegend 10 months, 2 weeks ago


B is correct, DAX provides caching + no changes
upvoted 2 times
Question #178 Topic 1

A company’s infrastructure consists of Amazon EC2 instances and an Amazon RDS DB instance in a single AWS Region. The company wants to
back up its data in a separate Region.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Backup to copy EC2 backups and RDS backups to the separate Region.

B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.

C. Create Amazon Machine Images (AMIs) of the EC2 instances. Copy the AMIs to the separate Region. Create a read replica for the RDS DB
instance in the separate Region.

D. Create Amazon Elastic Block Store (Amazon EBS) snapshots. Copy the EBS snapshots to the separate Region. Create RDS snapshots.
Export the RDS snapshots to Amazon S3. Configure S3 Cross-Region Replication (CRR) to the separate Region.

Correct Answer: A

Community vote distribution


A (95%) 5%

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
AWS Backup provides a fully managed, centralized backup service across AWS services. It can be configured to automatically copy backups
across Regions.

This requires minimal operational overhead compared to the other options.


upvoted 1 times

  oguzbeliren 2 months ago


D would have been a great option, but the question requires less manual effort, so A is better.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
Using AWS Backup to copy EC2 and RDS backups to the separate Region is the solution that meets the requirements with the least
operational overhead. AWS Backup simplifies the backup process and automates the copying of backups to another Region, reducing the
manual effort and operational complexity involved in managing separate backup processes for EC2 instances and RDS databases.

Option B is incorrect because Amazon Data Lifecycle Manager (Amazon DLM) is not designed for directly copying RDS backups to a
separate region.

Option C is incorrect because creating Amazon Machine Images (AMIs) and read replicas adds complexity and operational overhead
compared to a dedicated backup solution.

Option D is incorrect because using Amazon EBS snapshots, RDS snapshots, and S3 Cross-Region Replication (CRR) involves multiple
manual steps and additional configuration, increasing complexity.
upvoted 3 times
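To make option A concrete, here is a minimal sketch of a backup plan whose daily rule copies every recovery point to a vault in a second Region (the vault names, schedule, and account ID are placeholders); the EC2 instances and the RDS instance would then be attached to the plan with a backup selection:

    import boto3

    backup = boto3.client("backup", region_name="us-east-1")

    # One daily rule: store backups in the local vault and copy each recovery
    # point to a vault in the disaster-recovery Region.
    response = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "cross-region-daily",
            "Rules": [
                {
                    "RuleName": "daily-with-cross-region-copy",
                    "TargetBackupVaultName": "primary-vault",
                    "ScheduleExpression": "cron(0 3 * * ? *)",
                    "Lifecycle": {"DeleteAfterDays": 35},
                    "CopyActions": [
                        {
                            "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
                        }
                    ],
                }
            ],
        }
    )
    print(response["BackupPlanId"])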

  cheese929 4 months, 4 weeks ago


Selected Answer: A
A is correct
upvoted 2 times

  kruasan 5 months, 1 week ago


Selected Answer: A
Option B, using Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region, would
require more operational overhead because DLM is primarily designed for managing the lifecycle of Amazon EBS snapshots, and would
require additional configuration to manage RDS backups.

Option C, creating AMIs of the EC2 instances and read replicas of the RDS DB instance in the separate Region, would require more manual
effort to manage the backup and disaster recovery process, as it requires manual creation and management of AMIs and read replicas.
upvoted 2 times

  kruasan 5 months, 1 week ago


Option D, creating EBS snapshots and RDS snapshots, exporting them to Amazon S3, and configuring S3 Cross-Region Replication
(CRR) to the separate Region, would require more configuration and management effort. Additionally, S3 CRR can have additional
charges for data transfer and storage in the destination region.
Therefore, option A is the best choice for meeting the company's requirements with the least operational overhead.
upvoted 2 times
  gx2222 5 months, 4 weeks ago
Selected Answer: A
Option A, using AWS Backup to copy EC2 backups and RDS backups to the separate region, is the correct answer for the given scenario.

Using AWS Backup is a simple and efficient way to backup EC2 instances and RDS databases to a separate region. It requires minimal
operational overhead and can be easily managed through the AWS Backup console or API. AWS Backup can also provide automated
scheduling and retention management for backups, which can help ensure that backups are always available and up to date.
upvoted 2 times

  vtbk 9 months ago


Selected Answer: A
Cross-Region backup
Using AWS Backup, you can copy backups to multiple different AWS Regions on demand or automatically as part of a scheduled backup
plan. Cross-Region backup is particularly valuable if you have business continuity or compliance requirements to store backups a
minimum distance away from your production data.
https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
upvoted 4 times

  dan80 9 months ago


A is correct - you need a backup solution for both EC2 and RDS. DLM doesn't work with RDS, only with EBS snapshots.
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: A
using Amazon DLM to copy EC2 backups and RDS backups to the separate region, is not a valid solution because Amazon DLM does not
support backing up data across regions.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
Option B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.

Amazon DLM is a fully managed service that helps automate the creation and retention of Amazon EBS snapshots and RDS DB snapshots.
It can be used to create and manage backup policies that specify when and how often snapshots should be created, as well as how long
they should be retained. With Amazon DLM, you can easily and automatically create and manage backups of your EC2 instances and RDS
DB instances in a separate Region, with minimal operational overhead.
upvoted 1 times

  YogK 4 months, 1 week ago


AWS DLM does not support RDS backups, only works with EBS storage.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
upvoted 1 times

  HayLLlHuK 9 months ago


Buruguduystunstugudunstuy, sorry, but I haven’t found any info about copying RDS backups by DLM. The DLM works only with EBS.
So the only answer is A - AWS Backup
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, using AWS Backup to copy EC2 backups and RDS backups to the separate Region, would also work, but it may require more
manual configuration and management.

Option C, creating AMIs of the EC2 instances and copying them to the separate Region, and creating a read replica for the RDS DB
instance in the separate Region, would work, but it may require more manual effort to set up and maintain.

Option D, creating EBS snapshots and copying them to the separate Region, creating RDS snapshots, and exporting them to Amazon
S3, and configuring S3 CRR to the separate Region, would also work, but it would involve multiple steps and may require more manual
effort to set up and maintain. Overall, using Amazon DLM is likely to be the easiest and most efficient option for meeting the
requirements with the least operational overhead.
upvoted 1 times

  Kruiz29 8 months, 2 weeks ago


This guy is giving wrong answers in detail...lol
upvoted 4 times

  PassNow1234 9 months, 1 week ago


Some of your answers are very detailed. Can you back them up with a reference?
upvoted 2 times

  jwu413 8 months, 1 week ago


All of their answers are from ChatGPT
upvoted 5 times
  techhb 9 months, 1 week ago
using Amazon DLM to copy EC2 backups and RDS backups to the separate region, is not a valid solution because Amazon DLM does
not support backing up data across regions.
upvoted 4 times

  egmiranda 8 months, 2 weeks ago


I chose A, but DLM does support cross-Region copies; it just doesn't support RDS. Cross-Region copy rules are a feature of DLM ("For each schedule, you can define the frequency, fast snapshot restore settings (snapshot lifecycle policies only), cross-Region copy rules, and tags").
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
upvoted 1 times

  PassNow1234 9 months, 1 week ago


Thanks techhb
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A as it is fully managed service with least operational overhead
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


A
AWS Backup is a fully managed service that handles the process of copying backups to a separate Region automatically
upvoted 1 times

  babaxoxo 10 months, 2 weeks ago


Selected Answer: A
Ans A with least operational overhead
upvoted 1 times

  rjam 10 months, 2 weeks ago


AWS Backup supports cross-Region backups.
upvoted 3 times

  rjam 10 months, 2 weeks ago


Selected Answer: A
Option A
Aws back up supports , EC2, RDS
upvoted 3 times

  rjam 10 months, 2 weeks ago


AWS Backup supports cross-Region backups.
upvoted 1 times
Question #179 Topic 1

A solutions architect needs to securely store a database user name and password that an application uses to access an Amazon RDS DB
instance. The application that accesses the database runs on an Amazon EC2 instance. The solutions architect wants to create a secure
parameter in AWS Systems Manager Parameter Store.

What should the solutions architect do to meet this requirement?

A. Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS
KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance.

B. Create an IAM policy that allows read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service
(AWS KMS) key that is used to encrypt the parameter. Assign this IAM policy to the EC2 instance.

C. Create an IAM trust relationship between the Parameter Store parameter and the EC2 instance. Specify Amazon RDS as a principal in the
trust policy.

D. Create an IAM trust relationship between the DB instance and the EC2 instance. Specify Systems Manager as a principal in the trust policy.

Correct Answer: A

Community vote distribution


A (92%) 8%

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
CORRECT Option A

To securely store a database user name and password in AWS Systems Manager Parameter Store and allow an application running on an
EC2 instance to access it, the solutions architect should create an IAM role that has read access to the Parameter Store parameter and
allow Decrypt access to an AWS KMS key that is used to encrypt the parameter. The solutions architect should then assign this IAM role to
the EC2 instance.

This approach allows the EC2 instance to access the parameter in the Parameter Store and decrypt it using the specified KMS key while
enforcing the necessary security controls to ensure that the parameter is only accessible to authorized parties.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option B, would not be sufficient, as IAM policies cannot be directly attached to EC2 instances.

Option C, would not be a valid solution, as the Parameter Store parameter and the EC2 instance are not entities that can be related
through an IAM trust relationship.

Option D, would not be a valid solution, as the trust policy would not allow the EC2 instance to access the parameter in the Parameter
Store or decrypt it using the specified KMS key.
upvoted 4 times

  sdasdawa Highly Voted  10 months, 2 weeks ago


Selected Answer: A
Agree with A, IAM role is for services (EC2 for example)
IAM policy is more for users and groups
upvoted 5 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: A
Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS
KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
CORRECT Option A
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
By creating an IAM role with read access to the Parameter Store parameter and Decrypt access to the associated AWS KMS key, the EC2
will have the necessary permissions to securely retrieve and decrypt the database user name and password from the Parameter Store.
This approach ensures that the sensitive information is protected and can be accessed only by authorized entities.
Answers B, C, and D are not correct because they do not provide a secure way to store and retrieve the database user name and password
from the Parameter Store. IAM policies, trust relationships, and associations with the DB instance are not the appropriate mechanisms for
securely managing sensitive credentials in this scenario. Answer A is the correct choice as it involves creating an IAM role with the
necessary permissions and assigning it to the EC2 instance to access the Parameter Store securely.
upvoted 2 times
  cheese929 4 months, 3 weeks ago
Selected Answer: A
A is correct
upvoted 1 times

  kruasan 5 months, 1 week ago


Selected Answer: A
By creating an IAM role and assigning it to the EC2 instance, the application running on the EC2 instance can access the Parameter Store
parameter securely without the need for hard-coding the database user name and password in the application code.

The IAM role should have read access to the Parameter Store parameter and Decrypt access to an AWS KMS key that is used to encrypt the
parameter to ensure that the parameter is protected at rest.
upvoted 1 times
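On the instance itself the application code stays small, along the lines of the sketch below (the parameter name and Region are placeholders); boto3 picks up temporary credentials from the attached instance role, and WithDecryption works only if that role is allowed kms:Decrypt on the key protecting the parameter:

    import boto3

    # Credentials come from the IAM role attached to the EC2 instance;
    # nothing is hard-coded on the box.
    ssm = boto3.client("ssm", region_name="us-east-1")

    response = ssm.get_parameter(
        Name="/prod/app/db-password",   # placeholder SecureString parameter
        WithDecryption=True,            # requires kms:Decrypt on the KMS key
    )
    db_password = response["Parameter"]["Value"]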

  HayLLlHuK 9 months ago


There should be the Decrypt access to KMS.
"If you choose the SecureString parameter type when you create your parameter, Systems Manager uses AWS KMS to encrypt the
parameter value."
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

IAM role - for EC2


upvoted 1 times

  BENICE 9 months, 2 weeks ago


A -- is correct option
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Option A.
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


Answer A
Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS
KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance. This solution will allow the application to securely
access the database user name and password stored in the parameter store.
upvoted 1 times

  [Removed] 10 months, 1 week ago


Selected Answer: B
i think policy
upvoted 1 times

  [Removed] 10 months, 1 week ago


Access to Parameter Store is enabled by IAM policies and supports resource level permissions for access. An IAM policy that grants
permissions to specific parameters or a namespace can be used to limit access to these parameters. CloudTrail logs, if enabled for the
service, record any attempt to access a parameter.
upvoted 1 times

  [Removed] 10 months, 1 week ago


https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-
tasks/
upvoted 1 times

  JayBee65 8 months, 4 weeks ago


This link gives the example "Walkthrough: Securely access Parameter Store resources with IAM roles for tasks" - essentially A above. It does not show how this can be done using a policy (B) alone.
upvoted 1 times

  turalmth 10 months ago


can you attach policy to ec2 directly ?
upvoted 1 times

  EKA_CloudGod 10 months, 2 weeks ago


Selected Answer: A
A. Attach IAM role to EC2 Instance
https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/
upvoted 1 times

  babaxoxo 10 months, 2 weeks ago


Selected Answer: A
Attach IAM role to EC2 Instance profile
upvoted 3 times

  goatbernard 10 months, 2 weeks ago


Selected Answer: B
IAM policy
upvoted 1 times
Question #180 Topic 1

A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a
Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The
company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS
attacks.

Which combination of solutions provides the MOST protection? (Choose two.)

A. Use AWS WAF to protect the NLB.

B. Use AWS Shield Advanced with the NLB.

C. Use AWS WAF to protect Amazon API Gateway.

D. Use Amazon GuardDuty with AWS Shield Standard

E. Use AWS Shield Standard with Amazon API Gateway.

Correct Answer: BC

Community vote distribution


BC (93%) 4%

  babaxoxo Highly Voted  10 months, 2 weeks ago


Selected Answer: BC
Shield - Load Balancer, CF, Route53
AWF - CF, ALB, API Gateway
upvoted 34 times

  YogK 4 months, 1 week ago


Shield - Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.

WAF - Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync
upvoted 3 times

  Ouk 9 months, 1 week ago


Thank u U meant WAF* - CloudFormation, right? haha
upvoted 4 times

  rjam Highly Voted  10 months, 2 weeks ago


Selected Answer: BC
AWS Shield Advanced - DDos attacks
AWS WAF to protect Amazon API Gateway, because WAF sits before the API Gateway and then comes NLB.
upvoted 6 times

  studynoplay 4 months, 3 weeks ago


don't agree that NLB sits before API gateway. it should be other way around
upvoted 2 times

  aadityaravi8 2 months, 4 weeks ago


Yes. Going from outside to inside: DDoS protection is needed first, so the outermost layer (the NLB) gets Shield Advanced, and then requests attempting SQL injection are filtered at the API Gateway with WAF.

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: BC
B) Use AWS Shield Advanced with the NLB

C) Use AWS WAF to protect Amazon API Gateway

The key reasons are:

AWS Shield Advanced provides expanded DDoS protection against larger and more sophisticated attacks
Using it with the NLB helps protect against network floods
WAF still provides critical protection against exploits at the API layer
upvoted 1 times

  Sat897 1 month, 2 weeks ago


Selected Answer: BC
WAF - can't support NLB and its supports API Gateway
AWS Shield Advanced - NLB - DDOS
upvoted 1 times
  cookieMr 3 months, 1 week ago
B. AWS Shield Advanced provides advanced DDoS protection for the NLB, making it the appropriate choice for protecting against large and
sophisticated DDoS attacks at the network layer.

C. AWS WAF is designed to provide protection at the application layer, making it suitable for securing the API Gateway against web exploits
like SQL injection.

A. AWS WAF is not compatible with NLB as it operates at the application layer, whereas NLB operates at the transport layer.

D. While GuardDuty helps detect threats, it does not directly protect against web exploits or DDoS attacks. Shield Standard focuses on
edge resources, not specifically NLBs.

E. Shield Standard provides basic DDoS protection for edge resources, but it does not directly protect the NLB or address web exploits at
the application layer.
upvoted 2 times
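A rough boto3 sketch of the two pieces is shown here, assuming an AWS WAF web ACL with SQL-injection rules already exists and that the account has an active Shield Advanced subscription; every ARN below is a placeholder:

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")
    shield = boto3.client("shield")

    # Attach the regional web ACL to the API Gateway stage (Layer 7 protection).
    wafv2.associate_web_acl(
        WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/api-protection/1111-2222",
        ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
    )

    # Register the NLB with Shield Advanced for DDoS detection and mitigation.
    shield.create_protection(
        Name="api-nlb-protection",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/api-nlb/0123456789abcdef",
    )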

  cheese929 4 months, 3 weeks ago


Selected Answer: BC
B and C is correct
upvoted 1 times

  kruasan 5 months, 1 week ago


Selected Answer: BC
NLB is a Layer 3/4 component while WAF is a Layer 7 protection component.

That is why WAF is only available for Application Load Balancer in the ELB portfolio. NLB does not terminate the TLS session therefore WAF
is not capable of acting on the content. I would consider using AWS Shield at Layer 3/4.
https://repost.aws/questions/QU2fYXwSWUS0q9vZiWDoaEzA/nlb-need-to-attach-aws-waf
upvoted 4 times

  jdr75 5 months, 3 weeks ago


Selected Answer: C
• A. Use AWS WAF to protect the NLB.
INCORRECT, because WAF does not integrate with a Network Load Balancer.
• B. Use AWS Shield Advanced with the NLB.

YES. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running in AWS.
The doubt is: why apply the protection on the NLB when the public face of the application is the API Gateway? Shield should sit in front of the communications, not behind them.
Nevertheless, this is the best option.

• C. Use AWS WAF to protect Amazon API Gateway.

YES, https://aws.amazon.com/es/waf/faqs/
• D. Use Amazon GuardDuty with AWS Shield Standard
INCORRECT, GuardDuty does not prevent attacks.
• E. Use AWS Shield Standard with Amazon API Gateway.
INCORRECT. It could, in principle, be a good option, since it sits in front of the gateway, but the question explicitly says the company "wants to detect and mitigate large, sophisticated DDoS attacks", and Shield Standard does not provide that.
upvoted 1 times

  kerl 8 months ago


for those who select A, it is wrong, WAF is Layer 7, it only support ABL, APIGateway, CloudFront,COgnito User Pool and AppSync graphQL
API (https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html). NLB is NOT supported. Answer is BC
upvoted 4 times

  bullrem 8 months, 1 week ago


Selected Answer: AB
A and B are the best options to provide the greatest protection for the platform against web vulnerabilities and large, sophisticated DDoS
attacks.
Option A: Use AWS WAF to protect the NLB. This will provide protection against common web vulnerabilities such as SQL injection.
Option B: Use AWS Shield Advanced with the NLB. This will provide additional protection against large and sophisticated DDoS attacks.
upvoted 2 times

  omoakin 4 months, 1 week ago


correct
upvoted 1 times

  bullrem 8 months, 1 week ago


The best protection for the platform would be to use A and C together because it will protect both the NLB and the API Gateway from
web vulnerabilities and DDoS attacks.
upvoted 1 times
  bullrem 8 months, 1 week ago
A and C are the best options for protecting the platform against web vulnerabilities and detecting and mitigating large and
sophisticated DDoS attacks.
A: AWS WAF can be used to protect the NLB from web vulnerabilities such as SQL injection.
C: AWS WAF can be used to protect Amazon API Gateway and also provide protection against DDoS attacks.
B: AWS Shield Advanced is used to protect resources from DDoS attacks, but it is not specific to the NLB and may not provide the same
level of protection as using WAF specifically on the NLB.
D and E: Amazon GuardDuty and AWS Shield Standard are primarily used for threat detection and may not provide the same level of
protection as using WAF and Shield Advanced.
upvoted 1 times

  Arifzefen 2 months, 3 weeks ago


A is not correct as WAF doesn't support Network Load Balancer
upvoted 1 times

  drabi 9 months, 1 week ago


Selected Answer: BC
WS Shield Advanced can help protect your Amazon EC2 instances and Network Load Balancers against infrastructure-layer Distributed
Denial of Service (DDoS) attacks. Enable AWS Shield Advanced on an AWS Elastic IP address and attach the address to an internet-facing
EC2 instance or Network Load Balancer.https://aws.amazon.com/blogs/security/tag/network-load-balancers/
upvoted 2 times

  duriselvan 9 months, 1 week ago


Regional resources

You can protect regional resources in all Regions where AWS WAF is available. You can see the list at AWS WAF endpoints and quotas in the
Amazon Web Services General Reference.

You can use AWS WAF to protect the following regional resource types:

Amazon API Gateway REST API

Application Load Balancer

AWS AppSync GraphQL API

Amazon Cognito user pool

You can only associate a web ACL to an Application Load Balancer that's within AWS Regions. For example, you cannot associate a web ACL
to an Application Load Balancer that's on AWS Outposts.
upvoted 1 times

  duriselvan 9 months, 1 week ago


Ans:-a and C
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: AC
***CORRECT***

A. Use AWS WAF to protect the NLB.


C. Use AWS WAF to protect Amazon API Gateway.

AWS WAF is a web application firewall that helps protect web applications from common web exploits such as SQL injection and cross-site
scripting attacks. By using AWS WAF to protect the NLB and Amazon API Gateway, the company can provide an additional layer of
protection for its cloud communications platform against these types of web exploits.
upvoted 1 times

  PassNow1234 9 months, 1 week ago


Your answer is wrong.

Sophisticated DDoS = Shield Advanced (DDoS attacks the front!). What happens if your load balancer goes down?

Your API Gateway is at the back, further behind the NLB. SQL injection? Protect that with WAF.

B and C are right.


upvoted 3 times

  jwu413 8 months, 1 week ago


This guy just copies and pastes from ChatGPT.
upvoted 4 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


About AWS Shield Advanced and Amazon GuardDuty

AWS Shield Advanced is a managed DDoS protection service that provides additional protection for Amazon EC2 instances, Amazon
RDS DB instances, Amazon Elastic Load Balancers, and Amazon CloudFront distributions. It can help detect and mitigate large,
sophisticated DDoS attacks, "but it does not provide protection against web exploits like SQL injection."
Amazon GuardDuty is a threat detection service that uses machine learning and other techniques to identify potentially malicious
activity in your AWS accounts. It can be used in conjunction with AWS Shield Standard, which provides basic DDoS protection for
Amazon EC2 instances, Amazon RDS DB instances, and Amazon Elastic Load Balancers. However, neither Amazon GuardDuty nor AWS
Shield Standard provides protection against web exploits like SQL injection.

Overall, the combination of using AWS WAF to protect the NLB and Amazon API Gateway provides the most protection against web
exploits and large, sophisticated DDoS attacks.
upvoted 1 times
  BENICE 9 months, 2 weeks ago
Option B and C
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: BC
B and C
upvoted 1 times

  tz1 9 months, 3 weeks ago


B & C is the answer
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B and C
upvoted 1 times
Question #181 Topic 1

A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results
does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased
demand is to increase the size of the instances.

The company’s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service
(Amazon ECS).

What should a solutions architect recommend for communication between the microservices?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to
the data consumers to process data from the queue.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic.
Add code to the data consumers to subscribe to the topic.

C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add
code to the data consumers to receive a data object that is passed from the Lambda function.

D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to
the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.

Correct Answer: A

Community vote distribution


A (89%) 11%

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
Option B, using Amazon Simple Notification Service (SNS), would not be suitable for this use case, as SNS is a pub/sub messaging service
that is designed for one-to-many communication, rather than point-to-point communication between specific microservices.

Option C, using an AWS Lambda function to pass messages, would not be suitable for this use case, as it would require the data producers
and data consumers to have a direct connection and invoke the Lambda function, rather than being decoupled through a message queue.

Option D, using an Amazon DynamoDB table with DynamoDB Streams, would not be suitable for this use case, as it would require the
data consumers to continuously poll the DynamoDB Streams API to detect new table entries, rather than being notified of new data
through a message queue.
upvoted 11 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Hence, Option A is the correct answer.

Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code
to the data consumers to process data from the queue.
upvoted 3 times

  TariqKipkemei Most Recent  2 weeks, 6 days ago


Selected Answer: A
Data is processed sequentially, but the order of results does not matter = Amazon Simple Queue Service
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
A) Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code
to the data consumers to process data from the queue.

For asynchronous communication between decoupled microservices, an SQS queue is the most appropriate service to use.

SQS provides a scalable, highly available queue to buffer messages between producers and consumers.
The order of processing does not matter, so a queue model fits well.
The consumers can scale independently to process messages from the queue.
upvoted 2 times
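A stripped-down sketch of the queue-based flow is below (the queue name and message body are invented for the example); producers enqueue work, and consumer tasks on ECS poll, process, and delete messages independently:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.create_queue(QueueName="data-processing")["QueueUrl"]

    def process(body):
        # Placeholder for the real per-record processing logic.
        print("processing", body)

    # Producer microservice: send each unit of work to the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

    # Consumer microservice: long-poll, process, then delete the message.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for message in messages:
        process(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])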

  cookieMr 3 months, 1 week ago


Selected Answer: A
A. Creating an Amazon SQS queue allows for asynchronous communication between microservices, decoupling the data producers and
consumers. It provides scalability, flexibility, and ensures that data processing can happen independently and at a desired pace.

B. Amazon SNS is more suitable for pub/sub messaging, where multiple subscribers receive the same message. It may not be the best fit
for sequential data processing.

C. Using AWS Lambda functions for communication introduces unnecessary complexity and may not be the optimal solution for
sequential data processing.

D. Amazon DynamoDB with DynamoDB Streams is primarily designed for real-time data streaming and change capture scenarios. It may
not be the most efficient choice for sequential data processing in a microservices architecture.
upvoted 4 times
  omoakin 4 months, 1 week ago
BBBBBBBBB
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: A
SQS for decoupling a monolithic architecture, hence option A is the right answer.
upvoted 1 times

  Madhuaws 6 months ago


it also says 'the order of results does not matter'. Option B is correct.
upvoted 1 times

  asoli 6 months, 2 weeks ago


Selected Answer: A
The answer is A.
B is wrong because SNS cannot send events "directly" to ECS.
https://docs.aws.amazon.com/sns/latest/dg/sns-event-destinations.html
upvoted 1 times

  user_deleted 7 months, 1 week ago


Selected Answer: B
It doesn't say it is a one-to-one relationship, so SNS is better.
upvoted 3 times

  markw92 3 months, 2 weeks ago


Watch out for this sentence in the question: "Data is processed sequentially..."
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Best answer is A.
Though C or D is possible, they require additional components and integration, so they are not efficient. Assuming the rate of incoming requests is within the limits SQS can handle, A is the best option.
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


answer is B.
An Amazon Simple Notification Service (Amazon SNS) topic can be used for communication between the microservices in this scenario.
The data producers can be configured to publish notifications to the topic, and the data consumers can be configured to subscribe to the
topic and receive notifications as they are published. This allows for asynchronous communication between the microservices, Question
here focus on communication between microservices
upvoted 2 times

  xua81376 10 months, 2 weeks ago


We need decoupling so ok to use SQS
upvoted 2 times

  BENICE 10 months, 2 weeks ago


Can someone explain it bit more? Not able to understand it.
upvoted 2 times

  EKA_CloudGod 10 months, 2 weeks ago


As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into the microservices
architectural style by means of decoupling. Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service
that makes it easy to decouple and scale microservices, distributed systems, and serverless applications
upvoted 14 times
  taer 10 months, 2 weeks ago
Selected Answer: A
Answer is A
upvoted 2 times

  Nigma 10 months, 2 weeks ago


SQS to decouple.
upvoted 2 times
Question #182 Topic 1

A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that
significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that
minimizes data loss and stores every transaction on at least two nodes.

Which solution meets these requirements?

A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.

B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.

C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.

D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to
an Amazon RDS MySQL DB instance.

Correct Answer: B

Community vote distribution


B (97%)

  rjam Highly Voted  10 months, 2 weeks ago


Selected Answer: B
Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data
Standby DB in Multi-AZ - synchronous replication.

Read replicas are always asynchronous, so option C is eliminated.


upvoted 15 times

  studynoplay Highly Voted  4 months, 3 weeks ago


Selected Answer: B
RDS Multi-AZ = Synchronous = Disaster Recovery (DR)
Read Replica = Asynchronous = High Availability
upvoted 6 times

  cookieMr Most Recent  3 months, 1 week ago


Selected Answer: B
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.

Enabling Multi-AZ functionality in Amazon RDS ensures synchronous replication of data to a standby replica in a different Availability Zone.
This provides high availability and minimizes data loss in the event of a database outage.

A. Creating an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones would provide even higher
availability but is not necessary for the stated requirements.

C. Creating a read replica in a separate AWS Region would provide disaster recovery capabilities but does not ensure synchronous
replication or meet the requirement of storing every transaction on at least two nodes.

D. Using an EC2 instance with a MySQL engine and triggering an AWS Lambda function for replication introduces unnecessary complexity
and is not the most suitable solution for ensuring reliable and synchronous replication.
upvoted 2 times
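For reference, enabling the synchronous standby is a single flag at creation time, roughly as in this sketch (the identifier, instance class, and credentials are placeholders; in practice the password would come from Secrets Manager or Parameter Store):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # MultiAZ=True provisions a synchronous standby in a second Availability
    # Zone, so every committed transaction exists on at least two nodes.
    rds.create_db_instance(
        DBInstanceIdentifier="orders-mysql",       # placeholder
        Engine="mysql",
        DBInstanceClass="db.m6g.large",            # placeholder
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",           # placeholder; do not hard-code in real code
        MultiAZ=True,
    )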

  channn 5 months, 3 weeks ago


Selected Answer: B
B
since all the other answers are wrong
upvoted 2 times

  jayce5 6 months ago


Selected Answer: B
B
Since read replica is async.
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: C
Multi AZ is not as protected as Multi-Region Read Replica.
upvoted 1 times
  JayBee65 8 months, 3 weeks ago
I'm curious to know why A isn't right. Is it just that it would take more effort?
upvoted 3 times

  techhb 9 months, 1 week ago


B is correct. C requires more work.
upvoted 1 times

  BENICE 9 months, 2 weeks ago


Option B
upvoted 1 times

  bammy 9 months, 2 weeks ago


Multi-AZ will give at least two nodes as required by the question. The answer is B.

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments with a single standby DB
instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


Option A is the correct answer in this scenario because it meets the requirements specified in the question. It creates an Amazon RDS DB
instance with synchronous replication to three nodes in three Availability Zones, which will provide high availability and durability for the
database, ensuring that the data is stored on multiple nodes and automatically replicated across Availability Zones.

Option B is not a correct answer because it creates an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, which only
provides failover capabilities. It does not enable synchronous replication to multiple nodes, which is required in this scenario.
upvoted 2 times

  JayBee65 8 months, 3 weeks ago


Option B is not incorrect: "The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide
data redundancy and minimize latency spikes during system backups" from
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


I would go with Option B since it meets the company's requirements and is the most suitable solution.

By creating an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, the solutions architect will ensure that data is
automatically synchronously replicated across multiple AZs within the same Region. This provides high availability and data durability,
minimizing the risk of data loss and ensuring that every transaction is stored on at least two nodes.
upvoted 1 times

  stepman 9 months, 3 weeks ago


Maybe C, since Amazon RDS now supports cross-Region read replicas: https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-rds-
sql-server-cross-region-read-replica/
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  EKA_CloudGod 10 months, 2 weeks ago


Selected Answer: B
Option B is the correct answer:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 1 times

  Nigma 10 months, 2 weeks ago


B is the answer
upvoted 2 times
Question #183 Topic 1

A company is building a new dynamic ordering website. The company wants to minimize server maintenance and patching. The website must be
highly available and must scale read and write capacity as quickly as possible to meet changes in user demand.

Which solution will meet these requirements?

A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with
on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.

B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora
Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.

C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.

D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.

Correct Answer: A

Community vote distribution


A (93%) 7%

  romko Highly Voted  10 months, 1 week ago


Selected Answer: A
A - is correct, because Dynamodb on-demand scales write and read capacity
B - Aurora auto scaling scales only read replicas
upvoted 36 times

  klayytech 6 months, 1 week ago


That’s not correct. Amazon Aurora with Aurora Auto Scaling can scale both read and write replicas.
upvoted 3 times

  Yadav_Sanjay 3 months, 1 week ago


Correct... both can serve the purpose, but note the keyword "must scale read and write capacity as quickly as possible to meet changes in
user demand". DynamoDB can scale more quickly than Aurora. Remember the "PUSH BUTTON SCALING" feature of DynamoDB.
upvoted 3 times

  Yadav_Sanjay 3 months, 1 week ago


That's why Dynamo DB is best suited option
upvoted 1 times

  Manlikeleke Highly Voted  10 months, 2 weeks ago


please is this dump enough to pass the exam?
upvoted 10 times

  LuckyAro 7 months, 4 weeks ago


You can tell us now ? Going by the date of your post I guess you would have challenged the exam by now ? so how did it go ?
upvoted 5 times

  Bobbybash 10 months, 1 week ago


I HOPE SO
upvoted 8 times

  Angryasianxd Most Recent  2 weeks, 1 day ago


Selected Answer: A
Hi all! The answer is A and NOT B on this one as the company is building an ordering website (OLTP). DynamoDB's high performance read
and writes are perfect for an OLTP use case.

https://aws.amazon.com/blogs/database/how-to-determine-if-amazon-dynamodb-is-appropriate-for-your-needs-and-then-plan-your-
migration/
upvoted 1 times

  n0pz 2 weeks, 4 days ago


S3 static hosting alone is discarded, since the question says: "A company is building a new dynamic ordering website."
upvoted 1 times
  TariqKipkemei 2 weeks, 5 days ago
Selected Answer: A
minimize server maintenance and patching, highly available, scale read and write = serverless = Amazon S3, Amazon API Gateway, AWS
Lambda, Amazon DynamoDB
upvoted 1 times

  DebAwsAccount 3 weeks, 1 day ago


Selected Answer: A
Key phrase in the question is "must scale read and write capacity". Aurora Auto Scaling only scales read capacity.
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables:
On-demand
Provisioned (default, free-tier eligible)
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 1 times
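
To illustrate the on-demand capacity piece of option A, here is a rough boto3 sketch; the table name and key schema are hypothetical, not taken from the question.

import boto3

dynamodb = boto3.client("dynamodb")

# PAY_PER_REQUEST (on-demand) mode scales read and write throughput automatically,
# with no capacity planning. Names below are placeholders.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)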

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
Minimize maintenance & Patching = Serverless
S3, DynamoDB are serverless
upvoted 1 times

  ravindrabagale 1 month, 2 weeks ago


Minimize maintenance & Patching = Serverless services
Serverless services with no sql database is perfect combination
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
B. This solution leverages serverless technologies like API Gateway and Lambda for hosting dynamic content, reducing server
maintenance and patching. Aurora with Aurora Auto Scaling provides a highly available and scalable database solution. Hosting static
content in S3 and configuring CloudFront for content delivery ensures high availability and efficient scaling.

A. Using DynamoDB with on-demand capacity may provide scalability, but it does not offer the same level of flexibility and performance as
Aurora. Additionally, it does not address the hosting of dynamic content using serverless technologies.

C. Hosting all the website content on EC2 instances requires server maintenance and patching. While using ASG and an ALB helps with
availability and scalability, it does not minimize server maintenance as requested.

D. Hosting all the website content on EC2 instances introduces server maintenance and patching. Using Aurora with Aurora Auto Scaling is
a good choice for the database, but it does not address the need to minimize server maintenance and patching for the overall
infrastructure.
upvoted 1 times

  dydzah 4 months ago


B isn't correct because of cooldown
You can tune the responsiveness of a target-tracking scaling policy by adding cooldown periods that affect scaling your Aurora DB cluster
in and out. A cooldown period blocks subsequent scale-in or scale-out requests until the period expires. These blocks slow the deletions of
Aurora Replicas in your Aurora DB cluster for scale-in requests, and the creation of Aurora Replicas for scale-out requests.
upvoted 1 times

  Abrar2022 4 months ago


Key word in question "storing ordering data"
DynamoDB is perfect for storing ordering data (key-values)
upvoted 2 times

  studynoplay 4 months, 3 weeks ago


Selected Answer: A
Minimize maintenance & Patching = Serverless
S3, DynamoDB are serverless
upvoted 2 times

  lucdt4 4 months, 3 weeks ago


The company wants to minimize server maintenance and patching -> Serverless (minimize)
C,D are wrong because these are not serverless
B is wrong because RDS is not serverless
-> A is fully serverless
upvoted 1 times

  yyuussaaa 2 weeks, 6 days ago


For anyone who is confused about Option B, there's a serverless Aurora service called "Aurora Serverless v2". This will bring us an
equivalent solution to option A. But the Option B in the question only states the Aurora, therefore by default we need to manage the
servers underneath.
Ref: https://www.projectpro.io/article/aws-aurora-vs-
rds/737#:~:text=RDS%20is%20a%20fully%2Dmanaged,manual%20management%20of%20database%20servers.
upvoted 1 times
  DavidNamy 8 months, 3 weeks ago
Selected Answer: B
The correct answer is B.

The option A would also meet the company's requirements of minimizing server maintenance and patching, and providing high
availability and quick scaling for read and write capacity. However, there are a few reasons why option B is a more optimal solution:

In option A, it uses Amazon DynamoDB with on-demand capacity for the database, which may not provide the same level of scalability and
performance as using Amazon Aurora with Aurora Auto Scaling.
Amazon Aurora offers additional features such as automatic failover, read replicas, and backups that makes it a more robust and resilient
option than DynamoDB. Additionally, the auto scaling feature is better suited to handle the changes in user demand.
Additionally, option B provides a more cost-effective solution, as Amazon Aurora can be more cost-effective for high read and write
workloads than Amazon DynamoDB, and also it's providing more features.
upvoted 2 times

  Joxtat 8 months, 1 week ago


The answer is A.
Key phrase in the question is "must scale read and write capacity". Aurora Auto Scaling only scales read capacity.
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables:
On-demand
Provisioned (default, free-tier eligible)
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 3 times

  Zerotn3 9 months ago


Selected Answer: A
A for sure ~
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 1 times

  lapaki 9 months, 3 weeks ago


Selected Answer: A
A. Looking for serverless to reduce maintenance requirements
upvoted 2 times
Question #184 Topic 1

A company has an AWS account used for software engineering. The AWS account has access to the company’s on-premises data center through a
pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway.

A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access
a database that runs in a private subnet in the company’s data center.

Which solution will meet these requirements?

A. Configure the Lambda function to run in the VPC with the appropriate security group.

B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN.

C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect.

D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network
interface.

Correct Answer: C

Community vote distribution


A (79%) C (21%)

  Gil80 Highly Voted  10 months ago


Selected Answer: A
To configure a VPC for an existing function:

1. Open the Functions page of the Lambda console.


2. Choose a function.
3. Choose Configuration and then choose VPC.
4. Under VPC, choose Edit.
5. Choose a VPC, subnets, and security groups. <-- **That's why I believe the answer is A**.

Note:
If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it
internet access or a public IP address.
upvoted 12 times

  markw92 3 months, 2 weeks ago


The question says on-prem database... how do we create an SG for that instance in AWS? C makes sense. My 2 cents.
upvoted 2 times

  javitech83 Highly Voted  9 months, 4 weeks ago


Selected Answer: A
It is A. C is not correct at all, as the question mentions that the VPC already has connectivity with on-premises.
upvoted 8 times

  LuckyAro 8 months, 2 weeks ago


C says to "update the route table" not create a new connection. C is correct.
upvoted 2 times

  ruqui 3 months, 4 weeks ago


C is wrong. Lambda can't connect by default to resources in a private VPC, so you have to do some specific setup steps to run in a
private VPC, Answer A is correct
upvoted 1 times

  Adios_Amigo 5 months, 2 weeks ago


No need to do route updates. This is because the route to the destination on-premises is already set.
upvoted 3 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: A
Go to the Lambda console.
Click the Functions tab.
Select the Lambda function that you want to configure.
Click the Configuration tab.
In the Network section, select the VPC that you want the function to run in.
In the Security groups section, select the security group that you want to allow the function to access the database subnet.
Click the Save button.
upvoted 1 times
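
The same console steps can be expressed as an API call; this is a minimal sketch assuming placeholder function, subnet, and security-group identifiers (none of them come from the question).

import boto3

lambda_client = boto3.client("lambda")

# Attach the function to subnets whose route tables send non-VPC traffic to the
# virtual private gateway (Direct Connect). All IDs are placeholders.
lambda_client.update_function_configuration(
    FunctionName="on-prem-db-reader",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

The security group's outbound rules then control what the function is allowed to reach in the on-premises network.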
  zjcorpuz 2 months, 1 week ago
Correct answer is A.
Lambda runs outside your VPC by default; if you want it to reach your private subnet or the on-prem data center, you must configure your Lambda function with a VPC.

C is wrong because adding routes to the VPC does not help without first attaching the Lambda function to the VPC.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
Option A: Configure the Lambda function to run in the VPC with the appropriate security group. This allows the Lambda function to access
the database in the private subnet of the company's data center. By running the Lambda function in the VPC, it can communicate with
resources in the private subnet securely.

Option B is incorrect because setting up a VPN connection and routing the traffic from the Lambda function through the VPN would add
unnecessary complexity and overhead.

Option C is incorrect because updating the route tables in the VPC to allow access to the on-premises data center through Direct Connect
would affect the entire VPC's routing, potentially exposing other resources to the on-premises network.

Option D is incorrect because creating an Elastic IP address and sending traffic through it without an elastic network interface is not a
valid configuration for accessing resources in a private subnet.
upvoted 3 times

  cheese929 4 months, 4 weeks ago


Selected Answer: C
My answer is C. Refer to the steps in the link. need to configure the routing table to route traffic to the destination.
https://aws.amazon.com/blogs/compute/running-aws-lambda-functions-on-aws-outposts-using-aws-iot-greengrass/

A is wrong as it says to configure the Lambda function in the VPC; the requirement is to reach the database that runs on-premises.
upvoted 4 times

  kruasan 5 months ago


Selected Answer: A
once you have configured your Lambda to be deployed (or connected) to your VPC [1], as long as your VPC has connectivity to your data
center, it will be allowed to route the traffic towards it - whether it uses Direct Connect or other connections, like VPN.
https://repost.aws/questions/QUSaj1a6jBQ92Kp56klbZFNw/questions/QUSaj1a6jBQ92Kp56klbZFNw/aws-lambda-to-on-premise-via-direct-
connect-and-aws-privatelink?
upvoted 2 times

  Jinius83 5 months, 2 weeks ago


C
Because this is traffic going out from AWS to the company's data center.
upvoted 2 times

  darn 5 months, 1 week ago


english please
upvoted 2 times

  datz 5 months, 3 weeks ago


Selected Answer: A
CORRECT ANSWER = A,
C = WRONG because the question says non-VPN traffic is being sent through the virtual private gateway (Direct Connect), meaning all
routes already point towards on-prem, where our destination service is located. So no routing change will be needed.

When you create the Lambda function -> you need to choose the VPC and then a security group inside the VPC.

Link for better understanding :

https://www.youtube.com/watch?v=beV1AYyhgYA&ab_channel=DigitalCloudTraining
upvoted 3 times

  datz 5 months, 3 weeks ago


it is telling non "VPC" traffic, really wish there was edit function lol
upvoted 1 times

  Devsin2000 6 months, 2 weeks ago


In my opinion this question is flawed. None of the answers makes any sense to me. However, if I have to choose one I will choose C. There
is no option of associating a security group with a Lambda function.
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-managing-eni
upvoted 2 times

  nickolaj 7 months, 2 weeks ago


Selected Answer: A
The best solution to meet the requirements would be option A - Configure the Lambda function to run in the VPC with the appropriate
security group.

By configuring the Lambda function to run in the VPC, the function will have access to the private subnets in the company's data center
through the Direct Connect connections. Additionally, security groups can be used to control inbound and outbound traffic to and from
the Lambda function, ensuring that only the necessary traffic is allowed.
upvoted 2 times

  nickolaj 7 months, 2 weeks ago


Option B is not ideal as it would require additional configuration and management of a VPN connection between the company's data
center and AWS, which may not be necessary for the specific use case.

Option C is not recommended as updating the route tables to allow the Lambda function to access the on-premises data center
through Direct Connect would allow all VPC traffic to route through the data center, which may not be desirable and could potentially
create security risks.

Option D is not a viable solution for accessing resources in the on-premises data center as Elastic IP addresses are only used for
outbound internet traffic from an Amazon VPC, and cannot be used to communicate with resources in an on-premises data center.
upvoted 2 times

  Yelizaveta 7 months, 2 weeks ago


Selected Answer: A
"All non-VPC traffic routes to the virtual private gateway." means -> there are already the appropriate routes, so no need for update the
route tables.
Key phrase: "database that runs in a private subnet in the company's data center.", means: You need the appropriate security group to
access the DB.
upvoted 3 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: A
A makes more sense to me.
upvoted 1 times

  Mindvision 9 months ago


A = Answer.

Note that " All non-VPC traffic routes to the virtual gateway" meaning if traffic not meant for the VPC, it routes to on-prem (C answer
invalid). For the Lambda function to access the on-prem database you have to configure the Lambda function in the VPC and use
appropriate SG outbound.

Phew! did some research on this, was a bit confused with C.


upvoted 5 times

  Deepak_k 7 months, 3 weeks ago


Yes, by default Lambda is not connected to an Amazon VPC, so Answer A.
upvoted 1 times

  NV305 9 months, 1 week ago


Selected Answer: C
it is C only
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
To allow an AWS Lambda function to access a database in a private subnet in the company's data center, the correct solution is to update
the route tables in the Virtual Private Cloud (VPC) to allow the Lambda function to access the on-premises data center through the AWS
Direct Connect connections.

Option C, updating the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct
Connect, is the correct solution to meet the requirements.
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, configuring the Lambda function to run in the VPC with the appropriate security group, is not the correct solution because it
does not allow the Lambda function to access the database in the private subnet in the data center.

Option B, setting up a VPN connection from AWS to the data center and routing the traffic from the Lambda function through the VPN,
is not the correct solution because it would not be the most efficient solution, as the traffic would need to be routed over the public
internet, potentially increasing latency.
Option D, creating an Elastic IP address and configuring the Lambda function to send traffic through the Elastic IP address without an
elastic network interface, is not a valid solution because Elastic IP addresses are used to assign a static public IP address to an instance
or network interface, and do not provide a direct connection to an on-premises data center.
upvoted 4 times

  Ezekiel2517 3 months ago


wrong again, my cute, tiny, untrained AI :)
It seems you lack the concept of on-prem, which is frankly awkward...
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


Sorry, but like a lot of your responses in this group, your answers are incorrect. I really think you need to study more, unless you are
deliberately trying to confuse people. "All non-VPC traffic routes to the virtual private gateway" means that C is not necessary.
upvoted 6 times

  ProfXsamson 8 months ago


Have noticed the Buru----tuy guy/girl likes giving incorrect answers.
upvoted 2 times

  superman917 7 months, 3 weeks ago


Most likely Buru----tuy is getting responses from ChatGPT, which is not always right.
upvoted 5 times
Question #185 Topic 1

A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API
calls to store the resized images in Amazon S3.

How can a solutions architect ensure that the application has permission to access Amazon S3?

A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.

B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.

C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.

D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.

Correct Answer: B

Community vote distribution


B (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
To ensure that an Amazon Elastic Container Service (ECS) application has permission to access Amazon Simple Storage Service (S3), the
correct solution is to create an AWS Identity and Access Management (IAM) role with the necessary S3 permissions and specify that role as
the taskRoleArn in the task definition for the ECS application.

Option B, creating an IAM role with S3 permissions and specifying that role as the taskRoleArn in the task definition, is the correct solution
to meet the requirement.
upvoted 6 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A, updating the S3 role in IAM to allow read/write access from ECS and relaunching the container, is not the correct solution
because the S3 role is not associated with the ECS application.

Option C, creating a security group that allows access from ECS to S3 and updating the launch configuration used by the ECS cluster, is
not the correct solution because security groups are used to control inbound and outbound traffic to resources, and do not grant
permissions to access resources.

Option D, creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster while logged in as this
account, is not the correct solution because it is generally considered best practice to use IAM roles rather than IAM users to grant
permissions to resources.
upvoted 4 times

  Guru4Cloud Most Recent  2 weeks, 5 days ago


Selected Answer: B
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option B: Create an IAM role with S3 permissions and specify that role as the taskRoleArn in the task definition. This approach allows the
ECS task to assume the specified role and gain the necessary permissions to access Amazon S3.

Option A is incorrect because updating the S3 role in IAM and relaunching the container does not associate the updated role with the ECS
task.

Option C is incorrect because creating a security group that allows access from Amazon ECS to Amazon S3 does not grant the necessary
permissions to the ECS task.

Option D is incorrect because creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster does not
associate the IAM user with the ECS task.
upvoted 2 times

  dydzah 4 months ago


https://repost.aws/knowledge-center/ecs-fargate-access-aws-services
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/27954-exam-aws-certified-solutions-architect-associate-saa-c02/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
upvoted 1 times
  techhb 9 months, 1 week ago
Selected Answer: B
The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task
permission to call AWS APIs on your behalf.
upvoted 1 times
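
As a hedged sketch of how the taskRoleArn is supplied when registering a task definition (the ARN, names, image, and memory size below are all made up for illustration):

import boto3

ecs = boto3.client("ecs")

# The referenced IAM role would carry an S3 read/write policy for the resized images.
ecs.register_task_definition(
    family="image-resizer",
    taskRoleArn="arn:aws:iam::123456789012:role/image-resizer-s3-access",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)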

  BENICE 9 months, 2 weeks ago


Option B
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B.
upvoted 2 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: B
Agreed
upvoted 1 times

  lighrz 9 months, 3 weeks ago


Selected Answer: B
B is the best answer
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  taer 10 months, 2 weeks ago


Selected Answer: B
The answer is B.
upvoted 1 times

  Nigma 10 months, 2 weeks ago


B is the answer
upvoted 2 times
Question #186 Topic 1

A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system
attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.

What should a solutions architect do to meet this requirement?

A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.

B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.

C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.

D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the
file system within the volume to each Windows instance.

Correct Answer: B

Community vote distribution


B (100%)

  Nigma Highly Voted  10 months, 2 weeks ago


Correct is B
FSx --> shared Windows file system(SMB)
EFS --> Linux NFS
upvoted 7 times
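
For illustration only, an FSx for Windows File Server file system could be created roughly like this with boto3; the subnet IDs, security group, Active Directory ID, and sizing values are assumptions, with a Multi-AZ deployment type chosen to match the multi-AZ requirement.

import boto3

fsx = boto3.client("fsx")

# Multi-AZ file system joined to a managed Active Directory; every ID is a placeholder.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,            # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,   # MB/s
        "ActiveDirectoryId": "d-1234567890",
    },
)

Each Windows instance then maps the share over SMB using the file system's DNS name (for example with net use).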

  TariqKipkemei Most Recent  2 weeks, 5 days ago


Selected Answer: B
Windows file system = Amazon FSx for Windows File Server
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
Option B: Configure Amazon FSx for Windows File Server. This service provides a fully managed Windows file system that can be easily
shared across multiple EC2 Windows instances. It offers high performance and supports Windows applications that require file storage.

Option A is incorrect because AWS Storage Gateway in volume gateway mode is not designed for shared file systems.

Option C is incorrect because while Amazon EFS can be mounted to multiple instances, it is a Linux-based file system and may not be
suitable for Windows applications.

Option D is incorrect because attaching and mounting an Amazon EBS volume to multiple instances simultaneously is not supported.
upvoted 2 times

  Bmarodi 4 months, 1 week ago


Selected Answer: B
Option B is right answer.
upvoted 1 times

  k1kavi1 9 months, 1 week ago


Selected Answer: B
References :
https://www.examtopics.com/discussions/amazon/view/28006-exam-aws-certified-solutions-architect-associate-saa-c02/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/wfsx-volumes.html
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: B
EFS is not compatible with Windows.
https://pilotcoresystems.com/insights/ebs-efs-fsx-s3-how-these-storage-options-
differ/#:~:text=EFS%20works%20with%20Linux%20and,with%20all%20Window%20Server%20platforms.
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 1 week ago
Selected Answer: B
A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.

This option is incorrect because AWS Storage Gateway is not a file storage service. It is a hybrid storage service that allows you to store
data in the cloud while maintaining low-latency access to frequently accessed data. It is designed to integrate with on-premises storage
systems, not to provide file storage for Amazon EC2 instances.

B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.

This is the correct answer. Amazon FSx for Windows File Server is a fully managed file storage service that provides a native Windows file
system that can be accessed over the SMB protocol. It is specifically designed for use with Windows-based applications, and it can be
easily integrated with existing applications by mounting the file system to each EC2 instance.
upvoted 3 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.

This option is incorrect because Amazon EFS is a file storage service that is designed for use with Linux-based applications. It is not
compatible with Windows-based applications, and it cannot be accessed over the SMB protocol.

D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume.
Mount the file system within the volume to each Windows instance.

This option is incorrect because Amazon EBS is a block storage service, not a file storage service. It is designed for storing raw block-
level data that can be accessed by a single EC2 instance at a time. It is not designed for use as a shared file system that can be accessed
by multiple instances.
upvoted 1 times

  BENICE 9 months, 2 weeks ago


B - is correct
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


B is correct
upvoted 1 times

  xua81376 10 months, 2 weeks ago


B FSx for windows
upvoted 1 times

  BENICE 10 months, 2 weeks ago


B is correct option
upvoted 1 times

  rjam 10 months, 2 weeks ago


Selected Answer: B
Amazon FSx for Windows File Server
upvoted 3 times
Question #187 Topic 1

A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational
database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.

Which solutions meet these requirements? (Choose two.)

A. Create an Amazon RDS DB instance in Multi-AZ mode.

B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.

C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.

D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.

E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.

Correct Answer: AD

Community vote distribution


AD (100%)

  techhb Highly Voted  9 months, 1 week ago


Selected Answer: AD
https://containersonaws.com/introduction/ec2-or-aws-fargate/
A.(O) multi-az <= 'little intervention'
B.(X) read replica <= Promoting a read replica to be a standalone DB instance
You can promote a read replica into a standalone DB instance. When you promote a read replica, the DB instance is rebooted before it
becomes available.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
C.(X) use Amazon ECS instead of EC2-based docker for little human intervention
D.(O) Amazon ECS on AWS Fargate : AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to
manage servers or clusters of Amazon EC2 instances.
E.(X) EC2 launch type
The EC2 launch type can be used to run your containerized applications on Amazon EC2 instances that you register to your Amazon ECS
cluster and manage yourself.
upvoted 10 times
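
A minimal sketch of the Fargate half of the chosen answers, assuming the cluster, task definition, subnets, and target group already exist; every identifier below is a placeholder.

import boto3

ecs = boto3.client("ecs")

# Fargate removes instance management; the service keeps the desired task count
# running behind the load balancer. Names and ARNs are illustrative only.
ecs.create_service(
    cluster="ecommerce-cluster",
    serviceName="storefront",
    taskDefinition="storefront:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/storefront/abc123def456",
            "containerName": "storefront",
            "containerPort": 8080,
        }
    ],
)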

  TariqKipkemei Most Recent  2 weeks, 5 days ago


Selected Answer: AD
highly available application, little manual intervention = serverless = Amazon Elastic Container Service with Fargate and Amazon RDS DB
instance in Multi-AZ mode
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: AD
The correct answers are A and D.

A) Creating an RDS DB instance in Multi-AZ mode provides automatic failover to a standby replica in another Availability Zone, providing
high availability.

D) Using ECS Fargate removes the need to provision and manage EC2 instances, allowing the service to scale dynamically based on
demand. ECS handles load balancing and availability out of the box.
upvoted 1 times

  jkirancdev 2 months ago


Selected Answer: AD
AD is the correct answer
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: AD
A. Create an Amazon RDS DB instance in Multi-AZ mode. This ensures that the database is highly available with automatic failover to a
standby replica in another Availability Zone.

D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
Fargate abstracts the underlying infrastructure, automatically scaling and managing the containers, making it a highly available and low-
maintenance option.

Option B is not the best choice as it only creates replicas in another Availability Zone without the automatic failover capability provided by
Multi-AZ mode.
Option C is not the best choice as managing a Docker cluster on EC2 instances requires more manual intervention compared to using the
serverless capabilities of Fargate in option D.

Option E is not the best choice as it uses the EC2 launch type, which requires managing and scaling the EC2 instances manually. Fargate,
as mentioned in option D, provides a more automated and scalable solution.
upvoted 2 times
  studynoplay 4 months, 2 weeks ago
Selected Answer: AD
little manual intervention = Serverless
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: AD
Option A&D
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: AD
A and D
upvoted 1 times

  Gabs90 10 months, 1 week ago


Selected Answer: AD
A and D
upvoted 1 times

  Wpcorgan 10 months, 1 week ago


A and D
upvoted 1 times

  BENICE 10 months, 2 weeks ago


A and D are the options
upvoted 1 times

  Danny23132412141_2312 10 months, 2 weeks ago


AD for sure
Link: https://www.examtopics.com/discussions/amazon/view/43729-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
Question #188 Topic 1

A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs
to implement a highly available SFTP solution that minimizes operational overhead.

Which solution will meet these requirements?

A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the
destination.

B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway
endpoint with the new partner.

C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run
a cron job script on the EC2 instance to upload files to the S3 data lake.

D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an
SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the
S3 data lake.

Correct Answer: D

Community vote distribution


A (100%)

  roxx529 Highly Voted  4 months, 1 week ago


For the exam:
Whenever you see SFTP or FTP, look for "Transfer" in the available options.
upvoted 18 times

  Chirantan Highly Voted  9 months, 1 week ago


Answer is A
AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services using SFTP, FTPS, FTP, and
AS2 protocols.
https://aws.amazon.com/aws-transfer-family/
upvoted 11 times

  oguzbeliren 2 months ago


Answer A is not an answer because it requires more manual effort. While AWS Transfer Family simplifies the setup of an SFTP server, it
still requires management and monitoring. This includes handling scaling, backups, patching, and other administrative tasks
associated with managing an SFTP server.
upvoted 1 times

  TariqKipkemei Most Recent  2 weeks, 5 days ago


Selected Answer: A
AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services using SFTP, FTPS, FTP, and
AS2 protocols.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


A is the correct answer.

AWS Transfer Family provides a fully managed SFTP service that can integrate directly with S3. It handles scaling, availability, and security
automatically with minimal overhead.
upvoted 1 times

  oguzbeliren 2 months ago


AWS Transfer Family is a fully managed service that makes it easy to set up and manage secure file transfers. It provides a high-availability
SFTP server that can be accessed from the public internet. However, this solution does not minimize operational overhead, as it requires
the solutions architect to manage the SFTP server.
upvoted 1 times

  cookieMr 2 months, 3 weeks ago


Selected Answer: A
This solution provides a highly available SFTP solution without the need for manual management or operational overhead. AWS Transfer
Family allows you to easily set up an SFTP server with authentication, authorization, and integration with S3 as the storage backend.

Option B is not the best choice as it suggests using Amazon S3 File Gateway, which is primarily used for file-based access to S3 storage
over NFS or SMB protocols, not for SFTP access.
Option C is not the best choice as it requires manual management of an EC2 instance, VPN setup, and cron job script for uploading files,
introducing operational overhead and potential complexity.

Option D is not the best choice as it also requires manual management of EC2 instances, Network Load Balancer, and cron job scripts for
file uploads. It is more complex and involves additional components compared to the simpler and fully managed solution provided by
AWS Transfer Family in option A.
upvoted 1 times

  cookieMr 3 months, 1 week ago


A is correct
upvoted 1 times

  markw92 3 months, 2 weeks ago


I can't wrap my head around why the given answer is D; it's so frustrating trying to see where I went wrong. I vote for A.
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: A
minimizes operational overhead = Serverless
AWS Transfer Family is serverless
upvoted 1 times

  Rahulbit34 5 months ago


AWS Transfer Family is compatible with SFTP, FTPS, and FTP. A is the answer.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
AWS Transfer Family is a fully managed AWS service that you can use to transfer files into and out of Amazon Simple Storage Service
(Amazon S3) storage or Amazon Elastic File System (Amazon EFS) file systems over the following protocols:

Secure Shell (SSH) File Transfer Protocol (SFTP): version 3


File Transfer Protocol Secure (FTPS)
File Transfer Protocol (FTP)
Applicability Statement 2 (AS2)
upvoted 2 times
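
A rough boto3 sketch of option A, creating the managed SFTP endpoint and one partner user; the IAM role, bucket path, and key material are assumptions.

import boto3

transfer = boto3.client("transfer")

# Managed, highly available SFTP endpoint that writes straight into S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Partner user; the role must grant access to the data-lake bucket (placeholder values).
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-upload",
    Role="arn:aws:iam::123456789012:role/transfer-datalake-access",
    HomeDirectory="/my-data-lake-bucket/partner",
    SshPublicKeyBody="ssh-rsa AAAA... partner-public-key",
)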

  Oyz 5 months, 3 weeks ago


Selected Answer: A
A - is the correct answer.
upvoted 2 times

  BENICE 9 months, 2 weeks ago


A -- is the option
upvoted 3 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A
upvoted 3 times

  mj98 10 months ago


Selected Answer: A
AWS Transfer Family - SFTP
upvoted 2 times

  Bobbybash 10 months, 1 week ago


Selected Answer: A
AAAAAAAA
AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the
server with one or more Amazon Simple Storage Service (Amazon S3) buckets
upvoted 2 times
Question #189 Topic 1

A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the
documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically
every year.

Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Choose two.)

A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.

B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.

C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.

D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.

E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.

Correct Answer: CE

Community vote distribution


BD (76%) BC (23%)

  [Removed] Highly Voted  10 months, 1 week ago


Selected Answer: BD
Originally answered B and C due to least operational overhead. After research, it's bugging me that S3 key rotation is determined by the
AWS master key rotation, which cannot guarantee the key is rotated within a 365-day period; it is stated as "varies" in the documentation.
It is also impossible to configure this in the console.
A KMS customer managed key is a tick box in the console to turn on annual key rotation, but it requires more operational overhead than SSE-S3.
C - will not guarantee the question's objectives but requires little overhead.
D - will guarantee the question's objectives with more overhead.
upvoted 19 times

  vadiminski_a 9 months, 2 weeks ago


I‘d have to disagree on that. It states here that aws managed keys are rotated every year which is what the question asks:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html so C would be correct.
However, it also states that you cannot enable or disable rotation for aws managed keys which would again point towards D
upvoted 3 times

  jdr75 5 months, 3 weeks ago


You can't use this link
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
to say that "SSE-S3" rotates every year, because precisely that link refers to "KMS", which is covered by option D.
That's the reason the solution is B+D.
upvoted 2 times

  LeGloupier Highly Voted  10 months, 2 weeks ago


Selected Answer: BD
should be BD
C could have been fine, but key rotation is activated by default on SSE-S3, and there is no way to deactivate it, if I am not wrong
upvoted 6 times

  TariqKipkemei Most Recent  2 weeks, 5 days ago


Selected Answer: BC
Technically both BC and BD would work. But with option D the customer has to manage the keys, and there is a requirement for LEAST
operational overhead, which leaves option C, where the keys are provided and managed by Amazon (SSE-S3 encryption).
upvoted 2 times

  Guru4Cloud 2 weeks, 5 days ago


B) Use S3 Object Lock compliance mode to prevent objects from being overwritten or deleted for 5 years.

D) Use AWS KMS customer managed keys for encryption, and configure automatic annual rotation.

Compliance mode provides the protection against overwriting/deletion needed for the full contract duration. And KMS customer managed
keys allow automated key rotation each year.
upvoted 1 times

  animefan1 2 months, 4 weeks ago


Selected Answer: BD
compliance meets company's requirement and with Customer managed keys, user can set auto rotation
upvoted 1 times
  cookieMr 3 months, 1 week ago
Selected Answer: BD
B. By using S3 Object Lock in compliance mode, it enforces a strict retention policy on the objects, preventing any modifications or
deletions.

D. By using server-side encryption with AWS KMS customer managed keys, the documents are encrypted with a customer-controlled key.
Enabling key rotation ensures that a new encryption key is generated automatically at the defined rotation interval, enhancing security.

Option A: S3 Object Lock in governance mode does not provide the required immutability for the documents, allowing potential
modifications or deletions.

Option C: Server-side encryption with SSE-S3 alone does not fulfill the requirement of encryption key rotation, which is explicitly specified.

Option E: Server-side encryption with customer-provided (imported) keys (SSE-C) is not necessary when AWS KMS customer managed keys
(Option D) can be used, which provide a more integrated and manageable solution.
upvoted 4 times
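
A hedged sketch of combining B and D with boto3; the bucket name and key are placeholders, and the default retention mirrors the 5-year contract period.

import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

# The bucket must be created with Object Lock enabled (add a LocationConstraint
# outside us-east-1). The bucket name is illustrative only.
s3.create_bucket(Bucket="contract-documents-example", ObjectLockEnabledForBucket=True)

# Compliance mode: retention cannot be shortened or removed by any user.
s3.put_object_lock_configuration(
    Bucket="contract-documents-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)

# Customer managed KMS key with automatic annual rotation.
key = kms.create_key(Description="Contract document encryption key")
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])

# Encrypt new objects in the bucket with that key by default.
s3.put_bucket_encryption(
    Bucket="contract-documents-example",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key["KeyMetadata"]["KeyId"],
                }
            }
        ]
    },
)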

  ruqui 4 months, 1 week ago


Selected Answer: BD
Answer is BD. C is discarded because key rotation can't be configured by the customer
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: BD
With SSE-S3 you can NOT Configure key rotation (see the choice C last sentence)
With KMS you can configure key rotation
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


also, SSE-S3 is default and free. The question is not about cost, it is about operational maintenance
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: BD
My answer is B and D.
I choose D over C cos of the annual key rotation requirement.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: BD
Consider using the default aws/s3 KMS key if:

You're uploading or accessing S3 objects using AWS Identity and Access Management (IAM) principals that are in the same AWS account
as the AWS KMS key.
You don't want to manage policies for the KMS key.

Consider using a customer managed key if:

You want to create, rotate, disable, or define access controls for the key.
You want to grant cross-account access to your S3 objects. You can configure the policy of a customer managed key to allow access from
another account.
https://repost.aws/knowledge-center/s3-object-encryption-keys
upvoted 1 times

  Ankit_EC_ran 5 months, 1 week ago


Selected Answer: BD
BD
"You cannot enable or disable key rotation for AWS owned keys. The key rotation strategy for an AWS owned key is determined by the AWS
service that creates and manages the key."
This eliminates option c which says configure key rotation
upvoted 3 times

  chibaniMed 5 months, 1 week ago


Selected Answer: BC
i choose C instead of D because this part of question "LEAST operational overhead"
AWS KMS automatically rotates AWS managed keys every year (approximately 365 days). You cannot enable or disable key rotation for
AWS managed keys
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times

  asoli 6 months, 2 weeks ago


Selected Answer: BD
The answer is B and D
C is not correct. With SSE-S3 encryption, you do not have control over the key rotation.
upvoted 3 times
  fkie4 6 months, 3 weeks ago
Selected Answer: BD
C is wrong. see this:
https://stackoverflow.com/questions/63478626/which-aws-s3-encryption-technique-provides-rotation-policy-for-encryption-
keys#:~:text=This%20uses%20your%20own%20key,automatically%20rotated%20every%201%20year.
it said "SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and
is shared among many accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly
defined." .
So SSE-S3 does have key rotation, but user cannot configure rotation frequency. It vaires and managed by AWS, NOT by user.
upvoted 2 times

  jennyka76 7 months, 1 week ago


2. THE QUESTION ASKS FOR: The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year.
READ: https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
ANSWER - D
upvoted 1 times

  jennyka76 7 months, 1 week ago


1. THE QUESTION ASKS THE FOLLOWING: During the 5-year period, the company must ensure that the documents cannot be overwritten or
deleted.
SEE: https://jayendrapatil.com/tag/s3-object-lock-in-governance-mode/
ANSWER: B
I AM GOING TO RESEARCH THE SECOND PART OF THE QUESTION.
JESUS IS GOOD..
upvoted 2 times

  Yelizaveta 7 months, 3 weeks ago


Selected Answer: BD
C or D -> Trick question:
C is wrong because the keys are rotated automatically by the S3 service in (SSE-S3) option.
You are correct that the question says "rotate the encryption keys automatically every year."
But the Answer C says: "Configure key rotation" and that you can not do with (SSE-S3), because it rotates automatically ;)
upvoted 3 times
Question #190 Topic 1

A company has a web application that is based on Java and PHP. The company plans to move the application from on premises to AWS. The
company needs the ability to test new site features frequently. The company also needs a highly available and managed solution that requires
minimum operational overhead.

Which solution will meet these requirements?

A. Create an Amazon S3 bucket. Enable static web hosting on the S3 bucket. Upload the static content to the S3 bucket. Use AWS Lambda to
process all dynamic content.

B. Deploy the web application to an AWS Elastic Beanstalk environment. Use URL swapping to switch between multiple Elastic Beanstalk
environments for feature testing.

C. Deploy the web application to Amazon EC2 instances that are configured with Java and PHP. Use Auto Scaling groups and an Application
Load Balancer to manage the website’s availability.

D. Containerize the web application. Deploy the web application to Amazon EC2 instances. Use the AWS Load Balancer Controller to
dynamically route traffic between containers that contain the new site features for testing.

Correct Answer: D

Community vote distribution


B (91%) 9%

  Shasha1 Highly Voted  9 months, 3 weeks ago


B
Elastic Beanstalk is a fully managed service that makes it easy to deploy and run applications in the AWS; To enable frequent testing of
new site features, you can use URL swapping to switch between multiple Elastic Beanstalk environments.
upvoted 8 times

  oguzbeliren 2 months ago


The correct answer is D.

AWS Elastic Beanstalk is a service that makes it easy to deploy and manage web applications in the AWS cloud. However, it is not a good
solution for testing new site features frequently, as it can be difficult to switch between multiple Elastic Beanstalk environments.
upvoted 1 times

  cookieMr Highly Voted  3 months, 1 week ago


Selected Answer: B
B. Provides a highly available and managed solution with minimum operational overhead. By deploying the web application to EBS, the
infrastructure and platform management are abstracted, allowing easy deployment and scalability. With URL swapping, different
environments can be created for testing new site features, and traffic can be routed between these environments without any downtime.

A. Suggests using S3 for static content hosting and Lambda for dynamic content. While it offers simplicity for static content, it does not
provide the necessary flexibility and dynamic functionality required by a Java and PHP-based web application.

C. Involves manual management of EC2, ASG, and ELB, which requires more operational overhead and may not provide the desired level
of availability and ease of testing.

D. Introduces containerization, which adds complexity and operational overhead for managing containers and infrastructure, making it
less suitable for a requirement of minimum operational overhead.
upvoted 6 times

  TariqKipkemei Most Recent  2 weeks, 5 days ago


Selected Answer: B
AWS Elastic Beanstalk URL swapping is the main ask of this question.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B is the correct answer.

Using AWS Elastic Beanstalk provides a fully managed platform to deploy the web application. Elastic Beanstalk will handle provisioning
EC2 instances, load balancing, auto scaling, and application health monitoring.

Elastic Beanstalk's ability to support multiple environments and swap URLs allows easy testing of new features before swapping into
production. This requires minimal overhead compared to managing infrastructure directly.
upvoted 1 times

  Abrar2022 4 months ago


S3 is for hosting static websites, not dynamic websites or applications.
Beanstalk will take care of this.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
Frequent feature testing -
- Multiple Elastic Beanstalk environments can be created easily for development, testing and production use cases.
- Traffic can be routed between environments for A/B testing and feature iteration using simple URL swapping techniques. No complex
routing rules or infrastructure changes required.
upvoted 1 times

  ashu089 5 months, 3 weeks ago


Who needs discussion in the era of ChatGPT?
upvoted 2 times

  aadityaravi8 2 months, 3 weeks ago


ChatGPT always changes its answer. Just tell it "wrong answer" and it will come up with a new answer each time, with a justification. ChatGPT is not
to be trusted at all.
upvoted 3 times

  kerin 7 months, 2 weeks ago


Option B as it has the minimum operational overhead
upvoted 1 times

  maciekmaciek 7 months, 3 weeks ago


Selected Answer: B
Blue/Green deployments https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
upvoted 1 times
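
The URL swap itself is a single API call; this minimal sketch assumes two hypothetical environment names.

import boto3

eb = boto3.client("elasticbeanstalk")

# Swaps the CNAMEs of the production environment and the environment carrying the
# new features, so traffic cuts over without redeploying. Names are placeholders.
eb.swap_environment_cnames(
    SourceEnvironmentName="shop-prod",
    DestinationEnvironmentName="shop-staging",
)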

  naxer82 7 months, 3 weeks ago


Selected Answer: B
is correct
upvoted 1 times

  gustavtd 9 months ago


As I was told, Elastic Beanstalk is an expensive service, isn't it?
upvoted 2 times

  HayLLlHuK 8 months, 4 weeks ago


so what? The question doesn’t require the most cost-effective solution
upvoted 8 times

  techhb 9 months, 1 week ago


Selected Answer: B
D includes the additional overhead of installing and managing containers on EC2.
upvoted 2 times

  BENICE 9 months, 2 weeks ago


B -- is correct answer
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Option B as it has the minimum operational overhead
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: B
B looks correct
upvoted 1 times

  hpipit 9 months, 4 weeks ago


Selected Answer: B
B is the correct. 100%. i have confirmation
upvoted 2 times
Question #191 Topic 1

A company has an ordering application that stores customer information in Amazon RDS for MySQL. During regular business hours, employees
run one-time queries for reporting purposes. Timeouts are occurring during order processing because the reporting queries are taking a long time
to run. The company needs to eliminate the timeouts without preventing employees from performing queries.

What should a solutions architect do to meet these requirements?

A. Create a read replica. Move reporting queries to the read replica.

B. Create a read replica. Distribute the ordering application to the primary DB instance and the read replica.

C. Migrate the ordering application to Amazon DynamoDB with on-demand capacity.

D. Schedule the reporting queries for non-peak hours.

Correct Answer: B

Community vote distribution


A (100%)

  BENICE Highly Voted  9 months, 2 weeks ago


A is correct answer. This was in my exam
upvoted 17 times

  Grace83 6 months, 2 weeks ago


Did these questions help with your exam?
upvoted 2 times

  TariqKipkemei Most Recent  2 weeks, 3 days ago


Selected Answer: A
reports = read replica
upvoted 2 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: A
A is the correct answer.

Creating an RDS MySQL read replica will allow the reporting queries to be isolated and run without affecting performance of the primary
ordering application.

Read replicas allow read-only workloads to be scaled out while eliminating contention with the primary write workload.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Question keyword "regular business hours" made D is incorrect.

C migrate to Amazon DynamoDB (No-SQL) is meaningless, remove C.

Answer B, create a "read replica", it is ok, but "ordering application pointed to read replica" is incorrect.

A is correct answer. Easy question.


upvoted 1 times

  sickcow 3 months ago


Selected Answer: A
A sounds right
upvoted 1 times

  rauldevilla 3 months ago


Selected Answer: A
Keeping the reporting queries on the primary instance perpetuates the problem.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
A. By moving the reporting queries to the read replica, the primary DB instance used for order processing is not affected by the long-
running reporting queries. This helps eliminate timeouts during order processing while allowing employees to perform their queries
without impacting the application's performance.

B. While this can provide some level of load distribution, it does not specifically address the issue of timeouts caused by reporting queries
during order processing.

C. While DynamoDB offers scalability and performance benefits, it may require significant changes to the application's data model and
querying approach.

D. While this approach can help alleviate the impact on order processing, it does not address the requirement of eliminating timeouts
without preventing employees from performing queries.
upvoted 3 times
  steev 3 months, 3 weeks ago
Selected Answer: A
correct
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: A
A is correct.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
Creating a read replica allows the company to offload the reporting queries to a separate database instance, reducing the load on the
primary database used for order processing. By moving the reporting queries to the read replica, the ordering application running on the
primary DB instance can continue to process orders without timeouts due to the long-running reporting queries.

Option B is not a good solution because distributing the ordering application to the primary DB instance and the read replica does not
address the issue of long-running reporting queries causing timeouts during order processing.
upvoted 1 times
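As a rough illustration of option A, the sketch below creates a read replica with boto3; the reporting tool would then point at the replica's endpoint. The instance identifiers and instance class are assumptions, not values from the question.

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the primary MySQL instance; reporting queries
    # are then directed at the replica's endpoint instead of the primary.
    reply = rds.create_db_instance_read_replica(
        DBInstanceIdentifier="orders-db-reporting",   # assumed replica name
        SourceDBInstanceIdentifier="orders-db",       # assumed primary name
        DBInstanceClass="db.r6g.large",               # assumed size
    )
    print(reply["DBInstance"]["DBInstanceStatus"])    # "creating" until available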

  jjlin526 5 months, 2 weeks ago


Please DM contributor access: [email protected]
upvoted 2 times

  ammyboy 5 months ago


bro I need contributor access please
upvoted 1 times

  Hung23 5 months, 2 weeks ago


Selected Answer: A
Answer: A
upvoted 1 times

  k33 6 months, 1 week ago


Selected Answer: A
Answer : A
upvoted 1 times

  PUCKER 6 months, 1 week ago


Selected Answer: A
Wow, what a kick this gives! It really gives a great buzz!
upvoted 1 times

  idriskameni 6 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: A
We can't distribute write load to a read replica.
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: A
Option A is right answer
upvoted 1 times
Question #192 Topic 1

A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new
documents each day. The hospital’s data team will scan the documents and will upload the documents to the AWS Cloud.

A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an
application can run SQL queries on the data. The solution must maximize scalability and operational efficiency.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Write the document information to an Amazon EC2 instance that runs a MySQL database.

B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.

C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the
medical information.

D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw
text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.

E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text.
Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.

Correct Answer: CD

Community vote distribution


BE (100%)

  KADSM Highly Voted  10 months, 1 week ago


B and E are correct. Textract extracts text from files. Rekognition can also be used for text detection, but in option D it is followed by
Transcribe, and Transcribe is for speech-to-text. So option D is not valid.
upvoted 8 times

  vijaykamal Most Recent  4 days, 2 hours ago


Answer - BE
Option D mentions using Amazon Rekognition and Amazon Transcribe Medical, which are primarily designed for image and audio
analysis, respectively. While they can be part of a document processing pipeline, Amazon Textract and Amazon Comprehend Medical are
more suitable for extracting structured information from documents, making option E a better choice.
upvoted 1 times

  TariqKipkemei 2 weeks, 3 days ago


Selected Answer: BE
Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.
Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw
text. Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: BE
B and E are the correct answers.

B is correct because storing the scanned documents in Amazon S3 provides highly scalable and durable storage. Amazon Athena allows
running SQL queries directly against the data in S3 without needing to load the data into a database.

E is correct because using Lambda functions triggered by uploads provides a serverless approach to automatically process each
document. Amazon Textract and Comprehend Medical can extract text and medical information without needing to manage servers.
upvoted 2 times
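A minimal sketch of the Lambda described in option E, assuming the function is triggered by S3 upload notifications and the scans are single-page images (multi-page PDFs would need Textract's asynchronous APIs); bucket and key names come from the event.

    import boto3

    textract = boto3.client("textract")
    medical = boto3.client("comprehendmedical")

    def handler(event, context):
        # S3 put notification: locate the uploaded scan.
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]

        # Convert the scanned document to raw text.
        blocks = textract.detect_document_text(
            Document={"S3Object": {"Bucket": bucket, "Name": key}}
        )["Blocks"]
        text = " ".join(b["Text"] for b in blocks if b["BlockType"] == "LINE")

        # Extract medical entities (conditions, medications, and so on).
        entities = medical.detect_entities_v2(Text=text)["Entities"]
        return {"document": key, "entities_found": len(entities)}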

  james2033 2 months, 2 weeks ago


Selected Answer: BE
Amazon Comprehend Medical extracts medical information from text
(https://aws.amazon.com/comprehend/medical/), while Amazon Transcribe Medical is for speech audio, so remove D and keep E.

A (EC2) adds no value, so remove A.

B uses Amazon S3 with Athena for querying, so keep B.

Conclusion: the combination of B and E is the correct answer.
upvoted 2 times
  MNotABot 2 months, 3 weeks ago
A and C are wrong as they involve EC2. One of D or E must be correct, which makes B the other correct answer. E is the obvious choice if you have read the AWS FAQs.
upvoted 1 times

  animefan1 2 months, 4 weeks ago


Selected Answer: BE
Textract to extract the content and Athena to run sql queries on S3 data
upvoted 1 times

  sickcow 3 months ago


Selected Answer: BE
From a DE/ML perspective Lambda + Textract + S3 + Athena is the best way to go
upvoted 1 times

  rauldevilla 3 months ago


Selected Answer: BE
Transcribe is used for speech-to-text, so option D is not valid.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: BE
B is correct because it suggests writing the document information to an Amazon S3 bucket, which provides scalable and durable object
storage. Using Amazon Athena, the data can be queried using SQL, enabling efficient analysis.

E is correct because it involves creating an AWS Lambda function triggered by new document uploads. Amazon Textract is used to convert
the documents to raw text, and Amazon Comprehend Medical extracts relevant medical information from the text.

A is incorrect because writing the document information to an Amazon EC2 instance with a MySQL database is not a scalable or efficient
solution for analysis.

C is incorrect because creating an Auto Scaling group of Amazon EC2 instances for processing scanned files and extracting information
would introduce unnecessary complexity and management overhead.

D is incorrect because Amazon Rekognition targets images and videos and Amazon Transcribe Medical targets speech audio, so neither is the
right tool for extracting medical information from scanned text documents.
upvoted 2 times

  AlankarJ 3 months, 3 weeks ago


It states in the question that the written documents are scanned. They are converted into images after being scanned. Rekognition would
be best to analyse images.
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: BE
Options B & E are correct answers.
upvoted 1 times

  antropaws 4 months, 2 weeks ago


Selected Answer: BE
Why CD are marked as correct??
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: BE
operational efficiency = Serverless
S3 is serverless
upvoted 1 times

  k33 6 months, 1 week ago


Selected Answer: BE
Answer : BE
upvoted 1 times

  idriskameni 6 months, 2 weeks ago


Selected Answer: BE
B and E are correct
upvoted 1 times

  aakashkumar1999 8 months ago


Selected Answer: BE
Lambda, Textract and S3 Athena perfect combination
upvoted 2 times
Question #193 Topic 1

A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases.
The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while
ensuring high availability.

What should the solutions architect do to meet this requirement?

A. Add Amazon RDS read replicas.

B. Use Amazon ElastiCache for Redis.

C. Use Amazon Route 53 DNS caching

D. Use Amazon ElastiCache for Memcached.

Correct Answer: A

Community vote distribution


B (50%) A (50%)

  leonnnn Highly Voted  10 months, 1 week ago


Selected Answer: B
Use ElastiCache to reduce reading and choose redis to ensure high availability.
upvoted 26 times

  JoeGuan 1 month, 2 weeks ago


Caching Frequently Accessed Data: ElastiCache allows you to store frequently accessed or computationally expensive data in-memory
within the cache nodes. This means that when an application requests data, ElastiCache can provide the data directly from the cache
without having to query the RDS database. This reduces the number of reads on the RDS database because the data is retrieved from
the faster in-memory cache.
upvoted 1 times

  Lalo 7 months, 2 weeks ago


Where is the high availability when the database fails and the cached data expires?
The answer is A.
upvoted 16 times

  Mia2009687 3 months ago


They run multiple databases
upvoted 1 times

  mandragon 4 months ago


Elasticache for Redis ensures high availability by using read replicas and Multi AZ with failover. It is also faster since it uses cache.

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times

  ruqui 4 months, 1 week ago


A can't be an answer because the requirement is "reduce the number of database reads".
upvoted 4 times
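The cache-aside pattern that the pro-B comments above describe looks roughly like this; the endpoints, credentials, and the 5-minute TTL are assumptions, and the redis and pymysql client libraries are assumed to be installed.

    import json
    import redis    # redis-py client
    import pymysql  # MySQL driver for the RDS database

    cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)  # assumed endpoint
    db = pymysql.connect(host="orders.xxxxxx.us-east-1.rds.amazonaws.com",           # assumed endpoint
                         user="app", password="secret", database="orders")

    def get_order(order_id: int):
        key = f"order:{order_id}"
        cached = cache.get(key)
        if cached:                                            # cache hit: no read reaches RDS
            return json.loads(cached)
        with db.cursor(pymysql.cursors.DictCursor) as cur:    # cache miss: read once from RDS
            cur.execute("SELECT * FROM orders WHERE id = %s", (order_id,))
            row = cur.fetchone()
        cache.setex(key, 300, json.dumps(row, default=str))   # keep for 5 minutes
        return row

A read replica (option A) moves reads onto another instance, while a cache like this avoids repeating reads of the same data altogether.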

  channn Highly Voted  5 months, 3 weeks ago


Selected Answer: A
A vs B:
A: reduces the number of reads on the main database and provides high availability.
B: only reduces the number of DB reads.
So A wins.
upvoted 13 times

  mani37k Most Recent  2 days, 11 hours ago


Selected Answer: A
A. Add Amazon RDS read replicas.

Adding Amazon RDS read replicas is a commonly used strategy to offload read traffic from the primary database, thereby reducing the
number of database reads. Read replicas provide high availability and can distribute read queries across multiple instances, improving
overall read performance.

While options B and D suggest using Amazon ElastiCache for Redis or Memcached, these caching solutions are more focused on
improving read performance by caching frequently accessed data, but they do not inherently reduce the number of reads on the RDS
database. They can complement the solution by serving cached data, but they are not a direct way to reduce the reads on the database.
upvoted 1 times
  Modulopi 2 days, 13 hours ago
A for Availability
upvoted 1 times

  axelrodb 2 weeks ago


Selected Answer: B
B is the correct answer since the requirement is to reduce reads, which can be achieved with ElastiCache. Adding an RDS read replica only
distributes the read requests.

With ElastiCache, cache hits will occur, achieving the stated goal.
upvoted 1 times

  gouranga45 2 weeks, 1 day ago


Selected Answer: A
A satisfies the given requirements
upvoted 1 times

  TariqKipkemei 2 weeks, 3 days ago


Selected Answer: A
RDS read replicas was designed specifically to handle this kind of scenario.
upvoted 1 times

  kambarami 2 weeks, 4 days ago


Answer is B
Use ElastiCache to reduce the number of reads.
upvoted 1 times

  Valder21 3 weeks, 6 days ago


Selected Answer: A
if it could be B, why not D?
upvoted 1 times

  ssa03 1 month ago


Selected Answer: B
reduce reading
upvoted 1 times

  skaikobad 1 month ago


Selected Answer: B
I think B is correct.
Because it reduces read operations and also provides high availability.
upvoted 1 times

  chen0305_099 1 month, 1 week ago


Selected Answer: B
Use ElastiCache to reduce reads, and choose Redis to guarantee high availability.
upvoted 1 times

  mtmayer 1 month, 1 week ago


Selected Answer: B
Reduce the number of reads ......
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B is the correct answer.

Using ElastiCache for Redis allows caching the data from the RDS databases. This reduces the number of reads required from the
databases by serving repeated reads from the Redis cache instead.

A is incorrect because RDS read replicas only help scale reads and do not reduce the overall reads from the primary database.
upvoted 2 times

  Aelodus 1 month, 2 weeks ago


Selected Answer: B
To reduce the number of database reads in general, a cache should be used. So that leaves B and D, since we want high availability, Redis
should be used so option B.
upvoted 1 times
  yhonatan2288 1 month, 2 weeks ago
Selected Answer: A
Adding read replicas to Amazon RDS is an effective strategy to reduce the load on the databases and improve performance. Read replicas
allow reads to be distributed across the replicas, relieving the load on the primary database instance. This can also improve latency for
read queries. In addition, read replicas contribute to high availability: if the primary instance fails, one of the read replicas can be
promoted to take over the primary role.
upvoted 1 times

  n43u435b543ht2b 1 month, 3 weeks ago


Selected Answer: B
Creating a read replica DOES NOT reduce the number of reads that are computed on the database, which is CLEARLY listed the goal of this
solution. However, using Elasticache Redis caches read queries, which means they do not have to be executed on the database or any
replicas.
upvoted 1 times
Question #194 Topic 1

A company needs to run a critical application on AWS. The company needs to use Amazon EC2 for the application’s database. The database must
be highly available and must fail over automatically if a disruptive event occurs.

Which solution will meet these requirements?

A. Launch two EC2 instances, each in a different Availability Zone in the same AWS Region. Install the database on both EC2 instances.
Configure the EC2 instances as a cluster. Set up database replication.

B. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use AWS CloudFormation to automate provisioning of the EC2 instance if a disruptive event occurs.

C. Launch two EC2 instances, each in a different AWS Region. Install the database on both EC2 instances. Set up database replication. Fail
over the database to a second Region.

D. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use EC2 automatic recovery to recover the instance if a disruptive event occurs.

Correct Answer: C

Community vote distribution


A (51%) C (49%)

  Gil80 Highly Voted  10 months ago


Selected Answer: A
Changing my vote to A. After reviewing a Udemy course of SAA-C03, it seems that A (multi-AZ and Clusters) is sufficient for HA.
upvoted 22 times

  berks 9 months, 1 week ago


Which lecture number?
upvoted 4 times

  Gil80 Highly Voted  10 months ago


Selected Answer: C
The question states that it is a critical app and it has to be HA. A could be the answer, but both instances sit in the same Region, so if the
entire Region fails, it doesn't satisfy the HA requirement.

However, the likelihood of two different Regions failing at the same time is practically zero. Therefore, to me it seems that C is the better
option for the HA requirement.

In addition, C, like A, states that the database is installed on EC2 instances.
upvoted 19 times

  aussiehoa 4 months, 3 weeks ago


Design for region failure? may as well design for AWS failure and put replica in GCP and Azure :v
upvoted 5 times

  Kp88 2 months, 1 week ago


And on-prem in multiple DCs and one in mars too :D
upvoted 5 times

  Steve_4542636 7 months ago


The question doesn't ask which option is the most HA. It asks what meets the requirements.
upvoted 3 times

  javitech83 9 months, 4 weeks ago


But for C you need communication between the two VPCs, which increases complexity. A should be enough for HA.
upvoted 4 times

  hieulam Most Recent  1 week, 4 days ago


Selected Answer: C
"If a disruptive event occurs" ==> assuming the event hits the whole Region, not just an Availability Zone, C is correct.
upvoted 1 times

  TariqKipkemei 2 weeks, 3 days ago


Selected Answer: A
Technically, both options A and C provide HA. But I would go with A because it is less complex and has lower replication costs.
upvoted 1 times
  kanha1996 3 weeks, 2 days ago
A is the answer.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
Launch two EC2 instances, each in a different AWS Region. Install the database on both EC2 instances. Set up database replication. Fail
over the database to a second Region.

The key reasons are:

Cross-region redundancy provides the highest level of availability and disaster recovery. If one entire region goes down, the database can
fail over across regions.
Database replication ensures data is consistent between regions at all times.
Manual failover gives the flexibility to fail over on-demand in case of regional issues.
upvoted 1 times

  n43u435b543ht2b 1 month, 3 weeks ago


Selected Answer: C
A "critical application" should be protected against a regional outage. This sounds like overkill but is absolutely commonplace and used
frequently for truly critical applications.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: C
Question keyword "disruptive event", "highly available", "failover automatically".

"Different Region" is least condition for against "disruptive event", not "different Availability Zone".

Typo in the question: "failover automatically", not "fail over automatically".


upvoted 1 times

  vini15 2 months, 2 weeks ago


Should be C.
A cluster placement group shares the same rack (hardware) in EC2 and is a logical grouping of instances within a single Availability Zone.
Hence it does not provide HA.
upvoted 1 times

  sosda 2 months, 3 weeks ago


Selected Answer: C
Cluster = single AZ = not HA
upvoted 1 times

  Mia2009687 2 months, 3 weeks ago


Selected Answer: C
Regarding A ("Configure the EC2 instances as a cluster. Set up database replication"):
I don't understand why we need an EC2 cluster when we only require a database cluster.
upvoted 2 times

  sbnpj 3 months ago


Selected Answer: C
Option C addresses this requirement by launching two EC2 instances in different AWS Regions, installing the database on both instances,
setting up database replication, and enabling failover to the second region. In this configuration, if one region becomes unavailable, the
application can seamlessly fail over to the database instance in the second region, ensuring high availability and continuity of operations.
upvoted 1 times

  Ezekiel2517 3 months ago


For those who say A is correct, explain how to set up an EC2 cluster in a multi-AZ environment.

Spoiler alert: it is not technically doable. Ergo, the only viable remaining option is C, as full of flaws as it may be... I choose the lesser evil.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Selected Answer: A
By launching two EC2 instances in different Availability Zones and configuring them as a cluster with database replication, the database
can achieve high availability and automatic failover. If one instance or Availability Zone becomes unavailable, the other instance can
continue serving the application without interruption.

B. Launching a single EC2 instance and using an AMI for backup and provisioning automation does not provide automatic failover or high
availability.

C. Launching EC2 instances in different AWS Regions and setting up database replication is a multi-Region setup, which can provide
disaster recovery capabilities but does not provide automatic failover within a single Region.
D. Using EC2 automatic recovery can recover the instance if it fails due to hardware issues, but it does not provide automatic failover or
high availability across multiple instances or Availability Zones.
upvoted 2 times
  antropaws 4 months, 2 weeks ago
Selected Answer: C
Cluster EC2s cannot span between AZs, which invalidates option A.
upvoted 10 times

  AbdulMalik_Y 4 months, 2 weeks ago


that's what i thought !!!
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: A
A is correct. Meets the requirements
upvoted 2 times

  markw92 3 months, 2 weeks ago


You can't cluster EC2 instances when they are in separate AZs. This invalidates answer A. You have to read each word carefully.
upvoted 4 times

  Dinya_jui 5 months ago


The answer is C, since a multi-Region infrastructure provides more HA than multi-AZ.
upvoted 2 times
Question #195 Topic 1

A company’s order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders
in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that
can process orders automatically if a system outage occurs.

What should a solutions architect do to meet these requirements?

A. Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon
Elastic Container Service (Amazon ECS) task.

B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages
to the ALB endpoint.

C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.

D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS
topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by
using AWS Systems Manager Run Command.

Correct Answer: D

Community vote distribution


C (93%) 3%

  vijaykamal 4 days, 2 hours ago


Selected Answer: C
Option D suggests using Amazon SNS and AWS Lambda, which can be part of an event-driven architecture but may not be the best fit for
ensuring the automatic processing of orders during system outages. It relies on an additional AWS Systems Manager Run Command step,
which adds complexity and may not be as reliable as using SQS for queuing messages.
upvoted 1 times

  TariqKipkemei 2 weeks, 3 days ago


Selected Answer: C
Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
The key reasons are:

Using an Auto Scaling group ensures the EC2 instances that process orders are highly available and scalable.
With SQS, the orders are decoupled from the instances that process them via asynchronous queuing.
If instances fail or go down, the orders remain in the queue until new instances can pick them up. This provides automated resilience.
Any failed processing can retry by resending messages back to the queue
upvoted 4 times
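A bare-bones sketch of the decoupling in option C, using boto3; the queue URL is a placeholder and process() stands in for the existing order-processing code.

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

    # Order system side: enqueue the order instead of calling an instance directly.
    def submit_order(order_body: str) -> None:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=order_body)

    # EC2 worker side: long-poll, process, and delete only on success, so an
    # unprocessed order stays in the queue through an instance outage.
    def worker_loop(process) -> None:
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                process(msg["Body"])
                sqs.delete_message(QueueUrl=QUEUE_URL,
                                   ReceiptHandle=msg["ReceiptHandle"])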

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: C
C is the correct answer.

Using an Auto Scaling group with EC2 instances behind a load balancer provides high availability and scalability.

Sending the orders to an SQS queue decouples the ordering system from the processing system. The EC2 instances can poll the queue for
new orders and process them even during an outage. Any failed orders will go back to the queue for reprocessing.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: C
By moving the EC2 into an ASG and configuring them to consume messages from an SQS, the system can decouple the order processing
from the order system itself. This allows the system to handle failures and automatically process orders even if the order system or EC2
experience outages.

A. Using an ASG with an EventBridge rule targeting an ECS task does not provide the necessary decoupling and message queueing for
automatic order processing during outages.
B. Moving the EC2 instances into an ASG behind an
ALB does not address the need for message queuing and automatic processing during outages.

D. Using SNS and Lambda can provide notifications and orchestration capabilities, but it does not provide the necessary message
queueing and consumption for automatic order processing during outages. Additionally, using Systems Manager Run Command to send
commands for order processing adds complexity and does not provide the desired level of automation.
upvoted 2 times
  pisica134 3 months, 1 week ago
D is so unnecessary .... this confuses people
upvoted 1 times

  cookieMr 3 months, 1 week ago


Thank the Almighty for the voting system! Answers provided by the site (and not by the community) are 20% wrong.
upvoted 3 times

  markw92 3 months, 2 weeks ago


Answer D is so complex and unnecessary. Why isn't the moderator providing an explanation of the answers when there are heavy conflicts?
These kinds of answers put your knowledge in question, which is not good going into the exam.
upvoted 1 times

  gx2222 5 months, 4 weeks ago


Selected Answer: C
To meet the company's requirements of having a resilient solution that can process orders automatically in case of a system outage, the
solutions architect needs to implement a fault-tolerant architecture. Based on the given scenario, a potential solution is to move the EC2
instances into an Auto Scaling group and configure the order system to send messages to an Amazon Simple Queue Service (Amazon
SQS) queue. The EC2 instances can then consume messages from the queue.
upvoted 2 times

  k33 6 months, 1 week ago


Selected Answer: C
Answer : C
upvoted 1 times

  nickolaj 7 months, 2 weeks ago


Selected Answer: C
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.

To meet the requirements of the company, a solutions architect should ensure that the system is resilient and can process orders
automatically in the event of a system outage. To achieve this, moving the EC2 instances into an Auto Scaling group is a good first step.
This will enable the system to automatically add or remove instances based on demand and availability.
upvoted 2 times

  nickolaj 7 months, 2 weeks ago


However, it's also necessary to ensure that orders are not lost if a system outage occurs. To achieve this, the order system can be
configured to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. SQS is a highly available and durable
messaging service that can help ensure that messages are not lost if the system fails.

Finally, the EC2 instances can be configured to consume messages from the queue, process the orders and then store them in the
database on Amazon RDS. This approach ensures that orders are not lost and can be processed automatically if a system outage
occurs. Therefore, option C is the correct answer.
upvoted 2 times

  nickolaj 7 months, 2 weeks ago


Option A is incorrect because it suggests creating an Amazon EventBridge rule to target an Amazon Elastic Container Service (ECS)
task. While this may be a valid solution in some cases, it is not necessary in this scenario.

Option B is incorrect because it suggests moving the EC2 instances into an Auto Scaling group behind an Application Load Balancer
(ALB) and updating the order system to send messages to the ALB endpoint. While this approach can provide resilience and
scalability, it does not address the issue of order processing and the need to ensure that orders are not lost if a system outage
occurs.

Option D is incorrect because it suggests using Amazon Simple Notification Service (SNS) to send messages to an AWS Lambda
function, which will then send a command to the EC2 instances to process the messages by using AWS Systems Manager Run
Command. While this approach may work, it is more complex than necessary and does not take advantage of the durability and
availability of SQS.
upvoted 2 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: C
My question is: can orders be sent directly into an SQS queue? What about the protocol for managing the messages from the queue? Can EC2
instances be programmed to process them the way Lambda would?
upvoted 1 times

  berks 9 months, 1 week ago


Selected Answer: D
I choose D
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
To meet the requirements of the company, a solution should be implemented that can automatically process orders if a system outage
occurs. Option C meets these requirements by using an Auto Scaling group and Amazon Simple Queue Service (SQS) to ensure that orders
can be processed even if a system outage occurs.

In this solution, the EC2 instances are placed in an Auto Scaling group, which ensures that the number of instances can be automatically
scaled up or down based on demand. The ordering system is configured to send messages to an SQS queue, which acts as a buffer and
stores the messages until they can be processed by the EC2 instances. The EC2 instances are configured to consume messages from the
queue and process them. If a system outage occurs, the messages in the queue will remain available and can be processed once the
system is restored.
upvoted 2 times

  techhb 9 months, 1 week ago


Selected Answer: A
c is right
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
upvoted 1 times

  romko 9 months, 2 weeks ago


Selected Answer: C
C: it decouples applications and functionality, and gives the ability to reprocess a message if it fails due to a networking issue or an overloaded downstream system.
upvoted 2 times

  Shasha1 9 months, 3 weeks ago


C
Configuring the EC2 instances to consume messages from the SQS queue will ensure that the instances can process orders automatically,
even if a system outage occurs.
upvoted 1 times
Question #196 Topic 1

A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB
table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a
solution that minimizes cost and development effort.

Which solution meets these requirements?

A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the
original stack.

B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon
DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items
that have a timestamp that is older than 30 days.

C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda
function to delete items in the table that are older than 30 days.

D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the
table. Configure DynamoDB to use the attribute as the TTL attribute.

Correct Answer: D

Community vote distribution


D (91%) 9%

  Gil80 Highly Voted  10 months ago


Selected Answer: D
changing my answer to D after researching a bit.

The DynamoDB TTL feature allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the
date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
upvoted 26 times
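A small boto3 sketch of what option D amounts to, assuming a table named app-entries with partition key pk (both are placeholder names): enabling TTL is a one-time call, and every write then stamps the item with an epoch-seconds expiry 30 days out.

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")
    TABLE = "app-entries"  # assumed table name

    # One-time setup: tell DynamoDB which attribute holds the expiry timestamp.
    dynamodb.update_time_to_live(
        TableName=TABLE,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # On each write, stamp the item with "now + 30 days" in epoch seconds;
    # DynamoDB deletes it after that time without consuming write capacity.
    expires_at = int(time.time()) + 30 * 24 * 60 * 60
    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "pk": {"S": "entry-123"},
            "payload": {"S": "example data"},
            "expires_at": {"N": str(expires_at)},
        },
    )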

  TariqKipkemei Most Recent  2 weeks ago


Selected Answer: D
DynamoDB Time to Live was designed to handle this kind of requirement where an item is no longer needed. TTL is provided at no extra
cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the
table. Configure DynamoDB to use the attribute as the TTL attribute.

The main reasons are:

Using DynamoDB's built-in TTL functionality is the most direct way to handle data expiration.
It avoids the complexity of triggers, streams, and lambda functions in option C.
Modifying the application code to add the TTL attribute is relatively simple and minimizes operational overhead
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
By adding a TTL attribute to the DynamoDB table and setting it to the current timestamp plus 30 days, DynamoDB will automatically
delete the items that are older than 30 days. This solution eliminates the need for manual deletion or additional infrastructure
components.

A. Redeploying the CloudFormation stack every 30 days and deleting the original stack introduces unnecessary complexity and
operational overhead.

B. Using an EC2 instance with a monitoring application and a script to delete items older than 30 days adds additional infrastructure and
maintenance efforts.

C. Configuring DynamoDB Streams to invoke a Lambda function to delete items older than 30 days adds complexity and requires
additional development and operational effort compared to using the built-in TTL feature of DynamoDB.
upvoted 2 times

  pisica134 3 months, 1 week ago


D: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times

  Abrar2022 4 months ago


Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed.
upvoted 3 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: D
C is incorrect because deleting the old data can take more than 15 minutes, which exceeds the Lambda timeout, so Lambda won't work.
upvoted 1 times

  Konb 4 months, 3 weeks ago


Selected Answer: D
Clear case for TTL - every object gets deleted after a certain period of time
upvoted 1 times

  rushi0611 4 months, 4 weeks ago


Selected Answer: D
Use DynamoDB TTL feature to achieve this..
upvoted 1 times

  jdr75 5 months, 3 weeks ago


Selected Answer: D
C is absurd. DynamoDB tables usually see high IOPS (heavy read/write operations), so executing a Lambda function each time you insert
an item will not be cost-effective. It's much better to create the attribute the question proposes, or to manage the deletes with a SQL-style
DELETE statement:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.DeleteData.html
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: D
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly
after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your
workload’s needs.

TTL is useful if you store items that lose relevance after a specific time.
upvoted 1 times

  DavidNamy 8 months, 3 weeks ago


Selected Answer: D
D: This solution is more efficient and cost-effective than alternatives that would require additional resources and maintenance.
upvoted 1 times

  anonymouscloudguy 9 months, 1 week ago


Selected Answer: D
D DyanmoDB TTL will expire the items

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
To minimize cost and development effort, a solution that requires minimal changes to the existing application and infrastructure would be
the most appropriate. Option D meets these requirements by using DynamoDB's Time-To-Live (TTL) feature, which allows you to specify an
attribute on each item in a table that has a timestamp indicating when the item should expire.

In this solution, the application is extended to add an attribute that has a value of the current timestamp plus 30 days to each new item
that is created in the table. DynamoDB is then configured to use this attribute as the TTL attribute, which causes items to be automatically
deleted from the table when their TTL value is reached. This solution requires minimal changes to the existing application and
infrastructure and does not require any additional resources or a complex setup.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A involves using AWS CloudFormation to redeploy the solution every 30 days, but this would require significant development
effort and could cause downtime for the application.

Option B involves using an EC2 instance and a monitoring application to delete items that are older than 30 days, but this requires
additional infrastructure and maintenance effort.

Option C involves using DynamoDB Streams and a Lambda function to delete items that are older than 30 days, but this requires
additional infrastructure and maintenance effort.
upvoted 1 times
  techhb 9 months, 1 week ago
Selected Answer: D
TTL does the trick
upvoted 1 times

  kvenikoduru 9 months, 1 week ago


Selected Answer: D
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly
after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
- check this link https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times

  prethesh 9 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/about-aws/whats-new/2017/02/amazon-dynamodb-now-supports-automatic-item-expiration-with-time-to-live-
ttl/
upvoted 1 times
Question #197 Topic 1

A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle
Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the
application. The AWS application environment should be highly available.

Which combination of actions should the company take to meet these requirements? (Choose two.)

A. Refactor the application as serverless with AWS Lambda functions running .NET Core.

B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.

C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).

D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.

E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Correct Answer: BD

Community vote distribution


BE (97%)

  DavidNamy Highly Voted  8 months, 3 weeks ago


Selected Answer: BE
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ
deployment.

Rehosting the application in Elastic Beanstalk with the .NET platform can minimize development changes. Multi-AZ deployment of Elastic
Beanstalk will increase the availability of application, so it meets the requirement of high availability.

Using AWS Database Migration Service (DMS) to migrate the database to Amazon RDS Oracle will ensure compatibility, so the application
can continue to use the same database technology, and the development team can use their existing skills. It also migrates to a managed
service, which will handle the availability, so the team do not have to worry about it. Multi-AZ deployment will increase the availability of
the database.
upvoted 9 times
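For a sense of what option E involves, here is a hedged boto3 sketch of a DMS task; the endpoint and replication-instance ARNs are placeholders for resources that must already exist, and the task has to reach the ready state before it can be started.

    import boto3

    dms = boto3.client("dms")

    task = dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-onprem-to-rds-oracle",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",   # placeholder
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",   # placeholder
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",    # placeholder
        MigrationType="full-load-and-cdc",  # initial copy plus ongoing changes
        TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                      '"object-locator":{"schema-name":"%","table-name":"%"},'
                      '"rule-action":"include"}]}',
    )

    # Once the task reports "ready", start the migration.
    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )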

  vijaykamal Most Recent  4 days, 1 hour ago


Selected Answer: BE
DynamoDB is NoSQL - D is out
Replatforming requires considerable overhead - C is out
A Lambda function is for running code for short durations only - A is out
Answer - BE
upvoted 1 times

  TariqKipkemei 2 weeks ago


Selected Answer: BE
Minimize development changes + High availability = AWS Elastic Beanstalk and Oracle on Amazon RDS in a Multi-AZ deployment
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
B) Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.

E) Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ
deployment.

The reasons are:

° Rehosting in Elastic Beanstalk allows lifting and shifting the .NET application with minimal code changes. Multi-AZ deployment provides
high availability.
° Using DMS to migrate the Oracle data to RDS Oracle in Multi-AZ deployment minimizes changes for the database while achieving high
availability.
° Together this "lift and shift" approach minimizes refactoring needs while providing HA on AWS.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: BE
B. This allows the company to migrate the application to AWS without significant code changes while leveraging the scalability and high
availability provided by Elastic Beanstalk's Multi-AZ deployment.
E. This enables the company to migrate the Oracle database to RDS while maintaining compatibility with the existing application and
leveraging the Multi-AZ deployment for high availability.

A. would require significant development changes and may not provide the same level of compatibility as rehosting or replatforming
options.

C. would still require changes to the application and the underlying infrastructure, whereas rehosting with Elastic Beanstalk minimizes the need for
modification.

D. would likely require significant changes to the application code, as DynamoDB is a NoSQL database with a different data model
compared to Oracle.
upvoted 3 times
  markw92 3 months, 2 weeks ago
The answer is BE. No idea why D was chosen. That requires development work, and the question clearly states to minimize development changes;
changing the database from Oracle to DynamoDB is a LOT of development.
upvoted 1 times

  Bmarodi 4 months, 1 week ago


Selected Answer: BE
B + E are the answers that fulfil the requirements.
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: BE
B and E
upvoted 1 times

  Nikhilcy 5 months ago


why not C?
upvoted 2 times

  AlankarJ 3 months, 3 weeks ago


It runs on a Windows server; shifting the whole thing to a Linux-based EC2 AMI would be extra work and makes no sense.
upvoted 1 times

  k33 6 months, 1 week ago


Selected Answer: BE
Answer : BE
upvoted 1 times

  waiyiu9981 9 months ago


Why is A wrong?
upvoted 1 times

  gustavtd 9 months ago


Because that needs some development,
upvoted 2 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: BE
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ
deployment.

To minimize development changes while moving the application to AWS and to ensure a high level of availability, the company can rehost
the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment. This will allow the application to run in a highly
available environment without requiring any changes to the application code.

The company can also use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Oracle on Amazon RDS in a
Multi-AZ deployment. This will allow the company to maintain the existing database platform while still achieving a high level of
availability.
upvoted 3 times

  techhb 9 months, 1 week ago


Selected Answer: BE
B&E Option ,because D is for No-Sql
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


And requires additional development effort
upvoted 1 times

  career360guru 9 months, 2 weeks ago


B&E Option
upvoted 1 times
  dcyberguy 10 months ago
B - According to the AWS documentation, the simplest way to migrate .NET applications to AWS is to rehost the applications using either
AWS Elastic Beanstalk or Amazon EC2.
E - RDS with Oracle is a no-brainer.
upvoted 3 times

  [Removed] 10 months ago


Selected Answer: BE
same as everyone else
upvoted 3 times

  KADSM 10 months, 1 week ago


B and E should be correct. The question says "minimize development changes", so you should stick with the same Oracle DB.
upvoted 1 times
Question #198 Topic 1

A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database
for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are
possible at this time. The company needs a solution that minimizes operational overhead.

Which solution meets these requirements?

A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.

B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage

C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data
storage.

D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB
compatibility) for data storage.

Correct Answer: D

Community vote distribution


D (100%)

  Marge_Simpson Highly Voted  9 months, 3 weeks ago


Selected Answer: D
If you see MongoDB, just go ahead and look for the answer that says DocumentDB.
upvoted 16 times

  Guru4Cloud Most Recent  1 month, 2 weeks ago


Selected Answer: D
Option D is the correct solution that meets all the requirements:
º Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB
compatibility) for data storage.
The key reasons are:
º EKS allows running the Kubernetes environment on AWS without changes.
º Using Fargate removes the need to provision and manage EC2 instances.
º DocumentDB provides MongoDB compatibility so the data layer is unchanged.
upvoted 2 times

  james2033 2 months, 2 weeks ago


Selected Answer: D
Question keyword "containerized application", "Kubernetes cluster", "no changes or deployment method changes". Choose C, not D.

But "minimizes operational overhead", choose D.


upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: D
This solution allows the company to leverage EKS to manage the K8s cluster and Fargate to handle the compute resources without
requiring manual management of EC2 worker nodes. The use of DocumentDB provides a fully managed MongoDB-compatible database
service in AWS.

A. would require managing and scaling the EC2 instances manually, which increases operational overhead.

B. would require significant changes to the application code as DynamoDB is a NoSQL database with a different data model compared to
MongoDB.

C. would also require code changes to adapt to DynamoDB's different data model, and managing EC2 worker nodes increases operational
overhead.
upvoted 2 times

  Bmarodi 4 months, 1 week ago


Selected Answer: D
The solution meets these requirements is option D.
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: D
minimizes operational overhead = Serverless (Fargate)
MongoDB = DocumentDB
upvoted 1 times
  Buruguduystunstugudunstuy 9 months, 1 week ago
Selected Answer: D
To minimize operational overhead and avoid making any code or deployment method changes, the company can use Amazon Elastic
Kubernetes Service (EKS) with AWS Fargate for computing and Amazon DocumentDB (with MongoDB compatibility) for data storage. This
solution allows the company to run the containerized application on EKS without having to manage the underlying infrastructure or make
any changes to the application code.

AWS Fargate is a fully-managed container execution environment that allows you to run containerized applications without the need to
manage the underlying EC2 instances.

Amazon DocumentDB is a fully-managed document database service that supports MongoDB workloads, allowing the company to use the
same database platform as in their on-premises environment without having to make any code changes.
upvoted 4 times
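The point of DocumentDB's MongoDB compatibility in option D is that the application keeps using its MongoDB driver. Below is a sketch with pymongo, where the cluster endpoint and credentials are placeholders and the DocumentDB CA bundle (global-bundle.pem) is assumed to be packaged with the application.

    from pymongo import MongoClient

    # Same driver and calls as against self-managed MongoDB; only the
    # connection string changes (TLS plus the DocumentDB CA bundle).
    client = MongoClient(
        "mongodb://appuser:[email protected]:27017/"
        "?tls=true&tlsCAFile=global-bundle.pem"
        "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
    )

    db = client["orders"]
    db.items.insert_one({"sku": "abc-123", "qty": 2})
    print(db.items.find_one({"sku": "abc-123"}))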

  techhb 9 months, 1 week ago


Selected Answer: D
Reason A &B Elimnated as its Kubernates
why D read here https://containersonaws.com/introduction/ec2-or-aws-fargate/
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D
upvoted 2 times

  dcyberguy 10 months ago


DDDDDDD
upvoted 1 times

  Gabs90 10 months, 1 week ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/67897-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  leonnnn 10 months, 1 week ago


Selected Answer: D
D meets the requirements
upvoted 1 times

  Nigma 10 months, 1 week ago


Selected Answer: D
D.
EKS because of Kubernetes, so A and B are eliminated.
Not C because the application uses MongoDB (not DynamoDB), even though Fargate is more expensive than self-managed EC2 worker nodes.
upvoted 1 times
Question #199 Topic 1

A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker
recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files
must be stored for 7 years for auditing purposes.

Which solution will meet these requirements?

A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for
transcript file analysis.

B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.

C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file
analysis.

D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file
analysis.

Correct Answer: C

Community vote distribution


B (88%) 7%

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
The correct answer is B: Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.

Amazon Transcribe is a service that automatically transcribes spoken language into written text. It can handle multiple speakers and can
generate transcript files in real-time or asynchronously. These transcript files can be stored in Amazon S3 for long-term storage.

Amazon Athena is a query service that allows you to analyze data stored in Amazon S3 using SQL. You can use it to analyze the transcript
files and identify patterns in the data.

Option A is incorrect because Amazon Rekognition is a service for analyzing images and videos, not transcribing spoken language.

Option C is incorrect because Amazon Translate is a service for translating text from one language to another, not transcribing spoken
language.

Option D is incorrect because Amazon Textract is a service for extracting text and data from documents and images, not transcribing
spoken language.
upvoted 13 times

  TheAbsoluteTruth 6 months ago


What bothers me is the 7 years of storage.
upvoted 3 times

  enzomv 8 months, 1 week ago


The correct answer is C.
https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html
You can transcribe streaming media in real time or you can upload and transcribe media files. To see which languages are supported
for each type of transcription, refer to the Supported languages and language-specific features table.
upvoted 1 times

  enzomv 8 months, 1 week ago


Disregard. I meant B
upvoted 1 times

  enzomv 8 months, 1 week ago


https://aws.amazon.com/about-aws/whats-new/2022/06/amazon-transcribe-supports-automatic-language-identification-multi-
lingual-audio/
Amazon Translate is a service for multi-language identification, which identifies all languages spoken in the audio file and creates
transcript using each identified language.
upvoted 1 times

  enzomv 8 months, 1 week ago


Disregard. I meant Amazon Transcribe
upvoted 1 times

  vijaykamal Most Recent  4 days, 1 hour ago


Selected Answer: B
Amazon Rekognition is primarily designed for image and video analysis, not for transcribing audio or recognizing multiple speakers. ->
Option A and D are ruled out
Amazon Translate is used for language translation -> Option C is ruled out
upvoted 1 times

  TariqKipkemei 2 weeks ago


Selected Answer: B
Provide multiple speaker recognition and generate transcript files = Amazon Transcribe
Query the transcript files = Amazon Athena
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: B
The correct answer is B: Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
upvoted 1 times
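A minimal boto3 sketch of the Transcribe side of option B, with speaker diarization turned on; the job name, bucket URIs, language code, and speaker count are assumptions for illustration. The 7-year retention would then be handled on the storage side, for example with an S3 lifecycle or Object Lock configuration (not shown).

    import boto3

    transcribe = boto3.client("transcribe")

    # Start an asynchronous transcription job that labels each speaker
    # and writes the transcript JSON to the auditing bucket.
    transcribe.start_transcription_job(
        TranscriptionJobName="call-2024-01-01-0001",                    # assumed
        LanguageCode="en-US",                                           # assumed
        Media={"MediaFileUri": "s3://call-recordings/call-0001.wav"},   # assumed
        OutputBucketName="call-transcripts",                            # assumed
        Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
    )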

  Thornessen 2 months, 1 week ago


Selected Answer: B
Tricky or incomplete question..

B is the answer because Transcribe is the right service for processing voice calls.

But 7 years of storage is not covered (should add S3 storage)

And Athena querying is just SQL querying, it cannot help you much to recognize business patterns, for that I would think some text
analysis service like Comprehend would be needed.

Unless... We use Transcribe not only to transcribe, but also to recognize some key words, and then create a DB/S3 record with multiple
fields, e.g. if it is a telemarketing questionnaire, record answer for each question. Then SQL querying might be useful.
upvoted 1 times

  sickcow 3 months ago


Selected Answer: C
Transcribe and S3 + Athena is the way to go here.
Redshift sounds like overkill.
upvoted 2 times

  cookieMr 3 months, 1 week ago


Amazon Transcribe provides accurate transcription of audio recordings with multiple speakers, generating transcript files. These files can
be stored in Amazon S3. To analyze the transcripts and extract insights, Amazon Athena allows SQL-based querying of the stored files.

A. Amazon Rekognition is for image and video analysis, not audio transcription.

C. Amazon Translate is for language translation, not speaker recognition or transcript analysis. Amazon Redshift may not be the best
choice for storing and querying transcript files.

D. Amazon Rekognition is for image and video analysis, and Amazon Textract is for document extraction, not suitable for audio
transcription or analysis. Storing the transcript files in S3 is appropriate, but the analysis requires a different service like Amazon Athena.
upvoted 1 times
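
For illustration, here is a minimal boto3 (Python) sketch of how a transcription job with speaker recognition could be started, writing the transcript to S3 where Athena can later query it. The bucket names, job name, and media URI are placeholders, not values from the question.

import boto3

transcribe = boto3.client("transcribe")

# Start an asynchronous transcription job with speaker diarization enabled.
# All names and URIs below are hypothetical placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://example-call-recordings/call-0001.mp3"},
    OutputBucketName="example-transcripts",   # transcripts land in S3 for long-term retention
    Settings={
        "ShowSpeakerLabels": True,             # multiple speaker recognition
        "MaxSpeakerLabels": 2,
    },
)

# The JSON transcript files in s3://example-transcripts/ can then be registered in the
# Glue Data Catalog (for example with a crawler) and queried with Amazon Athena.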

  Bmarodi 4 months, 1 week ago


Selected Answer: B
the solution that meets these requirements is option B.
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: B
B is correct
upvoted 1 times

  Rahulbit34 5 months ago


Amazon Transcribe is a service that converts speech into text, so B is the answer
upvoted 1 times

  k33 6 months, 1 week ago


Selected Answer: B
Answer : B
upvoted 2 times

  enzomv 8 months, 1 week ago


Selected Answer: C
https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html
upvoted 1 times
  master1004 8 months, 3 weeks ago
The correct answer is C.
Wouldn't it be the right answer to save and analyze using Amazon Redshift, which can be used to analyze big data in a data warehouse?
upvoted 2 times

  Chirantan 9 months, 1 week ago


B

https://aws.amazon.com/transcribe/
Amazon Transcribe
Automatically convert speech to text
upvoted 1 times

  techhb 9 months, 1 week ago


Selected Answer: B
Only B
https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c03/view/7/#
Rekognition - Image and Video Analysis
Transcribe - Speech to text
Translate - Translate a text-based file from one language to another
upvoted 3 times

  kvenikoduru 9 months, 1 week ago


Selected Answer: B
Rekognition - Image and Video Analysis
Transcribe - Speech to text
Translate - Translate a text-based file from one language to another
So by logical deduction it is B
upvoted 2 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
B is the right answer. You can specify the S3 bucket with transcribe to store the data for 7 years and use Athena for Analytics later.
Transcribe also supports Multiple speaker recognition.
upvoted 3 times
Question #200 Topic 1

A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the
application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an
AWS managed solution that will control access to the REST API to reduce development efforts.

Which solution will meet these requirements with the LEAST operational overhead?

A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.

B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.

C. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email
address has proper access.

D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.

Correct Answer: A

Community vote distribution


D (96%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: D
KEYWORD: LEAST operational overhead

To control access to the REST API and reduce development efforts, the company can use an Amazon Cognito user pool authorizer in API
Gateway. This will allow Amazon Cognito to validate each request and ensure that only authenticated users can access the API. This
solution has the LEAST operational overhead, as it does not require the company to develop and maintain any additional infrastructure or
code.

Therefore, Option D is the correct answer.

Option D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
upvoted 6 times

  TariqKipkemei Most Recent  2 weeks ago


Selected Answer: D
use Amazon Cognito to authorize user requests.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: D
D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request
upvoted 1 times

  Guru4Cloud 1 month, 2 weeks ago


Selected Answer: D
Option D is the best solution with the least operational overhead:

Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.

The key reasons are:

º Cognito user pool authorizers allow seamless integration between Cognito and API Gateway for access control.
º API Gateway handles validating the access tokens from Cognito automatically without any custom code.
º This is a fully managed solution with minimal ops overhead.
upvoted 1 times

  cookieMr 3 months, 1 week ago


By configuring an Amazon Cognito user pool authorizer in API Gateway, you can leverage the built-in functionality of Amazon Cognito to
authenticate and authorize users. This eliminates the need for custom development or managing access keys. Amazon Cognito handles
user authentication, securely manages user identities, and provides seamless integration with API Gateway for controlling access to the
REST API.

A. Configuring an AWS Lambda function as an authorizer in API Gateway would require custom implementation and management of the
authorization logic.

B. Creating and assigning an API key for each user would require additional management and validation logic in an AWS Lambda function.

C. Sending the user's email address in the header and validating it with an AWS Lambda function would also require custom
implementation and management of the authorization logic.

Option D, using an Amazon Cognito user pool authorizer, provides a streamlined and managed solution for controlling access to the REST
API with minimal operational overhead.
upvoted 1 times
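
For illustration, a minimal boto3 sketch of option D: attaching a Cognito user pool authorizer to an API Gateway REST API method. The API ID, resource ID, and user pool ARN below are placeholders.

import boto3

apigw = boto3.client("apigateway")

# Hypothetical IDs; replace with the real REST API, resource, and user pool.
rest_api_id = "a1b2c3d4e5"
resource_id = "abc123"
user_pool_arn = "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"

# Create a COGNITO_USER_POOLS authorizer that validates the Authorization header.
authorizer = apigw.create_authorizer(
    restApiId=rest_api_id,
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[user_pool_arn],
    identitySource="method.request.header.Authorization",
)

# Attach the authorizer to a method so API Gateway validates tokens before
# the request reaches the DynamoDB-backed integration. No custom auth code is needed.
apigw.put_method(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)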
  Bmarodi 4 months, 1 week ago
Selected Answer: D
solution will meet these requirements with the LEAST operational overhead is option D.
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: D
LEAST operational overhead = Serverless = Cognito user pool
upvoted 1 times

  cheese929 4 months, 4 weeks ago


Selected Answer: D
D is correct.
upvoted 1 times

  k33 6 months, 1 week ago


Selected Answer: D
Answer : D
upvoted 1 times

  Hello4me 6 months, 1 week ago


D is correct
upvoted 1 times

  Mahadeva 8 months, 3 weeks ago


Selected Answer: A
There is a difference between "Grant Access" (Authentication done by Cognito user pool), and "Control Access" to APIs (Authorization
using IAM policy, custom Authorizer, Federated Identity Pool). The question very specifically asks about *Control access to REST APIs*
which is a clear case of Authorization and not Authentication. So custom Authorizer using Lambda in API Gateway is the solution.

Pls refer to this blog: https://aws.amazon.com/blogs/security/building-fine-grained-authorization-using-amazon-cognito-api-gateway-and-iam/
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


This answer looks to be entirely wrong

This article explains how to do what you claim cannot be done:


https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html

It starts "As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use
an Amazon Cognito user pool to control who can access your API in Amazon API Gateway."

This suggests that Amazon Cognito user pools CAN be used for Authorization, which you say above cannot be done.

Further, it explains how to do this...

"To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then
configure an API method to use that authorizer"

So whilst A is a valid approach, it looks like D achieves the same with "the LEAST operational overhead".
upvoted 7 times

  TungPham 6 months, 4 weeks ago


Control access to a REST API using Amazon Cognito user pools as authorizer
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
upvoted 3 times

  Mahadeva 8 months, 3 weeks ago


Option D: there is nothing called Cognito user pool authorizer. We only have custom Authorizer function through Lambda.
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


Oh yes there is :)
upvoted 2 times

  k1kavi1 9 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
upvoted 4 times

  MutiverseAgent 2 months, 1 week ago


up this
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D - As company already has all the users authentication information in Cognito
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 2 times

  mj98 10 months ago


API + Cognito integration - Answer D
upvoted 2 times

  Nigma 10 months, 1 week ago


Selected Answer: D
Answer : D
Check Gabs90 link

Use the Amazon Cognito console, CLI/SDK, or API to create a user pool—or use one that's owned by another AWS account
upvoted 1 times

  TMM369 10 months, 1 week ago


Selected Answer: D
D - https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cognito-user-pool-authorizer/
upvoted 1 times
Question #201 Topic 1

A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages
with Short Message Service (SMS) to its users. The users must be able to reply to the SMS messages. The company must store the responses for
a year for analysis.

What should a solutions architect do to meet these requirements?

A. Create an Amazon Connect contact flow to send the SMS messages. Use AWS Lambda to process the responses.

B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.

C. Use Amazon Simple Queue Service (Amazon SQS) to distribute the SMS messages. Use AWS Lambda to process the responses.

D. Create an Amazon Simple Notification Service (Amazon SNS) FIFO topic. Subscribe an Amazon Kinesis data stream to the SNS topic for
analysis and archiving.

Correct Answer: A

Community vote distribution


B (82%) Other

  whoob 3 days, 13 hours ago


base function of AWS Pinpoint
upvoted 1 times

  TariqKipkemei 2 weeks ago


Selected Answer: B
Marketing communications = Amazon Pinpoint
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: B
B. AWS Pinpoint is for Marketing communications.
upvoted 1 times

  cookieMr 3 months, 1 week ago


Selected Answer: B
By using Pinpoint, the company can effectively send SMS messages to its mobile app users. Additionally, Pinpoint allows the configuration
of journeys, which enable the tracking and management of user interactions. The events generated during the journey, including user
responses to SMS, can be captured and sent to an Kinesis data stream. This data stream can then be used for analysis and archiving
purposes.

A. Creating an Amazon Connect contact flow is primarily focused on customer support and engagement, and it lacks the capability to
store and process SMS responses for analysis.

C. Using SQS is a message queuing service and is not specifically designed for handling SMS responses or capturing them for analysis.

D. Creating an SNS FIFO topic and subscribing a Kinesis data stream is not the most appropriate solution for capturing and storing SMS
responses, as SNS is primarily used for message publishing and distribution.

In summary, option B is the best choice as it leverages Pinpoint to send SMS messages and captures user responses for analysis and
archiving using a Kinesis data stream.
upvoted 2 times
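
For illustration, a minimal boto3 sketch of pointing a Pinpoint project's event stream at a Kinesis data stream, which is how journey and two-way SMS events can be captured for analysis and archiving. The application ID, stream ARN, and role ARN are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder identifiers for the Pinpoint project, Kinesis stream, and IAM role.
application_id = "examplePinpointProjectId"
stream_arn = "arn:aws:kinesis:us-east-1:111122223333:stream/sms-response-events"
role_arn = "arn:aws:iam::111122223333:role/pinpoint-to-kinesis"

# Stream campaign/journey events (including two-way SMS events) to Kinesis,
# from where they can be archived and analyzed.
pinpoint.put_event_stream(
    ApplicationId=application_id,
    WriteEventStream={
        "DestinationStreamArn": stream_arn,
        "RoleArn": role_arn,   # role that allows Pinpoint to write to the stream
    },
)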

  Bmarodi 3 months, 3 weeks ago


Selected Answer: B
Option B is correct answer: link: https://aws.amazon.com/pinpoint/, and video under the link.
upvoted 2 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: B
Two-Way Messaging
Receive SMS messages from your customers and reply back to them in a chat-like interactive experience. With Amazon Pinpoint, you can
create automatic responses when customers send you messages that contain certain keywords.
upvoted 1 times

  CLOUDUMASTER 5 months ago


Based on my research, a Kinesis stream is for real-time data ingestion, and it also stores only event data, not the actual customer responses; furthermore, there is no requirement for real-time data streaming. That is probably why I am hesitant to agree here with everyone on B and would rather choose A.
upvoted 1 times

  jayce5 5 months ago


Selected Answer: B
The answer is B. AWS Pinpoint is for Marketing communications.
AWS Connect is for Contact center.
upvoted 1 times

  jaswantn 5 months, 1 week ago


Selected Answer: A
According to the following link I would choose Option A.
https://docs.aws.amazon.com/connect/latest/adminguide/web-and-mobile-chat.html
upvoted 1 times

  smartegnine 3 months, 4 weeks ago


No, there is no SMS there. Note that the question states all activities go through SMS. An Amazon Connect contact flow most likely works through a web application UI, but if you read the question carefully, this is about receiving and sending SMS, not going through an application UI (web/mobile app). For those reasons we choose B.
upvoted 1 times

  ProfXsamson 8 months ago


Selected Answer: B
Amazon Pinpoint is a flexible, scalable and fully managed push notification and SMS service for mobile apps.
upvoted 3 times

  Foucault 8 months, 2 weeks ago


It's B, see following link https://docs.aws.amazon.com/pinpoint/latest/developerguide/event-streams.html
upvoted 2 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: B
https://aws.amazon.com/pinpoint/product-details/sms/
Two-Way Messaging:
Receive SMS messages from your customers and reply back to them in a chat-like interactive experience. With Amazon Pinpoint, you can
create automatic responses when customers send you messages that contain certain keywords. You can even use Amazon Lex to create
conversational bots.
A majority of mobile phone users read incoming SMS messages almost immediately after receiving them. If you need to be able to provide
your customers with urgent or important information, SMS messaging may be the right solution for you.

You can use Amazon Pinpoint to create targeted groups of customers, and then send them campaign-based messages. You can also use
Amazon Pinpoint to send direct messages, such as appointment confirmations, order updates, and one-time passwords.
upvoted 2 times

  DavidNamy 8 months, 3 weeks ago


Selected Answer: D
D:
Amazon Simple Notification Service (SNS) is a fully managed messaging service that enables you to send and receive SMS messages in a
cost-effective and highly scalable way. By creating an SNS FIFO topic, you can ensure that the SMS messages are delivered to your users in
the order they were sent and that the SMS responses are processed and stored in the same order. You can also configure your SNS FIFO
topic to publish SMS responses to an Amazon Kinesis data stream, which will allow you to store and analyze the responses for a year.

Amazon Pinpoint ?¿?¿? NO!

is not correct solution because while Amazon Pinpoint allows you to send SMS and Email campaigns, as well as handle push notifications
to a user base, it doesn't provide SMS sending feature by itself. Furthermore, it's a service mainly focused on sending and tracking
marketing campaigns, not for managing two-way SMS communication and the reception of reply.
upvoted 3 times

  Omok 8 months ago


What do think about https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-two-way.html?
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: B
To send SMS messages and store the responses for a year for analysis, the company can use Amazon Pinpoint. Amazon Pinpoint is a fully-
managed service that allows you to send targeted and personalized SMS messages to your users and track the results.

To meet the requirements of the company, a solutions architect can build an Amazon Pinpoint journey and configure Amazon Pinpoint to
send events to an Amazon Kinesis data stream for analysis and archiving. The Kinesis data stream can be configured to store the data for
a year, allowing the company to analyze the responses over time.

So, Option B is the correct answer.


Option B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis
and archiving.
upvoted 3 times
  techhb 9 months, 2 weeks ago
Selected Answer: B
We need analysis and archiving; A doesn't help with that.
upvoted 1 times

  BENICE 9 months, 2 weeks ago


B is correct answer
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: B
Answer B. This is a Pinpoint use case.
upvoted 1 times
Question #202 Topic 1

A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the
encryption key must be automatically rotated every year.

Which solution will meet these requirements with the LEAST operational overhead?

A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation
behavior of SSE-S3 encryption keys.

B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket’s default
encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.

C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket’s default encryption behavior to use the
customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year.

D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS)
key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.

Correct Answer: B

Community vote distribution


B (60%) A (39%)

  techhb Highly Voted  9 months, 2 weeks ago


Selected Answer: B
SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is
shared among many accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined.

SSE-KMS - has two flavors:

AWS managed CMK. This is free CMK generated only for your account. You can only view it policies and audit usage, but not manage it.
Rotation is automatic - once per 1095 days (3 years),
Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default. But if you enable it,
it will be automatically rotated every 1 year. This variant can also use an imported key material by you. If you create such key with an
imported material, there is no automated rotation. Only manual rotation.
SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it.
upvoted 23 times

  ruqui 4 months, 1 week ago


AWS managed CMK rotates every 365 days (not 1095 days). Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt
upvoted 1 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
KEYWORD: LEAST operational overhead

To encrypt the data when it is stored in the S3 bucket and automatically rotate the encryption key every year with the least operational
overhead, the company can use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). SSE-S3 uses keys that are
managed by Amazon S3, and the built-in key rotation behavior of SSE-S3 encryption keys automatically rotates the keys every year.

To meet the requirements of the company, the solutions architect can move the data to the S3 bucket and enable server-side encryption
with SSE-S3. This solution requires no additional configuration or maintenance and has the least operational overhead.

Hence, the correct answer is;

Option A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Use the built-in
key rotation behavior of SSE-S3 encryption keys.
upvoted 22 times

  LuckyAro 8 months, 1 week ago


The order of these events is being ignored here, in my opinion. The encryption checkbox needs to be checked before data is moved into the S3 bucket, or it will not be encrypted; otherwise, you'll have to encrypt manually and reload the data into the S3 bucket. If the box was checked before moving data into S3, then you are good to go!
upvoted 4 times

  Smart 2 months ago


Ignoring the new changes that the default encryption is already enabled. I agree that the encryption should be configured before
moving the data into the bucket. Otherwise, the existing objects will remain unencrypted.
Correct Answer is B.

Additionally, where is the reference that SSE-S3 will rotate keys every year (which is the question's requirement).
upvoted 1 times
  LuckyAro 8 months, 1 week ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option B involves using a customer-managed AWS KMS key and enabling automatic key rotation, but this requires the company to
manage the KMS key and monitor the key rotation process.

Option C involves using a customer-managed AWS KMS key, but this requires the company to manually rotate the key every year, which
introduces additional operational overhead.

Option D involves encrypting the data with customer key material and creating a KMS key without key material, but this requires the
company to manage the customer key material and import it into the KMS key, which introduces additional operational overhead.
upvoted 2 times

  JayBee65 8 months, 3 weeks ago


But...

For A there is no reference to how often these keys are rotated, and to rotate to a new key, you need to upload it, which is
operational overhead. So not only does it not necessarily meet the 'rotate keys every year' requirement, but every year it requires
operational overhead.

More importantly, the question states move the objects first, and then configure encryption, but ..."There is no change to the
encryption of the objects that existed in the bucket before default encryption was enabled." from
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html

So A is clearly wrong.

For B, whilst you have to set up KMS once, you then don't have to anything else, which i would say is LEAST operational overhead.
upvoted 13 times

  ocbn3wby 9 months ago


God bless you, man! The most articulated answers, easy to understand. Good job!
upvoted 3 times

  JayBee65 8 months, 3 weeks ago


But wrong :)
upvoted 4 times

  ocbn3wby 7 months, 3 weeks ago


Reviewed it the second time. Some of them are wrong, indeed.
upvoted 1 times

  TariqKipkemei Most Recent  2 weeks ago


Selected Answer: A
LEAST operational overhead = Amazon S3 managed encryption keys
upvoted 1 times

  XCheng 2 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/default-bucket-encryption.html
upvoted 1 times

  roggerrubens 2 weeks, 3 days ago


Answer A: every object placed in S3 is automatically encrypted by default with SSE-S3, isn't it?
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: B
B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket’s default
encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.
upvoted 1 times

  Jeyaluxshan 3 weeks, 3 days ago


Answer is B.
SSE-S3 encryption will not apply to existing objects in S3 bucket.
The question says the data must be encrypted when it is stored in S3.
If you have already stored the data and enable SSE-S3 later, that will not be a solution.
So A is not the correct answer.
upvoted 1 times

  omar_bahrain 4 weeks ago


Selected Answer: A
Once you enable SSE-S3 encryption for your S3 bucket, Amazon automatically rotates the data encryption keys for your objects every 365
days. This means that your data encryption keys are automatically replaced with new ones every year. You can also manually rotate the
encryption keys for your objects at any time.

https://saturncloud.io/blog/how-does-amazon-sses3-key-rotation-work/#:~:text=Once%20you%20enable%20SSE%2DS3,your%20objects%20at%20any%20time.
upvoted 1 times

  Sutariya 4 weeks ago


B is the right answer: if you need more control over your keys, such as managing key rotation and access policy grants, you can choose to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). For more information, see the AWS documentation on editing KMS keys.
upvoted 1 times

  Aelodus 1 month, 2 weeks ago


Selected Answer: B
Went to AWS office for a cloud developer course, I asked the trainer what is SSE-S3's key rotation frequency, the trainer mentioned that the
information for the rotation frequency is not publicly available. This was done intentionally. He also said that if we truly wanted to know,
we had to hold a private consultation with them, as a representative of our company.

Going back to the question, since we cannot confirm the frequency of SSE-S3's key rotation the best answer for this question is B. It might
have higher operational overhead compared to A, but it is the only one that fulfills the requirements.
upvoted 1 times

  npraveen 1 month, 3 weeks ago


Selected Answer: A
Once you enable SSE-S3 encryption for your S3 bucket, Amazon automatically rotates the data encryption keys for your objects every 365
days. This means that your data encryption keys are automatically replaced with new ones every year.
upvoted 2 times

  oguzbeliren 1 month, 4 weeks ago


Answer is A
With server-side encryption, S3 automatically manages the encryption keys and their rotation. The question is specifically asking for the least operational overhead.

Option B also provides a valid solution, but it involves more manual configuration and management of a customer-managed AWS Key
Management Service (AWS KMS) key, including enabling and configuring automatic key rotation.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
A. While using SSE-S3 the key rotation is handled automatically by AWS. AWS rotates the encryption keys at least once every 1095 days (3
years) on your behalf.

B. By using a customer managed key in AWS KMS with automatic key rotation enabled, and setting the S3 bucket's default encryption
behavior to use this key, the data stored in the S3 bucket will be encrypted and the encryption key will be automatically rotated every year.

C. This answer is not the most optimal solution as it suggests manually rotating the KMS key every year, which introduces manual
intervention and increases operational overhead.

D. This answer is not the most suitable option as it involves encrypting the data with customer key material and managing the key rotation
manually. It adds complexity and management overhead compared to using AWS KMS for key management and encryption.
upvoted 4 times
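
For illustration, a minimal boto3 sketch of option B: create a customer managed KMS key, enable automatic annual rotation, and set it as the bucket's default encryption. The bucket name is a placeholder.

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

bucket_name = "example-data-bucket"   # placeholder

# Create a customer managed key and turn on automatic annual rotation.
key = kms.create_key(Description="Default encryption key for example-data-bucket")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Point the bucket's default encryption at the customer managed key, so every
# new object is encrypted with SSE-KMS without any per-upload configuration.
s3.put_bucket_encryption(
    Bucket=bucket_name,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,   # reduces KMS request costs
            }
        ]
    },
)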

  pisica134 3 months, 1 week ago


chat gpt says it's B
upvoted 1 times

  danghh 4 months, 2 weeks ago


AWS KMS automatically rotates AWS managed keys (SSE-S3) every year (approximately 365 days). You cannot enable or disable key
rotation for AWS managed keys.
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: A
This question is old and written when there was no default encryption on S3.
Choosing A because Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption
for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional
cost and with no impact on performance.
upvoted 2 times

  studynoplay 4 months, 2 weeks ago


Just created a bucket and it says: The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to
server-side encryption with Amazon S3 managed keys (SSE-S3). With server-side encryption, Amazon S3 encrypts an object before
saving it to disk and decrypts it when you download the object. Encryption doesn't change the way that you access data as an
authorized user. It only further protects your data.

You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3)
(the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
  WilliamHoac 4 months, 3 weeks ago
B is the correct answer.
KEYWORD: LEAST operational overhead, and the encryption key must be automatically rotated every year.
SSE-S3: rotation cannot be managed by the customer.
Based on the AWS site: if you need more control over your keys, such as managing key rotation and access policy grants, you can choose to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
Question #203 Topic 1

The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on
Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS)
queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation
email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB
database.

As the company expands, customers report that their meeting invitations are taking longer to arrive.

What should a solutions architect recommend to resolve this issue?

A. Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.

B. Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.

C. Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.

D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth
of the SQS queue.

Correct Answer: D

Community vote distribution


D (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: D
Option D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based
on the depth of the SQS queue.

To resolve the issue of longer delivery times for meeting invitations, the solutions architect can recommend adding an Auto Scaling group
for the application that sends meeting invitations and configuring the Auto Scaling group to scale based on the depth of the SQS queue.
This will allow the application to scale up as the number of appointment requests increases, improving the performance and delivery
times of the meeting invitations.
upvoted 8 times

  TariqKipkemei Most Recent  2 weeks ago


Selected Answer: D
Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the
depth of the SQS queue.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: D
Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the
depth of the SQS queue.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
By adding an ASG for the application that sends meeting invitations and configuring it to scale based on the depth of the SQS, the system
can automatically adjust its capacity based on the number of pending messages in the queue. This ensures that the application can
handle increased message load and process the meeting invitations more efficiently, reducing the delay experienced by customers.

A. Adding a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database would improve read performance for DynamoDB,
but it does not directly address the issue of delayed meeting invitations.

B. Adding an API Gateway API in front of the web application that accepts the appointment requests may help with request handling and
management, but it does not directly address the issue of delayed meeting invitations.

C. Adding an CloudFront distribution with the web application as the origin would improve content delivery and caching, but it does not
directly address the issue of delayed meeting invitations.
upvoted 4 times
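
For illustration, a simplified boto3 sketch of option D: a target tracking policy that scales the invitation-sending Auto Scaling group on SQS queue depth. The group and queue names are placeholders; production setups often use a backlog-per-instance custom metric instead, as the AWS docs recommend.

import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names; the queue is the one holding appointment requests.
asg_name = "invitation-sender-asg"
queue_name = "appointment-requests"

# Keep the visible-message backlog near the target value by adding or removing
# instances as the queue grows or drains.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # desired backlog; tune to the workload
    },
)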

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D is the right Answer,
upvoted 2 times
  k1kavi1 9 months, 2 weeks ago
Selected Answer: D
Agreed
upvoted 1 times

  jambajuice 10 months, 1 week ago


Selected Answer: D
ANswer d
upvoted 1 times

  leonnnn 10 months, 1 week ago


Selected Answer: D
D meets the requirements
upvoted 1 times

  Nigma 10 months, 1 week ago


Selected Answer: D
Answer : D
upvoted 1 times
Question #204 Topic 1

An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects
purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.

The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability
to manage fine-grained permissions for the data and must minimize operational overhead.

Which solution will meet these requirements?

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.

B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon
Athena to query the data. Use S3 policies to limit access.

C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake
Formation. Use Lake Formation access controls to limit access.

D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to
Amazon Redshift. Use Amazon Redshift access controls to limit access.

Correct Answer: D

Community vote distribution


C (100%)

  anhike Highly Voted  9 months, 3 weeks ago


Answer : C keyword "manage-fine-grained"
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 13 times

  markw92 3 months, 2 weeks ago


You can manage fine-grained access using Redshift as well - https://aws.amazon.com/blogs/big-data/achieve-fine-grained-data-security-with-row-level-access-control-in-amazon-redshift/
But I believe the keyword to look for is "minimize operational overhead", which Lake Formation achieves without duplicating much of the data. Redshift means operational overhead and duplication of data. Not sure why the given answer is D; I vote C as well.
upvoted 3 times

  Olaunfazed 3 months ago


yeah, most of examtopics answers are wrong
upvoted 1 times

  karloscetina007 Most Recent  3 days, 13 hours ago


Selected Answer: C
Fine-grained permissions is one of the conditions needed to meet the requirement.
With the use of AWS Glue you can accomplish this requirement.
My answer is: C
upvoted 1 times

  TariqKipkemei 2 weeks ago


Selected Answer: C
With Lake formation you can scale permissions more easily with fine-grained security capabilities, including row- and cell-level
permissions and tag-based access control.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


Selected Answer: C
Lake Formation enables the creation of a secure and scalable data lake on AWS, allowing centralized access controls for both S3 and RDS
data. By using Lake Formation, the company can manage permissions effectively and integrate RDS data through the AWS Glue JDBC
connection. Registering the S3 in Lake Formation ensures unified access control. This solution reduces operational overhead while
providing fine-grained permissions management.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
Lake Formation enables the creation of a secure and scalable data lake on AWS, allowing centralized access controls for both S3 and RDS
data. By using Lake Formation, the company can manage permissions effectively and integrate RDS data through the AWS Glue JDBC
connection. Registering the S3 in Lake Formation ensures unified access control. This solution reduces operational overhead while
providing fine-grained permissions management.

A. Directly writing purchase data to Amazon RDS with RDS access controls lacks comprehensive permissions management for both S3 and
RDS data.

B. Periodically copying data from RDS to S3 using Lambda and using AWS Glue and Athena for querying does not offer fine-grained
permissions management and introduces data synchronization complexities.

D. Creating a Redshift cluster and copying data from S3 and RDS to Redshift adds complexity and operational overhead without the
flexibility of Lake Formation's permissions management capabilities.
upvoted 3 times
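
For illustration, a minimal boto3 sketch of the Lake Formation side of option C: register the S3 location and grant a team SELECT on a catalog table (for example, one created by a Glue crawler or through the Glue JDBC connection to RDS). The ARNs, database, and table names are placeholders.

import boto3

lakeformation = boto3.client("lakeformation")

# Placeholder ARNs and names.
purchase_bucket_arn = "arn:aws:s3:::example-purchase-data"
analyst_role_arn = "arn:aws:iam::111122223333:role/analytics-team"

# Register the S3 location so Lake Formation governs access to it.
lakeformation.register_resource(
    ResourceArn=purchase_bucket_arn,
    UseServiceLinkedRole=True,
)

# Grant the analytics team SELECT on a Data Catalog table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": analyst_role_arn},
    Resource={"Table": {"DatabaseName": "retail", "Name": "purchases"}},
    Permissions=["SELECT"],
)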
  pisica134 3 months, 1 week ago
Answer is C AWS Lake Formation provides a comprehensive solution for building and managing a data lake. It simplifies data ingestion,
organization, and access control. By creating a data lake using AWS Lake Formation, you can centralize and govern access to your data
across multiple sources.
upvoted 1 times

  Bmarodi 3 months, 3 weeks ago


Selected Answer: C
Option C is right answer: https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html
upvoted 1 times

  Abrar2022 4 months ago


Lake Formation helps you manage fine-grained access for internal and external customers from a centralized location and in a scalable
way.
upvoted 1 times

  doorahmie 8 months, 1 week ago


https://docs.aws.amazon.com/lake-formation/latest/dg/access-control-overview.html
upvoted 2 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: C
To me, the give-away was: "The company wants to make all the data available to various teams" - Data-Lake - All data in one place.
upvoted 4 times

  master1004 8 months, 3 weeks ago


The correct answer is D.
The company uses all the data from various teams so that the teams can do their analysis.
Therefore, it is the best way to separately configure redshift for data warehousing and for all employees to connect to the redshift DB and
perform analysis tasks without burdening the operating DB (must minimize operational overhead).
upvoted 3 times

  ruqui 3 months, 4 weeks ago


I don't think that "periodically copy data from Amazon S3 and RDS to Redshift" minimize the operational overhead. The correct answer
for me is C
upvoted 1 times

  aba2s 8 months, 4 weeks ago


Selected Answer: C
Manage fine-grained access control using AWS Lake Formation
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
Option C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in
Lake Formation. Use Lake Formation access controls to limit access.

To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS
Lake Formation. This will allow the company to centralize all the data in one place and use fine-grained access controls to manage access
to the data.

To meet the requirements of the company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS
Glue JDBC connection to Amazon RDS, and register the S3 bucket in Lake Formation. The solutions architect can then use Lake Formation
access controls to limit access to the data. This solution will provide the ability to manage fine-grained permissions for the data and
minimize operational overhead.
upvoted 3 times


  kvenikoduru 9 months, 1 week ago


Selected Answer: C
a combination of the following 2 URLs I believe it is C
https://aws.amazon.com/lake-formation/
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 1 times
  career360guru 9 months, 2 weeks ago
Option C is the right answer. Fine-grained access-control from different types of data sources is a Lakeformation usecase.
upvoted 2 times

  gloritown 9 months, 3 weeks ago


Selected Answer: C
CCCCCCCCCCCC
upvoted 2 times

  9014 10 months ago


Selected Answer: C
ANSWER IS OF COURSE C
upvoted 1 times
Question #205 Topic 1

A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An
administrator updates the website content infrequently and uses an SFTP client to upload new documents.

The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront
distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront
origin.

Which solution will meet these requirements?

A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an
SFTP client.

B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP
client.

C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website
content by using the AWS CLI.

D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content
by using the SFTP client.

Correct Answer: C

Community vote distribution


C (73%) D (27%)

  bjexamprep Highly Voted  1 month, 3 weeks ago


Selected Answer: C
The question here is whether the solution architect can change the requirement. The requirement says very clear about SFTP which
cannot be addressed by option C. But the question also gives very clear hint about OAI which cannot be addressed by option D. Option D
also doesn't mention anything about CloudFront which is part of the requirement of the question.
So, if the requirement cannot be changed, D is the answer; if the requirement can be changed, C is the answer. But if the requirement can
be changed, what's the limitation? That will be a Chaos.
I'm voting C, and curse the question designer.
upvoted 6 times

  Iconique 5 days, 14 hours ago


"The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront
origin." The solution architect is looking for a solution that can fit with CloudFront as origin! So it doesn't matter that option D does not
mention CF, CF is part of the solution!
Having a marketing website on-premise clearly indicates having S3 as static content.
AWS Transfer Family is the way to upload files via FTP to S3!
So the answer is D.

Why not C?
The user is already uploading content via SFTP; option C eliminates this option and forces them to use the CLI. The solution in C does not meet the requirement of having SFTP.
upvoted 1 times

  Guru4Cloud Most Recent  2 weeks, 5 days ago


Selected Answer: D
D - SFTP client to upload new documents.
upvoted 1 times

  Guru4Cloud 2 weeks, 5 days ago


I changed to C. It is better than D.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
Hosting the website in a private S3 provides cost-effective and highly available storage for the static website content. By configuring a
bucket policy to allow access from a CloudFront OAI, the S3 can be securely accessed only through CloudFront. This ensures that the
website content is served through CloudFront while keeping the S3 private. Uploading website content using the AWS CLI allows for easy
and efficient content management.

A. Hosting the website on an Lightsail virtual server would introduce additional management overhead and costs compared to using S3
directly for static content hosting.

B. Using an AWS ASG with EC2 instances and an ALB is not necessary for serving static website content. It would add unnecessary
complexity and cost.

D. While using AWS Transfer for SFTP allows for SFTP uploads, it introduces additional costs and complexity compared to directly
uploading content to an S3 using the AWS CLI. Additionally, hosting the website content in a public S3 may not be desirable from a
security standpoint.
upvoted 3 times
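
For illustration, a minimal boto3 sketch of the bucket-policy piece of option C: allow reads only from the CloudFront origin access identity so the bucket can stay private with Block Public Access on. The bucket name and OAI ID are placeholders.

import json
import boto3

s3 = boto3.client("s3")

bucket_name = "example-marketing-site"   # placeholder
oai_id = "E2EXAMPLE1234"                 # placeholder OAI ID

# Bucket policy that lets only the CloudFront origin access identity read objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))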
  eugene_stalker 4 months, 1 week ago
Selected Answer: D
D - SFTP client to upload new documents.
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: C
AWS transfer is a cost and doesn't mention using CloudFront
https://aws.amazon.com/aws-transfer-family/pricing/
upvoted 4 times

  Yelizaveta 7 months, 2 weeks ago


Selected Answer: C
If you don't want to disable block public access settings for your bucket but you still want your website to be public, you can create a
Amazon CloudFront distribution to serve your static website. For more information, see Use an Amazon CloudFront distribution to serve a
static website in the Amazon Route 53 Developer Guide.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
upvoted 1 times

  PDR 8 months, 1 week ago


Selected Answer: C
I at first thought D but it is in fact C because

"D: Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website
content by using the SFTP client." The question says that the company has decided to use Amazon CloudFront, and this answer does not
reference using CloudFront and setting S3 as the origin.

"C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload
website content by using the AWS CLI." - mentions CF and the origin and the AWS CLI does infact support transfer by SFTP (which was the
part I originally doubted but this link evidences that it does:

https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html
upvoted 2 times

  bullrem 8 months, 1 week ago


Selected Answer: D
Option C, creating a private Amazon S3 bucket and using an S3 bucket policy to allow access from a CloudFront origin access identity
(OAI), would not be the most cost-effective solution. While it would allow the company to use Amazon S3 for storage, it would also require
additional setup and maintenance of the OAI, which would add additional cost. Additionally, this solution would not allow the use of SFTP
client for uploading content which is the current method used by the company.
upvoted 1 times

  verguy 8 months, 3 weeks ago


The Answer is C
https://medium.com/aws-poc-and-learning/how-to-access-s3-hosted-website-via-cloudfront-using-oai-origin-access-identity-720ad7c57f15
upvoted 2 times

  Mahadeva 8 months, 3 weeks ago


Selected Answer: C
Option C is a better choice than D for following reasons:
(1) Cost effective: data transfer is cheaper for Cloudfront than directly from S3 bucket
(2) Resilient: recovery from failures. Having a Cloudfront distribution and making S3 bucket policy only for Cloudfront. ie. private bucket
(with OAI for access) hardens and betters resiliency.
upvoted 3 times

  gustavtd 9 months ago


Selected Answer: C
If you don't do extra setup in AWS, you cannot connect to it with SFTP, so D is not the case.
upvoted 1 times

  vtbk 9 months ago


Selected Answer: C
s3 + Cloudfront. In this case, S3 does not need to be public.
upvoted 1 times
  Zerotn3 9 months ago
Selected Answer: D
The most cost-effective and resilient solution for hosting a website on AWS with CloudFront is to create a public Amazon S3 bucket,
configure AWS Transfer for SFTP, configure the S3 bucket for website hosting, and then upload website content using the SFTP client.

Option A involves using Amazon Lightsail to create a virtual server, which may not be the most cost-effective solution compared to using
S3. Option B involves using an Auto Scaling group with EC2 instances and an Application Load Balancer, which may be more expensive and
complex than using S3. Option C involves creating a private S3 bucket, which may not allow CloudFront to access the website content.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
KEYWORD: most cost-effective and resilient architecture

Option D: Creating a public Amazon S3 bucket, configuring AWS Transfer for SFTP, configuring the S3 bucket for website hosting, and
uploading website content by using the SFTP client will meet these requirements with the most cost-effective and resilient architecture.

Configuring AWS Transfer for SFTP allows the company to securely upload content to the S3 bucket using the SFTP client, which the
administrator is already familiar with. This eliminates the need to change the administrator’s workflow or learn new tools.
upvoted 1 times

  Joxtat 8 months, 2 weeks ago


https://medium.com/aws-poc-and-learning/how-to-access-s3-hosted-website-via-cloudfront-using-oai-origin-access-identity-
720ad7c57f15
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option C: Creating a private Amazon S3 bucket and using an S3 bucket policy to allow access from a CloudFront origin access identity
(OAI) is not a suitable solution because it does not allow the administrator to use an SFTP client to upload website content. The
administrator would need to use the AWS CLI or a different tool to upload content to the S3 bucket, which would require a change in
the administrator’s workflow.
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


The requirements are "cost-effective and resilient architecture", and nothing about least operational overhead so your concerns are
not valid. Cloudfront makes it resilient and cuts costs, so far more relevant.
upvoted 1 times

  PassNow1234 9 months, 1 week ago


. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront
origin.

Are you sure about D?


upvoted 1 times

  17Master 8 months, 2 weeks ago


An administrator updates the website content infrequently and uses an SFTP client to upload new documents.
upvoted 1 times

  techhb 9 months, 2 weeks ago


Selected Answer: C
Answer is C only,Bucket doesn't need to be public when using cloudfront.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


Yes " If your use case requires the block public access settings to be turned on, use the REST API endpoint as the origin. Then, restrict
access by an origin access control (OAC) or origin access identity (OAI)."
upvoted 1 times

  BENICE 9 months, 2 weeks ago


C is correct answer
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C is right answer as company has already decided to use Cloudfront.
Option D is not correct as it does not use Cloudfront.
As long as there is way to upload the content using CLI, it is OK as updates are not very frequent.
upvoted 1 times
Question #206 Topic 1

A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were
created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API
operation is called within the company’s account.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.

B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to
Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.

C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple
Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.

D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda
function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.

Correct Answer: D

Community vote distribution


C (67%) A (21%) 10%

  [Removed] Highly Voted  10 months ago


Selected Answer: C
I'm team C.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
"For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you."
upvoted 13 times

  MutiverseAgent 2 months, 1 week ago


C is correct > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitor-ami-events.html
upvoted 1 times

  JayBee65 8 months, 3 weeks ago


That link contains the exact use case and explains how C can be used.
Option B requires you to send logs to S3 and use Athena, 2 additional services that are not required, so this does not meet the "LEAST
operational overhead?" requirement, since these are extra services requiring management.
upvoted 2 times

  Wajif Highly Voted  9 months, 1 week ago


Selected Answer: A
Why not A? API calls are already logged in Cloudtrail.
upvoted 11 times

  TariqKipkemei Most Recent  1 week, 6 days ago


Selected Answer: C
Event bridge was built specifically to handle this kind of scenario:
CreateImage API call (Event Source) -> Event bus -> Rules - > Amazon SNS (Event target)
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon
Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected
upvoted 2 times

  Nava702 3 weeks, 6 days ago


Selected Answer: A
A looks like the least-overhead option to capture an API call.
upvoted 1 times

  Mia2009687 2 months, 3 weeks ago


Selected Answer: B
The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API
operation is called within the company’s account.

With option C, it won't satisfy "the company needs to design an application that captures AWS API calls"; it only reacts to the "CreateImage API" event. We need to store the AWS API calls as well.
upvoted 1 times
  cookieMr 3 months ago
EventBridge (formerly CloudWatch Events) is a fully managed event bus service that allows you to monitor and respond to events within
your AWS environment. By creating an EventBridge rule specifically for the CreateImage API call, you can easily detect and capture this
event. Configuring the target as an SNS topic allows you to send an alert whenever a CreateImage API call occurs. This solution requires
minimal operational overhead as EventBridge and SNS are fully managed services.

A. While using an Lambda to query CloudTrail logs and send an alert can achieve the desired outcome, it introduces additional operational
overhead compared to using EventBridge and SNS directly.

B. Configuring CloudTrail with an SNS notification and using Athena to query on CreateImage API calls would require more setup and
maintenance compared to using EventBridge and SNS.

D. Configuring an SQS FIFO queue as a target for CloudTrail logs and using a function to send an alert to an SNS topic adds unnecessary
complexity to the solution and increases operational overhead. Using EventBridge and SNS directly is a simpler and more efficient
approach.
upvoted 3 times
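
For readers who want to see what option C looks like in practice, here is a minimal boto3 sketch (not from the original discussion) that creates an EventBridge rule matching the EC2 CreateImage call and points it at an SNS topic. The rule name and topic ARN are placeholders; a CloudTrail trail must be logging management events for "AWS API Call via CloudTrail" events to be delivered, and the topic needs a policy allowing events.amazonaws.com to publish to it.

import json
import boto3

events = boto3.client("events")

# Event pattern for the EC2 CreateImage API call as recorded by CloudTrail.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["CreateImage"],
    },
}

events.put_rule(Name="alert-on-create-image", EventPattern=json.dumps(pattern))

# Send matching events to an existing SNS topic (placeholder ARN).
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert", "Arn": "arn:aws:sns:us-east-1:111122223333:ami-alerts"}],
)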

  pisica134 3 months, 1 week ago


D makes no sense, FIFO is not required, SQS is not used for sending notifications...C all the way
upvoted 1 times

  edric1998 3 months, 1 week ago


Selected Answer: D
According to the link shared by those who chose C, EventBridge can catch AMI state events (available/failed/deregistered). In that doc, CreateImage is not distinguished from CopyImage/RegisterImage/CreateRestoreImageTask, so it is not C.
It is not B because that adds too much overhead.
And the question says "whenever", meaning as quickly as possible, so it is not A.
The right answer is D
upvoted 1 times

  TheAbsoluteTruth 6 months ago


Selected Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html ("For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you.")
upvoted 1 times

  test_devops_aws 6 months, 2 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
upvoted 2 times

  kraken21 6 months ago


Option C makes sense here.
upvoted 1 times

  Zerotn3 9 months ago


Selected Answer: C
LEAST operational overhead
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: C
The correct solution is Option C. Creating an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call and
configuring the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is
detected will meet the requirements with the least operational overhead.

Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications,
integrated Software as a Service (SaaS) applications, and AWS services. By creating an EventBridge rule for the CreateImage API call, the
company can set up alerts whenever this operation is called within their account. The alert can be sent to an SNS topic, which can then be
configured to send notifications to the company's email or other desired destination.

This solution does not require the company to create a Lambda function or query CloudTrail logs, which makes it the most cost-effective
and efficient option.
upvoted 7 times

  career360guru 9 months, 2 weeks ago


Selected Answer: C
Option C is right answer.
Eventbridge has integration with CloudTrail as source of events (using pipes).
Option D is incorrect as Cloudtrail can not automatically send its API event logs to SQS.
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


C
Option B is not correct because it involves using Amazon Athena to query AWS CloudTrail logs, which can be a time-consuming and error-
prone process. Additionally, it requires the company to manage the underlying infrastructure for Amazon Athena, which adds operational
overhead.
upvoted 1 times

  Sahilbhai 9 months, 3 weeks ago


Selected Answer: C
answer is c
upvoted 1 times

  javitech83 9 months, 4 weeks ago


Selected Answer: C
it is C
upvoted 1 times
Question #207 Topic 1

A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate
microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes
Amazon DynamoDB to store user requests before dispatching them to the processing microservices.

The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is
losing user requests.

What should a solutions architect do to address this issue without impacting existing users?

A. Add throttling on the API Gateway with server-side throttling limits.

B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.

C. Create a secondary index in DynamoDB for the table with the user requests.

D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

Correct Answer: D

Community vote distribution


D (96%)

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
This solution can handle bursts of incoming requests more effectively and reduce the chances of losing requests due to DynamoDB
capacity limitations. The Lambda can be configured to retrieve messages from the SQS and write them to DynamoDB at a controlled rate,
allowing DynamoDB to handle the requests within its provisioned capacity. This approach provides resilience to spikes in traffic and
ensures that requests are not lost during periods of high demand.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
This solution can handle bursts of incoming requests more effectively and reduce the chances of losing requests due to DynamoDB
capacity limitations. The Lambda can be configured to retrieve messages from the SQS and write them to DynamoDB at a controlled rate,
allowing DynamoDB to handle the requests within its provisioned capacity. This approach provides resilience to spikes in traffic and
ensures that requests are not lost during periods of high demand.

A. It limits can help control the request rate, but it may lead to an increase in errors and affect the user experience. Throttling alone may
not be sufficient to address the availability issues and prevent the loss of requests.

B. It can improve read performance but does not directly address the availability issues and loss of requests. It focuses on optimizing read
operations rather than buffering writes.

C. It may help with querying the user requests efficiently, but it does not directly solve the availability issues or prevent the loss of
requests. It is more focused on data retrieval rather than buffering writes.
upvoted 2 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: D
DAX is for reads
upvoted 1 times

  smartegnine 3 months, 3 weeks ago


DAX is not ideal for the following types of applications:

Applications that require strongly consistent reads (or that cannot tolerate eventually consistent reads).

Applications that do not require microsecond response times for reads, or that do not need to offload repeated read activity from
underlying tables.

Applications that are write-intensive, or that do not perform much read activity.

Applications that are already using a different caching solution with DynamoDB, and are using their own client-side logic for working
with that caching solution.
upvoted 2 times

  nder 7 months, 1 week ago


Selected Answer: D
The key here is "Losing user requests" sqs messages will stay in the queue until it has been processed
upvoted 3 times
  dark_firzen 8 months, 1 week ago
Selected Answer: D
D because SQS is the cheapest way. First 1,000,000 requests are free each month.

Question states: "The company provisioned as much DynamoDB throughput as its budget allows"
upvoted 3 times

  Wajif 9 months, 1 week ago


Selected Answer: D
D is more likely to fix this problem as SQS queue has the ability to wait (buffer) for consumer to notify that the request or message has
been processed.
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Selected Answer: D
To address the issue of lost user requests and improve the availability of the API, the solutions architect should use the Amazon Simple
Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB. Option D (correct answer)

By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and
improve the overall scalability and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue
accepting user requests even if the processing microservices are experiencing high workloads or are temporarily unavailable. The Lambda
function can then retrieve requests from the SQS queue and write them to DynamoDB, ensuring that all user requests are stored and
processed. This approach allows the company to scale the processing microservices independently from the API front end, ensuring that
the API remains available to users even during periods of high demand.
upvoted 4 times
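
As a rough illustration of option D (a sketch under stated assumptions, not the question's reference implementation), a Lambda function subscribed to the queue through an SQS event source mapping could drain messages and write them to DynamoDB at a controlled rate. The table name is a placeholder.

import json
import boto3

# Placeholder table name; the real table would hold the ingested user requests.
table = boto3.resource("dynamodb").Table("UserRequests")

def handler(event, context):
    # With an SQS event source mapping, Lambda receives a batch of messages
    # in event["Records"]; each body carries one user request as JSON.
    for record in event["Records"]:
        request = json.loads(record["body"])
        table.put_item(Item=request)
    # Returning without raising lets Lambda delete the batch from the queue;
    # failed batches become visible again and are retried, so requests are not lost.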

  alect096 9 months, 2 weeks ago


Selected Answer: B
I would go to B : https://aws.amazon.com/es/blogs/database/amazon-dynamodb-accelerator-dax-a-read-throughwrite-through-cache-for-
dynamodb/
upvoted 1 times

  ruqui 2 months, 3 weeks ago


That's wrong. The document you mentioned explained it very clearly:
"Whereas both read-through and write-through caches address read-heavy workloads, a write-back (or write-behind) cache is designed
to address write-heavy workloads. Note that DAX is not a write-back cache currently"
upvoted 1 times

  BENICE 9 months, 2 weeks ago


D is correct answer
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: D
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
upvoted 1 times

  career360guru 9 months, 2 weeks ago


Selected Answer: D
Option D is right answer
upvoted 1 times

  alexfk 9 months, 2 weeks ago


Why not B? DAX.

"When you’re developing against DAX, instead of pointing your application at the DynamoDB endpoint, you point it at the DAX endpoint,
and DAX handles the rest. As a read-through/write-through cache, DAX seamlessly intercepts the API calls that an application normally
makes to DynamoDB so that both read and write activity are reflected in the DAX cache."

https://aws.amazon.com/es/blogs/database/amazon-dynamodb-accelerator-dax-a-read-throughwrite-through-cache-for-dynamodb/
upvoted 1 times

  ruqui 2 months, 3 weeks ago


B is wrong because of this:

"Whereas both read-through and write-through caches address read-heavy workloads, a write-back (or write-behind) cache is designed
to address write-heavy workloads. Note that DAX is not a write-back cache currently"
upvoted 1 times

  AgboolaKun 5 months, 3 weeks ago


It is not DAX because of the company's budget restriction associated with the DynamoDB. This is a requirement in the question.
DynamoDB charges for DAX capacity by the hour and your DAX instances run with no long-term commitments. Please refer to:
https://aws.amazon.com/dynamodb/pricing/provisioned/#.E2.80.A2_DynamoDB_Accelerator_.28DAX.29
upvoted 2 times
  akosigengen 10 months ago
Yeah, I thought the answer was also DAX.
upvoted 1 times

  leonnnn 10 months, 1 week ago


Selected Answer: D
Using SQS should be the answer.
upvoted 3 times

  nVizzz 10 months ago


Why not DAX? Could somebody explain?
upvoted 1 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Using DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB, may improve the write performance of the system,
but it does not provide the same level of scalability and availability as using an SQS queue and Lambda.

Hence, Option B is incorrect.


upvoted 1 times

  bmofo 9 months, 4 weeks ago


The key issue noted is "losing user requests", which is resolved with SQS.
upvoted 5 times

  Rameez1 10 months ago


DAX helps reduce the read load on DynamoDB. Here we need a solution to handle write requests, which is well handled by SQS and Lambda to buffer writes to DynamoDB.
upvoted 4 times

  jambajuice 10 months, 1 week ago


Selected Answer: D
Answer d
upvoted 2 times

  Nigma 10 months, 1 week ago


Answer : D
upvoted 1 times
Question #208 Topic 1

A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data
are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.

Which solution will meet these requirements?

A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket
to only allow the EC2 instance’s IAM role for access.

B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security
groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.

C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route
in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the
EC2 instance’s IAM role for access.

D. Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket’s service API endpoint. Create a
route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow
the EC2 instance’s IAM role for access.

Correct Answer: B

Community vote distribution


A (54%) B (46%)

  SSASSWS Highly Voted  10 months, 1 week ago


Selected Answer: A
I think the answer should be A and not B, as we cannot attach security groups to a gateway endpoint.
upvoted 17 times

  A_New_Guy 9 months, 2 weeks ago


It's possible:

https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
upvoted 2 times

  Iconique 5 days, 14 hours ago


Go to console and test it yourself! With Interface Endpoint you can add security groups.
upvoted 1 times

  kruasan 5 months ago


No, it’s not
upvoted 3 times

  Guru4Cloud 1 week, 4 days ago


it is possible - you should do more reading
upvoted 1 times

  markw92 3 months, 2 weeks ago


A gateway endpoint is used as a target in a route table and does not use security groups.
upvoted 3 times

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: B
The correct solution to meet the requirements is Option B. A gateway VPC endpoint for Amazon S3 should be created in the Availability
Zone where the EC2 instance is located. This will allow the EC2 instance to access the S3 bucket directly, without routing through the
public internet. The endpoint should also be configured with appropriate security groups to allow access to the S3 bucket. Additionally, a
resource policy should be attached to the S3 bucket to only allow the EC2 instance's IAM role for access.
upvoted 15 times

  Buruguduystunstugudunstuy 9 months, 1 week ago


Option A is incorrect because an interface VPC endpoint for Amazon S3 would not provide a direct connection between the EC2
instance and the S3 bucket.

Option C is incorrect because using the nslookup tool to obtain the private IP address of the S3 bucket's service API endpoint would not
provide a secure connection between the EC2 instance and the S3 bucket.
Option D is incorrect because using the ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not
a secure method to connect the EC2 instance to the S3 bucket.
upvoted 2 times

  ChrisG1454 7 months, 2 weeks ago


There are two types VPC Endpoint:

Gateway endpoint
Interface endpoint

A Gateway endpoint:

1) Helps you to securely connect to Amazon S3 and DynamoDB


2) Endpoint serves as a target in your route table for traffic
3) Provide access to endpoint (endpoint, identity and resource policies)

An Interface endpoint:

1) Help you to securely connect to AWS services EXCEPT FOR Amazon S3 and DynamoDB
2) Powered by PrivateLink (keeps network traffic within AWS network)
3) Needs a elastic network interface (ENI) (entry point for traffic)
upvoted 16 times

  slackbot 1 month, 1 week ago


interface endpoint exists for S3 as well
upvoted 1 times

  mhmt4438 9 months ago


An interface VPC endpoint does provide a direct connection between the EC2 instance and the S3 bucket. It enables private
communication between instances in your VPC and resources in other services without requiring an internet gateway, a NAT device,
or a VPN connection.

Option A , which recommends creating an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located
and attaching a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access, is the correct solution for the
given scenario. It meets the requirement to ensure that no API calls and no data are routed through public internet routes and that
only the EC2 instance can have access to upload data to the S3 bucket.
upvoted 2 times

  Omok 8 months ago


In support, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-
endpoints-for-s3
upvoted 3 times
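
To make the gateway-versus-interface debate above more concrete, here is a hedged boto3 sketch of option B's moving parts: a gateway endpoint for S3 attached to a route table, plus a bucket policy that denies everything except the instance's IAM role. All IDs, ARNs, and the bucket name are placeholders, and a real policy would be tuned to the exact actions and principals needed.

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint for S3: attached to the VPC and referenced by a route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Bucket policy that blocks every principal except the instance's IAM role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptInstanceRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-upload-bucket",
            "arn:aws:s3:::example-upload-bucket/*",
        ],
        "Condition": {
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/ec2-uploader-role"
            }
        },
    }],
}
s3.put_bucket_policy(Bucket="example-upload-bucket", Policy=json.dumps(policy))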

  vijaykamal Most Recent  4 days ago


Selected Answer: A
Option B mentions creating a gateway VPC endpoint for Amazon S3, but gateway endpoints are primarily used for routing traffic to
Amazon S3 over Direct Connect or VPN connections, and they don't support attaching security groups. It's also essential to restrict access
with a resource policy on the S3 bucket, which is not mentioned in option B.

Options C and D suggest alternative approaches using DNS resolution and VPC route tables, but these options may not provide the same
level of security and isolation as the interface VPC endpoint in option A. Additionally, these options are more complex to set up and
maintain.
upvoted 1 times

  Mandar15 5 days, 14 hours ago


Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3
upvoted 1 times

  JKevin778 1 week ago


Selected Answer: B
Gateway Endpoint for S3 and DynamoDB, So B
upvoted 2 times

  Guru4Cloud 1 week, 4 days ago


Selected Answer: B
B is the correct answer.

To meet the requirements of no public internet access and only allowing the EC2 instance access, the solution is to:

Create a gateway VPC endpoint for S3 in the subnet where the EC2 instance is located. This keeps S3 access within the VPC and does not
route via the internet.
Attach appropriate security groups to the endpoint to control access.
Use a S3 bucket resource policy to only allow access from the EC2 instance IAM role.
upvoted 2 times
  TariqKipkemei 1 week, 6 days ago
Selected Answer: A
You can provision interface endpoints for s3.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html (see "AWS PrivateLink for Amazon S3")
upvoted 1 times

  5ab5e39 3 weeks, 3 days ago


B is correct. The outbound rules for the security group of instances that access Amazon S3 through the gateway endpoint must allow traffic to Amazon S3. You can use the prefix list ID for Amazon S3 as the destination in the outbound rule.
upvoted 1 times

  ukivanlamlpi 1 month, 1 week ago


Selected Answer: B
There is no such thing as an interface VPC endpoint for S3, only a gateway VPC endpoint for S3.
upvoted 2 times

  skh015 1 month ago


Interface endpoint exists
https://youtu.be/TqApkvJx5hw
upvoted 1 times

  A1975 2 months ago


Both the interface endpoint and the gateway endpoint are forms of VPC endpoint. The former is located inside a subnet and associated with a security group; the latter is attached at the VPC level and referenced in a route table.
upvoted 1 times

  aadityaravi8 2 months, 3 weeks ago


you cannot directly attach security groups to VPC endpoints in AWS.
Hence Option A is the right choice.
upvoted 2 times

  minkian_ 2 months, 3 weeks ago


Selected Answer: A
The answer is A. Gateway endpoints do not support PrivateLink.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3
upvoted 1 times

  darren_song 3 months ago


Selected Answer: B
Please refer to the following documents :

https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html

https://repost.aws/knowledge-center/connect-s3-vpc-endpoint

https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html

https://digitalcloud.training/vpc-interface-endpoint-vs-gateway-endpoint-in-aws/
upvoted 3 times

  Mia2009687 3 months ago


Selected Answer: B
There is no additional charge for using gateway endpoints.

Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your
VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not
allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you
must use an interface endpoint, which is available for an additional cost.

The topic mentioned no API or other public network service should access to S3.
So Gateway endpoint should be better.
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: A
By creating an interface VPC endpoint for Amazon S3 in the same subnet as the EC2 instance, the data transfer between the EC2 instance
and S3 can occur privately within the Amazon network, without traversing the public internet. This ensures secure and direct
communication between the EC2 instance and S3. Attaching a resource policy to the S3 bucket that allows access only from the IAM role
associated with the EC2 instance further restricts access to only the authorized instance.
B. Creating a gateway VPC endpoint for Amazon S3 would still involve routing through the public internet, which is not desired in this case.

C. Running nslookup or creating a specific route in the VPC route table does not provide the desired level of security and privacy, as the
traffic may still traverse public internet routes.

D. Using the publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not a
recommended approach, as IP addresses can change over time, and it does not provide the same level of security as using VPC endpoints.
upvoted 2 times
  abhishek2021 3 months, 2 weeks ago
Selected Answer: A
security group cannot be associated with Gateway Endpoint. so, the answer is A.
upvoted 1 times
Question #209 Topic 1

A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2
On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently
throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session
data management. The company is willing to make changes to code if needed.

What should the solutions architect do to ensure that the architecture supports distributed session data management?

A. Use Amazon ElastiCache to manage and store session data.

B. Use session affinity (sticky sessions) of the ALB to manage session data.

C. Use Session Manager from AWS Systems Manager to manage the session.

D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.

Correct Answer: A

Community vote distribution


A (100%)

  Buruguduystunstugudunstuy Highly Voted  9 months, 1 week ago


Selected Answer: A
The correct answer is A. Use Amazon ElastiCache to manage and store session data.

In order to support distributed session data management in this scenario, it is necessary to use a distributed data store such as Amazon
ElastiCache. This will allow the session data to be stored and accessed by multiple EC2 instances across multiple Availability Zones, which
is necessary for a scalable and highly available architecture.

Option B, using session affinity (sticky sessions) of the ALB, would not be sufficient because this would only allow the session data to be
stored on a single EC2 instance, which would not be able to scale across multiple Availability Zones.

Options C and D, using Session Manager and the GetSessionToken API operation in AWS STS, are not related to session data management
and would not be appropriate solutions for this scenario.
upvoted 16 times
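
To illustrate what option A means in code, here is a minimal sketch, assuming an ElastiCache for Redis cluster and the redis-py client; the endpoint hostname is a placeholder. Any EC2 instance behind the ALB can read or write the same session entry, which is what makes the session data "distributed".

import json
import redis  # redis-py client, assumed to be installed on the instances

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(host="sessions.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=3600):
    # Store the session as JSON with an expiry so stale sessions age out.
    r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else None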

  TariqKipkemei Most Recent  1 week, 6 days ago


Selected Answer: A
Yep, I agree with you guys; this is one of the use cases for Amazon ElastiCache.
It was designed to store ephemeral session data to quickly personalize gaming, e-commerce, social media, and online applications with microsecond response times.
https://aws.amazon.com/elasticache/
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
The correct answer is A. Use Amazon ElastiCache to manage and store session data.
upvoted 1 times

  cookieMr 2 months, 3 weeks ago


Selected Answer: A
ElastiCache is a managed in-memory data store service that is well-suited for managing session data in a distributed architecture. It
provides high-performance, scalable, and durable storage for session data, allowing multiple EC2 instances to access and share session
data seamlessly. By using ElastiCache, the application can offload the session management workload from the EC2 instances and leverage
the distributed caching capabilities of ElastiCache for improved scalability and performance.

Option B, using session affinity (sticky sessions) of the ALB, is not the best choice for distributed session data management because it ties
each session to a specific EC2 instance. As the instances scale up and down frequently, it can lead to uneven load distribution and may not
provide optimal scalability.

Options C and D are not applicable for managing session data. AWS Systems Manager's Session Manager is primarily used for secure
remote shell access to EC2 instances, and the AWS STS GetSessionToken API operation is used for temporary security credentials and not
session data management.
upvoted 1 times

  cookieMr 3 months ago


ElastiCache is a managed in-memory data store service that is well-suited for managing session data in a distributed architecture. It
provides high-performance, scalable, and durable storage for session data, allowing multiple EC2 instances to access and share session
data seamlessly. By using ElastiCache, the application can offload the session management workload from the EC2 instances and leverage
the distributed caching capabilities of ElastiCache for improved scalability and performance.
Option B, using session affinity (sticky sessions) of the ALB, is not the best choice for distributed session data management because it ties
each session to a specific EC2 instance. As the instances scale up and down frequently, it can lead to uneven load distribution and may not
provide optimal scalability.

Options C and D are not applicable for managing session data. AWS Systems Manager's Session Manager is primarily used for secure
remote shell access to EC2 instances, and the AWS STS GetSessionToken API operation is used for temporary security credentials and not
session data management.
upvoted 2 times
  Abrar2022 3 months, 2 weeks ago
Selected Answer: A
A. Use Amazon ElastiCache to manage and store session data.
- Correct. - Session data is managed at the application-layer, and a distributed cache should be used

B. Use session affinity (sticky sessions) of the ALB to manage session data.
- Wrong. This tightly couples the individual EC2 instances to the session data, and requires additional logic in the ALB. When scale-in
happens, the session data stored on individual EC2 instances is destroyed
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: A
correct answer is A as instance are getting up and down.
upvoted 1 times

  inseong 9 months, 2 weeks ago


Hey, by the way, where is question 210?
upvoted 1 times

  noche 7 months, 1 week ago


https://www.examtopics.com/discussions/amazon/view/94992-exam-aws-certified-solutions-architect-associate-saa-c03/
Right here.
upvoted 1 times

  NikaCZ 9 months, 2 weeks ago


Selected Answer: A
Amazon ElastiCache to manage and store session data.
upvoted 1 times

  k1kavi1 9 months, 2 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/46412-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Shasha1 9 months, 3 weeks ago


A
Amazon ElastiCache to manage and store session data. This solution will allow the application to automatically scale across multiple
Availability Zones without losing session data, as the session data will be stored in a cache that is accessible from any EC2 instance.
Additionally, using Amazon ElastiCache will enable the company to easily manage and scale the cache as needed, without requiring any
changes to the application code. Option C is not correct because Session Manager from AWS Systems Manager will not provide the necessary support for distributed session data management. Session Manager is a tool for managing and tracking sessions on EC2 instances, but it does not provide a mechanism for storing and managing session data in a distributed environment.
upvoted 3 times

  TelaO 10 months ago


better justification found here...
https://www.examtopics.com/discussions/amazon/view/46412-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times

  kmaneith 10 months ago


why not C?
upvoted 1 times

  leonnnn 10 months, 1 week ago


Selected Answer: A
ALB sticky sessions can keep requests going to the same backend instance. But the question says "distributed session management" and the company is "willing to change code", so I think A is better.
upvoted 3 times

  Nigma 10 months, 1 week ago


Selected Answer: A
Answer : A
upvoted 1 times
Question #210 Topic 1

A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing
scaling problems during peak traffic hours. The current architecture includes the following:

• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders

The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.

A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic
hours. The solution must optimize utilization of the company’s AWS resources.

Which solution meets these requirements?

A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group’s
minimum capacity according to peak workload values.

B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke
an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.

C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.

D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups
based on this metric.

Correct Answer: C

Community vote distribution


D (80%) C (20%)

  TungPham Highly Voted  6 months, 4 weeks ago


Selected Answer: D
When the backlog per instance reaches the target value, a scale-out event will happen. Because the backlog per instance is already 150
messages (1500 messages / 10 instances), your group scales out, and it scales out by five instances to maintain proportion to the target
value.
Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to
determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's
running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
upvoted 5 times
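
The backlog-per-instance calculation described above can be published as a custom CloudWatch metric with a small script like the following boto3 sketch (the queue URL, Auto Scaling group name, and namespace are placeholders). A target tracking policy can then scale on this metric, as sketched a little further down in this discussion.

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/order-fulfillment"  # placeholder
ASG_NAME = "order-fulfillment-asg"  # placeholder

def publish_backlog_per_instance():
    # Queue depth: messages available for retrieval.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    # Running capacity: instances in the InService state.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    in_service = sum(
        1 for i in group["Instances"] if i["LifecycleState"] == "InService"
    ) or 1  # avoid dividing by zero

    cloudwatch.put_metric_data(
        Namespace="OrderProcessing",
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
            "Value": backlog / in_service,
        }],
    )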

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: D
D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment.
Configure the EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto
Scaling groups based on this metric.
upvoted 1 times

  n43u435b543ht2b 1 month, 3 weeks ago


Selected Answer: D
C is incorrect as scaling based on the number of "notifications" doesn't make logical sense. This means that both the order collection and
fulfilment instances would scale in parallel, but they have clearly said that the collection is processing quickly while the fulfilment is
struggling. Therefore, we should scale the pool when there is a backlog building in a respective queue - not just based on the number of
incoming requests.
upvoted 1 times

  argl1995 3 months ago


SQS auto-scales by default so I don't think we need to mention it explicitly. Option D should be correct.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. This approach focuses solely on CPU utilization, which may not accurately reflect the scaling needs of the order collection and
fulfillment processes. It does not address the need for decoupling and reliable message processing.

B. While this approach incorporates alarms to trigger additional Auto Scaling groups, it lacks the decoupling and reliable message
processing provided by using SQS queues. It may lead to inefficient scaling and potential data loss.

C. Although using SQS queues is a step in the right direction, scaling solely based on queue notifications may not provide optimal resource
utilization. It does not consider the backlog per instance and does not allow for fine-grained control over scaling.

Overall, option D, which involves using SQS queues for order collection and fulfillment, creating a metric based on backlog per instance
calculation, and scaling the Auto Scaling groups accordingly, is the most suitable solution to address the scaling problems while
optimizing resource utilization and ensuring reliable message processing.
upvoted 2 times
  studynoplay 4 months, 2 weeks ago
Selected Answer: D
C is incorrect. "based on notifications that the queues send" SQS does not send notification
upvoted 2 times

  mandragon 4 months, 4 weeks ago


Selected Answer: C
D is not correct because it requires more operational overhead and complexity than option C which is simpler and more cost-effective. It
uses the existing queue metrics that are provided by Amazon SQS and does not require creating or publishing any custom metrics. You
can use target tracking scaling policies to automatically maintain a desired backlog per instance ratio without having to calculate or
monitor it yourself.
upvoted 2 times

  JayBee65 8 months, 1 week ago


Selected Answer: D
Scale based on queue length
upvoted 2 times

  Rudraman 8 months, 2 weeks ago


answer is D.
read question again
upvoted 2 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: D
The number of instances in your Auto Scaling group can be driven by how long it takes to process a message and the acceptable amount
of latency (queue delay).
The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
upvoted 1 times
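
Continuing the sketch above, a target tracking policy can keep the custom BacklogPerInstance metric near an acceptable value; the group name, namespace, and the target of 100 messages per instance are placeholders, not values from the question.

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking on the custom backlog-per-instance metric.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-fulfillment-asg",
    PolicyName="keep-backlog-per-instance-near-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "OrderProcessing",
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "order-fulfillment-asg"}
            ],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)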

  Aseem8888 8 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  Rudraman 8 months, 2 weeks ago


C
Need to auto-scale the SQS queue.
upvoted 1 times

  JayBee65 8 months, 1 week ago


Why would you scale based on "Scale the Auto Scaling groups based on notifications that the queues send"? Would it not make 1000 times more sense to scale based on queue length, i.e. "Create a metric based on a backlog per instance calculation"?
upvoted 3 times

  techhb 8 months, 2 weeks ago


Selected Answer: D
I think it's D, as here we are creating a new metric to calculate the load on each EC2 instance.
upvoted 2 times

  wmp7039 8 months, 2 weeks ago


Selected Answer: D
C is incorrect as SQS doesn't send notifications and needs to be polled by the consumers
upvoted 2 times

  KM01 8 months, 2 weeks ago


I think, D
upvoted 1 times

  swolfgang 8 months, 2 weeks ago


Selected Answer: C
I think C, but I'm not sure; I think both would solve the problem.
upvoted 1 times

  JayBee65 8 months, 1 week ago


No they don't. How exactly would you scale based on a queue sending a message? Scale up when it sends a message? Scale up every
time it sends a message? This takes no account of how quickly messages are processed.
upvoted 2 times
Question #211 Topic 1

A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS,
Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company
resources are tagged with a tag name of “application” and a value that corresponds to each application. A solutions architect must provide the
quickest solution for identifying all of the tagged components.

Which solution meets these requirements?

A. Use AWS CloudTrail to generate a list of resources with the application tag.

B. Use the AWS CLI to query each service across all Regions to report the tagged components.

C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.

D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.

Correct Answer: D

Community vote distribution


D (100%)

  TariqKipkemei 1 week, 5 days ago


Selected Answer: D
Tags are key and value pairs that act as metadata for organizing your AWS resources
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A is not the quickest solution because CloudTrail primarily focuses on capturing and logging API activity. While it can provide information
about resource changes, it may not provide a comprehensive and quick way to identify all the tagged components across multiple services
and Regions.

B involves manually querying each service using the AWS CLI, which can be time-consuming and cumbersome, especially when dealing
with multiple services and Regions. It is not the most efficient solution for quickly identifying tagged components.

C is focused on analyzing logs rather than directly identifying the tagged components. While CloudWatch Logs Insights can help extract
information from logs, it may not provide a straightforward and quick way to gather a consolidated list of all tagged components across
different services and Regions.

D is the quickest solution as it leverages the Resource Groups Tag Editor, which is specifically designed for managing and organizing
resources based on tags. It offers a centralized and efficient approach to generate a report of tagged components across multiple services
and Regions.
upvoted 4 times
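
For option D, the console's Tag Editor is backed by the Resource Groups Tagging API, so the same lookup can be scripted; here is a small boto3 sketch with placeholder Region and tag values. The API is regional, so a complete inventory loops over the Regions the company uses.

import boto3

REGIONS = ["us-east-1", "eu-west-1"]  # placeholder list of the company's Regions

for region in REGIONS:
    tagging = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = tagging.get_paginator("get_resources")
    # Return every resource carrying application=payments (placeholder value).
    for page in paginator.paginate(
        TagFilters=[{"Key": "application", "Values": ["payments"]}]
    ):
        for resource in page["ResourceTagMappingList"]:
            print(region, resource["ResourceARN"])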

  Bmarodi 4 months, 1 week ago


Selected Answer: D
A solutions architect can provide the quickest solution for identifying all of the tagged components by running a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag, hence option D is the right answer.
upvoted 2 times

  Dondozzy 6 months, 3 weeks ago


Selected Answer: D
The answer is D
upvoted 2 times

  sh0811 8 months ago


Selected Answer: D
D is correct.
upvoted 2 times

  Training4aBetterLife 8 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html
upvoted 2 times
  Rudraman 8 months, 2 weeks ago
Answer is D.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: D
validated
https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html
upvoted 1 times

  kbaruu 8 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  waiyiu9981 8 months, 3 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/51352-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #212 Topic 1

A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5
GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up
to 3 months. The company needs the most cost-effective solution that will not increase retrieval time.

Which S3 storage class should the company use to meet these requirements?

A. S3 Intelligent-Tiering

B. S3 Glacier Instant Retrieval

C. S3 Standard

D. S3 Standard-Infrequent Access (S3 Standard-IA)

Correct Answer: A

Community vote distribution


A (71%) D (19%) 10%

  techhb Highly Voted  8 months, 2 weeks ago


Selected Answer: A
S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent
Access tier and after 90 days of no access to the Archive Instant Access tier.
upvoted 10 times

  Devsin2000 4 months, 3 weeks ago


https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/
upvoted 2 times
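
As a concrete sketch of option A (bucket and file names are placeholders, not part of the question), the daily export can simply be uploaded with the INTELLIGENT_TIERING storage class, and an optional lifecycle rule can expire objects after roughly three months.

import boto3

s3 = boto3.client("s3")

# Write the daily export straight into S3 Intelligent-Tiering.
s3.upload_file(
    Filename="/tmp/export-2023-09-01.dump",
    Bucket="example-export-bucket",
    Key="exports/2023-09-01.dump",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)

# Optional: stop paying for exports older than ~3 months.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-export-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-exports-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "exports/"},
            "Expiration": {"Days": 90},
        }]
    },
)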

  TariqKipkemei Most Recent  1 week, 5 days ago


Selected Answer: A
access pattern for the data is variable and changes rapidly = S3 Intelligent-Tiering
upvoted 1 times

  Sultanoid 1 month ago


Selected Answer: C
There are two viable options, A and C.
Intelligent-Tiering (A) might put your data in the Archive or Infrequent Access tier if it is not used for 80 days and then accessed heavily for the last 10 days of the period, which would cause delays in retrieval or extra access costs.
Option C can be optimized with a 90-day expiration lifecycle policy and would be the most efficient and reliable solution to satisfy the needs.
upvoted 3 times

  mtmayer 2 months ago


Has to be C. S3 Intelligent-Tiering is for data with varying or unknown access needs. Not the case here. We know data must be highly
available for 30 days.
upvoted 1 times

  maheshudara 2 months, 4 weeks ago


Selected Answer: A
key - "Changing access patterns"
upvoted 1 times

  maheshudara 2 months, 4 weeks ago


"The S3 access pattern for the data is variable and changes rapidly"
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
Option A is designed for objects with changing access patterns, but it may not be the most cost-effective solution for long-term storage of
the data, especially if the access pattern is variable and changes rapidly.

Option B is optimized for long-term archival storage and may not provide the immediate accessibility required by the company. Retrieving
data from Glacier storage typically incurs a longer retrieval time compared to other storage classes.

Option C is the appropriate choice for immediate availability and quick access to the data. It offers high durability, availability, and low
latency access, making it suitable for the company's needs. However, it is not the most cost-effective option for long-term storage.
Option D is a more cost-effective storage class compared to S3 Standard, especially for data that is accessed less frequently. However,
since the access pattern for the data is variable and changes rapidly, S3 Standard-IA may not be the most cost-effective solution, as it
incurs additional retrieval fees for frequent access.
upvoted 2 times
  markw92 3 months, 2 weeks ago
Answer A: S3 Intelligent-Tiering is the recommended storage class for data with unknown, changing, or unpredictable access patterns,
independent of object size or retention period, such as data lakes, data analytics, and new applications.
upvoted 1 times

  AlankarJ 3 months, 3 weeks ago


The question specifically says data should be immediately available, so D can't be true, as S3 Infrequent Access is for data which is not accessed frequently. Don't forget: up to 3 months.
upvoted 2 times

  ruqui 4 months, 1 week ago


Selected Answer: A
Amazon S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns
change, without performance impact or operational overhead
upvoted 1 times

  ErfanKh 5 months, 3 weeks ago


Selected Answer: D
I think D and ChatGPT says D as well
upvoted 1 times

  ALLVCAP01 3 months ago


ChatGPT isn't perfect yet. Its answers are often wrong when it comes to these problems.
upvoted 1 times

  mahejosh 3 months, 3 weeks ago


ChatGpt is cheeks, eff that
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


ChatGPT is not always correct. Use your intelligence to answer questions
upvoted 3 times

  Grace83 6 months, 2 weeks ago


Definitely A
upvoted 1 times

  Russs99 6 months, 3 weeks ago


Selected Answer: D
D is the correct answer for this use case
upvoted 1 times

  neosis91 7 months, 3 weeks ago


Selected Answer: D
Response D, not A
S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves data to the most cost-effective access tier based on
changing access patterns. Although it offers cost savings, it also introduces additional latency and retrieval time into the data retrieval
process, which may not meet the requirement of "immediately available" data.

On the other hand, S3 Standard-Infrequent Access (S3 Standard-IA) provides low cost storage with low latency and high throughput
performance. It is designed for infrequently accessed data that can be recreated if lost, and can be retrieved in a timely manner if
required. It is a cost-effective solution that meets the requirement of immediately available data and remains accessible for up to 3
months.
upvoted 2 times

  Rudraman 8 months, 2 weeks ago


Changes rapidly and must be immediately available, so the answer is AAAAA.
upvoted 4 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
A looks correct
upvoted 3 times

  Parsons 8 months, 2 weeks ago


Selected Answer: A
"The S3 access pattern for the data is variable and changes rapidly" => Use S3 intelligent tiering with smart enough to transit the prompt
storage class.
upvoted 4 times
  mhmt4438 8 months, 2 weeks ago
Selected Answer: D
D. S3 Standard-Infrequent Access (S3 Standard-IA)

S3 Standard-IA is the most cost-effective storage class that meets the company's requirements. It provides immediate access to the data,
and the data remains accessible for up to 3 months. S3 Standard-IA is optimized for infrequently accessed data, which is suitable for the
company's use case of exporting the database once a day. This storage class also has a lower retrieval fee compared to S3 Glacier, which is
important for the company as the S3 access pattern for the data is variable and changes rapidly. S3 Intelligent-Tiering and S3 Standard are
not the best choice in this case because they are designed for frequently accessed data and have higher retrieval fees
upvoted 2 times

  Joxtat 8 months, 2 weeks ago


The correct answer is A.
The S3 access pattern for the data is variable and changes rapidly.
upvoted 5 times
Question #213 Topic 1

A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB)
against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational
staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment.

What should a solutions architect recommend to meet these requirements?

A. Configure AWS WAF rules and associate them with the ALB.

B. Deploy the application using Amazon S3 with public hosting enabled.

C. Deploy AWS Shield Advanced and add the ALB as a protected resource.

D. Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then passes the traffic to the current
ALB.

Correct Answer: A

Community vote distribution


A (70%) C (30%)

  ShinobiGrappler Highly Voted  8 months, 2 weeks ago


Selected Answer: C
C --- Read and understand the question. *The company needs to reduce its share of responsibility in managing, updating, and securing
servers for its AWS environment* Go with AWS Shield advanced --This is a managed service that includes AWS WAF, custom mitigations,
and DDoS insight.
upvoted 11 times

  Guru4Cloud 2 weeks, 6 days ago


I don't know how this comment got 11 upvotes.
A. To filter traffic and protect against application attacks like cross-site scripting and SQL injection, the company can use AWS Web Application Firewall with managed rules on the Application Load Balancer. This provides security with minimal infrastructure and operations overhead.
upvoted 3 times

  Steve_4542636 7 months ago


You stated, "This is a managed service that includes AWS WAF, custom mitigations, and DDoS insight." and you are correct. However,
the service you would actually have to setup to prevent SQL injection attacks is WAF.
upvoted 7 times

  darn 5 months, 1 week ago


Exactly, that's like saying let's implement Network Firewall Manager to manage WAF. Absurd!
upvoted 3 times

  arjundevops 5 months, 2 weeks ago


Brother, the answer is A. Read the question once again or ask ChatGPT for a more in-depth analysis.
upvoted 2 times

  TariqKipkemei Most Recent  1 week, 5 days ago


Selected Answer: A
AWS WAF helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive
resources. Protect against vulnerabilities and exploits such as SQL injection or Cross site scripting attacks.
upvoted 2 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
To filter traffic and protect against application attacks like cross-site scripting and SQL injection, the company can use AWS Web
Application Firewall with managed rules on the Application Load Balancer. This provides security with minimal infrastructure and
operations overhead.
upvoted 2 times

  Undisputed 2 months ago


Selected Answer: A
To achieve proper traffic filtering and protect the Application Load Balancer (ALB) against common application-level attacks, such as cross-
site scripting (XSS) or SQL injection, while minimizing infrastructure and operational overhead, the company can consider using AWS Web
Application Firewall (WAF) with AWS Managed Rules.
upvoted 2 times
  vini15 2 months, 2 weeks ago
A-- Keywords(cross-site scripting or SQL injection)
upvoted 3 times

  animefan1 3 months ago


Selected Answer: A
WAF benefits are rules, SQL injection & XSS protection
upvoted 1 times

  sbnpj 3 months ago


Selected Answer: A
Not C, because AWS Shield Advanced provides DDoS protection; it does not specifically address application-level attacks such as XSS or SQL injection.
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: A
By configuring AWS WAF rules and associating them with the ALB, the company can filter and block malicious traffic before it reaches the
application. AWS WAF offers pre-configured rule sets and allows custom rule creation to protect against common vulnerabilities like XSS
and SQL injection.

Option B does not provide the necessary security and traffic filtering capabilities to protect against application-level attacks. It is more
suitable for hosting static content rather than implementing security measures.

Option C is focused on DDoS protection rather than application-level attacks like XSS or SQL injection. AWS Shield Advanced therefore does not address the specific requirements mentioned in the scenario.

Option D involves maintaining and securing additional infrastructure, which goes against the requirement of reducing responsibility and
relying on minimal operational staff.
upvoted 3 times
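
To show what option A involves, here is a hedged wafv2 sketch that creates a regional web ACL with the AWS managed SQL injection rule group and associates it with the ALB; the ACL name, metric names, and ARNs are placeholders, and the common rule set (which covers XSS) would be added the same way.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Regional web ACL with the AWS managed SQL injection rule group.
acl = wafv2.create_web_acl(
    Name="mobile-app-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "mobileAppWebAcl",
    },
    Rules=[{
        "Name": "aws-managed-sqli",
        "Priority": 0,
        "OverrideAction": {"None": {}},
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqliRules",
        },
    }],
)

# Attach the web ACL to the existing ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/mobile-app-alb/0123456789abcdef",
)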

  fishy_resolver 3 months, 3 weeks ago


Selected Answer: C
With Shield Advanced you get centralized protection management; this allows you to use AWS Firewall Manager (included with AWS Shield) with policies that automatically apply WAF to resources. Massive sales pitch, see the link: https://aws.amazon.com/shield/features/
upvoted 1 times

  Terry_123 4 months, 2 weeks ago


Selected Answer: A
Shield is not aimed to handle SQL injection.
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: A
WAF = cross-site scripting or SQL injection
Shield/Shield Advanced = DDoS
upvoted 2 times

  Abhineet9148232 5 months ago


Selected Answer: A
Even with AWS Shield Advanced, you would still need to configure AWS WAF rules (only its cost is included with Shield Advanced) to protect against common application-level attacks such as cross-site scripting or SQL injection.

Since there is no mention of protection against DDoS attacks, C is more costly and not useful.
upvoted 2 times

  SkyZeroZx 5 months, 1 week ago


Selected Answer: A
WAF == application-level attacks, such as cross-site scripting or SQL injection
A
upvoted 2 times

  arjundevops 5 months, 2 weeks ago


Selected Answer: A
The answer is A. WAF will protect the infra from XSS-type injection attacks,
while Shield is used to protect the infra from DDoS attacks.

Don't get confused.

The only trick to getting the right answer for this question is to read the question multiple times, even when you are very confident about the answer you chose on the first attempt.
upvoted 4 times

  Kenzo 6 months ago


Answer is A
upvoted 1 times

  [Removed] 6 months ago


Selected Answer: A
AWS WAF protects against SQL injection.
upvoted 1 times

  supppp 6 months, 1 week ago


CCCCCCCCCCCCCCCCCCCCC
upvoted 1 times
Question #214 Topic 1

A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache
Parquet format and must store the files in a transformed data bucket.

Which solution will meet these requirements with the LEAST development effort?

A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS)
to write files to the transformed data bucket.

B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify
the transformed data bucket in the output step.

C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use
the job definition to submit a job. Specify an array job as the job type.

D. Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification
for the S3 bucket. Specify the Lambda function as the destination for the event notification.

Correct Answer: D

Community vote distribution


B (100%)

  Babba Highly Voted  8 months, 2 weeks ago


Selected Answer: B
It looks like AWS Glue allows fully managed CSV to Parquet conversion jobs: https://docs.aws.amazon.com/prescriptive-
guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
upvoted 10 times

  nileeka97 Most Recent  1 week, 1 day ago


Selected Answer: B
Parquet format ========> Amazon Glue
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: B
B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data.
Specify the transformed data bucket in the output step.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
AWS Glue is a fully managed ETL service that simplifies the process of preparing and transforming data for analytics. Using AWS Glue
requires minimal development effort compared to the other options.

Option A requires more development effort as it involves writing a Spark application to transform the data. It also introduces additional
infrastructure management with the EMR cluster.

Option C requires writing and managing custom Bash scripts for data transformation. It requires more manual effort and does not
provide the built-in capabilities of AWS Glue for data transformation.

Option D requires developing and managing a custom Lambda for data transformation. While Lambda can handle the transformation, it
requires more effort compared to AWS Glue, which is specifically designed for ETL operations.

Therefore, option B provides the easiest and least development effort by leveraging AWS Glue's capabilities for data discovery,
transformation, and output to the transformed data bucket.
upvoted 3 times
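
To make option B concrete, below is a minimal sketch of the kind of script an AWS Glue ETL job could run to convert the .csv files to Parquet. It is written in Python for the Glue (PySpark) runtime; the bucket paths are placeholders, and in practice the source would usually be the table created by the Glue crawler rather than a raw S3 path.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the daily .csv files from the source bucket (placeholder path).
source = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://reporting-raw-bucket/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Write them out as Parquet to the transformed data bucket (placeholder path).
glueContext.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://reporting-transformed-bucket/"},
    format="parquet",
)

job.commit()
```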

  markw92 3 months, 2 weeks ago


Least development effort means lambda. Glue also works but more overhead and cost. A simple lambda like this
https://github.com/ayshaysha/aws-csv-to-parquet-converter/blob/main/csv-parquet-converter.py
can be used to convert as soon as you see files in s3 bucket.
upvoted 3 times

  achevez85 6 months, 4 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-
parquet.html
upvoted 1 times

  Rudraman 8 months, 2 weeks ago


ETL = Glue
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
AWS Glue Crawler is for ETL
upvoted 1 times

  kbaruu 8 months, 2 weeks ago


Selected Answer: B
The correct answer is B
upvoted 1 times

  Mamiololo 8 months, 2 weeks ago


B is the answer
upvoted 2 times

  swolfgang 8 months, 2 weeks ago


Selected Answer: B
It should be B.
upvoted 1 times

  marcioicebr 8 months, 2 weeks ago


Selected Answer: B
According to the documentation, the correct answer is B.

https://docs.aws.amazon.com/pt_br/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-
parquet.html
upvoted 1 times

  AHUI 8 months, 2 weeks ago


B is the ans
upvoted 1 times
  mhmt4438 8 months, 2 weeks ago
Selected Answer: B
Answer is B
upvoted 1 times

  Kayamables 8 months, 2 weeks ago


Option B sounds more plausible to me.
upvoted 1 times
Question #215 Topic 1

A company has 700 TB of backup data stored in network attached storage (NAS) in its data center. This backup data needs to be accessible for
infrequent regulatory requests and must be retained for 7 years. The company has decided to migrate this backup data from its data center to AWS.
The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available
for data transfer.

What should a solutions architect do to migrate and store the data at the LOWEST cost?

A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.

B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3
Glacier.

C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to
Amazon S3 Glacier Deep Archive.

D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises
NAS storage to Amazon S3 Glacier.

Correct Answer: A

Community vote distribution


A (100%)

  TariqKipkemei 1 week, 4 days ago


Selected Answer: A
Terabytes, low costs, limited time = AWS snowball devices
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
upvoted 1 times

  voccer 2 months, 3 weeks ago


Selected Answer: A
hundreds of Terabytes => always use Snowball
upvoted 4 times

  gosai90786 3 months ago


One DataSync agent can use 10 GBps and can set a bandwidth limit.
So total time = (700 x 1000) GB / 10 GBps = 70000 sec = 19.4 days.
Using multiple Snowball devices will involve ordering them from AWS, setting them up in your data center for the copy, and then incurring the
shipping cost for to-and-fro movement to your AWS cloud.
If the time constraint were critical, say 1 week, then Snowball would have been a viable option. But here we have 30 days, so DataSync will be
less costly (takes ~19 days).
upvoted 1 times

  slackbot 1 month, 1 week ago


your math is wrong mate, and they have a 0.5 Gbps connection, not 10 GBps
500 Mbps = roughly 60 MBps = roughly 0.06 GB per second
30 x 24 x 3600 x 0.06 GB = roughly 155 TB
this is way short of 700 TB
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: A
By ordering Snowball devices, the company can transfer the 700 TB of backup data from its data center to AWS. Once the data is
transferred to S3, a lifecycle policy can be applied to automatically transition the files from the S3 Standard storage class to the cost-
effective Amazon S3 Glacier Deep Archive storage class.

Option B would require continuous data transfer over the public internet, which could be time-consuming and costly given the large
amount of data. It may also require significant bandwidth allocation.

Option C would involve additional costs for provisioning and maintaining the dedicated connection, which may not be necessary for a one-
time data migration.

Option D could be a viable option, but it may incur additional costs for deploying and managing the DataSync agent.
Therefore, option A is the recommended choice as it provides a secure and efficient data transfer method using Snowball devices and
allows for cost optimization through lifecycle policies by transitioning the data to S3 Glacier Deep Archive for long-term storage.
upvoted 2 times
  arjundevops 5 months, 2 weeks ago
A is the correct answer.
Even though they have 500 Mbps internet speed, it will take around 130 days to transfer the data from on premises to AWS.

so they have only 1 option which is Snowball devices


upvoted 2 times

  Paras043 5 months, 3 weeks ago


Selected Answer: A
A is the correct one
upvoted 1 times

  CapJackSparrow 6 months, 3 weeks ago


Q: What is AWS Snowball Edge?

AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and
compute power that provides select AWS services for use in edge locations. Snowball Edge comes in two options, Storage Optimized and
Compute Optimized, to support local data processing and collection in disconnected environments such as ships, windmills, and remote
factories. Learn more about its features here.

Q: What happened with the original 50 TB and 80 TB AWS Snowball devices?

The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for
data transfer.

Q: Can I still order the original Snowball 50 TB and 80 TB devices?

No. For data transfer needs now, please select the Snowball Edge Storage Optimized devices.
upvoted 1 times

  vherman 7 months ago


Selected Answer: A
Snowball
upvoted 1 times

  KZM 7 months, 2 weeks ago


9 Snowball devices are needed to migrate the 700TB of data.
upvoted 1 times

  KZM 7 months, 2 weeks ago


700TB of Data can not be transferred through a 500Mbps link within one month.

Total data that can be transferred in one month = bandwidth x time


= (500 Mbps / 8 bits per byte) x (30 days x 24 hours x 3600 seconds per hour)
= 648,000 GB or 648 TB
This is calculated theoretically with the maximum available situation. Due to a number of factors, the actual total transferred Data may
be less than 645 TB.
upvoted 3 times

  mandragon 4 months, 3 weeks ago


Good thinking. Agree with the solution. Only the calculation is wrong; it should give 162 TB as a result.
upvoted 3 times
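
A quick back-of-the-envelope check (plain Python, decimal units, assuming 100% utilisation of the 500 Mbps link) shows why the online options cannot meet the 1-month deadline:

```python
# Time to push 700 TB over a 500 Mbps link at full, uninterrupted utilisation.
data_bits = 700 * 10**12 * 8            # 700 TB expressed in bits
link_bps = 500 * 10**6                  # 500 Mbps

days = data_bits / link_bps / (24 * 3600)
print(f"~{days:.0f} days")              # ~130 days, far beyond the 1-month window

# Capacity of the link in one month, for comparison.
month_tb = (link_bps / 8) * 30 * 24 * 3600 / 10**12
print(f"~{month_tb:.0f} TB per month")  # ~162 TB, well short of 700 TB
```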

  Rudraman 8 months, 2 weeks ago


Snowball devices. The answer is A.
upvoted 2 times

  wmp7039 8 months, 2 weeks ago


A is incorrect as DC is an expensive option. Correct answer should be C as the company already has 500Mbps that can be used for data
transfer. By consuming all the available internet bandwidth, data transfer will complete in 3 hours 6 mins -
https://www.omnicalculator.com/other/data-transfer
upvoted 1 times

  wmp7039 8 months, 2 weeks ago


Ignore please, miscalculated time to transfer, it will take 129 days and will breach the 1 month requirement. A is correct.
upvoted 5 times

  kbaruu 8 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times
  swolfgang 8 months, 2 weeks ago
A is correct but not less expensive. I think it should be D.
upvoted 1 times

  Parsons 8 months, 2 weeks ago


Selected Answer: A
A is correct.

Cannot copy files directly from on-prem to S3 Glacier with DataSync. It should be S3 Standard first, then configure an S3 Lifecycle policy to
transition to Glacier => Exclude D.
upvoted 1 times

  PDR 8 months ago


yes you can - https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#using-storage-classes
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
The correct answer is A
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Less expensive = Data Sync i guess (D)
upvoted 2 times

  Pindol 8 months, 1 week ago


"The migration must be complete within 1 month" you can't complete this with transfer 500Mb/s. With that speed we need 129days to
transfer. Snowball is only way to do it in desired time.
upvoted 2 times
Question #216 Topic 1

A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses the S3 bucket as the origin for an
Amazon CloudFront distribution. The company did not set encryption on the S3 bucket before the objects were loaded. A solutions architect needs
to enable encryption for all existing objects and for all objects that are added to the S3 bucket in the future.

Which solution will meet these requirements with the LEAST amount of effort?

A. Create a new S3 bucket. Turn on the default encryption settings for the new S3 bucket. Download all existing objects to temporary local
storage. Upload the objects to the new S3 bucket.

B. Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory feature to create a .csv file that lists the unencrypted
objects. Run an S3 Batch Operations job that uses the copy command to encrypt those objects.

C. Create a new encryption key by using AWS Key Management Service (AWS KMS). Change the settings on the S3 bucket to use server-side
encryption with AWS KMS managed encryption keys (SSE-KMS). Turn on versioning for the S3 bucket.

D. Navigate to Amazon S3 in the AWS Management Console. Browse the S3 bucket’s objects. Sort by the encryption field. Select each
unencrypted object. Use the Modify button to apply default encryption settings to every unencrypted object in the S3 bucket.

Correct Answer: B

Community vote distribution


B (85%) Other

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: B
Step 1: S3 Inventory to get the object list.
Step 2 (if needed): Use S3 Select to filter.
Step 3: S3 Batch Operations to encrypt the unencrypted objects.

For objects added going forward, use default encryption.


upvoted 11 times

  Parsons 8 months, 2 weeks ago


Useful ref link: https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
upvoted 8 times
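
As an illustration of step 3, here is a hedged boto3 sketch of creating the S3 Batch Operations copy job that re-writes the objects listed in an inventory-derived manifest so they pick up the bucket's default encryption. The account ID, ARNs, manifest ETag, Region, and IAM role are all placeholders that would have to exist already.

```python
import uuid
import boto3

# Placeholders -- the manifest usually comes from S3 Inventory, and the role
# must allow s3:GetObject/s3:PutObject on the bucket.
ACCOUNT_ID = "111122223333"
BUCKET_ARN = "arn:aws:s3:::my-website-bucket"
MANIFEST_ARN = "arn:aws:s3:::my-inventory-bucket/manifests/unencrypted-objects.csv"
MANIFEST_ETAG = "example-etag"
BATCH_ROLE_ARN = "arn:aws:iam::111122223333:role/s3-batch-encrypt-role"

s3control = boto3.client("s3control", region_name="us-east-1")

response = s3control.create_job(
    AccountId=ACCOUNT_ID,
    ConfirmationRequired=False,
    ClientRequestToken=str(uuid.uuid4()),
    Description="Re-copy unencrypted objects so they pick up default encryption",
    Priority=10,
    RoleArn=BATCH_ROLE_ARN,
    # Copy each object onto itself; with default bucket encryption turned on,
    # the new copy is written encrypted (SSE-S3 in this sketch).
    Operation={
        "S3PutObjectCopy": {
            "TargetResource": BUCKET_ARN,
            "MetadataDirective": "COPY",
        }
    },
    # CSV manifest listing bucket,key for every unencrypted object.
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {"ObjectArn": MANIFEST_ARN, "ETag": MANIFEST_ETAG},
    },
    # Completion report for any objects that failed to copy.
    Report={
        "Bucket": "arn:aws:s3:::my-inventory-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-reports",
        "ReportScope": "FailedTasksOnly",
    },
)
print("Created batch job:", response["JobId"])
```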

  cookieMr Most Recent  3 months ago


Selected Answer: B
By enabling default encryption settings on the S3, all newly added objects will be automatically encrypted. To encrypt the existing objects,
the S3 Inventory feature can be used to generate a list of unencrypted objects. Then, an S3 Batch Operations job can be executed to copy
those objects while applying encryption.

A. This solution involves creating a new S3 and manually downloading and uploading all existing objects. It requires significant effort and
time to transfer millions of objects, making it a less efficient solution.

C. While enabling SSE with AWS KMS is a valid approach to encrypt objects in an S3, it does not address the requirement of encrypting
existing objects. It only applies encryption to new objects added to the bucket.

D. Manually modifying each object in the S3 to apply default encryption settings is a labor-intensive and error-prone process. It would
require individually selecting and modifying each unencrypted object, which is impractical for a large number of objects.
upvoted 3 times

  CapJackSparrow 6 months, 3 weeks ago


Selected Answer: B
B...

https://catalog.us-east-1.prod.workshops.aws/workshops/05f16f1a-0bbf-45a7-a304-4fcd7fca3d1f/en-US/s3-track/module-2

You're welcome
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: B
Amazon S3 now configures default encryption on all existing unencrypted buckets to apply server-side encryption with S3 managed keys
(SSE-S3) as the base level of encryption for new objects uploaded to these buckets. Objects that are already in an existing unencrypted
bucket won't be automatically encrypted.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html
upvoted 3 times
  Yelizaveta 7 months, 2 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-copy-example-bucket-key.html
upvoted 1 times

  aakashkumar1999 7 months, 4 weeks ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  Val182 8 months ago


Selected Answer: B
B 100%
https://spin.atomicobject.com/2020/09/15/aws-s3-encrypt-existing-objects/
upvoted 1 times

  LuckyAro 8 months ago


Selected Answer: A
Why is no one discussing A ? I think A can also achieve the required results. B is the most appropriate answer though.
upvoted 1 times

  Training4aBetterLife 8 months, 1 week ago


Selected Answer: B
S3 provides a single control to automatically encrypt all new objects in a bucket with SSE-S3 or SSE-KMS. Unfortunately, these controls only
affect new objects. If your bucket already contains millions of unencrypted objects, then turning on automatic encryption does not make
your bucket secure as the unencrypted objects remain.

For S3 buckets with a large number of objects (millions to billions), use Amazon S3 Inventory to get a list of the unencrypted objects, and
Amazon S3 Batch Operations to encrypt the large number of old, unencrypted files.
upvoted 3 times

  Training4aBetterLife 8 months, 1 week ago


Versioning:

When you overwrite an S3 object, it results in a new object version in the bucket. However, this will not remove the old unencrypted
versions of the object. If you do not delete the old version of your newly encrypted objects, you will be charged for the storage of both
versions of the objects.

S3 Lifecycle

If you want to remove these unencrypted versions, use S3 Lifecycle to expire previous versions of objects. When you add a Lifecycle
configuration to a bucket, the configuration rules apply to both existing objects and objects added later. C is missing this step, which I
believe is what makes B the better choice. B includes the functionality of encrypting the old unencrypted objects via Batch Operations,
whereas, Versioning does not address the old unencrypted objects.
upvoted 1 times

  John_Zhuang 8 months, 1 week ago
Selected Answer: B
C is wrong. Even though you turn on SSE-KMS with a new key, the existing objects are still left unencrypted. They still need to be
encrypted by an S3 Batch Operations job.
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: B
https://spin.atomicobject.com/2020/09/15/aws-s3-encrypt-existing-objects/
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
C is the answer
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
Agree with Parsons
upvoted 1 times

  Lilibell 8 months, 2 weeks ago


The answer is C.
Also, the question requires future encryption of the objects in the S3 bucket = VERSIONING.
upvoted 1 times

  swolfgang 8 months, 2 weeks ago


Selected Answer: C
Could not enable default encryption for an existing bucket, so need to use KMS.
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
The correct answer is C
upvoted 1 times
Question #217 Topic 1

A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon
Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The
solution does not need to handle the load when the primary infrastructure is healthy.

What should a solutions architect do to meet these requirements?

A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create
an Aurora Replica in a second AWS Region.

B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover.
Create an Aurora Replica in the second Region.

C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora
database that is restored from the latest snapshot.

D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to
configure active-passive failover. Create an Aurora second primary instance in the second Region.

Correct Answer: D

Community vote distribution


A (69%) D (31%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A is correct.
- "The solution does not need to handle the load when the primary infrastructure is healthy." => Should use Route 53 Active-Passive ==>
Exclude B, C
- D is incorrect because of "Create an Aurora second primary instance in the second Region."; creating an Aurora Replica is enough.
upvoted 18 times

  Parsons 8 months, 2 weeks ago


Ref link: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html
upvoted 4 times
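
For illustration, a minimal boto3 sketch of the Route 53 active-passive (failover) records described in option A. The hosted zone ID, health check ID, record name, and ALB DNS names are all placeholders.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"                                 # placeholder
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"  # placeholder

def failover_record(set_id, failover, dns_name, health_check_id=None):
    # Build one UPSERT change for a failover record (PRIMARY or SECONDARY).
    record = {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": set_id,
        "Failover": failover,
        "TTL": 60,
        "ResourceRecords": [{"Value": dns_name}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Active-passive failover between the primary and standby Regions",
        "Changes": [
            failover_record("primary", "PRIMARY",
                            "primary-alb.us-east-1.elb.amazonaws.com",
                            PRIMARY_HEALTH_CHECK_ID),
            failover_record("secondary", "SECONDARY",
                            "standby-alb.us-west-2.elb.amazonaws.com"),
        ],
    },
)
```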

  aakashkumar1999 Highly Voted  7 months, 4 weeks ago


Selected Answer: D
I am confused between A and D, but I think D is the answer because this seems to be a cost-related problem. A replica is a kind of standby,
and you can promote it to be the main DB at any time without much downtime. But here it says the company can withstand 30 minutes of downtime, so
we can just keep a backup of the instance and then create a DB from the backup whenever required, hence lower cost.
upvoted 9 times

  TariqKipkemei Most Recent  1 week, 4 days ago


Selected Answer: A
'Can tolerate up to 30 minutes of downtime and potential data loss' rules out any option with 'active-active'. Leaves D and A. D is
convoluted. Leaving A.
upvoted 1 times

  diabloexodia 2 months, 2 weeks ago


Selected Answer: A
Anything that is not instant recovery is active-passive.
In active-passive we have:
1. AWS Backup (least op overhead) - RTO/RPO = hours
2. Pilot Light (basic infra is already deployed, but needs to be fully implemented) - RTO/RPO = tens of minutes
3. Warm Standby (basic infra + runs small loads; might need to add auto scaling) - RTO/RPO = minutes
4. Active-active (multi-site option) - instant

Here we can tolerate 30 minutes.

Hence B is incorrect (active-active), and AWS Backup restores take hours, hence D is incorrect.
Therefore A.
upvoted 4 times

  cookieMr 3 months ago


Selected Answer: A
A. involves deploying the application and infrastructure elements in the primary Region. An Aurora Replica is created in a second Region
to serve as the standby database. Route 53 is configured with active-passive failover, directing traffic to the primary Region by default. In
the event of a disaster, Route 53 can automatically redirect traffic to the standby Region, minimizing downtime. Data loss may occur up to
the point of the last replication to the standby Region, which can be within the defined tolerance of 30 minutes.

Option B, is not necessary in this case as the solution does not need to handle the load when the primary infrastructure is healthy, and it
may involve higher complexity and costs.

Option C, may introduce additional complexity and potential data loss, as the standby database might not be up-to-date with the primary
database.

Option D, may be suitable for backup and recovery scenarios but may not provide the required failover and downtime tolerance specified
in the requirements.
upvoted 1 times
  antropaws 3 months, 4 weeks ago
Selected Answer: D
I vote D, because option A is not highly available. In option A, you can't configure active-passive failover because you haven't created a
backup infrastructure.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: A
It is a cross-Region DR strategy. You need a read replica and the application in another Region to have a realistic DR option. The read replica
will take a few minutes to be promoted/active, and then the application is available. Option D lacks clarity on the application, and backups can take
time to restore.
upvoted 2 times

  Yelizaveta 7 months, 2 weeks ago


Selected Answer: A
Depending on the Regions involved and the amount of data to be copied, a cross-Region snapshot copy can take hours to complete and
will be a factor to consider for the RPO requirements. You need to take this into account when you estimate the RPO of this DR strategy.

If you have strict RTO and RPO requirements, you should consider a different DR strategy, such as Amazon Aurora Global Database .
https://aws.amazon.com/blogs/database/cost-effective-disaster-recovery-for-amazon-aurora-databases-using-aws-backup/
upvoted 1 times

  JiyuKim 7 months, 3 weeks ago


Selected Answer: D
The solution does not need to handle the load when the primary infrastructure is healthy. -> Amazon Route 53 active-passive failover ->
A,D
The company can tolerate up to 30 minutes of downtime and potential data loss -> backup -> D
you don't have to use read replicas if you can tolerate downtime and data loss.
upvoted 3 times

  ChrisG1454 7 months, 2 weeks ago


Consider Answer B.
It is suggesting a Pilot Light DR strategy.
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 2 times

  Bofi 7 months ago


I will vote B. I initially thought it was Pilot Light; however, after a second read, it seems more like Warm Standby. Option D looks more like a
Backup and Restore strategy, and it will take more than 30 minutes to get it done. C is wrong; a snapshot takes a longer time to restore.
upvoted 1 times

  ChrisG1454 7 months ago


The key sentence is
"a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss"
Take a look at the visualization in the URL provided. Pilot light = 30 minutes.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
A is correct
upvoted 1 times

  gunmin 8 months, 2 weeks ago


Selected Answer: A
aaaaaaaa
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
answer is d
upvoted 1 times

  alanp 8 months, 3 weeks ago


Ans is A
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: A
A is correct answer.
https://www.examtopics.com/discussions/amazon/view/81439-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/81439-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #218 Topic 1

A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is
assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server
accessible from everywhere on port 443.

Which combination of steps will accomplish this task? (Choose two.)

A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.

B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.

C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.

D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.

E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination
0.0.0.0/0.

Correct Answer: AE

Community vote distribution


AE (85%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: AE
A and E are the perfect combination. To be more precise, we should add the outbound rule "outbound TCP port 32768-65535 to destination
0.0.0.0/0" for ephemeral ports, because the NACL is stateless.
upvoted 9 times

  MohammadTofic8787 2 weeks, 1 day ago


I think AD, because the ACL is stateless we must open the port outbound and inbound; in option E we only open 443 inbound.
upvoted 1 times

  MohammadTofic8787 2 weeks, 1 day ago


i Think AD because acl is stateless we must open the port outbound and inbound , in option c we only open 443 on inbound
upvoted 1 times

  oguzbeliren 1 month, 4 weeks ago


What is the main reason that you are using TCP ports 32768-65535? The question doesn't state any requirement about them.
upvoted 2 times

  TariqKipkemei Most Recent  1 week, 4 days ago


Selected Answer: AE
ACL is stateless. you have to define both inbound and outbound rules.
upvoted 1 times


  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: AE
A and E are the perfect combination. To be more precise, we should add the outbound rule "outbound TCP port 32768-65535 to destination
0.0.0.0/0" for ephemeral ports, because the NACL is stateless.
upvoted 2 times
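
A small boto3 sketch of what A + E look like when applied: one stateful security group rule for inbound 443, plus the two stateless NACL entries (inbound 443 and outbound ephemeral ports). The security group and network ACL IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

SG_ID = "sg-0123456789abcdef0"     # placeholder security group
NACL_ID = "acl-0123456789abcdef0"  # placeholder network ACL

# Option A: security group rule allowing HTTPS from anywhere (stateful, so
# return traffic is allowed automatically).
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Option E: the NACL is stateless, so both directions need explicit rules.
# Inbound 443 from anywhere...
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
# ...and outbound ephemeral ports so responses can reach the clients.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 32768, "To": 65535},
)
```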
  beginnercloud 1 month ago
Selected Answer: AE
AE is the best answer here, but in reality, E is not good enough. Here, it says that the client chooses the ephemeral port, and it can start
from 1024. Only Linux clients have the range starting at 32768 https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-
acls.html#nacl-ephemeral-ports Unless the destination advertises the ephemeral ports, which I don't think is the case
upvoted 1 times

  Thornessen 2 months, 1 week ago


Selected Answer: AE
AE is the best answer here, but in reality, E is not good enough.

Here, it says that the client chooses the ephemeral port, and it can start from 1024. Only Linux clients have the range starting at 32768
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports

Unless the destination advertises the ephemeral ports, which I don't think is the case
upvoted 1 times

  Abrar2022 4 months ago


Ports 32768-65535 allow outbound IPv4 responses to clients on the internet (for example, serving webpages to people visiting the web
servers in the subnet).
upvoted 1 times

  WherecanIstart 6 months, 3 weeks ago


Selected Answer: AE
The NACL blocks outgoing traffic since it is in fact stateless. Option E allows outbound traffic from ephemeral ports going outside of the VPC
back to the web.
upvoted 2 times

  Brak 6 months, 4 weeks ago


It can't be C, since the current NACL blocks all traffic, including outbound. Need to allow outbound traffic through the NACL.
But E is a bad answer, since ephemeral ports start at 1024, not 32768.
upvoted 1 times

  neosis91 7 months, 3 weeks ago


Selected Answer: AC
A and C not E
Option E states to allow incoming TCP ports on 443 and outgoing on 32768-65535 to all IP addresses (0.0.0.0/0). This option only allows
outgoing ports and does not guarantee that incoming connections on 443 will be allowed. It does not meet the requirement of making
the web server accessible on port 443 from anywhere. Therefore, option C which states to allow incoming TCP ports on 443 from all IP
addresses is the best answer to meet the requirements.
upvoted 4 times

  slackbot 1 month, 1 week ago


seems like either you did not read what you wrote "Option E states to allow incoming TCP ports on 443 and outgoing on 32768-65535
to all IP addresses (0.0.0.0/0)." (because first part of the sentence allows incoming 443) or you do not understand how ACLs work - they
are STATELESS, which means, you need to allow both IN and OUT, not just IN like SGs which are stateful. if they were the same - what
would be the purpose of the ACLs?
upvoted 1 times

  JoeGuan 1 month, 1 week ago


It seems there are lots of questions that ask for minimum requirements, and often times adding 'things' to the solution are not correct.
I am not sure about this question and I would pick C. E adds ambiguity. What if you only needed to open ports for Lambda? That would
be a different set of ports. I think E adds some assumptions into the question. I think opening some ports for some assumptions and
keeping ports closed for other assumptions is not correct. The best assumption is to assume they are asking how to open ports for 443
upvoted 1 times

  slackbot 1 month, 1 week ago


E still guarantees something will work. C definitely means - nothing will work, because you are not allowing egress traffic at all
upvoted 1 times

  Deepak_k 7 months, 1 week ago


Answer: AE. Incoming traffic on port 443, but the server can use any port to reply back.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: AE
AE correct
upvoted 3 times

  techhb 8 months, 2 weeks ago


Selected Answer: AE
A & E , E as NACL is stateless.
upvoted 2 times
  AHUI 8 months, 2 weeks ago
AE:
https://www.examtopics.com/discussions/amazon/view/29767-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: AE
https://www.examtopics.com/discussions/amazon/view/29767-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  kbaruu 8 months, 2 weeks ago


Selected Answer: AE
A E is correct
upvoted 1 times

  alanp 8 months, 3 weeks ago


Ans AE
upvoted 1 times
Question #219 Topic 1

A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2
instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the
application performance degraded. Users are reporting delays when the users attempt to access the application.

Which solution will resolve these issues in the MOST operationally efficient way?

A. Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group. Make the changes by using the AWS Management
Console.

B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum
capacity of the Auto Scaling group manually when an increase is necessary.

C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory
metrics to track the application performance for future capacity planning.

D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2
instances to generate custom application latency metrics for future capacity planning.

Correct Answer: D

Community vote distribution


D (100%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D is the correct answer.

"in-memory tasks" => need the "R" EC2 instance type to archive memory optimization. So we are concerned about C & D.
Because EC2 instances don't have built-in memory metrics to CW by default. As a result, we have to install the CW agent to archive the
purpose.
upvoted 16 times

  Babba Highly Voted  8 months, 2 weeks ago


Selected Answer: D
It's D. EC2 instances do not provide memory metrics to CloudWatch by default and require the CloudWatch agent to be installed on the monitored
instances: https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-memory-metrics-ec2/
upvoted 6 times
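
The CloudWatch agent itself is configured with a JSON file on the instance, but as a simple illustration of the "custom application latency metric" idea in option D, the application (or a small helper script) could publish latency values with put_metric_data. The namespace, metric name, and dimension below are hypothetical.

```python
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_latency(operation_name: str, start_time: float) -> None:
    """Publish one custom latency data point (milliseconds) to CloudWatch."""
    elapsed_ms = (time.time() - start_time) * 1000.0
    cloudwatch.put_metric_data(
        Namespace="MyApp",                              # hypothetical namespace
        MetricData=[{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Operation", "Value": operation_name}],
            "Value": elapsed_ms,
            "Unit": "Milliseconds",
        }],
    )

# Example usage around an in-memory task:
start = time.time()
# ... perform the in-memory work ...
record_latency("ProcessRequest", start)
```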

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: D
R5 instances are better optimized for the in-memory workload than M5.
Auto Scaling alone doesn't handle stateful applications well, manual capacity adjustments would still be needed.
Custom latency metrics give better visibility than built-in metrics for capacity planning.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
By replacing the M5 instances with R5 instances, which are optimized for memory-intensive workloads, the application can benefit from
increased memory capacity and performance.

In addition, deploying the CloudWatch agent on the EC2 instances allows for the generation of custom application latency metrics, which
can provide valuable insights into the application's performance.

This solution addresses the performance issues efficiently by leveraging the appropriate instance types and collecting custom application
metrics for better monitoring and future capacity planning.

A. Replacing with T3 instances may not provide enough memory capacity for in-memory tasks.

B. Manually increasing the capacity of the ASG does not directly address the performance issues.

C. Relying solely on built-in EC2 memory metrics may not provide enough granularity for optimizing in-memory tasks.

The most efficient solution is to modify the CloudFormation templates, replace with R5 instances, and deploy the CloudWatch agent for
custom metrics.
upvoted 2 times

  Bmarodi 4 months, 1 week ago


Selected Answer: D
Option D is the correct answer.
upvoted 1 times

  BABU97 6 months ago


will go for C
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
Would go with D
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
ı think D
upvoted 1 times
Question #220 Topic 1

A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly
variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed
within a few seconds after a request is made.

Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?

A. An AWS Glue job

B. An AWS Lambda function

C. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)

D. A containerized service hosted in Amazon ECS with Amazon EC2

Correct Answer: B

Community vote distribution


B (94%) 6%

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: B
B is the correct answer.
API Gateway + Lambda is the perfect solution for modern applications with serverless architecture.
upvoted 6 times

  TariqKipkemei Most Recent  1 week, 4 days ago


Selected Answer: B
data processing should be completed within a few seconds = An AWS Lambda function
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: B
B. An AWS Lambda function
upvoted 1 times

  ukivanlamlpi 1 month, 1 week ago


Selected Answer: D
Lambda is more expensive than running ECS on EC2.
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: B
Lambda all the way.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
Lambda is a serverless compute service that can be triggered by API Gateway to process requests asynchronously. It automatically scales
based on the incoming request volume and allows for cost optimization by charging only for the actual compute time used to process the
requests.

A. Glue is a fully managed ETL service. It is designed for data processing and transformation tasks rather than serving API requests. It may
not be suitable for handling variable request volumes and delivering responses within a few seconds.

C. While EKS provides scalability and flexibility, it may introduce additional complexity and overhead for managing and scaling the
infrastructure for handling variable API request volumes.

D. Similar to the previous option, using ECS with EC2 would require additional effort for infrastructure management and scaling, which
may not be necessary for handling intermittent and variable API request volumes.
upvoted 2 times
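
A minimal sketch of the Lambda handler API Gateway would invoke for option B, assuming a proxy integration. Asynchronous invocation (so the caller is not kept waiting for the few seconds of processing) can be requested on the API Gateway integration; only the handler itself is shown here.

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler for requests forwarded by API Gateway."""
    # Parse the request body sent by the user (proxy integration event shape).
    body = json.loads(event.get("body") or "{}")

    # ... the actual data processing (a few seconds at most) would go here ...
    result = {"status": "processed", "fields_received": len(body)}

    # With an asynchronous invocation the caller would instead get an
    # immediate acknowledgement from API Gateway.
    return {"statusCode": 200, "body": json.dumps(result)}
```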

  Bmarodi 4 months, 1 week ago


Selected Answer: B
Option B meets the requirements.
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: B
Lambda !
upvoted 3 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/43780-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #221 Topic 1

A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log
files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently.

Which storage solution meets these requirements MOST cost-effectively?

A. Amazon Elastic Block Store (Amazon EBS)

B. Amazon Elastic File System (Amazon EFS)

C. Amazon EC2 instance store

D. Amazon S3

Correct Answer: D

Community vote distribution


D (100%)

  Chiquitabandita 1 day ago


this sounds like an expensive solution but if necessary then S3 would be the best
upvoted 1 times

  TariqKipkemei 1 week, 3 days ago


most cost effective = Amazon S3
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Amazon S3
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. EBS provides block-level storage volumes for use with EC2 instances. While it offers durability and persistence, it is not the most cost-
effective solution for long-term retention of log files. Additionally, it does not provide concurrent access to the files, which is a requirement
in this scenario.

B. EFS is a scalable file storage service that can be mounted on multiple EC2 instances concurrently. While it provides concurrent access to
files, it may not be the most cost-effective option for long-term retention due to its higher pricing compared to S3.

C. The instance store is a temporary storage option that is physically attached to the EC2 instance. It does not provide the durability and
long-term retention required for compliance purposes. Additionally, the instance store is not accessible outside of the specific EC2
instance it is attached to, so concurrent access by the reporting tool would not be possible.

Therefore, considering the requirements for long-term retention, concurrent access, and cost-effectiveness, S3 is the most suitable and
cost-effective storage solution.
upvoted 4 times
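
To sketch the retention side of option D: a hypothetical lifecycle configuration that keeps the logs accessible, moves them to a cheaper tier after 30 days, and expires them once the 7-year retention period has passed. The bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-log-archive",                 # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retain-logs-7-years",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},       # placeholder prefix
            # Move logs to a cheaper, still-accessible tier after 30 days...
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # ...and delete them once the 7-year retention period has passed.
            "Expiration": {"Days": 7 * 365},
        }],
    },
)
```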

  kapit 3 months, 1 week ago


s3<efs<ebs
upvoted 1 times

  Iconique 5 days, 13 hours ago


Actually S3 < EBS < EFS, but for EBS you need to pay for the underlying provisioned GB.
If you compare 1 GB then S3 < EBS < EFS, but if you provision 100 GB of EBS storage then EBS is more expensive.
upvoted 1 times

  mattcl 3 months, 3 weeks ago


"The log files will be analyzed by a reporting tool that must be able to access all the files concurrently" , so you need to access concurrently
to get the logs. So is EFS. Letter B
upvoted 1 times

  northyork 3 months, 3 weeks ago


https://aws.amazon.com/efs/faq/
EFS is a file storage service for use with Amazon compute (EC2, containers, serverless) and on-premises servers. EFS provides a file system
interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to
thousands of EC2 instances.
upvoted 1 times
  alexandercamachop 4 months, 2 weeks ago
Selected Answer: D
Whenever we see long-term storage and no special requirements that need EFS or FSx, S3 is the way.
upvoted 2 times

  elearningtakai 6 months ago


Selected Answer: D
To meet the requirements of retaining application log files for 7 years and allowing concurrent access by a reporting tool, while also being
cost-effective, the recommended storage solution would be D: Amazon S3.
upvoted 2 times

  osmk 6 months, 1 week ago


ddddddddddddddddddd
upvoted 1 times

  udo2020 6 months, 1 week ago


What about the keyword "concurrently"? Doesn't this mean EFS?
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
Cost Effective: S3
upvoted 2 times

  Parsons 8 months, 2 weeks ago


Selected Answer: D
S3 is enough with the lowest cost perspective.
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/22182-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #222 Topic 1

A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an automated tool that is hosted in an
AWS account that the vendor owns. The vendor does not have IAM access to the company’s AWS account.

How should a solutions architect grant this access to the vendor?

A. Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role for
the permissions that the vendor requires.

B. Create an IAM user in the company’s account with a password that meets the password complexity requirements. Attach the appropriate
IAM policies to the user for the permissions that the vendor requires.

C. Create an IAM group in the company’s account. Add the tool’s IAM user from the vendor account to the group. Attach the appropriate IAM
policies to the group for the permissions that the vendor requires.

D. Create a new identity provider by choosing “AWS account” as the provider type in the IAM console. Supply the vendor’s AWS account ID and
user name. Attach the appropriate IAM policies to the new provider for the permissions that the vendor requires.

Correct Answer: A

Community vote distribution


A (86%) 9%

  mp165 Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A is proper

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 7 times

  TariqKipkemei Most Recent  1 week, 3 days ago


Selected Answer: A
Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role
for the permissions that the vendor requires
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role
for the permissions that the vendor requires.
upvoted 1 times

  cookieMr 3 months ago


By creating an IAM role and delegating access to the vendor's IAM role, you establish a trust relationship between accounts. This allows
the vendor's automated tool to assume the role in the company's account and access the necessary resources.

By attaching the appropriate IAM policies to the role, you can define the precise permissions that the vendor requires for their tool to
perform its tasks. This ensures that the vendor has the necessary access without granting them direct IAM access to the company's
account.

B is incorrect because creating an IAM user with a password would require sharing the credentials with the vendor, which is not
recommended for security reasons.

C is incorrect because adding the vendor's IAM user to an IAM group in the company's account would not provide a direct and controlled
way to delegate access to the vendor's tool.

D is incorrect because creating a new identity provider for the vendor's AWS account would not provide a straightforward way to delegate
access to the vendor's tool. Identity providers are typically used for federated access using external identity systems.
upvoted 3 times
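
A boto3 sketch of option A from the company's side: create a role that trusts the vendor's account (with an external ID, a common extra safeguard for third-party access) and attach the permissions the tool needs. The account ID, external ID, role name, and policy ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

VENDOR_ACCOUNT_ID = "999988887777"        # placeholder vendor account
EXTERNAL_ID = "vendor-tool-external-id"   # agreed with the vendor

# Trust policy: only principals in the vendor's account that present the
# agreed external ID may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

role = iam.create_role(
    RoleName="VendorToolAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Delegated access for the vendor's automated tool",
)

# Attach only the permissions the tool actually needs (managed policy ARN
# here is just an example).
iam.attach_role_policy(
    RoleName="VendorToolAccessRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

print("Role ARN to share with the vendor:", role["Role"]["Arn"])
```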

  teja54 4 months ago


Selected Answer: C
....................................
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: A
Option A fulfills the requirements.
upvoted 1 times
  Aninina 8 months, 2 weeks ago
Selected Answer: A
IAM role is the answer
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: A
A is correct answer.
upvoted 1 times

  kbaruu 8 months, 2 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 2 times

  venice1234 8 months, 2 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 2 times

  Parsons 8 months, 2 weeks ago


Selected Answer: A
A is the correct answer.
upvoted 3 times

  Babba 8 months, 2 weeks ago


Selected Answer: D
My guess is D: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 2 times
Question #223 Topic 1

A company has deployed a Java Spring Boot application as a pod that runs on Amazon Elastic Kubernetes Service (Amazon EKS) in private
subnets. The application needs to write data to an Amazon DynamoDB table. A solutions architect must ensure that the application can interact
with the DynamoDB table without exposing traffic to the internet.

Which combination of steps should the solutions architect take to accomplish this goal? (Choose two.)

A. Attach an IAM role that has sufficient privileges to the EKS pod.

B. Attach an IAM user that has sufficient privileges to the EKS pod.

C. Allow outbound connectivity to the DynamoDB table through the private subnets’ network ACLs.

D. Create a VPC endpoint for DynamoDB.

E. Embed the access keys in the Java Spring Boot code.

Correct Answer: AD

Community vote distribution


AD (100%)

  TariqKipkemei 1 week, 3 days ago


Selected Answer: AD
The application needs to write data to an Amazon DynamoDB table = Attach an IAM role that has write privileges to the EKS pod
Without exposing traffic to the internet = VPC endpoint for DynamoDB
upvoted 2 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: AD
A. By attaching an IAM role to the EKS pod, you can grant the necessary permissions for the pod to access DynamoDB. The IAM role
should have appropriate policies allowing access to the DynamoDB table.

D. Creating a VPC endpoint for DynamoDB allows the EKS pod to access DynamoDB privately within the VPC, without the need for internet
connectivity. The VPC endpoint provides a direct and secure connection to DynamoDB, eliminating the need for traffic to flow over the
internet.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: AD
A. By attaching an IAM role to the EKS pod, you can grant the necessary permissions for the pod to access DynamoDB. The IAM role
should have appropriate policies allowing access to the DynamoDB table.

D. Creating a VPC endpoint for DynamoDB allows the EKS pod to access DynamoDB privately within the VPC, without the need for internet
connectivity. The VPC endpoint provides a direct and secure connection to DynamoDB, eliminating the need for traffic to flow over the
internet.

B is incorrect because attaching an IAM user to the pod is not a recommended approach. IAM users are meant for accessing AWS services
through the AWS Management Console or API.

C is incorrect because configuring outbound connectivity through network ACLs would not provide a secure and direct connection to
DynamoDB.

E is incorrect because embedding access keys in the code is not a recommended security practice. It can lead to potential security
vulnerabilities. It is better to use IAM roles or other secure mechanisms for providing access to AWS services.
upvoted 2 times
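
Option A is usually implemented on EKS with IAM Roles for Service Accounts (IRSA), which maps an IAM role to the pod's Kubernetes service account. For option D, the DynamoDB gateway endpoint itself is a short call; a hedged boto3 sketch with placeholder VPC, route table, and Region values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # Region is a placeholder

VPC_ID = "vpc-0123456789abcdef0"                      # VPC hosting the EKS private subnets
PRIVATE_ROUTE_TABLE_IDS = ["rtb-0123456789abcdef0"]   # route tables of those subnets

# DynamoDB uses a *gateway* endpoint: a route table entry, no ENIs, and no
# internet or NAT gateway is needed for the pods to reach the table.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=PRIVATE_ROUTE_TABLE_IDS,
)
print("Endpoint:", response["VpcEndpoint"]["VpcEndpointId"])
```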

  Bmarodi 4 months ago


Selected Answer: AD
A & D options fulfill the requirements.
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: AD
Definitely
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: AD
A D are the correct options
upvoted 1 times
  venice1234 8 months, 2 weeks ago
Selected Answer: AD
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
https://aws.amazon.com/about-aws/whats-new/2019/09/amazon-eks-adds-support-to-assign-iam-permissions-to-kubernetes-service-
accounts/
upvoted 2 times

  Parsons 8 months, 2 weeks ago


Selected Answer: AD
A, D is the correct answer.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: AD
The correct answer is A,D
upvoted 1 times
Question #224 Topic 1

A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The
company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances
randomly.

Which combination of steps should the company take to meet these requirements? (Choose two.)

A. Create an Amazon Route 53 failover routing policy.

B. Create an Amazon Route 53 weighted routing policy.

C. Create an Amazon Route 53 multivalue answer routing policy.

D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.

E. Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.

Correct Answer: CE

Community vote distribution


CE (57%) BE (43%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: BE
I went back and rewatched the lectures from Udemy on Weighted and Multi-Value. The lecturer said that Multi-value is *not* a substitute
for an ELB, and he stated that DNS load balancing is a good use case for Weighted routing policies.
upvoted 7 times

  smartegnine 3 months, 3 weeks ago


Weighted routing routes based on the assigned weights; it cannot choose randomly. Please see the last sentence of the question: traffic must reach the instances randomly.
upvoted 3 times

  Techi47 Most Recent  5 days, 20 hours ago


Option CE Correct:
To route traffic roughly and randomly to multiple resources, such as web servers, you create a multi-value response record for each
resource and optionally associate a Route 53 health check with each record.
https://disaster-recovery.workshop.aws/en/services/networking/route53/routing-policies/routing-multiple-answer.html
upvoted 1 times

  kwang312 1 week, 2 days ago


Selected Answer: CE
CE is correct
upvoted 1 times

  TariqKipkemei 1 week, 3 days ago


Selected Answer: CE
Highly available and fault tolerant = two instances in two AZs
Route traffic randomly = Amazon Route 53 multivalue answer routing policy
upvoted 1 times

  LazyTs 3 weeks, 6 days ago


Selected Answer: CE
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at
random. You can use multivalue answer routing to create records in a private hosted zone.

Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify. You can use weighted routing to create
records in a private hosted zone.
upvoted 1 times

  Zeezie 2 months ago


I chose CE, but couldn't it also be BE? If you set all of the weights to the same, equal value? Wouldn't then the traffic be distributed
randomly and evenly among all healthy instances?
upvoted 1 times

  jacob_ho 4 weeks ago


This is "equal distribution", not "random distribution"; think about the differences
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: CE
C. A multivalue answer routing policy in Route 53 allows you to configure multiple values for a DNS record, and Route 53 responds to DNS
queries with multiple random values. This enables the distribution of traffic randomly among the available EC2 instances.

E. By launching EC2 instances in different AZs, you achieve high availability and fault tolerance. Launching four instances (two in each AZ)
ensures that there are enough resources to handle the traffic load and maintain the desired level of availability.

A. Failover routing is designed to direct traffic to a backup resource or secondary location only when the primary resource or location is
unavailable.

B. Although a weighted routing policy allows you to distribute traffic across multiple EC2 instances, it does not ensure random
distribution.

D. While launching instances in multiple AZs is important for fault tolerance, having only three instances does not provide an even
distribution of traffic. With only three instances, the traffic may not be evenly distributed, potentially leading to imbalanced resource
utilization.
upvoted 4 times

  samsoft556 3 months, 1 week ago


Selected Answer: CE
Randomly is the key word
upvoted 2 times

  secdgs 3 months, 3 weeks ago


Selected Answer: CE
C: Multivalue answer routing routes traffic approximately randomly to multiple resources and supports health checks.
B: Weighted routing is used when you need to send more load to one server than to another. For random routing to all servers, the option
would have had to say "weight all servers with the same value", so C is the better choice here.
upvoted 1 times

  smartegnine 3 months, 3 weeks ago


Selected Answer: CE
It must be C and E. B is not correct because weighted routing is based on the assigned weights; it cannot route randomly.
upvoted 1 times

  ChrisAn 3 months, 3 weeks ago


Selected Answer: CE
Option C, creating an Amazon Route 53 multivalue answer routing policy, is the correct choice. With this routing policy, Route 53 returns
multiple IP addresses for the same domain name, allowing the traffic to be distributed randomly among the available EC2 instances. This
ensures that the traffic is evenly distributed across the instances launched in different Availability Zones, achieving the desired
randomness and load balancing.

Option E is the correct choice. By launching instances in different Availability Zones, the company ensures that there are redundant copies
of the application running in separate physical locations, providing fault tolerance. With two instances in one Availability Zone and two
instances in another, traffic can be distributed randomly among them, improving availability and load balancing.
upvoted 1 times

  Axeashes 3 months, 4 weeks ago


Selected Answer: CE
https://aws.amazon.com/route53/faqs/
upvoted 2 times

  Bmarodi 4 months ago


Selected Answer: BE
I vote for B & E options.
upvoted 1 times

  antropaws 4 months, 1 week ago


It can also be A) failover routing policy:

"Active-active failover:

Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes
unavailable, Route 53 can detect that it's unhealthy and stop including it when responding to queries.

In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as
weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy
record".
upvoted 1 times

  michellemeloc 4 months, 2 weeks ago


Selected Answer: CE
After reading the doc, I understood that the question does not ask to route the traffic in a specific proportion (in that case, it would be the
Weighted routing policy). The question requires the routing to be random, so the only option that does this truly randomly is the Multivalue answer routing
policy. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 1 times
  mandragon 4 months, 3 weeks ago
Selected Answer: BE
Multivalue answer routing policy allows Route 53 to respond to DNS queries with up to eight healthy records selected at random, but it
does not allow you to specify the proportion of traffic that each record receives. Weighted routing policy allows you to route traffic
randomly to all running EC2 instances based on the weights that you assign to each instance.
upvoted 4 times

  slackbot 1 month, 1 week ago


Weighted allows for distribution by a percentage of load per target, which is not the requirement. The requirement is RANDOM ->
multivalue.
upvoted 1 times

  rushi0611 4 months, 3 weeks ago


Selected Answer: CE
C: For traffic to go to EC2 'Randomly', R53 will answer the IP's of all EC2's and the client will choose randomly, while maintaining high
availability and fault tolerance as unhealthy IP's will not be sent forward as the answer to the DNS query.
upvoted 1 times
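
For readers who want to see what option C looks like in practice, below is a minimal Python (boto3) sketch that creates multivalue answer A records for four instances. The hosted zone ID, record name, and IP addresses are placeholders, not values from the question; attaching health checks (shown only as a comment) is what lets Route 53 drop unhealthy instances from its answers.

import boto3

route53 = boto3.client("route53")

# Placeholders: substitute your own hosted zone, record name, and instance IPs.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "app.example.com"
INSTANCE_IPS = ["203.0.113.10", "203.0.113.11", "203.0.113.12", "203.0.113.13"]

changes = []
for i, ip in enumerate(INSTANCE_IPS):
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "A",
            "SetIdentifier": f"instance-{i}",   # each multivalue record needs a unique identifier
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            # Optionally add "HealthCheckId": "..." so unhealthy instances are excluded from answers.
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Multivalue answer records", "Changes": changes},
)

Route 53 then returns up to eight healthy records in a random order and clients pick one, which is the "random" distribution the question asks for.
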
Question #225 Topic 1

A media company collects and analyzes user activity data on premises. The company wants to migrate this capability to AWS. The user activity
data store will continue to grow and will be petabytes in size. The company needs to build a highly available data ingestion solution that facilitates
on-demand analytics of existing data and new data with SQL.

Which solution will meet these requirements with the LEAST operational overhead?

A. Send activity data to an Amazon Kinesis data stream. Configure the stream to deliver the data to an Amazon S3 bucket.

B. Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift
cluster.

C. Place activity data in an Amazon S3 bucket. Configure Amazon S3 to run an AWS Lambda function on the data as the data arrives in the S3
bucket.

D. Create an ingestion service on Amazon EC2 instances that are spread across multiple Availability Zones. Configure the service to forward
data to an Amazon RDS Multi-AZ database.

Correct Answer: A

Community vote distribution


B (91%) 9%

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: B
B. Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift
cluster.
upvoted 1 times

  beginnercloud 1 month ago


Selected Answer: B
Petabyte scale- Redshift
upvoted 1 times

  NVenkatS 1 month, 1 week ago


Selected Answer: B
Petabyte scale- Redshift
upvoted 1 times

  A1975 1 month, 4 weeks ago


Selected Answer: B
1 - Kinesis Data Streams provides a fully managed platform for custom data processing and analysis; in other words, it is used for custom
data processing and analysis, which requires more manual intervention.
2- Kinesis Data Firehose simplifies the delivery of streaming data to various destinations without the need for complex transformations.
Option B is more suitable for the given scenario.
upvoted 1 times

  sickcow 3 months ago


Selected Answer: B
Petabyte Scale sounds like Redshift!
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
B provides a fully managed and scalable solution for data ingestion and analytics. KDF simplifies the data ingestion process by
automatically scaling to handle large volumes of streaming data. It can directly load the data into an Redshift cluster, which is a powerful
and fully managed data warehousing solution.

A. While Kinesis can handle streaming data, it requires additional processing to load the data into an analytics solution.

C. Although S3 and Lambda can handle the storage and processing of data, it requires more manual configuration and management
compared to the fully managed solution offered by KDF and Redshift.

D. This option involves more operational overhead, as it requires managing and scaling the EC2 instances and RDS database infrastructure
manually.

Therefore, option B with KDF delivering the data to Redshift cluster offers the most streamlined and operationally efficient solution for
ingesting and analyzing the user activity data in the given scenario.
upvoted 1 times
  pisica134 3 months, 1 week ago
petabytes in size => redshift
upvoted 2 times

  mattcl 3 months, 3 weeks ago


It's A. Data Stream is better in this case, and you can query data in S3 with Athena
upvoted 2 times

  JoeGuan 1 month, 1 week ago


https://aws.amazon.com/streaming-data/ a good explanation of either option. firehose appears to be an option for Least operational
overhead, as the streams product requires some building of apps etc.
upvoted 1 times

  Yadav_Sanjay 3 months, 2 weeks ago


Kinesis Data Streams can't write directly to S3. That's why B is the only correct answer left.
upvoted 1 times

  baba365 3 months, 1 week ago


Answer A… key phrase’ least operational overhead’
KDF can write to S3 … https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: B
Option B is correct answer.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
This solution meets the requirements as follows:
• Kinesis Data Firehose can scale to ingest and process multiple terabytes per hour of streaming data. This can easily handle the petabyte-
scale data volumes.
• Firehose can deliver the data to Redshift, a petabyte-scale data warehouse, enabling on-demand SQL analytics of the data.
• Redshift is a fully managed service, minimizing operational overhead. Firehose is also fully managed, handling scalability, availability, and
durability of the streaming data ingestion.
upvoted 1 times

  gold4otas 6 months, 1 week ago


Selected Answer: B
B: The answer is certainly option "B" because ingesting user activity data can easily be handled by Amazon Kinesis Data streams. The
ingested data can then be sent into Redshift for Analytics.

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift Serverless lets you access and
analyze data without all of the configurations of a provisioned data warehouse.

https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
upvoted 2 times

  GalileoEC2 6 months, 2 weeks ago


The key sentence here is "that facilitates on-demand analytics"; that's the reason why we need to choose Kinesis Data Streams over
Data Firehose
upvoted 1 times

  alexleely 8 months, 1 week ago


Selected Answer: B
B: Kinesis Data Firehose service automatically load the data into Amazon Redshift and is a petabyte-scale data warehouse service. It allows
you to perform on-demand analytics with minimal operational overhead. Since the requirement didn't state what kind of analytics you
need to run, we can assume that we do not need to set up additional services to provide further analytics. Thus, it has the least
operational overhead.

Why not A: It is a viable solution, but storing the data in S3 would require you to set up additional services like Amazon Redshift or
Amazon Athena to perform the analytics.
upvoted 2 times

  Berny 8 months, 2 weeks ago


Selected Answer: B
Data ingestion through Kinesis data streams will require manual intervention to provide more shards as data size grows. Kinesis firehose
will ingest data with the least operational overhead.
upvoted 4 times

  mp165 8 months, 2 weeks ago


Selected Answer: A
I think the key word in the question is "ingestion"... which is Data Streams

Data Streams is a low latency streaming service in AWS Kinesis with the facility for ingesting at scale. On the other hand, Kinesis Firehose
aims to serve as a data transfer service. The primary purpose of Kinesis Firehose focuses on loading streaming data to Amazon S3, Splunk,
ElasticSearch, and RedShift
upvoted 3 times
  Aninina 8 months, 2 weeks ago
Selected Answer: B
petabytes: redshift
upvoted 3 times

  wmp7039 8 months, 2 weeks ago


Selected Answer: B
Amazon Kinesis Data Firehose + Redshift meets the requirements
upvoted 1 times
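
To make option B concrete, here is a rough boto3 sketch of a Kinesis Data Firehose delivery stream with an Amazon Redshift destination. All ARNs, the JDBC URL, table name, and credentials are invented placeholders; Firehose stages records in an intermediate S3 bucket and then issues a Redshift COPY from there.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="activity-to-redshift",
    DeliveryStreamType="DirectPut",
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "ClusterJDBCURL": "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/analytics",
        "CopyCommand": {"DataTableName": "user_activity", "CopyOptions": "json 'auto'"},
        "Username": "firehose_user",
        "Password": "placeholder-password",
        # Firehose writes the raw records to this staging bucket before the COPY into Redshift.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::example-activity-staging",
        },
    },
)

# Producers (the migrated collectors) then push records with PutRecord/PutRecordBatch:
firehose.put_record(
    DeliveryStreamName="activity-to-redshift",
    Record={"Data": b'{"user_id": "u1", "action": "click"}\n'},
)

Once the data lands in Redshift, the on-demand SQL analytics the question asks for comes for free, with no servers to manage on the ingestion side.
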
Question #226 Topic 1

A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance.
The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices
will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.

Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Use AWS Glue to process the raw data in Amazon S3.

B. Use Amazon Route 53 to route traffic to different EC2 instances.

C. Add more EC2 instances to accommodate the increasing amount of incoming data.

D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.

E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data
stream as a source to deliver the data to Amazon S3.

Correct Answer: AE

Community vote distribution


AE (100%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: AE
A, E is the correct answer

"RESTful web services" => API Gateway.


"EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket" => GLUE with (Extract -
Transform - Load)
upvoted 8 times

  TariqKipkemei Most Recent  1 week, 3 days ago


Selected Answer: AE
E then A no doubt.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: AE
A. It automatically discovers the schema of the data and generates ETL code to transform it.

E. API Gateway can be used to receive the raw data from the remote devices via RESTful web services. It provides a scalable and managed
infrastructure to handle the incoming requests. The data can then be sent to an Amazon Kinesis data stream, which is a highly scalable
and durable real-time data streaming service. From there, Amazon Kinesis Data Firehose can be configured to use the data stream as a
source and deliver the transformed data to Amazon S3. This combination of services allows for the seamless ingestion and processing of
data while minimizing operational overhead.
upvoted 1 times

  ibu007 4 weeks ago


Selected Answer: AE
A - Use AWS Glue to process the raw data in Amazon S3
E - Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the
data stream as a source to deliver the data to Amazon S3
upvoted 1 times

  GCB1990 1 month ago


Correct answer: D and E
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: AE
A. It automatically discovers the schema of the data and generates ETL code to transform it.

E. API Gateway can be used to receive the raw data from the remote devices via RESTful web services. It provides a scalable and managed
infrastructure to handle the incoming requests. The data can then be sent to an Amazon Kinesis data stream, which is a highly scalable
and durable real-time data streaming service. From there, Amazon Kinesis Data Firehose can be configured to use the data stream as a
source and deliver the transformed data to Amazon S3. This combination of services allows for the seamless ingestion and processing of
data while minimizing operational overhead.
B. It does not directly address the need for scalable data processing and storage. It focuses on managing DNS and routing traffic to
different endpoints.

C. Adding more EC2 can lead to increased operational overhead in terms of managing and scaling the instances.

D. Using SQS and EC2 for processing data introduces more complexity and operational overhead.
upvoted 2 times
  wRhlH 3 months, 2 weeks ago
Why not BC?
upvoted 1 times

  AnnieTran_91 3 months, 2 weeks ago


Why is it not CE?
Add more EC2 instances to accommodate the increasing amount of incoming data?
upvoted 1 times

  TTaws 3 months, 1 week ago


EC2 is not serverless. They want to minimize overhead
upvoted 1 times

  studynoplay 4 months, 2 weeks ago


Selected Answer: AE
minimizes operational overhead = Serverless
Glue, Kinesis Datastream, S3 are serverless
upvoted 1 times

  KZM 7 months, 2 weeks ago


How about "C" to increase EC2 instances for the increased devices soon?
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: AE
Glue and API
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: AE
https://www.examtopics.com/discussions/amazon/view/83387-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
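
As a rough illustration of option E's data path, the boto3 sketch below creates a Kinesis data stream and a Firehose delivery stream that reads from it and writes to S3. The stream names, role ARNs, and bucket are placeholders, and the API Gateway piece (an AWS service integration that calls kinesis:PutRecord on behalf of the devices) is assumed but not shown.

import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

# On-demand capacity mode avoids shard management as the device fleet grows into the millions.
kinesis.create_stream(StreamName="device-raw-data", StreamModeDetails={"StreamMode": "ON_DEMAND"})

firehose.create_delivery_stream(
    DeliveryStreamName="device-raw-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/device-raw-data",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-kinesis-role",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-s3-role",
        "BucketARN": "arn:aws:s3:::example-device-data",
        "Prefix": "raw/",
    },
)

The transformation step from option A would then be handled by an AWS Glue job (or a Firehose transformation Lambda) over the objects landing under the raw/ prefix.
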
Question #227 Topic 1

A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS
Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in
place to delete current objects after 3 years.

After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number
of new CloudTrail logs that are delivered to the S3 bucket has remained consistent.

Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?

A. Configure the organization’s centralized CloudTrail trail to expire objects after 3 years.

B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.

C. Create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years.

D. Configure the parent account as the owner of all objects that are delivered to the S3 bucket.

Correct Answer: B

Community vote distribution


B (86%) 14%

  TariqKipkemei 1 week, 2 days ago


Selected Answer: B
Ensure to delete previous versions as well.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: B
This is the most cost-effective option because:
• Versioning has caused the number of objects to increase over time, even as current objects are deleted after 3 years. By deleting
previous versions as well, this will clean up old object versions and reduce storage costs.
• An S3 Lifecycle policy incurs no additional charges and requires no additional resources to configure and run. It is a native S3 tool for
managing object lifecycles cost-effectively.
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: B
By configuring the S3 Lifecycle policy to delete previous versions as well as current versions, the older versions of the CloudTrail logs will
be deleted. This ensures that objects older than 3 years are removed from the S3 bucket, reducing the object count and controlling
storage costs.

A. This option is not directly related to managing objects in the S3. It focuses on configuring the expiration of CloudTrail trails, which may
not address the need to delete objects from the S3 bucket.

C. While it is technically possible to create a Lambda to delete objects older than 3 years, this approach would introduce additional
complexity and operational overhead.

D. Changing the ownership of the objects in the S3 bucket does not directly address the need to delete objects older than 3 years.
Ownership does not affect the deletion behavior of the objects.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: B
I go for option B.
upvoted 1 times

  ruqui 4 months, 1 week ago


I don't think it's possible to configure an S3 lifecycle policy to delete all versions of an object, so B is wrong ... I think the question is
improperly worded
upvoted 1 times

  Rahulbit34 5 months ago


• Versioning has caused the number of objects to increase over time, even as current objects are deleted after 3 years. By deleting
previous versions as well, this will clean up old object versions and reduce storage costs. • An S3 Lifecycle policy incurs no additional
charges and requires no additional resources to configure and run. It is a native S3 tool for managing object lifecycles cost-effectively.
upvoted 1 times
  kruasan 5 months ago
Selected Answer: B
This is the most cost-effective option because:
• Versioning has caused the number of objects to increase over time, even as current objects are deleted after 3 years. By deleting
previous versions as well, this will clean up old object versions and reduce storage costs.
• An S3 Lifecycle policy incurs no additional charges and requires no additional resources to configure and run. It is a native S3 tool for
managing object lifecycles cost-effectively.
upvoted 3 times

  kruasan 5 months ago


https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html
upvoted 2 times

  bullrem 8 months, 1 week ago


Selected Answer: C
A more cost-effective solution would be to configure the organization's centralized CloudTrail trail to expire objects after 3 years. This
would ensure that all objects, including previous versions, are deleted after the specified retention period.
Another option would be to create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years,
this would allow you to have more control over the deletion process and to write a custom logic that best fits your use case.
upvoted 3 times

  JayBee65 8 months, 1 week ago


Selected Answer: B
The question clearly says "An S3 Lifecycle policy is in place to delete current objects after 3 years". This implies that previous versions are
not deleted, since this is a separate setting, and since logs are constantly changing, it would make sense to delete previous
versions as well, so B. D is wrong, since the parent account (the management account) will already be the owner of all objects delivered to the
S3 bucket, "All accounts in the organization can see MyOrganizationTrail in their list of trails, but member accounts cannot remove or
modify the organization trail. Only the management account or delegated administrator account can change or delete the trail for the
organization.", see https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 2 times

  John_Zhuang 8 months, 1 week ago


Selected Answer: B
B is the right answer. Ref: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/best-practices-
security.html#:~:text=The%20CloudTrail%20trail,time%20has%20passed.

Option A is wrong. No way to expire the cloudtrail logs


upvoted 3 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
Configure the S3 Lifecycle policy to delete previous versions
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: B
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
upvoted 1 times

  Aninina 8 months, 2 weeks ago


B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
upvoted 1 times

  Parsons 8 months, 2 weeks ago


Selected Answer: B
B is correct answer
upvoted 2 times

  AHUI 8 months, 2 weeks ago


Ans: A
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
When you create an organization trail, a trail with the name that you give it is created in every AWS account that belongs to your
organization. Users with CloudTrail permissions in member accounts can see this trail when they log into the AWS CloudTrail console from
their AWS accounts, or when they run AWS CLI commands such as describe-trail. However, users in member accounts do not have
sufficient permissions to delete the organization trail, turn logging on or off, change what types of events are logged, or otherwise change
the organization trail in any way.
upvoted 1 times

  AHUI 8 months, 2 weeks ago


correction: Ans D is the answer.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: B
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.

To delete objects that are older than 3 years in the most cost-effective manner, the company should configure the S3 Lifecycle policy to
delete previous versions as well as current versions. This will ensure that all versions of the objects, including the previous versions, are
deleted after 3 years.
upvoted 1 times
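
For option B, the fix is a lifecycle rule that expires noncurrent (previous) versions in addition to current ones. A minimal boto3 sketch, with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-cloudtrail-logs",   # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-3-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Current versions get a delete marker after ~3 years...
                "Expiration": {"Days": 1095},
                # ...and this is the part that actually removes the old versions piling up.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1095},
                # Also clean up any failed multipart uploads.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)

On a versioned bucket, the Expiration action only adds a delete marker to the current version, which is exactly why the object count kept growing in the scenario; NoncurrentVersionExpiration is what permanently deletes the noncurrent copies.
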
Question #228 Topic 1

A company has an API that receives real-time data from a fleet of monitoring devices. The API stores this data in an Amazon RDS DB instance for
later analysis. The amount of data that the monitoring devices send to the API fluctuates. During periods of heavy traffic, the API often returns
timeout errors.

After an inspection of the logs, the company determines that the database is not capable of processing the volume of write traffic that comes
from the API. A solutions architect must minimize the number of connections to the database and must ensure that data is not lost during periods
of heavy traffic.

Which solution will meet these requirements?

A. Increase the size of the DB instance to an instance type that has more available memory.

B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.

C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that
Amazon SQS invokes to write data from the queue to the database.

D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that
Amazon SNS invokes to write data from the topic to the database.

Correct Answer: C

Community vote distribution


C (100%)

  TariqKipkemei 1 week, 2 days ago


Selected Answer: C
Decouple the API and the DB with Amazon Simple Queue Service (Amazon SQS) queue.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that
Amazon SQS invokes to write data from the queue to the database.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
By leveraging SQS as a buffer and using an Lambda to process and write data from the queue to the database, the solution provides
scalability, decoupling, and reliability while minimizing the number of connections to the database. This approach handles fluctuations in
traffic and ensures data integrity during high-traffic periods.

A. Increasing the size of the DB instance may provide more memory, but it does not address the issue of handling high write traffic
efficiently and minimizing connections to the database.

B. Modifying the DB instance to be a Multi-AZ instance and writing to all active instances can improve availability but does not address the
issue of efficiently handling high write traffic and minimizing connections to the database.

D. Using SNS and an Lambda can provide decoupling and scalability, but it is not suitable for handling heavy write traffic efficiently and
minimizing connections to the database.
upvoted 2 times

  Moccorso 3 months, 1 week ago


I think D, "Use an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database" SQS can't invokes
Lambda becouse SQS is pull.
upvoted 3 times

  shivamrulz 3 months, 2 weeks ago


Why not B
upvoted 2 times

  Russs99 6 months, 2 weeks ago


C is indeed the correct answer for the use case
upvoted 1 times

  kaushald 6 months, 3 weeks ago


Selected Answer: C
C is correct
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: C
Cis correct
upvoted 1 times

  maciekmaciek 7 months, 3 weeks ago


Selected Answer: C
C looks ok
upvoted 1 times

  iamjaehyuk 7 months, 3 weeks ago


why not D?
upvoted 1 times

  Parsons 8 months, 2 weeks ago


Selected Answer: C
C is correct.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that
Amazon SQS invokes to write data from the queue to the database.

To minimize the number of connections to the database and ensure that data is not lost during periods of heavy traffic, the company
should modify the API to write incoming data to an Amazon SQS queue. The use of a queue will act as a buffer between the API and the
database, reducing the number of connections to the database. And the use of an AWS Lambda function invoked by SQS will provide a
more flexible way of handling the data and processing it. This way, the function will process the data from the queue and insert it into the
database in a more controlled way.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Did you use ChatGPT?
upvoted 6 times

  Nguyen25183 7 months, 1 week ago


same question as you :D
upvoted 1 times
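
To illustrate option C, here is a minimal sketch of the Lambda function body that an SQS event source mapping would invoke. The table name, column names, environment variables, and the use of PyMySQL are assumptions made for the example, not details from the question.

import json
import os

import pymysql  # packaged with the function or provided via a Lambda layer

# Created once per execution environment and reused across invocations,
# which keeps the number of database connections low, as the question requires.
connection = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    # The SQS event source mapping delivers messages in batches.
    with connection.cursor() as cursor:
        for record in event["Records"]:
            payload = json.loads(record["body"])
            cursor.execute(
                "INSERT INTO device_readings (device_id, reading, recorded_at) VALUES (%s, %s, %s)",
                (payload["device_id"], payload["reading"], payload["timestamp"]),
            )
    connection.commit()
    # If this function raises, the batch returns to the queue and is retried,
    # so no data is lost during traffic spikes.

The queue absorbs bursts from the monitoring devices, and Lambda's concurrency (or a batching window) throttles how fast writes reach the database.
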
Question #229 Topic 1

A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as
demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or
from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.

Which solution meets these requirements?

A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.

B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.

C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.

D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.

Correct Answer: A

Community vote distribution


A (100%)

  TariqKipkemei 1 week, 2 days ago


Selected Answer: A
Migrate the databases to Amazon Aurora Serverless for Aurora MySQL
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: A
Aurora MySQL
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
Migrating the databases to Aurora Serverless provides automated scaling and replication capabilities. Aurora Serverless automatically
scales the capacity based on the workload, allowing for seamless addition or removal of compute capacity as needed. It also offers
improved performance, durability, and high availability without requiring manual management of replication and scaling.

B. Incorrect because it suggests migrating to a different database engine, which may introduce compatibility issues and require significant
code modifications.

C. Incorrect because consolidating into a larger MySQL database on larger EC2 instances does not provide the desired scalability and
automation.

D. Incorrect because using EC2 Auto Scaling groups for the database tier still requires manual management of replication and scaling.
upvoted 2 times

  Bmarodi 4 months ago


Selected Answer: A
Option A is right answer.
upvoted 1 times

  Bhrino 7 months, 1 week ago


Selected Answer: A
A is correct because Aurora might be more expensive, but it's serverless and is much faster
upvoted 1 times

  mp165 8 months, 2 weeks ago


Selected Answer: A
A is proper

https://aws.amazon.com/rds/aurora/serverless/
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
Aurora MySQL
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/51509-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
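
As a rough sketch of what option A's target looks like. Aurora Serverless v2 is assumed here since it is the current serverless offering for Aurora MySQL; the identifiers and capacity range are placeholders, and the actual data migration (for example via AWS DMS or a MySQL dump/restore) is not shown.

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="crm-aurora-mysql",          # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,                   # let RDS keep the password in Secrets Manager
    # Capacity scales automatically between these ACU bounds as demand changes.
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Aurora Serverless v2 capacity is attached through a db.serverless instance in the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="crm-aurora-mysql-writer",
    DBClusterIdentifier="crm-aurora-mysql",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)

Replication, storage durability (six copies across three AZs), and scaling are then handled by Aurora rather than by the operations team.
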
Question #230 Topic 1

A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A
solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.

What should the solutions architect recommend?

A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.

B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.

C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.

D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.

Correct Answer: C

Community vote distribution


C (100%)

  TariqKipkemei 1 week, 2 days ago


Selected Answer: C
Highly available, fault tolerant, and automatically scalable = two NAT gateways in different Availability Zones
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: C
Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
This recommendation ensures high availability and fault tolerance by distributing the NAT gateways across multiple AZs. NAT gateways are
managed AWS services that provide scalable and highly available outbound NAT functionality. By deploying NAT gateways in different AZs,
the company can achieve redundancy and avoid a single point of failure. This solution also provides automatic scaling to handle increasing
traffic without manual intervention.

Option A is incorrect because placing both NAT gateways in the same Availability Zone does not provide fault tolerance.

Option B is incorrect because using Auto Scaling groups with Network Load Balancers is not the recommended approach for NAT
instances.

Option D is incorrect because Spot Instances are not suitable for critical infrastructure components like NAT instances.
upvoted 1 times

  Axeashes 3 months, 4 weeks ago


Selected Answer: C
HA: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
Scalability: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
upvoted 1 times

  Bhrino 7 months, 1 week ago


Selected Answer: C
FYI y'all, in most cases NAT instances are a bad thing because they're customer managed, while NAT gateways are AWS managed. So in this case
I already know to get rid of the NAT instances; the reason it's C is because it wants high availability, meaning different AZs
upvoted 3 times

  Theodorz 7 months, 3 weeks ago


Could anybody teach me why the B cannot be correct answer? This solution also seems providing Scalability(Auto Scaling Group), High
Availability(different AZ), and Fault Tolerance(NLB & AZ).

I honestly think that C is not enough, because each NAT gateway can provide a few scalability, but the bandwidth limit is clearly explained
in the document. The C exactly mentioned "two NAT gateways" so the number of NAT is fixed, which will reach its limit soon.
upvoted 2 times

  KZM 7 months, 2 weeks ago


Option B proposes to use an Auto Scaling group with Network Load Balancers to continue using the existing two NAT instances.
However, NAT instances do not support automatic failover without a script, unlike NAT gateways which provide this functionality.
Additionally, using Network Load Balancers to balance traffic between NAT instances adds more complexity to the solution.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
upvoted 2 times
  JayBee65 8 months, 1 week ago
C. If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down,
resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT
gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html#nat-gateway-basics
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: C
Replace NAT Instances with Gateway
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
Correct answer is C
upvoted 2 times
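
A minimal boto3 sketch of option C, assuming one public subnet and one private route table per Availability Zone; all subnet and route table IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Placeholders: map each AZ to its public subnet and its private route table.
PUBLIC_SUBNETS = {"us-east-1a": "subnet-aaa111", "us-east-1b": "subnet-bbb222"}
PRIVATE_ROUTE_TABLES = {"us-east-1a": "rtb-aaa111", "us-east-1b": "rtb-bbb222"}

for az, subnet_id in PUBLIC_SUBNETS.items():
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=eip["AllocationId"])
    nat_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Point the private route table in the same AZ at its local NAT gateway,
    # so a failure in one AZ does not take out egress for the other AZ.
    ec2.create_route(
        RouteTableId=PRIVATE_ROUTE_TABLES[az],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )

Each NAT gateway scales automatically within its AZ, which is what removes the capacity concern raised in the question.
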
Question #231 Topic 1

An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B.
Both VPCs are in the same AWS account.

Which solution will provide the required access MOST securely?

A. Create a DB instance security group that allows all traffic from the public IP address of the application server in VPC A.

B. Configure a VPC peering connection between VPC A and VPC B.

C. Make the DB instance publicly accessible. Assign a public IP address to the DB instance.

D. Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2 instance.

Correct Answer: B

Community vote distribution


B (81%) A (19%)

  JayBee65 Highly Voted  8 months, 1 week ago


A is correct. B will work but is not the most secure method, since it will allow everything in VPC A to talk to everything in VPC B and vice
versa, which is not at all secure. A, on the other hand, will only allow the application server in VPC A (since you select its IP address) to talk to the
database - you are allowing only the required connectivity. See the link for this exact use case:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
upvoted 12 times

  mhmt4438 8 months ago


" allows all traffic from the public IP address" Nice bro niceee This is absolutely the most secure method at all. :)))
upvoted 10 times

  graveend 1 month, 3 weeks ago


Both VPCs are in the "SAME AWS ACCOUNT" and the requirement specifies allowing traffic from the *PUBLIC IP of the APPLICATION
SERVER*. In this case the traffic remains inside the AWS infrastructure or will it go through the public internet?
upvoted 1 times

  datz 5 months, 3 weeks ago


he must be the security engineer lolol :D

"Jaybee" - Please dont ever say that traffic over the public internet is secure :D
upvoted 3 times

  test_devops_aws 6 months, 2 weeks ago


:)))))))))
upvoted 1 times

  TariqKipkemei Most Recent  1 week, 2 days ago


Selected Answer: B
VPC to VPC comms = VPC peering
upvoted 1 times

  Sutariya 3 weeks, 6 days ago


B is correct: set up VPC peering and connect the application in VPC A to VPC B over a private subnet, so the DB instance always stays secure
from the internet.
upvoted 1 times

  _d1rk_ 1 month, 1 week ago


Am I missing something or simply A is wrong because, without VPC peering (or other inter-connection sharing mechanisms such as
Transit Gateway or VPN), VPC A and VPC B cannot communicate each other?
upvoted 1 times

  jacob_ho 4 weeks ago


You can use VPC endpoints, but no option uses that
upvoted 1 times

  A1975 1 month, 3 weeks ago


Selected Answer: B
When you establish peering relationships between VPCs across different AWS Regions, resources in the VPCs (for example, EC2 instances
and Lambda functions) in different AWS Regions can communicate with each other using private IP addresses, without using a gateway,
VPN connection, or network appliance. The traffic remains in the private IP space. All inter-Region traffic is encrypted with no single point
of failure, or bandwidth bottleneck. Traffic always stays on the global AWS backbone, and never traverses the public internet, which
reduces threats, such as common exploits, and DDoS attacks. Inter-Region VPC peering provides a simple and cost-effective way to share
resources between regions or replicate data for geographic redundancy.

https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 1 times
  animefan1 3 months ago
Selected Answer: B
With peering, the EC2 instance can communicate with RDS. The RDS security group can allow inbound traffic from the EC2 instance's IP rather than the whole VPC CIDR for more security
upvoted 1 times

  maggie135 3 months ago


Selected Answer: B
VPC peering uses AWS network.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
By configuring a VPC peering connection between VPC A and VPC B, you can establish private and secure communication between the EC2
instance in VPC A and the database in VPC B. VPC peering allows traffic to flow between the two VPCs using private IP addresses, without
the need for public IP addresses or exposing the database to the internet.

Option A is not the best solution as it requires allowing all traffic from the public IP address of the application server, which can be less
secure.

Option C involves making the DB instance publicly accessible, which introduces security risks by exposing the database directly to the
internet.

Option D adds unnecessary complexity by launching an additional EC2 instance in VPC B and proxying all requests through it, which is not
the most efficient and secure approach in this scenario.
upvoted 3 times

  joechen2023 3 months, 2 weeks ago


Selected Answer: B
I don't like A because the security group setting is wrong, as it is set up to allow all traffic from the public IP address. If the security group setting were correct,
then I would go for A.
I don't like B because it needs a security group to be set up as well, on top of peering.
For exam purposes only, I will go with the least-worst choice, which is B.
upvoted 1 times

  Bmarodi 3 months, 3 weeks ago


Selected Answer: A
The keywords are: "access MOST securely", hence the option A meets these requirements.
upvoted 1 times

  smartegnine 3 months, 3 weeks ago


Selected Answer: A
Each VPC security group rule makes it possible for a specific source to access a DB instance in a VPC that is associated with that VPC
security group. The source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide
upvoted 1 times

  MostafaWardany 3 months, 3 weeks ago


Selected Answer: B
Most secure = VPC peering
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: B
I vote for option B.
upvoted 1 times

  Piccalo 4 months, 1 week ago


Selected Answer: B
BBBB. A is not secure
upvoted 1 times

  channn 6 months ago


Selected Answer: A
Peering to B is not secure, as there is no further control on access from A to B
upvoted 1 times
  JohnnyBG 7 months, 4 weeks ago
Selected Answer: B
B But what a crappy question/answers ...
upvoted 3 times

  kerl 8 months ago


Answer is B,
A is not the answer <-- it is not SECURE to have your traffic flow out over the internet to the database.
upvoted 4 times
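
For completeness, a minimal boto3 sketch of option B. The VPC IDs, CIDR blocks, and route table IDs are placeholders; in practice you would still restrict the database security group to the application instance's security group or private IP, so the peering connection and a tight security group rule work together.

import boto3

ec2 = boto3.client("ec2")

# Placeholders: VPC A (application) and VPC B (database), same account and Region.
peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Even a same-account peering connection has to be accepted explicitly.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR over the peering connection.
ec2.create_route(RouteTableId="rtb-vpc-a", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-vpc-b", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)

# The DB security group in VPC B then only needs to allow the database port
# from the application's private IP or security group, never from the internet.
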
Question #232 Topic 1

A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The
company’s operations team needs to be notified when RDP or SSH access to an environment has been established.

A. Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected.

B. Configure the EC2 instances with an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore policy attached.

C. Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters. Create an Amazon CloudWatch metric alarm with a
notification action for when the alarm is in the ALARM state.

D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.

Correct Answer: C

Community vote distribution


C (77%) 14% 9%

  Vickysss Highly Voted  8 months, 2 weeks ago


Selected Answer: C
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 8 times

  NitiATOS 8 months ago


https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-records-examples.html#flow-log-example-accepted-rejected

Adding this to support that VPC flow logs can be used to capture Accepted or Rejected SSH and RDP traffic.
upvoted 2 times

  ruqui 4 months ago


I don't think C would be an acceptable solution ... the request is to be notified WHEN a SSH and/or RDP connection is established so
it requires real-time monitoring and that is something the C solution does not provide ... I would select A as a correct answer
upvoted 1 times

  cookieMr Highly Voted  3 months ago


Selected Answer: C
By publishing VPC flow logs to CloudWatch Logs and creating metric filters to detect RDP or SSH access, the operations team can
configure an CloudWatch metric alarm to notify them when the alarm is triggered. This will provide the desired notification when RDP or
SSH access to an environment is established.

Option A is incorrect because CloudWatch Application Insights is not designed for detecting RDP or SSH access.

Option B is also incorrect because configuring an IAM instance profile with the AmazonSSMManagedInstanceCore policy does not directly
address the requirement of notifying the operations team when RDP or SSH access occurs.

Option D is wrong beacuse configuring an EventBridge rule to listen for EC2 Instance State-change Notification events and using an SNS
topic as a target will notify the operations team about changes in the instance state, such as starting or stopping instances. However, it
does not specifically detect or notify when RDP or SSH access is established, which is the requirement stated in the question.
upvoted 5 times

  TariqKipkemei Most Recent  1 week, 2 days ago


Selected Answer: C
Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters. Create an Amazon CloudWatch metric alarm with a
notification action for when the alarm is in the ALARM state
upvoted 1 times

  Bmarodi 3 months, 2 weeks ago


Selected Answer: C
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
Flow log data can be published to the following locations: Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. After
you create a flow log, you can retrieve and view the flow log records in the log group, bucket, or delivery stream that you configured.

Flow logs can help you with a number of tasks, such as:

Diagnosing overly restrictive security group rules

Monitoring the traffic that is reaching your instance


Determining the direction of the traffic to and from the network interfaces
Ref link: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
upvoted 2 times
  cokutan 3 months, 3 weeks ago
Selected Answer: C
seems like c:
https://aws.amazon.com/tr/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 1 times

  ChrisAn 3 months, 3 weeks ago


Selected Answer: D
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic. This setup allows the EventBridge rule to
capture instance state change events, such as when RDP or SSH access is established. The rule can then send notifications to the specified
SNS topic, which is subscribed by the operations team.
upvoted 2 times

  markw92 3 months, 2 weeks ago


D is wrong. EC2 instance state change is only for pending, running etc.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instance-state-changes.html you can't have state change of ssh or
rdp.
upvoted 1 times

  datz 5 months, 3 weeks ago


Selected Answer: C
C:

https://www.youtube.com/watch?v=KAe3Eju59OU
upvoted 1 times

  Abhineet9148232 7 months ago


Selected Answer: C
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 1 times

  bullrem 8 months, 1 week ago


Selected Answer: A
A. Configuring Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected
would be the most appropriate solution in this scenario. This would allow the operations team to be notified when RDP or SSH access has
been established and provide them with the necessary information to take action if needed. Additionally, Amazon CloudWatch Application
Insights would allow for monitoring and troubleshooting of the system in real-time.
upvoted 1 times

  Training4aBetterLife 8 months, 1 week ago


Selected Answer: C
EC2 Instance State-change Notifications are not the same as RDP or SSH established connection notifications. Use Amazon CloudWatch
Logs to monitor SSH access to your Amazon EC2 Linux instances so that you can monitor rejected (or established) SSH connection
requests and take action.
upvoted 4 times

  alexleely 8 months, 1 week ago


Selected Answer: A
The Answer can be A or C depending on the requirement if it requires real-time notification.
A: Allows the operations team to be notified in real-time when access is established, and also provides visibility into the access events
through the OpsItems.

C: The logs will need to be analyzed and metric filters applied to detect access, and then the alarm will trigger based on that analysis. This
method could have a delay in providing notifications. Thus, not the best solution if real-time notification is required.

Why not D: RDP or SSH access does not cause an EC2 instance to have a state change. The state change events that Amazon EventBridge
can listen for include stopping, starting, and terminated instances, which do not apply to RDP or SSH access. But RDP or SSH connection to
an EC2 instance does generate an event in the system, such as a log entry which can be used to notify the Operation team. Since its a log,
you would require a service that monitors logs like CloudTrail, VPC Flow logs, or AWS Systems Manager Session Manager.
upvoted 2 times

  JayBee65 8 months, 1 week ago


I completely agree with the logic here, but I'm thinking C, since I believe you will need to "Create required metric filters" in order to
detect RDP or SSH access, and this is not specified in the question, see https://docs.aws.amazon.com/systems-
manager/latest/userguide/OpsCenter-create-OpsItems-from-CloudWatch-Alarms.html
upvoted 2 times

  owlminus 8 months, 2 weeks ago


Selected Answer: C
It's C fam. RDP or SSH connections won't change the state of the EC2 instance, so D doesn't make sense.
upvoted 4 times
  forzadejan 8 months, 2 weeks ago
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.

EC2 instances send events to EventBridge when a state change occurs, such as when a new RDP or SSH connection is established. You
can use EventBridge to configure a rule that listens for these events and triggers an action, like sending an email or SMS, when the
connection is detected. The operations team can be notified by subscribing to the Amazon Simple Notification Service (Amazon SNS) topic,
which can be configured as the target of the EventBridge rule.
upvoted 3 times

  alanp 8 months, 2 weeks ago


Aren't the possible state changes just these?
pending
running
stopping
stopped
shutting-down
terminated

https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic. This approach allows you to set up a rule
that listens for state change events on the EC2 instances, specifically for when RDP or SSH access is established, and trigger a notification
via Amazon SNS to the operations team. This way they will be notified when RDP or SSH access to an environment has been established.
upvoted 3 times

  CapJackSparrow 6 months, 3 weeks ago


um, isn't "EC2 Instance State-change" like running, terminated, or stopped?
upvoted 1 times
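
To make option C concrete: assuming the VPC flow logs are already being published to a CloudWatch Logs log group in the default format, a metric filter plus alarm could look roughly like the boto3 sketch below. The log group name, metric namespace, and SNS topic ARN are placeholders.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/vpc/flow-logs/demo-env"                            # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-team"    # placeholder

# Space-delimited filter over the default flow log format:
# match accepted TCP traffic to port 22 (SSH) or 3389 (RDP).
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="accepted-ssh-rdp",
    filterPattern='[version, account, eni, source, destination, srcport, destport="22" || destport="3389", '
                  'protocol="6", packets, bytes, windowstart, windowend, action="ACCEPT", flowlogstatus]',
    metricTransformations=[{
        "metricName": "AcceptedSshRdpConnections",
        "metricNamespace": "DemoEnvironments",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="ssh-rdp-access-detected",
    Namespace="DemoEnvironments",
    MetricName="AcceptedSshRdpConnections",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)

The alarm goes into the ALARM state as soon as the metric exceeds zero in a one-minute period and notifies the operations team through the SNS topic.
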
Question #233 Topic 1

A solutions architect has created a new AWS account and must secure AWS account root user access.

Which combination of actions will accomplish this? (Choose two.)

A. Ensure the root user uses a strong password.

B. Enable multi-factor authentication to the root user.

C. Store root user access keys in an encrypted Amazon S3 bucket.

D. Add the root user to a group containing administrative permissions.

E. Apply the required permissions to the root user with an inline policy document.

Correct Answer: AB

Community vote distribution


AB (75%) BD (17%) 8%

  TariqKipkemei 1 week, 2 days ago


Selected Answer: AB
Ensure the root user uses a strong password. Enable multi-factor authentication to the root user.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: AB
A. Setting a strong password for the root user is an essential security measure to prevent unauthorized access.

B. Enabling MFA adds an extra layer of security by requiring an additional authentication factor, such as a code from a mobile app or a
hardware token, in addition to the password.

C. Root user access keys should be avoided whenever possible, and it is best to use IAM users with restricted permissions instead.

D. The root user already has unrestricted access to all resources and services in the account, so granting additional administrative
permissions could increase the risk of unauthorized actions.

E. Instead, it is recommended to create IAM users with appropriate permissions and use those users for day-to-day operations, while
keeping the root user secured and only using it for necessary administrative tasks.
upvoted 2 times

  DiscussionMonke 3 months, 1 week ago


Selected Answer: AB
Options A & B are the CORRECT answers.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: AB
Options A & B are the right answers.
upvoted 1 times

  luisgu 4 months, 4 weeks ago


Selected Answer: AB
See https://docs.aws.amazon.com/SetUp/latest/UserGuide/best-practices-root-user.html
upvoted 1 times

  Kunj7 6 months ago


Selected Answer: AB
A and B are the correct answers:

Option A: A strong password is always required for any AWS account you create, and should not be shared or stored anywhere as there is
always a risk.

Option B: This is following AWS best practice, by enabling MFA on your root user which provides another layer of security on the account
and unauthorised access will be denied if the user does not have the correct password and MFA.
upvoted 1 times

  WherecanIstart 6 months, 3 weeks ago


Selected Answer: AB
AB are the right answers.
upvoted 1 times
  fkie4 6 months, 4 weeks ago
This is probably the hardest question in AWS history
upvoted 3 times

  ProfXsamson 8 months ago


Selected Answer: AB
AB is the only feasible answer here.
upvoted 3 times

  bullrem 8 months, 1 week ago


Selected Answer: BE
B. Enabling multi-factor authentication for the root user provides an additional layer of security to ensure that only authorized individuals
are able to access the root user account.
E. Applying the required permissions to the root user with an inline policy document ensures that the root user only has the necessary
permissions to perform the necessary tasks, and not any unnecessary permissions that could potentially be misused.
upvoted 2 times

  bullrem 8 months, 1 week ago


https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
upvoted 1 times

  bullrem 8 months, 1 week ago


The other options are not sufficient to secure the root user access because:
A. A strong password alone is not enough to protect against potential security threats such as phishing or brute force attacks.
C. Storing the root user access keys in an encrypted S3 bucket does not address the root user's authentication process.
D. Adding the root user to a group with administrative permissions does not address the root user's authentication process and does
not provide an additional layer of security.
upvoted 1 times

  [Removed] 5 months, 3 weeks ago


Strong passwords + multi factor is the counter to brute force...
upvoted 1 times

  Pindol 8 months, 1 week ago


Selected Answer: AB
AB, obviously
upvoted 1 times

  david76x 8 months, 2 weeks ago


Selected Answer: AB
Root user already has admin, so D is not correct
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: AB
AB are correct
upvoted 1 times

  wmp7039 8 months, 2 weeks ago


Selected Answer: AB
D is incorrect as root user already has full admin access.
upvoted 2 times

  swolfgang 8 months, 2 weeks ago


Selected Answer: AB
D (add the root user to a group containing administrative permissions) is not about security; actually it's insecure, so A & B.
upvoted 1 times

  raf123123 8 months, 2 weeks ago


Selected Answer: BD
BD is correct
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: BD
https://www.examtopics.com/discussions/amazon/view/21794-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  JayBee65 8 months, 1 week ago


What would D achieve exactly??? :)
upvoted 1 times
  Aninina 8 months, 2 weeks ago
AB are correct in this link
upvoted 2 times
Question #234 Topic 1

A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances
that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use
an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.

Which solution will meet these requirements?

A. Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to
encrypt the EBS volumes and Aurora database storage at rest.

B. Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root
account, select the option to turn on encryption for all data at rest and in transit for the account.

C. Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate
Manager (ACM) certificate to the ALB to encrypt data in transit.

D. Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS) Attach the
KMS keys to the ALB to encrypt data in transit.

Correct Answer: C

Community vote distribution


C (100%)

  TariqKipkemei 1 week ago


Selected Answer: C
Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate
Manager (ACM) certificate to the ALB to encrypt data in transit
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
C is the best answer.

To encrypt data at rest, AWS Key Management Service (AWS KMS) can be used to encrypt EBS volumes and Aurora database storage.

To encrypt data in transit, an AWS Certificate Manager (ACM) certificate can be attached to the Application Load Balancer (ALB) to enable
HTTPS and TLS encryption.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
AWS KMS can be used to encrypt the EBS and Aurora database storage at rest.
ACM can be used to obtain an SSL/TLS certificate and attach it to the ALB. This encrypts the data in transit between the clients and the
ALB.

A is incorrect because it suggests using ACM to encrypt the EBS, which is not the correct service for encrypting EBS.

B is incorrect because relying on the AWS root account and selecting an option in the AWS Management Console to enable encryption for
all data at rest and in transit is not a valid approach.

D is incorrect because BitLocker is not a suitable solution for encrypting data in AWS services. It is primarily used for encrypting data on
Windows-based operating systems. Additionally, importing TLS certificate keys to AWS KMS and attaching them to the ALB is not the
recommended approach for encrypting data in transit.
upvoted 4 times
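
A minimal boto3 sketch of the split cookieMr describes: KMS for encryption at rest on EBS and Aurora, and an ACM certificate on an HTTPS listener for encryption in transit. Every ARN, identifier, engine choice, and password below is a placeholder, not something given in the question.

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")
elbv2 = boto3.client("elbv2")

kms_key = "alias/app-data"                                               # placeholder KMS key
acm_cert_arn = "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"  # placeholder ACM cert
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/crm/EXAMPLE"
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/crm/EXAMPLE"

# At rest: EBS volume encrypted with the KMS key.
ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3",
                  Encrypted=True, KmsKeyId=kms_key)

# At rest: Aurora cluster with storage encryption using the same key.
rds.create_db_cluster(DBClusterIdentifier="crm-aurora", Engine="aurora-mysql",
                      MasterUsername="admin", MasterUserPassword="change-me",
                      StorageEncrypted=True, KmsKeyId=kms_key)

# In transit: HTTPS listener on the ALB using the ACM certificate.
elbv2.create_listener(LoadBalancerArn=alb_arn, Protocol="HTTPS", Port=443,
                      Certificates=[{"CertificateArn": acm_cert_arn}],
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}])
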

  MAMADOUG 3 months, 3 weeks ago


Selected Answer: C
Option C is correct.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: C
Option C fulfills the requirements.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: C
C is correct. A reverses the role of each service.
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
C is correct!
upvoted 3 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
C is the correct answer.
upvoted 2 times
Question #235 Topic 1

A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the
same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns
that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.

What should a solutions architect recommend?

A. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC)
replication task and a table mapping to select all tables.

B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture
(CDC) replication task and a table mapping to select all tables.

C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.

D. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.

Correct Answer: C

Community vote distribution


C (83%) A (17%)

  aakashkumar1999 Highly Voted  7 months, 4 weeks ago


Selected Answer: C
C: because we need SCT to convert from Oracle to PostgreSQL, and we need a memory optimized machine for databases, not a compute
optimized one.
upvoted 7 times

  hissein 3 weeks, 3 days ago


Why is it a memory optimized and not a compute optimized machine?
upvoted 2 times

  Guru4Cloud 2 weeks, 6 days ago


A memory-optimized replication instance is recommended because the database has a high number of reads and writes. Memory-
optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
upvoted 3 times

  hissein 1 day, 15 hours ago


thank you
upvoted 1 times

  TariqKipkemei Most Recent  6 days, 23 hours ago


Selected Answer: C
Oracle database to Amazon Aurora PostgreSQL = AWS Schema Conversion Tool
High number of reads and writes = memory optimized replication instance
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
A memory-optimized replication instance is recommended because the database has a high number of reads and writes. Memory-
optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
upvoted 1 times

  _d1rk_ 1 month, 1 week ago


Selected Answer: C
DataSync is for file-level synch, so A and B can be excluded. C is better than D because memory-optimized instances are recommended to
handle the high number of reads and writes
upvoted 2 times

  ukivanlamlpi 1 month, 1 week ago


Selected Answer: A
Why not A? Capturing only the changes is sufficient.
upvoted 2 times
  Mmmmmmkkkk 3 months ago
Bbbbbb
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
The AWS SCT is used to convert the schema and code of the Oracle database to be compatible with Aurora PostgreSQL. AWS DMS is
utilized to migrate the data from the Oracle database to Aurora PostgreSQL. Using a memory-optimized replication instance is
recommended to handle the high number of reads and writes during the migration process.
By creating a full load plus CDC replication task, the initial data migration is performed, and ongoing changes in the Oracle database are
continuously captured and applied to the Aurora PostgreSQL database. Selecting all tables for table mapping ensures that all the
applications writing to the same tables are migrated.

Option A & B are incorrect because using AWS DataSync alone is not sufficient for database migration and data synchronization.

Option D is incorrect because using a compute optimized replication instance is not the most suitable choice for handling the high
number of reads and writes.
upvoted 1 times
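
A hedged boto3 sketch of the replication task option C calls for: migration type "full-load-and-cdc" with a table mapping that selects all tables. The endpoint and replication instance ARNs are placeholders; the source/target endpoints and the memory optimized replication instance are assumed to exist already (typically after running the Schema Conversion Tool).

import json
import boto3

dms = boto3.client("dms")

# Selection rule that includes every table in every schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "select-all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint/SOURCE-EXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint/TARGET-EXAMPLE",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep/MEMORY-OPTIMIZED-EXAMPLE",
    MigrationType="full-load-and-cdc",   # initial full load plus ongoing change data capture
    TableMappings=json.dumps(table_mappings),
)
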

  omoakin 4 months, 1 week ago


BBBBBBBBBBBBB
upvoted 2 times

  SimiTik 5 months, 2 weeks ago


B chatgpt
upvoted 2 times

  KZM 7 months, 2 weeks ago


DMS+SCT for Oracle to Aurora PostgreSQL migration
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-
and-aws-sct.html
upvoted 2 times

  icurfer 8 months ago


https://aws.amazon.com/ko/premiumsupport/knowledge-center/dms-memory-optimization/
upvoted 1 times

  dark_firzen 8 months ago


Selected Answer: C
It has to be either C or D because it requires Schema Conversion Tool to convert Oracle database to Amazon Aurora PostgreSQL. C would
be the better choice here because it replicates a memory optimized instance, which is recommended for databases. Also, the database
must be kept in sync, so they require mapping to select all tables.
upvoted 3 times

  bullrem 8 months, 1 week ago


A or C are both valid options. Both options involve using AWS DataSync for the initial migration, and then using AWS Database Migration
Service (AWS DMS) to create a change data capture (CDC) replication task for ongoing data synchronization.
Option A: Uses a memory optimized replication instance.
Option C: Uses a compute optimized replication instance.

Option A is a better choice for migrations where the data is more complex and may require more memory.
Option C is a better choice for migrations that require more processing power.
It also depends on the size of the data, the complexity of the data, and the resources available in the target Aurora cluster.
upvoted 1 times

  JayBee65 8 months, 1 week ago


Why would you not use the Schema Conversion Tool, which is designed specifically to convert from one DB engine to another? It can convert
Oracle to Aurora PostgreSQL, see https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html. Then it is
a choice of C or D. Since you want to move all tables, C makes more sense than D.
A and B are wrong since DataSync deals with data, not databases; see https://aws.amazon.com/datasync/faqs/.
upvoted 4 times

  brownest 8 months, 1 week ago


Selected Answer: A
Initial migration is full using DataSync and on-going replication is through CDC for the changes. The full load was already performed so no
need to do it again as with Answer B.
upvoted 1 times

  brownest 8 months, 1 week ago


Changing my answer to C, as you need schema conversion from Oracle to PostgreSQL.
upvoted 2 times

  TapasGhosh 8 months, 2 weeks ago


Correct answer is C
upvoted 2 times
  wmp7039 8 months, 2 weeks ago
Selected Answer: A
A is correct. Initial migration is full using DataSync and on-going replication is through CDC Task -
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html
upvoted 1 times
Question #236 Topic 1

A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2
instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly
available solution that requires the least amount of change to the application.

Which solution meets these requirements?

A. Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the application layer. Move the database to an Amazon
DynamoDB table. Use Amazon S3 to store and serve users’ images.

B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS DB instance with multiple read replicas to serve users’ images.

C. Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the application layer. Move the
database to a memory optimized instance type to store and serve users’ images.

D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.

Correct Answer: A

Community vote distribution


D (68%) B (29%)

  PDR Highly Voted  8 months ago


Selected Answer: B
B and D very similar with D being the 'best' solution but it is not the one that requires the least amount of development changes as the
application would need to be changed to store images in S3 instead of DB
upvoted 8 times

  TariqKipkemei Most Recent  6 days, 23 hours ago


Selected Answer: D
Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
Use Elastic Beanstalk load-balanced environments for the web and app tiers. This provides auto scaling and high availability with minimal
effort.
Move the database to RDS Multi-AZ. This handles scaling reads and storage, and provides HA with automated failover.
Use S3 for serving user images. S3 is highly scalable and durable storage.
The application code remains unchanged using this approach.
upvoted 1 times

  Mia2009687 3 months ago


Selected Answer: A
AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply
upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing,
auto-scaling, and application health monitoring.

I don't quite understand why people choose D.


upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
By using load-balanced Multi-AZ AWS Elastic Beanstalk environments, you achieve scalability and high availability for both layers without
requiring significant changes to the application. Moving the DB to an RDS Multi-AZ DB instance ensures high availability and automatic
failover. Storing and serving users' images through S3 provides a scalable and highly available solution.

A is incorrect because using S3 for the front-end layer and Lambda for the application layer would require significant changes to the
application architecture. Moving the DB to DynamoDB would require rewriting the DB-related code.

B is incorrect because serving users' images from RDS read replicas is less suitable than using S3 for that purpose; S3 handles the
image-serving workload more efficiently and durably than read replicas can.

C is incorrect because using S3 for the front-end layer and an ASG of EC2 for the application layer would require modifying the application
architecture. Storing and serving images from a memory-optimized EC2 type may not be the most efficient and scalable approach
compared to using S3.
upvoted 2 times
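
To make the two storage pieces of option D concrete, here is a small boto3 sketch that creates the Multi-AZ RDS instance and the S3 bucket for user images. Engine, instance class, names, and the password are placeholders; in practice the Elastic Beanstalk environments would be created separately (console, EB CLI, or CloudFormation).

import boto3

rds = boto3.client("rds")
s3 = boto3.client("s3")

# Multi-AZ deployment keeps a synchronous standby in a second AZ with automatic failover.
rds.create_db_instance(
    DBInstanceIdentifier="cms-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",      # placeholder; prefer Secrets Manager in practice
    MultiAZ=True,
)

# Bucket that stores and serves the users' images instead of keeping them on the EC2 instances.
s3.create_bucket(Bucket="cms-user-images-example")
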
  markw92 3 months, 2 weeks ago
"least amount of change to the application." - A has lots of changes, completely revamping the application and lots of new pieces. D is
closest with only addition of s3 to store images which is right move. You do not want images to store in any database anyway.
upvoted 2 times

  aaroncelestin 1 month, 1 week ago


Thats what I was thinking, but the question doesn't mention anything about storing users' images anywhere. Are we supposed to just
assume that they wanted to store the images in a DB even though that is a bad idea?
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: D
Option D meets the requirements.
upvoted 1 times

  Grace83 6 months, 2 weeks ago


D is correct
upvoted 2 times

  focus_23 8 months ago


Selected Answer: D
RDS multi AZ.
upvoted 2 times

  wmp7039 8 months, 2 weeks ago


Selected Answer: D
D is correct as application changes need to be minimal.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
Correct answer is D
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
for "Highly available": Multi-AZ &
for "least amount of changes to the application": Elastic Beanstalk automatically
handles the deployment, from capacity provisioning, load balancing, auto-scaling to
application health monitoring
upvoted 4 times

  Morinator 8 months, 3 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/24840-exam-aws-certified-solutions-architect-associate-saa-c02/

Please ExamTopics, review your own answers


upvoted 4 times
Question #237 Topic 1

An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both VPCs are in separate
AWS accounts. The network administrator needs to design a solution to configure secure access to EC2 instance in VPC-B from VPC-A. The
connectivity should not have a single point of failure or bandwidth concerns.

Which solution will meet these requirements?

A. Set up a VPC peering connection between VPC-A and VPC-B.

B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.

C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.

D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A.

Correct Answer: A

Community vote distribution


A (91%) 9%

  LuckyAro Highly Voted  8 months ago


Selected Answer: A
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does
not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.

https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 6 times

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: A
A. Set up a VPC peering connection between VPC-A and VPC-B
upvoted 1 times

  MNotABot 2 months, 2 weeks ago


https://www.bing.com/search?pglt=41&q=can+we+do+VPC+peering+across+AWS+accounts&cvid=48a8ceecc85a429c9ddd698b01055890&aqs=edge..69i57j0l8j69i11004.10897j0j1&FORM=ANNAB1&PC=LCTS
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
A VPC peering connection allows secure communication between instances in different VPCs using private IP addresses without the need
for internet gateways, VPN connections, or NAT devices. By setting it up, the application running in VPC-A can directly access the EC2 in
VPC-B without going through the public internet or any single point of failure.

B is incorrect because VPC gateway endpoints are used for accessing S3 or DynamoDB from a VPC without going over the internet. They
are not designed for establishing connectivity between EC2 instances in different VPCs.

C is incorrect because it would require configuring a VPN connection between the VPCs. This would introduce additional complexity and
potential single points of failure.

D is incorrect because creating a private VIF and adding routes would be applicable for establishing a direct connection between on-
premises infrastructure and VPC-B using Direct Connect, but it is not suitable for the scenario of communication between EC2 instances in
separate VPCs within different AWS accounts.
upvoted 2 times
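
A minimal boto3 sketch of the cross-account peering flow option A describes: request from the VPC-A account, accept from the VPC-B account, then add routes on both sides. VPC IDs, route table IDs, CIDRs, and the peer account ID are placeholders.

import boto3

# Requester side: the account that owns VPC-A.
ec2_a = boto3.client("ec2", region_name="us-east-1")
pcx = ec2_a.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",            # VPC-A
    PeerVpcId="vpc-bbbb2222",        # VPC-B
    PeerOwnerId="222233334444",      # account that owns VPC-B
    PeerRegion="us-east-1",
)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accepter side: run with credentials for the VPC-B account.
ec2_b = boto3.client("ec2", region_name="us-east-1")
ec2_b.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side adds a route to the other VPC's CIDR block via the peering connection.
ec2_a.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                   VpcPeeringConnectionId=pcx_id)
ec2_b.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                   VpcPeeringConnectionId=pcx_id)
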

  Anmol_1010 3 months, 2 weeks ago


D. VPC peering is in the same account.
upvoted 1 times

  im6h 3 months, 2 weeks ago


No, VPC peering can be used across accounts.

https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 1 times

  omoakin 4 months, 1 week ago


DDDDDDDDDDDDDD
upvoted 2 times
  omoakin 4 months, 1 week ago
This is the only viable solution
Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A
upvoted 1 times

  michellemeloc 4 months, 2 weeks ago


Selected Answer: A
"You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account."

https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 2 times

  PDR 8 months ago


Selected Answer: A
correct answer is A and as mentioned by JayBee65 below, key reason being that solution should not have a single point of failure and
bandwidth restrictions

the following paragraph is taken from the AWS docs page linked below that backs this up
"AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does
not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck."

https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 2 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: B
A VPC endpoint gateway to the EC2 Instance is more specific and more secure than forming a VPC peering that exposes the whole of the
VPC infrastructure just for one connection.
upvoted 2 times

  JayBee65 8 months, 1 week ago


Your logic is correct but security is not a requirement here - the requirements are "The connectivity should not have a single point of
failure or bandwidth concerns." A VPC gateway endpoint" would form a single point of failure, so B is incorrect, (and C and D are
incorrect for the same reason, they create single points of failure).
upvoted 4 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
Correct answer is A
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
VPC peering allows resources in different VPCs to communicate with each other as if they were within the same network. This solution
would establish a direct network route between VPC-A and VPC-B, eliminating the need for a single point of failure or bandwidth concerns.
upvoted 1 times

  waiyiu9981 8 months, 3 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27763-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 4 times
Question #238 Topic 1

A company wants to experiment with individual AWS accounts for its engineer team. The company wants to be notified as soon as the Amazon
EC2 instance usage for a given month exceeds a specific threshold for each account.

What should a solutions architect do to meet this requirement MOST cost-effectively?

A. Use Cost Explorer to create a daily report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an Amazon
Simple Email Service (Amazon SES) notification when a threshold is exceeded.

B. Use Cost Explorer to create a monthly report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an
Amazon Simple Email Service (Amazon SES) notification when a threshold is exceeded.

C. Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope to EC2 instances. Set an alert
threshold for the budget. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when a threshold is
exceeded.

D. Use AWS Cost and Usage Reports to create a report with hourly granularity. Integrate the report data with Amazon Athena. Use Amazon
EventBridge to schedule an Athena query. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when
a threshold is exceeded.

Correct Answer: B

Community vote distribution


C (95%) 5%

  Aninina Highly Voted  8 months, 2 weeks ago


Selected Answer: C
AWS Budgets allows you to create budgets for your AWS accounts and set alerts when usage exceeds a certain threshold. By creating a
budget for each account, specifying the period as monthly and the scope as EC2 instances, you can effectively track the EC2 usage for
each account and be notified when a threshold is exceeded. This solution is the most cost-effective option as it does not require additional
resources such as Amazon Athena or Amazon EventBridge.
upvoted 8 times

  vijaykamal Most Recent  2 days, 18 hours ago


Selected Answer: C
Option A and Option B suggest using Cost Explorer to create reports and send notifications. While Cost Explorer is useful for analyzing
costs, it does not provide the real-time alerting capability that AWS Budgets offers.

Option D suggests using AWS Cost and Usage Reports integrated with Amazon Athena and Amazon EventBridge, which can be a more
complex and potentially costlier solution compared to AWS Budgets for this specific use case. It's also more suitable for fine-grained,
custom analytics rather than straightforward threshold-based alerts.
upvoted 1 times

  TariqKipkemei 6 days, 23 hours ago


Selected Answer: C
AWS Budgets was designed to handle this scenario.
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: C
Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope to EC2 instances. Set an alert
threshold for the budget. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when a threshold
is exceeded.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
By creating a cost budget for each account, specifying the period as monthly and scoping it to EC2, you can track and monitor the costs
associated with EC2 specifically. Set an alert threshold in the budget, which will trigger a notification when the specified threshold is
exceeded. Configure an SNS to receive the notification, which can be subscribed to by the company to receive immediate alerts.

A and B are not the most cost-effective solutions as they involve using Cost Explorer to create reports, which may not provide real-time
notifications when the threshold is exceeded. Additionally, A. suggests using a daily report, while B. suggests using a monthly report,
which may not provide the desired level of granularity for immediate notifications.

D involves using Cost and Usage Reports with Athena and EventBridge. This solution provides more flexibility and data analysis
capabilities, it is more complex and may incur additional costs for using Athena and generating hourly reports.
upvoted 1 times
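
A boto3 sketch of the budget option C describes, scoped to EC2 with an SNS subscriber; the account ID, budget amount, 80% threshold, and topic ARN are placeholders chosen for illustration.

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "ec2-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,               # alert at 80% of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "SNS",
                         "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"}],
    }],
)
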
  Samuel03 7 months, 1 week ago
Selected Answer: D
I go with D. It says "as soon as", "daily" reports seems to be a bit longer time frame to wait in my opinion.
upvoted 1 times

  Bofi 7 months ago


Athena can only be used with S3; that is enough to discard D.
upvoted 1 times

  Samuel03 7 months, 1 week ago


Actually, I take that back. It clearly says "Cost effective."
upvoted 3 times

  alexleely 8 months, 1 week ago


C: AWS Budgets allows you to set a budget for costs and usage for your accounts and you can set alerts when the budget threshold is
exceeded in real-time which meets the requirement.

Why not B: B would be the most cost-effective if the requirements didn't ask for real-time notification. You would not incur additional costs
for the daily or monthly reports and the notifications. But doesn't provide real-time alerts.
upvoted 4 times

  mp165 8 months, 2 weeks ago


Selected Answer: C
Agree...C
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
Answer is C
upvoted 1 times

  venice1234 8 months, 2 weeks ago


Selected Answer: C
https://aws.amazon.com/getting-started/hands-on/control-your-costs-free-tier-budgets/
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: C
AWS Budgets, IMO; it's made for this.
upvoted 2 times
Question #239 Topic 1

A solutions architect needs to design a new microservice for a company’s application. Clients must be able to call an HTTPS endpoint to reach the
microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write
the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.

Which solution will deploy the function in the MOST operationally efficient way?

A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.

B. Create a Lambda function URL for the function. Specify AWS_IAM as the authentication type.

C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the
Lambda@Edge function.

D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions. Specify AWS_IAM as the authentication type.

Correct Answer: A

Community vote distribution


A (87%) 13%

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
This option is the most operationally efficient as it allows you to use API Gateway to handle the HTTPS endpoint and also allows you to use
IAM to authenticate the calls to the microservice. API Gateway also provides many additional features such as caching, throttling, and
monitoring, which can be useful for a microservice.
upvoted 15 times
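
A rough boto3 sketch of option A: a REST API whose method uses AWS_IAM authorization and proxies to the Lambda function. Region, account ID, function name, and stage name are placeholders, and the Go function is assumed to exist already. (For option B, the equivalent single call would be create_function_url_config with AuthType="AWS_IAM".)

import boto3

region = "us-east-1"
account_id = "111122223333"                                                  # placeholder
function_name = "microservice"                                               # placeholder
lambda_arn = f"arn:aws:lambda:{region}:{account_id}:function:{function_name}"

apigw = boto3.client("apigateway", region_name=region)
lam = boto3.client("lambda", region_name=region)

api = apigw.create_rest_api(name="microservice-api",
                            endpointConfiguration={"types": ["REGIONAL"]})
api_id = api["id"]
root_id = next(r["id"] for r in apigw.get_resources(restApiId=api_id)["items"]
               if r["path"] == "/")

# IAM authentication: callers must sign their requests with SigV4.
apigw.put_method(restApiId=api_id, resourceId=root_id, httpMethod="ANY",
                 authorizationType="AWS_IAM")

# Lambda proxy integration to the function.
apigw.put_integration(restApiId=api_id, resourceId=root_id, httpMethod="ANY",
                      type="AWS_PROXY", integrationHttpMethod="POST",
                      uri=(f"arn:aws:apigateway:{region}:lambda:path/2015-03-31/"
                           f"functions/{lambda_arn}/invocations"))

# Allow API Gateway to invoke the function, then deploy a stage to get the HTTPS endpoint.
lam.add_permission(FunctionName=function_name, StatementId="apigw-invoke",
                   Action="lambda:InvokeFunction", Principal="apigateway.amazonaws.com",
                   SourceArn=f"arn:aws:execute-api:{region}:{account_id}:{api_id}/*")
apigw.create_deployment(restApiId=api_id, stageName="prod")
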

  TariqKipkemei Most Recent  6 days, 22 hours ago


Selected Answer: A
Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
This option is the most operationally efficient as it allows you to use API Gateway to handle the HTTPS endpoint and also allows you to use
IAM to authenticate the calls to the microservice. API Gateway also provides many additional features such as caching, throttling, and
monitoring, which can be useful for a microservice.
upvoted 1 times

  Smart 2 months ago


Selected Answer: B
C & D (incorrect) - what will be the origin for CDN? Plus Go is not supported. Plus for option D, IAM is not supported.

A, why develop and manage API in API GW?

Just enable Lambda function URL...


upvoted 1 times

  Zeezie 2 months ago


B -- MOST operationally efficient. Just look at the Lambda "Create function" console:

"Enable function URL: Use function URLs to assign HTTP(S) endpoints to your Lambda function."

"Auth type: Choose the auth type for your function URL. AWS_IAM: Only authenticated IAM users and roles can make requests to your function URL."
upvoted 1 times

  testopesto 2 months ago


Selected Answer: B
The MOST operationally efficient way
https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
upvoted 1 times
  Undisputed 2 months ago
Selected Answer: A
Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
By creating an API Gateway REST API, you can define the HTTPS endpoint that clients can call to reach the microservice. Enable IAM
authentication on the API to enforce authentication for the API calls. This ensures that only authenticated requests are allowed to reach
the microservice. This solution is operationally efficient as it leverages the built-in capabilities of API Gateway to handle the HTTP
endpoint, request routing, and IAM authentication. It provides a scalable and managed solution without the need for additional
infrastructure components.

B suggests creating a Lambda URL and specifying AWS IAM as the authentication type. While this can provide IAM authentication, it lacks
the benefits of API Gateway, such as request validation, rate limiting, and easy management of API configurations.

C and D involve using CloudFront, Lambda@Edge, and CloudFront Functions. While these services offer flexibility and the ability to run
logic at the edge locations, they introduce additional complexity and may not be necessary for the given requirement.
upvoted 1 times

  Smart 2 months ago


The question is not asking for API Gateway benefits.
upvoted 1 times

  vassdlevi 4 months ago


Selected Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html
upvoted 1 times

  PRASAD180 7 months, 1 week ago


A is correct, 100%.
upvoted 2 times

  tellmenowwwww 7 months, 1 week ago


Why is C not correct?
upvoted 3 times

  moiraqi 4 months, 1 week ago


Lambda@Edge only supports Node.js and Python.
upvoted 2 times

  vassdlevi 3 months ago


AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows
you to use any additional programming languages to author your functions.
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: A
https://asanchez.dev/blog/deploy-api-go-aws-lambda-gateway/
upvoted 1 times

  SanLi 8 months, 2 weeks ago


D
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
upvoted 1 times

  JayBee65 8 months, 1 week ago


With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in JavaScript for high-scale, latency-sensitive CDN
customizations. But you are using Go 1.x, and Lambda supports Go, so A makes a lot more sense than D.
upvoted 1 times
Question #240 Topic 1

A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office
users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each
webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.

Which solution provides the LOWEST data transfer egress cost for the company?

A. Host the visualization tool on premises and query the data warehouse directly over the internet.

B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.

C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same
AWS Region.

D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in
the same Region.

Correct Answer: C

Community vote distribution


D (88%) 8%

  AlessandraSAA Highly Voted  7 months ago


Selected Answer: D
A. --> No since if you access via internet you are creating egress traffic.
B. -->It's a good choice to have both DWH and visualization in the same region to lower the egress transfer (i.e. data going egress/out of
the region) but if you access over internet you might still have egress transfer.
C. -> Valid but in this case you send out of AWS 50MB if you query the DWH instead of the visualization tool, D removes this need since
puts the visualization tools in AWS with the DWH so reduces data returned out of AWS from 50MB to 500KB
D. --> Correct, see explanation on answer C
-------------------------------------------------------------------------------------------------------------------------------------------
Useful links:
AWS Direct Connect connection create a connection in an AWS Direct Connect location to establish a network connection from your
premises to an AWS Region.
https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
upvoted 6 times
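
The per-query difference can be made concrete with a little arithmetic; the price per GB below is purely illustrative, the point is the ratio between the two options.

QUERY_RESULT_MB = 50     # returned by the data warehouse (option C sends this out of AWS)
WEBPAGE_KB = 500         # sent by the visualization tool (option D sends only this out of AWS)
PRICE_PER_GB = 0.02      # illustrative Direct Connect data-transfer-out rate, not a real quote

egress_c_gb = QUERY_RESULT_MB / 1024
egress_d_gb = WEBPAGE_KB / (1024 * 1024)

print(f"Option C egress per query: {egress_c_gb:.4f} GB")
print(f"Option D egress per query: {egress_d_gb:.6f} GB")
print(f"Roughly {QUERY_RESULT_MB * 1024 / WEBPAGE_KB:.0f}x less data leaves AWS with option D")
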

  TariqKipkemei Most Recent  6 days, 22 hours ago


Selected Answer: D
Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in
the same Region
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location
in the same Region.
upvoted 1 times

  jtexam 2 months, 3 weeks ago


Selected Answer: B
By hosting in the same Region, you have the 500 KB transfer charged at the internet transfer tier and the 50 MB charged at the inter-Region tier.

Using Direct Connect, both are charged at the Direct Connect tier, and the Direct Connect tier is not cheap.

So I go for B.
upvoted 1 times

  Mmmmmmkkkk 3 months ago


Aaaaaaaa
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
Hosting the visualization tool in the same AWS Region as the data warehouse and accessing it over a Direct Connect connection within the
same Region eliminates data transfer fees and ensures low-latency, high-bandwidth connectivity.

A. Hosting the visualization tool on premises and querying the data warehouse over the internet incurs data transfer costs for every query
result, as well as potential latency and bandwidth limitations.

B. Hosting the visualization tool in the same AWS Region as the data warehouse but accessing it over the internet still incurs data transfer
costs for each query result.

C. Hosting the visualization tool on premises and querying the data warehouse over a Direct Connect connection within the same AWS
Region incurs data transfer costs for every query result and adds complexity by requiring on-premises infrastructure.
upvoted 1 times
  dexpos 8 months ago
Selected Answer: D
D let you reduce at minimum the data transfer costs
upvoted 1 times

  alexleely 8 months, 1 week ago


Selected Answer: D
D: Direct Connect connection at a location in the same Region will provide the lowest data transfer egress cost, improved performance,
and lower complexity

Why it is not C is because the visualization tool is hosted on-premises, as it's not hosted in the same region as the data warehouse the
data transfer between them would occur over the internet, thus, would incur in egress data transfer costs.
upvoted 4 times

  markw92 3 months, 2 weeks ago


Option C doesn't travel through the internet because we have Direct Connect. If you are hosting your visualization tool in the same Region,
why do you need a Direct Connect connection, which D has? It doesn't make sense. So C is the right answer.
upvoted 1 times

  Vickysss 8 months, 2 weeks ago


Selected Answer: C
https://www.nops.io/reduce-aws-data-transfer-costs-dont-get-stung-by-hefty-egress-fees/
upvoted 2 times

  JayBee65 8 months, 1 week ago


Whilst "Direct Connect can help lower egress costs even after taking the installation costs into account. This is because AWS charges
lower transfer rates." D removes the need to send the query results out of AWS and instead returns the web page, so reduces data
returned from 50MB to 500KB, so D
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
Correct answer is D
upvoted 4 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
Should be D
https://aws.amazon.com/directconnect/pricing/
https://aws.amazon.com/blogs/aws/aws-data-transfer-prices-reduced/
upvoted 2 times

  Morinator 8 months, 3 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/47140-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #241 Topic 1

An online learning company is migrating to the AWS Cloud. The company maintains its student records in a PostgreSQL database. The company
needs a solution in which its data is available and online across multiple AWS Regions at all times.

Which solution will meet these requirements with the LEAST amount of operational overhead?

A. Migrate the PostgreSQL database to a PostgreSQL cluster on Amazon EC2 instances.

B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ feature turned on.

C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region.

D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.

Correct Answer: C

Community vote distribution


C (73%) B (27%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: C
Multi-AZ is not the same as multi-Region.
upvoted 19 times

  alexleely Highly Voted  8 months, 1 week ago


Selected Answer: B
B: Amazon RDS Multi-AZ feature automatically creates a synchronous replica in another availability zone and failover to the replica in the
event of an outage. This will provide high availability and data durability across multiple AWS regions which fit the requirements.

Though C may sound good, it in fact requires manual management and monitoring of the replication process due to the fact that Amazon
RDS read replicas are asynchronous, meaning there is a delay between the primary and read replica. Therefore, there will be a need to
ensure that the read replica is constantly up-to-date and someone still has to fix any read replica errors during the replication process
which may cause data inconsistency. Lastly, you still have to configure additional steps to make it fail over to the read replica.
upvoted 13 times

  Mahadeva 8 months, 1 week ago


But the question is clearly asking for Multiple Regions. Multi-AZ is not across Regions.
upvoted 18 times

  alexleely 8 months, 1 week ago


You are right, Multi-AZ is only within one Region. C would be the right answer.
upvoted 11 times

  smartegnine 3 months, 3 weeks ago


https://aws.amazon.com/rds/features/multi-az/



Selected Answer: B
In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously
replicates the data to an instance in a different AZ.
upvoted 1 times

  Rehan33 7 months, 1 week ago


I go with option B because:
Multi-AZ is for high availability.
Read replicas are for low latency.
The question talks about the data being available online.
upvoted 3 times

  vijaykamal Most Recent  2 days, 17 hours ago


Selected Answer: B
Option C, while providing a read replica in another Region, adds complexity to the architecture and may introduce some additional
operational overhead compared to Multi-AZ. Cross-Region replication involves setting up and managing replication between two separate
RDS instances.
upvoted 1 times

  TariqKipkemei 6 days, 22 hours ago


Selected Answer: C
Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region
upvoted 1 times
  Guru4Cloud 2 weeks, 6 days ago
Selected Answer: C
Multi-AZ is not the same as Multi-Regional
upvoted 1 times

  Valder21 3 weeks, 5 days ago


Can someone explain why not D?
upvoted 1 times

  beginnercloud 1 month ago


Selected Answer: C
key words "AWS Regions at all times" so C is correct
upvoted 1 times

  fuzzycr 2 months, 2 weeks ago


Selected Answer: C
key words "AWS Regions at all times"
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
By migrating the PostgreSQL database to an RDS for PostgreSQL DB instance and creating a read replica in another AWS Region, you can
achieve data availability and online access across multiple Regions. This solution requires less operational overhead compared to
managing a PostgreSQL cluster on EC2 instances (Option A) or setting up manual replication using snapshots (Option D). Additionally,
Amazon RDS handles the underlying infrastructure and replication setup, reducing the operational complexity for the company.

Option B, is a valid solution for achieving high availability within a single AWS Region. However, it does not meet the requirement of having
the data available and online across multiple AWS Regions at all times, which is specified in the question. The Multi-AZ feature in RDS
provides automatic failover within the same Region, but it does not replicate the data to multiple Regions.
upvoted 3 times
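
A minimal boto3 sketch of option C's cross-Region read replica; note that the call is made in the destination Region and the source is referenced by its ARN. Identifiers, Regions, and the instance class are placeholders.

import boto3

# Client for the *destination* Region where the replica will live.
rds_west = boto3.client("rds", region_name="us-west-2")

rds_west.create_db_instance_read_replica(
    DBInstanceIdentifier="students-replica-west",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:students-primary",
    DBInstanceClass="db.m6g.large",
    # KmsKeyId="...",  # required in the destination Region if the source instance is encrypted
)
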

  mal1903 3 months, 2 weeks ago


Selected Answer: B
C and D just specifiy another single region. This does not translate to multiple regions.

B (Multi-AZ) means the solution will be highly available.

The data will be available in multiple regions for both B and C but B is a better solution!
upvoted 1 times

  Guru4Cloud 1 week, 4 days ago


its data is available and online across multiple AWS Regions at all times
upvoted 1 times

  MrAWSAssociate 3 months, 2 weeks ago


Selected Answer: C
Answer B is not right, because RDS Multi-AZ always spans at least two Availability Zones within a single Region, and the question requires
the RDS DB to be available in multiple Regions. Therefore, C is the most suitable answer for this question.
upvoted 1 times

  MrAWSAssociate 3 months, 2 weeks ago


I would like to change my answer to "B". The question has some distractor words: "its data is available and online across multiple AWS
Regions at all times". We agree that AWS is a cloud service available online around the world across many Regions. So option "B" is the most
appropriate answer, since Multi-AZ focuses on the availability factor and it has the LEAST amount of operational overhead.
upvoted 1 times

  abhishek2021 3 months, 2 weeks ago


Selected Answer: B
B & C both make data available. However, B is less overhead.
I think the question is asking for data availability across multiple Regions, not for a DR solution. So RDS being accessible over a public
IP will do the trick for data being available across Regions.
upvoted 1 times

  Guru4Cloud 1 week, 4 days ago


Multi-AZ is not the same as Multi-Regional
upvoted 1 times

  Bmarodi 3 months, 2 weeks ago


Selected Answer: C
Option C meets the requirements, ref. link: https://aws.amazon.com/blogs/database/best-practices-for-amazon-rds-for-postgresql-cross-
region-read-replicas/
upvoted 1 times
  smartegnine 3 months, 3 weeks ago
Selected Answer: B
In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously
replicates the data to an instance in a different AZ.

https://aws.amazon.com/rds/features/multi-az/
upvoted 1 times

  ruqui 4 months, 1 week ago


Selected Answer: C
B is wrong because the Multi-AZ feature doesn't allow replicas in another Region (the requirement is that "data should be available and
online across multiple AWS Regions at all times"). The only feasible option is C.
upvoted 1 times

  kaustubhBarhate 4 months, 2 weeks ago


Multi-AZ provides redundancy within a single Region, it does not replicate data across multiple Regions. If the requirement specifically
states the need for data availability across multiple Regions, creating a read replica in another Region (option C) would be the more
appropriate choice.
upvoted 1 times

  fakrap 4 months, 3 weeks ago


Selected Answer: C
Multi region
upvoted 2 times
Question #242 Topic 1

A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2
instances be returned in response to DNS queries.

Which policy should be used to meet this requirement?

A. Simple routing policy

B. Latency routing policy

C. Multivalue routing policy

D. Geolocation routing policy

Correct Answer: C

Community vote distribution


C (94%) 6%

  LuckyAro Highly Voted  8 months, 2 weeks ago


Selected Answer: C
Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer
routing when you want to associate your routing records with a Route 53 health check. For example, use multivalue answer routing when
you need to return multiple values for a DNS query and route traffic to multiple IP addresses.

https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
upvoted 8 times

  TariqKipkemei Most Recent  6 days ago


Selected Answer: C
Use Multivalue answer routing policy when you want Route 53 to respond to DNS queries with up to eight healthy records selected at
random.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
C. Multivalue routing policy
upvoted 1 times


  animefan1 3 months ago


multivalue supports health checks
upvoted 1 times

  cookieMr 3 months ago


The Multivalue routing policy allows Route 53 to respond to DNS queries with multiple healthy IP addresses for the same resource. This is
particularly useful in scenarios where multiple instances are serving the same purpose and need to be load balanced or failover capable.
With the Multivalue routing policy, Route 53 returns multiple IP addresses in a random order to distribute the traffic across all healthy
instances.

Option A (Simple routing policy) would only return a single IP address in response to DNS queries and does not support returning multiple
addresses.

Option B (Latency routing policy) is used to route traffic based on the lowest latency to the resource and does not fulfill the requirement of
returning all healthy IP addresses.

Option D (Geolocation routing policy) is used to route traffic based on the geographic location of the user and does not fulfill the
requirement of returning all healthy IP addresses.

Therefore, the Multivalue routing policy is the most suitable option for returning the IP addresses of all healthy EC2 instances in response
to DNS queries.
upvoted 2 times
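
A boto3 sketch of the multivalue answer records cookieMr describes, one record set per instance with its own health check so unhealthy instances drop out of DNS answers. The hosted zone ID, record name, IPs, and health-check IDs are placeholders.

import boto3

route53 = boto3.client("route53")
hosted_zone_id = "Z0EXAMPLE"
instances = {                      # SetIdentifier -> (public IP, health check ID), placeholders
    "web-1": ("203.0.113.11", "11111111-aaaa-bbbb-cccc-000000000001"),
    "web-2": ("203.0.113.12", "11111111-aaaa-bbbb-cccc-000000000002"),
    # ...one entry per EC2 instance; Route 53 returns up to eight healthy records per query
}

changes = []
for set_id, (ip, health_check_id) in instances.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,        # each multivalue record needs a unique identifier
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,
        },
    })

route53.change_resource_record_sets(HostedZoneId=hosted_zone_id,
                                    ChangeBatch={"Changes": changes})
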

  MLCL 6 months, 2 weeks ago


IPs are returned RANDOMLY for multivalue routing; is this what we want?
upvoted 4 times
  WherecanIstart 6 months, 3 weeks ago
Selected Answer: C
Multivalue answer routing policy ...answer is C
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
Answer is C
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
Should be C
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/46491-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/46491-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #243 Topic 1

A medical research lab produces data that is related to a new study. The lab wants to make the data available with minimum latency to clinics
across the country for their on-premises, file-based applications. The data files are stored in an Amazon S3 bucket that has read-only permissions
for each clinic.

What should a solutions architect recommend to meet these requirements?

A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic

B. Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.

C. Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.

D. Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-premises servers.

Correct Answer: C

Community vote distribution


A (93%) 7%

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic

AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and
secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. By deploying a file gateway
as a virtual machine on each clinic's premises, the medical research lab can provide low-latency access to the data stored in the S3 bucket
while maintaining read-only permissions for each clinic. This solution allows the clinics to access the data files directly from their on-
premises file-based applications without the need for data transfer or migration.
upvoted 12 times

  TariqKipkemei Most Recent  6 days ago


Selected Answer: A
The Amazon S3 File Gateway enables you to store and retrieve objects in Amazon Simple Storage Service (S3) using file protocols such as
Network File System (NFS) and Server Message Block (SMB). Objects written through S3 File Gateway can be directly accessed in S3.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
upvoted 1 times

  cookieMr 3 months ago


A. It allows the clinics to access the data files stored in the S3 bucket through a file interface. The file gateway caches frequently accessed
data locally, reducing latency and providing fast access to the data.

B. It involves transferring the data files from the Amazon S3 bucket to each clinic's on-premises applications using AWS DataSync. While
this enables data migration, it may not provide real-time access and may introduce additional latency.

C. It is suitable for block-level access to data rather than file-level access. It may not be the most efficient solution for file-based
applications.

D. It involves using Amazon EFS, which is a scalable file storage service, to provide file-level access to the data. However, it may introduce
additional complexity and latency compared to using a file gateway solution.
upvoted 2 times
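
Once a file gateway VM is activated at a clinic, the share itself can be created along these lines (a boto3 sketch; the gateway ARN, IAM role, bucket name, and client CIDR are placeholders). ReadOnly matches the read-only permission each clinic has on the bucket.

import uuid
import boto3

sgw = boto3.client("storagegateway")

sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::research-study-data",
    ReadOnly=True,                      # clinics only read the study data
    ClientList=["10.20.0.0/16"],        # on-premises network allowed to mount the share
)
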

  Bmarodi 4 months ago


Selected Answer: A
Option A meets the requirements.
upvoted 1 times

  jaswantn 5 months, 2 weeks ago


For file-based applications, use File Gateway (Option A).
upvoted 1 times

  Grace83 6 months, 2 weeks ago


Definitely A.
Why are there so many wrong answers by Admins?
upvoted 4 times
  maggie135 3 months ago
I guess to force us to read and think, so one can't just memorize the answers and go to the exam. :)
upvoted 3 times

  AlessandraSAA 7 months ago


Selected Answer: A
Amazon S3 File Gateway enables you to store file data as objects in Amazon S3 cloud storage for data lakes, backups, and Machine
Learning workflows. With Amazon S3 File Gateway, each file is stored as an object in Amazon S3 with a one-to-one mapping between a file
and an object.

Volume Gateway provides block storage volumes over iSCSI, backed by Amazon S3, and provides point-in-time backups as Amazon EBS
snapshots. Volume Gateway integrates with AWS Backup, an automated and centralized backup service, to protect Storage Gateway
volumes.

So it's A
upvoted 3 times

  Steve_4542636 7 months ago


Selected Answer: A
A for answer
upvoted 1 times

  bdp123 8 months ago


Selected Answer: A
https://cloud.in28minutes.com/aws-certification-aws-storage-gateway
upvoted 1 times

  kbaruu 8 months, 1 week ago


Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway...
upvoted 1 times

  imisioluwa 8 months, 2 weeks ago


Selected Answer: A
The correct answer is A.
https://www.knowledgehut.com/tutorials/aws/aws-storage-gateway
https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html
upvoted 1 times

  venice1234 8 months, 2 weeks ago


Selected Answer: C
I think C (Volume Gateway) is correct as it has an option to have Local Storage with Asynchronous sync with S3. This would give low latency
access to all local files not just cached/recent files.
upvoted 2 times

  laicos 8 months, 2 weeks ago


Selected Answer: A
https://aws.amazon.com/storagegateway/file/
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: A
It's A imo (file gateway)
upvoted 2 times
Question #244 Topic 1

A company is using a content management system that runs on a single Amazon EC2 instance. The EC2 instance contains both the web server
and the database software. The company must make its website platform highly available and must enable the website to scale to meet user
demand.

What should a solutions architect recommend to meet these requirements?

A. Move the database to Amazon RDS, and enable automatic backups. Manually launch another EC2 instance in the same Availability Zone.
Configure an Application Load Balancer in the Availability Zone, and set the two instances as targets.

B. Migrate the database to an Amazon Aurora instance with a read replica in the same Availability Zone as the existing EC2 instance. Manually
launch another EC2 instance in the same Availability Zone. Configure an Application Load Balancer, and set the two EC2 instances as targets.

C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.

D. Move the database to a separate EC2 instance, and schedule backups to Amazon S3. Create an Amazon Machine Image (AMI) from the
original EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI
across two Availability Zones.

Correct Answer: C

Community vote distribution


C (95%) 5%

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: C
C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.

This approach will provide both high availability and scalability for the website platform. By moving the database to Amazon Aurora with a
read replica in another availability zone, it will provide a failover option for the database. The use of an Application Load Balancer and an
Auto Scaling group across two availability zones allows for automatic scaling of the website to meet increased user demand. Additionally,
creating an AMI from the original EC2 instance allows for easy replication of the instance in case of failure.
upvoted 9 times

  Bmarodi 4 months ago


Very good explanations!
upvoted 1 times

  TariqKipkemei Most Recent  6 days ago


Selected Answer: C
Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: C
C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.
upvoted 1 times

  MutiverseAgent 2 months ago


Selected Answer: D
The question does not say whether the current application is using a relational database, so how can we be sure that it can be moved to RDS or Aurora as answers A, B, and C state? In my opinion the right answer is D.
upvoted 1 times

  animefan1 3 months ago


Selected Answer: C
has all options needed for HA
upvoted 1 times
  cookieMr 3 months ago
Selected Answer: C
Option A does not provide a solution for high availability or scalability. Manually launching another EC2 instance in the same AZ may not
ensure high availability, as a failure in that AZ would result in downtime.

Option B improves database performance and provides a level of fault tolerance, it does not address the scalability aspect of the website
platform.

Option C provides both high availability and fault tolerance. Creating an AMI allows for easy replication of the EC2 instance across AZs.
Configuring an ALB in two AZs and attaching an ASG ensures scalability and load distribution across multiple instances.

Option D does not provide the high availability and scalability required by the company. Scheduled backups to S3 address data protection
but do not contribute to website availability or scalability.
upvoted 1 times
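
For illustration, a rough boto3 sketch of the moving parts in option C (the instance, subnet, and target group identifiers are placeholders): create an AMI from the existing instance, a launch template that uses it, and an Auto Scaling group spanning two Availability Zones that registers with the ALB target group.

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # 1. Create an AMI from the existing web/CMS instance (placeholder instance ID).
    #    In practice, wait for the AMI to reach the 'available' state before using it.
    image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="cms-web-ami")

    # 2. Create a launch template that uses the new AMI.
    template = ec2.create_launch_template(
        LaunchTemplateName="cms-web",
        LaunchTemplateData={"ImageId": image["ImageId"], "InstanceType": "t3.medium"},
    )

    # 3. Create an Auto Scaling group that spans two Availability Zones (one subnet
    #    per AZ) and registers instances with the ALB target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="cms-web-asg",
        LaunchTemplate={"LaunchTemplateId": template["LaunchTemplate"]["LaunchTemplateId"]},
        MinSize=2,
        MaxSize=6,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/cms/abc123"],
    )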

  Bmarodi 4 months ago


Selected Answer: C
Option C meets the requirements.
upvoted 1 times

  ssoffline 4 months, 1 week ago


Why not D?

Are we just assuming that there will be no write to the db?


upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: C
Absolutely C.
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
C: This will allow the website platform to be highly available by using Aurora, which provides automatic failover and replication.
Additionally, by creating an AMI from the original EC2 instance, the Auto Scaling group can automatically launch new instances in multiple
availability zones and use the Application Load Balancer to distribute traffic across them. This way, the website will be able to handle the
increased traffic, and will be less likely to go down due to a single point of failure.
upvoted 3 times
Question #245 Topic 1

A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon
EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development
environment and a production environment. The production environment will have periods of high traffic.

Which solution will configure the development environment MOST cost-effectively?

A. Reconfigure the target group in the development environment to have only one EC2 instance as a target.

B. Change the ALB balancing algorithm to least outstanding requests.

C. Reduce the size of the EC2 instances in both environments.

D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group.

Correct Answer: A

Community vote distribution


A (58%) D (40%)

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group

This option will configure the development environment in the most cost-effective way as it reduces the number of instances running in
the development environment and therefore reduces the cost of running the application. The development environment typically requires
less resources than the production environment, and it is unlikely that the development environment will have periods of high traffic that
would require a large number of instances. By reducing the maximum number of instances in the development environment's Auto
Scaling group, the company can save on costs while still maintaining a functional development environment.
upvoted 10 times

  JayBee65 8 months, 1 week ago


No, it will not reduce the number of instances being used, since a minimum of 2 will be used at all times.
upvoted 5 times

  Mandar15 Most Recent  3 days, 22 hours ago


Selected Answer: A
Option A
upvoted 1 times

  TariqKipkemei 6 days ago


Selected Answer: A
Won't think much about this one; option A is the most cost-effective.
upvoted 1 times

  Its_SaKar 1 week, 1 day ago


Selected Answer: A
Option A, because it can't be option D: there should be at least two EC2 instances in the Auto Scaling group, and that can't be reduced to one as option D suggests.

So, simply reconfigure the target group in the development environment to have only one EC2 instance as a target, as option A says, to reduce cost.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Reconfigure the target group in the development environment to have only one EC2 instance as a target
upvoted 1 times

  kwang312 4 weeks ago


Selected Answer: A
I chose A, but I cannot understand this question: which environment handles the traffic? The question is not clear enough to have a definite correct answer.
upvoted 1 times

  yhonatan2288 1 month, 3 weeks ago


Selected Answer: A
The development environment generally does not need to handle the same amount of traffic as the production environment and can therefore run on a smaller infrastructure to save costs. By configuring only one EC2 instance as a target in the development environment's Auto Scaling group, you reduce operating costs by keeping fewer resources active and consuming fewer EC2 instances.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
By configuring the target group in the development environment to have only one EC2 instance as a target, you are effectively reducing
the resources allocated to that environment. This helps minimize costs by utilizing fewer EC2 instances and associated resources.

Option B does not directly address the cost-effectiveness of the development environment. It focuses on load balancing strategies rather
than cost optimization.

Option C may not be the most cost-effective solution unless the current instance sizes are over-provisioned or unnecessary for the
application's requirements.

Option D can help reduce costs, but it may impact the environment's ability to handle traffic and scale efficiently, especially during periods
of increased load.

Overall, option A provides a cost-effective approach by minimizing the resources allocated to the development environment while still
maintaining a functional setup.
upvoted 1 times
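
Whichever way you read the options, the change itself is small. A hedged boto3 sketch that pins a development Auto Scaling group at a single instance (the group name is a placeholder, not from the question):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Pin the development environment's Auto Scaling group to a single instance.
    # "dev-web-asg" is a placeholder group name.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="dev-web-asg",
        MinSize=1,
        MaxSize=1,
        DesiredCapacity=1,
    )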

  MrAWSAssociate 3 months, 2 weeks ago


I think option D is true, only in case we have multipe target groups, but remember in the question it has been mentioned that there is only
single target group. If we do what option "D" indicated in a single target group, it will affect the production group too. Therefore, I think
option A is more reasonable.
upvoted 1 times

  ChrisAn 3 months, 3 weeks ago


Selected Answer: A
A# By reducing the number of EC2 instances in the target group of the development environment to just one, you can lower the cost
associated with running multiple instances. Since the development environment typically has lower traffic and does not require the same
level of availability and scalability as the production environment, having a single instance is sufficient for testing and development
purposes.
upvoted 2 times

  markw92 3 months, 2 weeks ago


I also thought D was the answer, but after carefully reading the question: the current minimum number of EC2 instances is 2, so even if we reduce the Auto Scaling group to its minimum, that still leaves 2 in the dev environment. I think A is the answer. Pretty tricky; we have to pay attention to the small details.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: A
Option A is the most cost-effective.
upvoted 1 times

  michellemeloc 4 months, 2 weeks ago


Selected Answer: A
Only A reduces the cost effectively. D COULD reduce it, but not immediately.
upvoted 2 times

  ErfanKh 5 months, 2 weeks ago


Selected Answer: A
I am voting A here, there is no need for Autoscaling since we can just set dev environment to 1 EC2 instance which would be the lowest
cost.
upvoted 4 times

  Kenzo 5 months, 4 weeks ago


Honestly this question is useless, there's nothing wrong with the existing environment
upvoted 2 times
  taehyeki 6 months, 3 weeks ago
Selected Answer: D
If we specify only one instance in the target group, there is no benefit to using an Auto Scaling group, so I go with D.
upvoted 2 times

  HaineHess 7 months ago


Selected Answer: A
it's A (D does not reduce €)
upvoted 3 times
Question #246 Topic 1

A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions
architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the
internet traffic is not reaching the EC2 instances.

How should the solutions architect reconfigure the architecture to resolve this issue?

A. Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow internet traffic.

B. Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.

C. Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route. Add a rule to the EC2
instances’ security groups to allow outbound traffic to 0.0.0.0/0.

D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets
with a route to the private subnets.

Correct Answer: C

Community vote distribution


D (83%) Other

  ktulu2602 Highly Voted  7 months ago


I think either the question or the answers are not formulated correctly because of this document:
https://docs.aws.amazon.com/prescriptive-guidance/latest/load-balancer-stickiness/subnets-routing.html
A - Might be possible but it's quite impractical
B - Not needed as the setup described should work as is provided the SGs of the EC2 instances accept traffic from the ALB
C - Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route - not needed as the
EC2 instances would receive the traffic from the ALB ENIs. Add a rule to the EC2 instances’ security groups to allow outbound traffic to
0.0.0.0/0 - the default behaviour of the SG is to allow outbound traffic only.
D - Create public subnets in each Availability Zone. Associate the public subnets with the ALB - if it's a internet facing ALB these should
already be in place. Update the route tables for the public subnets with a route to the private subnets - no need as the local prefix entry in
the route tables would take care of this point

I'm 110% sure the question or answers or both are wrong. Prove me wrong! :)
upvoted 11 times

  UnluckyDucky 6 months, 3 weeks ago


Completely agreed, I was looking for an option to allow HTTPS traffic on port 443 from the ALB to the EC2 instance's security group.

Either the question or the answers are wrong.


upvoted 4 times

  bdp123 Highly Voted  7 months, 1 week ago


Selected Answer: D
I change my answer to 'D' because of following link:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 11 times

  vijaykamal Most Recent  2 days, 17 hours ago


Selected Answer: D
Option A (replace ALB with Network Load Balancer and add a NAT gateway) is not the most straightforward solution because it changes the
load balancer type and introduces a NAT gateway, which might be unnecessary if the goal is to use an ALB for web traffic. ALBs are
commonly used for internet-facing web applications.

Option B (move EC2 instances to public subnets and modify security group rules) involves placing instances in public subnets, which is
generally not recommended for security reasons. Additionally, it suggests modifying security group rules for outbound traffic, which
might not be the best practice to resolve the issue.

Option C (update route tables and security group rules) addresses the route table update, but it also suggests moving instances to public
subnets, which is not ideal from a security perspective.
upvoted 1 times

  TariqKipkemei 6 days ago


Selected Answer: D
Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets
with a route to the private subnets.
upvoted 1 times
  Its_SaKar 1 week, 1 day ago
Selected Answer: D
Option A is incorrect: the internet traffic here is HTTP and HTTPS, so an NLB is not the right fit.
Options B and C are incorrect because routing 0.0.0.0/0 that way is not a best practice.

Option D is correct because it's the only option left, and updating the route tables for the public subnets with a route to the private subnets ensures internet traffic can reach the EC2 instances in the private subnets.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. is the correct solution. By creating public subnets and associating them with the ALB, inbound internet traffic can reach the ALB. The
route tables for the public subnets are updated to include a route to the private subnets, allowing traffic to reach the EC2 instances in the
private subnets. This setup enables secure access to the application while allowing internet traffic to reach the EC2 instances through the
ALB.
upvoted 1 times

  A1975 2 months ago


Selected Answer: D
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: D
A. suggests using a different type of load balancer and configuring a NAT gateway, but it does not address the issue of internet traffic
reaching the EC2 instances.

B. suggests exposing the EC2 instances to the public internet, which may pose security risks and does not address the issue of inbound
internet traffic reaching the instances.

C. suggests configuring the EC2 instances to have outbound internet access, but it does not solve the problem of inbound internet traffic
reaching the instances.

D. is the correct solution. By creating public subnets and associating them with the ALB, inbound internet traffic can reach the ALB. The
route tables for the public subnets are updated to include a route to the private subnets, allowing traffic to reach the EC2 instances in the
private subnets. This setup enables secure access to the application while allowing internet traffic to reach the EC2 instances through the
ALB.
upvoted 3 times
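
A rough boto3 sketch of the public-subnet side of option D (the route table, internet gateway, subnet, and security group IDs are placeholders): the public subnets route 0.0.0.0/0 to the internet gateway, and the internet-facing ALB is created in one public subnet per Availability Zone.

    import boto3

    ec2 = boto3.client("ec2")
    elbv2 = boto3.client("elbv2")

    # Route the public subnets' outbound traffic to the internet gateway (placeholder IDs).
    ec2.create_route(
        RouteTableId="rtb-public",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0123456789abcdef0",
    )

    # Create the internet-facing ALB in one public subnet per Availability Zone.
    elbv2.create_load_balancer(
        Name="web-alb",
        Scheme="internet-facing",
        Type="application",
        Subnets=["subnet-public-a", "subnet-public-b"],
        SecurityGroups=["sg-alb"],
    )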

  Vinhkewl 3 months, 1 week ago


Should be C
It would normally make sense to segregate your ALBs into public or private zones by security group and target group, but this is
configuration rather than architectural placement - there is nothing preventing you from adding a rule to route specific paths or ports to a
public subnet from an ALB that has until then been serving private subnets only.
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: D
To attach Amazon EC2 instances that are located in a private subnet, first create public subnets
upvoted 3 times

  Bmarodi 4 months ago


Selected Answer: D
I vote for option D.
upvoted 1 times

  antropaws 4 months, 1 week ago


D is not quite accurate because subnets in a VPC have a local route by default, meaning that all subnets are able to communicate with
each other: "Every route table contains a local route for communication within the VPC. This route is added by default to all route tables".
This question is poorly formulated.
upvoted 2 times

  kraken21 6 months ago


Selected Answer: D
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 2 times

  Theodorz 7 months ago


Selected Answer: C
I think C would be correct answer.
upvoted 1 times

  AYap 7 months, 1 week ago


Answer: D
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: C
Just need to configure the outbound path from the servers back out to the Internet. Inbound path is already configured
upvoted 1 times

  nickolaj 7 months, 3 weeks ago


Selected Answer: C
The correct answer is C. To resolve the issue of internet traffic not reaching the EC2 instances, the solutions architect should update the
route tables for the EC2 instances' subnets to send 0.0.0.0/0 traffic through the internet gateway route. The EC2 instances are in private
subnets, so they need a route to the internet to be able to receive internet traffic. Additionally, the solutions architect should add a rule to
the EC2 instances' security groups to allow outbound traffic to 0.0.0.0/0, to ensure that the instances are allowed to send traffic out to the
internet.
upvoted 1 times

  markw92 3 months, 2 weeks ago


A private subnet can only access the internet via a NAT gateway or NAT instance. You can't route a private subnet through an internet gateway; an internet gateway is what makes a public subnet reachable from the internet. The whole idea of a private subnet is shielding it from the outside world, so it doesn't make sense to add an internet gateway route. Maybe it is a typo and the answer should say NAT, not internet gateway?!
upvoted 2 times

  ruqui 4 months, 1 week ago


your answer is wrong!!! private subnets don't have access to the internet gateway, it's not possible to configure a private subnet to
send traffic to an internet gateway
upvoted 2 times

  nickolaj 7 months, 3 weeks ago


Option B is not a complete solution, as it only allows outbound traffic, but the instances need to be able to receive inbound traffic from
the internet.

Option D is not necessary, as the internet-facing ALB is already specified and the EC2 instances are already part of the target group.

Option A is not a solution to the problem, as it does not address the underlying issue of the EC2 instances not being able to receive
internet traffic.
upvoted 1 times
Question #247 Topic 1

A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads
against the DB instance and recommends adding a read replica.

Which combination of actions should a solutions architect take before implementing this change? (Choose two.)

A. Enable binlog replication on the RDS primary node.

B. Choose a failover priority for the source DB instance.

C. Allow long-running transactions to complete on the source DB instance.

D. Create a global table and specify the AWS Regions where the table will be available.

E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.

Correct Answer: AC

Community vote distribution


CE (90%) 10%

  fkie4 Highly Voted  6 months, 4 weeks ago


Who would know this stuff man...
upvoted 43 times

  MNotABot 2 months, 3 weeks ago


"Allow long-running transactions to complete on the source DB instance." --. Makes sense / Also a backup before changing anything
again made a sense.
upvoted 1 times

  presetacsing 4 months, 1 week ago


exactly
upvoted 1 times

  KelvinEM Highly Voted  8 months, 2 weeks ago


C,E
"An active, long-running transaction can slow the process of creating the read replica. We recommend that you wait for long-running
transactions to complete before creating a read replica. If you create multiple read replicas in parallel from the same source DB instance,
Amazon RDS takes only one snapshot at the start of the first create action.

When creating a read replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by
setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance
for another read replica"
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 28 times
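
The two prerequisites map to a couple of RDS API calls. A minimal boto3 sketch (the DB instance identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Prerequisite: automatic backups must be on (retention > 0) before a replica
    # can be created. "mysql-primary" is a placeholder DB instance identifier.
    rds.modify_db_instance(
        DBInstanceIdentifier="mysql-primary",
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )

    # After letting long-running transactions finish, create the read replica.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mysql-replica-1",
        SourceDBInstanceIdentifier="mysql-primary",
    )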

  vijaykamal Most Recent  2 days, 17 hours ago


Selected Answer: CE
A. Enabling binlog replication is not something you need to do manually before creating a read replica. Amazon RDS for MySQL manages
replication internally, and it's not necessary to enable binlog replication explicitly.

B. Choosing a failover priority is related to Multi-AZ configurations and automatic failover, but it is not specifically required when adding a
read replica.

D. Creating a global table and specifying AWS Regions is related to Aurora Global Databases, which is not the same as creating a read
replica for a standard RDS instance.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: CE
C. Long-running transactions can prevent the read replica from catching up with the source DB instance. Allowing these transactions to complete before creating the read replica can help ensure that the replica is able to stay synchronized with the source.

E. Automatic backups must be enabled on the source DB instance for read replicas to be created. This is done by setting the backup retention period to a value other than 0.
upvoted 1 times

  cd93 1 month, 2 weeks ago


Binlog (binary log) is MySQL-specific terminology: an append-only log of data changes, used for purposes such as point-in-time recovery and transaction replication.
Option A is technically correct, but on AWS RDS this MySQL feature is turned on by setting the backup retention period > 0; that is why we must enable backups before replication can work (for MySQL, at least) => option E is the more general answer for AWS RDS.

Option C is just a recommendation from the official AWS documentation; it is there to prevent data mismatches between the primary and secondaries when long-running transactions have not yet completed.
upvoted 1 times
  A1975 2 months ago
Selected Answer: CE
Before a MySQL DB instance can serve as a replication source, make sure to enable automatic backups on the source DB instance. To do
this, set the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance
for another read replica. Automatic backups are supported for read replicas running any version of MySQL. You can configure replication
based on binary log coordinates for a MySQL DB instance

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
upvoted 1 times

  StacyY 2 months ago


A, E. Binlog is needed for ongoing replication, and a DB backup is needed to set up the replica DB.
upvoted 1 times

  Mmmmmmkkkk 3 months ago


Correction: c and e
upvoted 1 times

  Mmmmmmkkkk 3 months ago


A and e
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: CE
A. enables the binary log replication feature on the RDS primary node, which is necessary for setting up a read replica.

B. determines the order in which DB instances are promoted to the primary role during a failover scenario. It is not directly related to
adding a read replica to address slow reads.

C. ensures that any ongoing transactions on the source DB instance are allowed to finish before implementing the change. It helps
maintain data integrity and consistency during the transition to the read replica.

D. is a feature specific to DynamoDB. It allows for multi-region replication and high availability in DynamoDB, but it is not applicable in this
scenario.

E. ensures that regular backups are taken for the source DB instance. This is important for data protection and recovery purposes, as it
allows for point-in-time restoration in case of any issues during or after the addition of the read replica.
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: CE
Before adding read replicas, one needs to allow long-running transactions to complete on the source DB instance, otherwise you might end up interrupting transactions. Then, you should enable automatic backups on the source instance and set the backup retention period to a value other than 0.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: CE
The combination of actions a solutions architect should take before implementing this change is options C & E.
upvoted 1 times

  omoakin 4 months, 1 week ago


A and E.
upvoted 1 times

  Yadav_Sanjay 4 months, 2 weeks ago


Selected Answer: CE
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: CE
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create
upvoted 2 times

  bdp123 8 months ago


Selected Answer: CE
When creating a Read Replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by
setting the backup retention period to a value other than 0. This requirement also applies to a Read Replica that is the source DB instance
for another Read Replica.

After you enable automatic backups by modifying your read replica instance to have a backup retention period greater than 0 days, you’ll
find that the log_bin and binlog_format will align itself with the configuration specified in your parameter group dynamically and will not
require the RDS instance to be restarted. You will also be able to create a read replica from your read replica instance with no further
modification requirements.

https://blog.pythian.com/enabling-binary-logging-rds-read-replica/
upvoted 2 times
  alexleely 8 months, 1 week ago
Selected Answer: AC
A,C

A: Before we can start read replica, it is important to enable binary logging on the RDS primary node, thus, ensuring read replica to have
up-to-date data.
C: To avoid inconsistencies between the primary and replica instances by allowing long-running transactions to complete on the source DB
instance

Though E is a good practise, it is not part of the steps you need to do before enabling read replica.
upvoted 2 times

  JayBee65 8 months, 1 week ago


Binlog replication is a popular feature serving multiple use cases, including offloading transactional work from a source database,
replicating changes to a separate dedicated system to run analytics, and streaming data into other systems, but the benefits don’t
come for free. I don't believe it is used for creating read replicas. It is not mentioned in the link below.
On the other hand this link https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create
says...
(C) We recommend that you wait for long-running transactions to complete before creating a read replica.
(E) First, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0
upvoted 1 times

  alexleely 8 months, 1 week ago


You are right, binlog is enabled by doing (E). Thinking of it as database-as-a-service, C and E would be the correct answer. My answer would only be correct if it were not using AWS. Apologies.
upvoted 2 times
Question #248 Topic 1

A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been
uploaded to Amazon S3. Users report that some submitted data is not being processed Amazon CloudWatch reveals that the EC2 instances have
a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.

What should a solutions architect do to meet these requirements?

A. Create a copy of the instance. Place all instances behind an Application Load Balancer.

B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.

C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.

D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue.

Correct Answer: D

Community vote distribution


D (93%) 7%

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue.

By routing incoming requests to Amazon SQS, the company can decouple the job requests from the processing instances. This allows
them to scale the number of instances based on the size of the queue, providing more resources when needed. Additionally, using an
Auto Scaling group based on the queue size will automatically scale the number of instances up or down depending on the workload.
Updating the software to read from the queue will allow it to process the job requests in a more efficient manner, improving the
performance of the system.
upvoted 9 times
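
A hedged boto3 sketch of the queue-driven scaling in option D. This simple variant target-tracks the raw queue depth; AWS documentation actually recommends a computed backlog-per-instance metric. The queue and Auto Scaling group names are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale the worker fleet on the depth of the job queue (placeholder names).
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="analytics-workers",
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/SQS",
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Dimensions": [{"Name": "QueueName", "Value": "analytics-jobs"}],
                "Statistic": "Average",
            },
            "TargetValue": 100.0,  # aim for roughly 100 queued jobs at any time
        },
    )

The workers themselves would poll the queue (for example with sqs.receive_message and long polling) instead of accepting jobs directly, which is what decouples job submission from processing.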

  TariqKipkemei Most Recent  6 days ago


Selected Answer: D
Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue.
upvoted 1 times

  Kill3rasp3r 1 month, 1 week ago


Selected Answer: D
I would vote A if it was ALB targeting an EC2 auto scaling group.
I would vote D if the auto scaling group was based on CPU utilization rather than queue size.
So I think both answers are wrong but D is okay enough.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. Creating a copy of the instance and placing all instances behind an ALB does not address the high CPU utilization issue or provide
scalability based on user load.

B. Creating an S3 VPC endpoint for S3 and updating the software to reference the endpoint improves network performance but does not
address the high CPU utilization or provide scalability based on user load.

C. Stopping the EC2 instances and modifying the instance type to one with a more powerful CPU and more memory may improve
performance, but it does not address scalability based on user load.

D. Routing incoming requests to SQS, configuring an EC2 ASG based on queue size, and updating the software to read from the queue
improves system performance and provides scalability based on user load.

Therefore, option D is the correct choice as it addresses the high CPU utilization, improves system performance, and enables scalability
based on user load.
upvoted 1 times
  WherecanIstart 6 months, 3 weeks ago
Selected Answer: D
Autoscaling Group and SQS solves the problem.
SQS - Decouples the process
ASG - Autoscales the EC2 instances based on usage
upvoted 1 times

  ak1ak 7 months ago


Selected Answer: A
its definitely A
upvoted 1 times

  wRhlH 4 months, 1 week ago


You don't "scale the system by load" by choosing A
upvoted 1 times

  AHUI 8 months, 2 weeks ago


D is correct. Decouple the process and autoscale the EC2 instances based on queue size. Best choice.
upvoted 3 times

  Aninina 8 months, 2 weeks ago


I think it's A: "Create a copy of the instance. Place all instances behind an Application Load Balancer."
upvoted 1 times
Question #249 Topic 1

A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to
use SMB clients to access data. The solution must be fully managed.

Which AWS solution meets these requirements?

A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server
to the file share.

B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.

C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.

D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.

Correct Answer: D

Community vote distribution


D (100%)

  Morinator Highly Voted  8 months, 3 weeks ago


Selected Answer: D
SMB + fully managed = fsx for windows imo
upvoted 10 times

  TariqKipkemei Most Recent  6 days ago


Selected Answer: D
SMB = Amazon FSx for Windows File Server
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to
the file system
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


All who selected D are correct - see more details from our community
upvoted 1 times

  animefan1 3 months ago


Selected Answer: D
FSx is fully managed, plus it supports the SMB protocol.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. involves using Storage Gateway, but it does not specifically mention support for SMB clients. It may not meet the requirement of using
SMB clients to access data.

B. involves using Storage Gateway with tape gateway configuration, which is primarily used for archiving data to S3. It does not provide
native support for SMB clients to access data.

C. involves manually setting up and configuring a Windows file share on an EC2 Windows instance. While it allows SMB clients to access
data, it is not a fully managed solution as it requires manual setup and maintenance.

D. involves creating an FSx for Windows File Server file system, which is a fully managed Windows file system that supports SMB clients. It
provides an easy-to-use shared storage solution with native SMB support.

Based on the requirements of using SMB clients and needing a fully managed solution, option D is the most suitable choice.
upvoted 2 times
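
A minimal boto3 sketch of option D (the subnet, security group, and directory IDs are placeholders; a Multi-AZ FSx for Windows file system is joined to a directory so SMB clients can authenticate):

    import boto3

    fsx = boto3.client("fsx")

    # Create a Multi-AZ FSx for Windows File Server file system (placeholder IDs).
    fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=1024,  # GiB
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        WindowsConfiguration={
            "DeploymentType": "MULTI_AZ_1",
            "PreferredSubnetId": "subnet-aaaa1111",
            "ThroughputCapacity": 32,
            "ActiveDirectoryId": "d-1234567890",  # placeholder AWS Managed Microsoft AD
        },
    )
    # SMB clients then connect to the file system's DNS name, e.g. \\fs-xxxx.example.com\share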

  devonwho 8 months ago


Selected Answer: D
Amazon FSx has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to
access file storage over a network.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 3 times
  LuckyAro 8 months, 2 weeks ago
Selected Answer: D
Amazon FSx for Windows File Server file system
upvoted 1 times

  techhb 8 months, 2 weeks ago


amazon fsx for smb connectivity.
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
FSx is the answer.
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/81115-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  bamishr 8 months, 3 weeks ago


Selected Answer: D
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to
the file system.
upvoted 1 times
Question #250 Topic 1

A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then
accessed intermittently.

What should a solutions architect do to meet these requirements when configuring the logs?

A. Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days

B. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.

C. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.

D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after
90 days.

Correct Answer: A

Community vote distribution


D (93%) 7%

  LuckyAro Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D is the correct answer.
upvoted 5 times

  TariqKipkemei Most Recent  6 days ago


Selected Answer: D
Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90
days
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after
90 days.
upvoted 1 times

  animefan1 3 months ago


Selected Answer: D
S3 will store the logs. With a lifecycle policy, we can move them to a different storage class. With option A, the log group expiration will simply remove the logs, failing the second requirement in the question.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. suggests using CloudWatch as the target for VPC Flow Logs. However, it does not provide a mechanism for managing the retention of
the logs for 90 days and then accessing them intermittently.

B. suggests using Kinesis as the target for VPC Flow Logs. While it can retain the logs for 90 days, it does not address the requirement for
intermittent access to the logs.

C. suggests using CloudTrail as the target for VPC Flow Logs. However, CloudTrail is designed for auditing and monitoring API activity, not
for capturing network traffic logs. It does not meet the requirement of capturing VPC Flow Logs.

D. suggests using S3 as the target for VPC Flow Logs and leveraging S3 Lifecycle policies to transition the logs to a cost-effective storage
class after 90 days. It meets the requirement of retaining the logs for 90 days and provides the flexibility for intermittent access while
optimizing storage costs.
upvoted 2 times

  markw92 3 months, 2 weeks ago


A doesn't satisfy the "90 days and then accessed intermittently" statement; it makes the logs expire after 90 days. Otherwise A would seem to be the right choice, since you can create dashboards, etc.
upvoted 1 times

  Bmarodi 4 months ago


Selected Answer: A
Option A meets these requirements.
upvoted 1 times
  ocbn3wby 7 months, 3 weeks ago
Selected Answer: D
There's a table here that specifies that VPC Flow logs can go directly to S3. Does not need to go via CloudTrail and then to S3. Nor via CW.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3
upvoted 3 times

  techhb 8 months, 2 weeks ago


Selected Answer: D
we need to preserve logs hence D
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogsConcepts.html
upvoted 2 times

  mp165 8 months, 2 weeks ago


Selected Answer: D
D...agree that retention is the key word
upvoted 2 times

  swolfgang 8 months, 2 weeks ago


Selected Answer: D
A is not right: retention there means delete after 90 days, but the question says the logs are still accessed occasionally afterwards.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after
90 days.

By using Amazon S3 as the target for the VPC Flow Logs, the logs can be easily stored and accessed by the security team. Enabling an S3
Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days will automatically move the logs to a
storage class that is optimized for infrequent access, reducing the storage costs for the company. The security team will still be able to
access the logs as needed, even after they have been transitioned to S3 Standard-IA, but the storage cost will be optimized.
upvoted 4 times
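
Option D sketched with boto3 (the VPC ID and bucket name are placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # Send VPC Flow Logs straight to S3 (placeholder VPC ID and bucket).
    ec2.create_flow_logs(
        ResourceType="VPC",
        ResourceIds=["vpc-0123456789abcdef0"],
        TrafficType="ALL",
        LogDestinationType="s3",
        LogDestination="arn:aws:s3:::flow-log-archive",
    )

    # After 90 days of frequent access, transition the objects to S3 Standard-IA.
    s3.put_bucket_lifecycle_configuration(
        Bucket="flow-log-archive",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-standard-ia-after-90-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )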

  laicos 8 months, 2 weeks ago


Selected Answer: D
I prefer D. "Accessed intermittently" means the logs are still needed after 90 days.
upvoted 1 times

  Parsons 8 months, 2 weeks ago


Selected Answer: D
No, D should be correct.
"The logs will be frequently accessed for 90 days and then accessed intermittently." => We still need to store the logs instead of deleting them as answer A would.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
D looks correct. This will meet the requirements of frequently accessing the logs for the first 90 days and then intermittently accessing
them after that. S3 standard-IA is a storage class that is less expensive than S3 standard for infrequently accessed data, so it would be a
more cost-effective option for storing the logs after the first 90 days.
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: A
Cloudwatch for this

https://www.examtopics.com/discussions/amazon/view/59983-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #251 Topic 1

An Amazon EC2 instance is located in a private subnet in a new VPC. This subnet does not have outbound internet access, but the EC2 instance
needs the ability to download monthly security updates from an outside vendor.

What should a solutions architect do to meet these requirements?

A. Create an internet gateway, and attach it to the VPC. Configure the private subnet route table to use the internet gateway as the default
route.

B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default route.

C. Create a NAT instance, and place it in the same subnet where the EC2 instance is located. Configure the private subnet route table to use
the NAT instance as the default route.

D. Create an internet gateway, and attach it to the VPC. Create a NAT instance, and place it in the same subnet where the EC2 instance is
located. Configure the private subnet route table to use the internet gateway as the default route.

Correct Answer: B

Community vote distribution


B (100%)

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: B
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default
route.

This approach will allow the EC2 instance to access the internet and download the monthly security updates while still being located in a
private subnet. By creating a NAT gateway and placing it in a public subnet, it will allow the instances in the private subnet to access the
internet through the NAT gateway. And then, configure the private subnet route table to use the NAT gateway as the default route. This
will ensure that all outbound traffic is directed through the NAT gateway, allowing the EC2 instance to access the internet while still
maintaining the security of the private subnet.
upvoted 7 times

  Manjunathkb 5 months, 3 weeks ago


A NAT gateway does not allow internet access on its own. It needs an internet gateway too. None of the answers make sense.
upvoted 6 times

  Manjunathkb 5 months, 3 weeks ago


refer below link
https://aws.amazon.com/about-aws/whats-new/2021/06/aws-removes-nat-gateways-dependence-on-internet-gateway-for-private-
communications/
upvoted 1 times

  TariqKipkemei Most Recent  5 days, 23 hours ago


Selected Answer: B
Internet Gateway is required anyway to access the internet.
Option B makes more sense: Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the
NAT gateway as the default route.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: B
B. Create a NAT gateway, and place it in a public subnet. Configure the private subnet route table to use the NAT gateway as the default
route.
upvoted 1 times

  cookieMr 3 months ago


A. provides direct internet access to the private subnet, which is not desired in this case as the goal is to restrict outbound internet access.

B. allows the EC2 in the private subnet to access the internet through the NAT gateway, which acts as a proxy. It provides controlled
outbound internet access while maintaining the security of the private subnet.

C. is similar to using a NAT gateway, but it involves using a NAT instance. NAT instances require more manual configuration and
management compared to NAT gateways, making them a less preferred option.

D. combines the use of an internet gateway and a NAT instance, which is not necessary. It introduces unnecessary complexity and adds a
NAT instance that requires additional management.
Overall, option B is the most appropriate solution as it utilizes a NAT gateway placed in a public subnet to enable controlled outbound
internet access for the EC2 instance in the private subnet.

NAT Gateways are preferred over NAT Instances by AWS and in general.
upvoted 1 times
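
The plumbing for option B in boto3 form (the subnet and route table IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP and create the NAT gateway in a public subnet.
    # In practice, wait for the NAT gateway to become available before relying on it.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public-a",  # placeholder public subnet
        AllocationId=eip["AllocationId"],
    )

    # Point the private subnet's default route at the NAT gateway so the instance
    # can reach the vendor for updates without being reachable from the internet.
    ec2.create_route(
        RouteTableId="rtb-private",  # placeholder private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGateway"]["NatGatewayId"],
    )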
  Bmarodi 4 months ago
Selected Answer: B
Option B meets the requirements, hence B is the right choice.
upvoted 1 times

  Manjunathkb 5 months, 3 weeks ago


D would have been the answer if the NAT instance were placed in a public subnet and not in the same subnet as the EC2 instance. None of the answers are correct.
upvoted 1 times

  AlessandraSAA 6 months, 3 weeks ago


why not C?
upvoted 1 times

  UnluckyDucky 6 months, 3 weeks ago


Because NAT Gateways are preferred over NAT Instances by AWS and in general.

I have yet to find a situation where a NAT Instance would be more applicable than NAT Gateway which is fully managed and is overall
an easier solution to implement - both in AWS questions or the real world.
upvoted 2 times

  TungPham 7 months, 3 weeks ago


Selected Answer: B
Requires a NAT gateway.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
Answer explained here https://medium.com/@tshemku/aws-internet-gateway-vs-nat-gateway-vs-nat-instance-30523096df22
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
NAT Gateway is right choice
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/59966-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #252 Topic 1

A solutions architect needs to design a system to store client case files. The files are core company assets and are important. The number of files
will grow over time.

The files must be simultaneously accessible from multiple application servers that run on Amazon EC2 instances. The solution must have built-in
redundancy.

Which solution meets these requirements?

A. Amazon Elastic File System (Amazon EFS)

B. Amazon Elastic Block Store (Amazon EBS)

C. Amazon S3 Glacier Deep Archive

D. AWS Backup

Correct Answer: A

Community vote distribution


A (100%)

  TariqKipkemei 5 days, 23 hours ago


Selected Answer: A
File system, scalable, multiple access = Amazon Elastic File System (Amazon EFS)
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
Amazon Elastic File System (Amazon EFS)
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
EFS provides a scalable and fully managed file storage service that can be accessed concurrently from multiple EC2. It offers built-in
redundancy by storing data across multiple AZs within a region. With EFS, the client case files can be accessed by multiple application
servers simultaneously, ensuring high availability and scalability as the number of files grows over time.

Option B, EBS, is a block-level storage service that is typically used for attaching to individual EC2 and does not provide concurrent access
to multiple instances, making it unsuitable for this scenario.

Option C, S3 Glacier Deep Archive, is a long-term archival storage service and may not be suitable for active file access and simultaneous
access from multiple application servers.

Option D, AWS Backup, is a centralized backup management service and does not provide the required simultaneous file access and
redundancy features.

Therefore, the most suitable solution is Amazon EFS (option A).


upvoted 4 times
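
A small boto3 sketch of the EFS setup (the subnet IDs and security group are placeholders); every application server then mounts the same file system over NFS:

    import boto3

    efs = boto3.client("efs")

    # Create the shared file system; data is stored redundantly across AZs by default.
    # In practice, wait for the file system to become available before adding mount targets.
    fs = efs.create_file_system(CreationToken="case-files", Encrypted=True)

    # One mount target per Availability Zone so every application server can mount it.
    for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:  # placeholder subnets
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet_id,
            SecurityGroups=["sg-efs-clients"],  # placeholder SG allowing NFS (port 2049)
        )
    # On each EC2 instance: sudo mount -t efs <FileSystemId>:/ /mnt/case-files (amazon-efs-utils)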

  Bmarodi 4 months ago


Selected Answer: A
Option A meets the requirements, hence A is the correct answer.
upvoted 1 times

  moiraqi 4 months, 1 week ago


What does "The solution must have built-in redundancy" mean?
upvoted 1 times

  KZM 7 months, 1 week ago


If the application servers are running on Linux or UNIX operating systems, EFS is the most suitable solution for the given requirements.
upvoted 1 times

  TungPham 7 months, 3 weeks ago


Selected Answer: A
"accessible from multiple application servers that run on Amazon EC2 instances"
upvoted 3 times
  mhmt4438 8 months, 2 weeks ago
Selected Answer: A
Correct answer is A
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
Amazon Elastic File System (EFS) automatically grows and shrinks as you add and remove files, with no need for management or provisioning.
upvoted 4 times

  bamishr 8 months, 3 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/68833-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #253 Topic 1

A solutions architect has created two IAM policies: Policy1 and Policy2. Both policies are attached to an IAM group.

A cloud engineer is added as an IAM user to the IAM group. Which action will the cloud engineer be able to perform?

A. Deleting IAM users

B. Deleting directories

C. Deleting Amazon EC2 instances

D. Deleting logs from Amazon CloudWatch Logs

Correct Answer: C

Community vote distribution


C (100%)

  JayBee65 Highly Voted  8 months, 1 week ago


ec2:* Allows full control of EC2 instances, so C is correct

The policy only grants get and list permission on IAM users, so not A
ds:Delete deny denies delete-directory, so not B, see https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ds/index.html
The policy only grants get and describe permission on logs, so not D
upvoted 8 times

  TariqKipkemei Most Recent  5 days, 23 hours ago


Selected Answer: C
Deleting Amazon EC2 instances
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
C : Deleting Amazon EC2 instances
upvoted 1 times
  mhmt4438 8 months, 2 weeks ago
Selected Answer: C
Answer is C
upvoted 2 times

  Aninina 8 months, 2 weeks ago


C : Deleting Amazon EC2 instances
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/27873-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  Morinator 8 months, 3 weeks ago


Selected Answer: C
Explicit deny on directories; the only available delete action is on EC2 instances.
upvoted 2 times
Question #254 Topic 1

A company is reviewing a recent migration of a three-tier application to a VPC. The security team discovers that the principle of least privilege is
not being applied to Amazon EC2 security group ingress and egress rules between the application tiers.

What should a solutions architect do to correct this issue?

A. Create security group rules using the instance ID as the source or destination.

B. Create security group rules using the security group ID as the source or destination.

C. Create security group rules using the VPC CIDR blocks as the source or destination.

D. Create security group rules using the subnet CIDR blocks as the source or destination.

Correct Answer: B

Community vote distribution


B (100%)

  Aninina Highly Voted  8 months, 2 weeks ago


Selected Answer: B
B. Create security group rules using the security group ID as the source or destination.
This way, the security team can ensure that the least privileged access is given to the application tiers by allowing only the necessary
communication between the security groups. For example, the web tier security group should only allow incoming traffic from the load
balancer security group and outgoing traffic to the application tier security group. This approach provides a more granular and secure
way to control traffic between the different tiers of the application and also allows for easy modification of access if needed.
It's also worth noting that it's good practice to minimize the number of open ports and protocols, and use security groups as a first line of
defense, in addition to network access control lists (ACLs) to control traffic between subnets.
upvoted 6 times

  Wael216 Highly Voted  7 months ago


Selected Answer: B
By using security group IDs, the ingress and egress rules can be restricted to only allow traffic from the necessary source or destination,
and to deny all other traffic. This ensures that only the minimum required traffic is allowed between the application tiers.

Option A is not the best choice because using the instance ID as the source or destination would allow traffic from any instance with that
ID, which may not be limited to the specific application tier.

Option C is also not the best choice because using VPC CIDR blocks would allow traffic from any IP address within the VPC, which may not
be limited to the specific application tier.

Option D is not the best choice because using subnet CIDR blocks would allow traffic from any IP address within the subnet, which may
not be limited to the specific application tier.
upvoted 5 times
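
As a rough illustration of answer B, the Python (boto3) snippet below adds an ingress rule to a hypothetical application-tier security group that accepts traffic only from the web-tier security group; the security group IDs and port 8080 are assumptions.

import boto3

ec2 = boto3.client("ec2")

APP_TIER_SG = "sg-0aaa1111bbb22222c"  # assumed application-tier security group ID
WEB_TIER_SG = "sg-0ddd3333eee44444f"  # assumed web-tier security group ID

# Allow only members of the web-tier security group to reach the app tier on port 8080.
ec2.authorize_security_group_ingress(
    GroupId=APP_TIER_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": WEB_TIER_SG}],  # security group ID as the source
    }],
)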

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: B
Create security group rules using the security group ID as the source or destination.
This way, the security team can ensure that the least privileged access is given to the application tiers by allowing only the necessary
communication between the security groups. For example, the web tier security group should only allow incoming traffic from the load
balancer security group and outgoing traffic to the application tier security group. This approach provides a more granular and secure
way to control traffic between the different tiers of the application and also allows for easy modification of access if needed.
It's also worth noting that it's good practice to minimize the number of open ports and protocols, and use security groups as a first line of
defense, in addition to network access control lists (ACLs) to control traffic between subnets.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: B
A. would limit the traffic based on specific instances, which may not be the most suitable solution for applying the principle of least
privilege between application tiers.

B. By using security group IDs in the rules, you can precisely control the traffic between application tiers, allowing only the necessary
communication and adhering to the principle of least privilege.

C. would apply broad rules based on the entire VPC CIDR blocks, which may not provide the necessary level of granularity required for
secure communication between specific application tiers.

D. would limit the traffic based on subnet CIDR blocks, which may not be sufficient for ensuring proper security between application tiers.

In summary, using security group IDs (Option B) is the recommended approach as it allows for precise control of traffic between
application tiers, aligning with the principle of least privilege.
upvoted 3 times
  Bmarodi 4 months ago
Selected Answer: B
I vote for option B.
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Selected Answer: B
. Create security group rules using the security group ID as the source or destination
upvoted 1 times

  techhb 8 months, 2 weeks ago


Security group rules apply to instances.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: B
Correct answer is B
upvoted 2 times

  bamishr 8 months, 3 weeks ago


Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/46463-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: B
B right
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
upvoted 1 times
Question #255 Topic 1

A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are
experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same
desired transaction.

How should a solutions architect refactor this workflow to prevent the creation of multiple orders?

A. Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message
from Kinesis Data Firehose and process the order.

B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the
database, call the payment service, and pass in the order information.

C. Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set
the payment service to poll Amazon SNS, retrieve the message, and process the order.

D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO
queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.

Correct Answer: D

Community vote distribution


D (100%)

  Aninina Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS)
FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
This approach ensures that the order creation and payment processing steps are separate and atomic. By sending the order information
to an SQS FIFO queue, the payment service can process the order one at a time and in the order they were received. If the payment
service is unable to process an order, it can be retried later, preventing the creation of multiple orders. The deletion of the message from
the queue after it is processed will prevent the same message from being processed multiple times.
It's worth noting that FIFO queues guarantee that messages are processed in the order they are received, and prevent duplicates.
upvoted 7 times
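
A minimal Python (boto3) sketch of answer D, assuming a FIFO queue already exists. The queue URL and the choice to use the order number as the deduplication ID are illustrative assumptions.

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

def submit_order(order_id: str, order: dict) -> None:
    # Using the order number as the deduplication ID means a resubmitted checkout
    # form does not create a second message within the 5-minute deduplication window.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
        MessageGroupId="checkout",          # preserves ordering within the group
        MessageDeduplicationId=order_id,    # suppresses duplicate submissions
    )

def process_next_order() -> None:
    # Payment service side: receive, process, then delete the message.
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        order = json.loads(message["Body"])
        # ... call the payment provider with `order` here ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])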

  TariqKipkemei Most Recent  5 days ago


Selected Answer: D
If the backend cannot keep up, queue the tasks.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS)
FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.
upvoted 1 times

  animefan1 3 months ago


Selected Answer: D
The question is related in breaking down the flow. SQS is go-to choice to decouple & DB will be used to store
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. is not a suitable solution for preventing the creation of multiple orders. This approach does not guarantee the sequential and reliable
processing of orders.

B. is not an appropriate solution for preventing the creation of multiple orders. CloudTrail is primarily used for logging and auditing API
activity, and invoking a Lambda based on the logged request does not ensure the correct order processing.

C. is not a suitable solution. SNS is a publish-subscribe messaging service, and polling it may result in delayed processing and potential
order duplication.

D. is the correct solution. Using an SQS FIFO ensures that the orders are processed in a sequential and reliable manner, preventing the
creation of multiple orders for the same transaction.
upvoted 4 times

  antropaws 3 months, 1 week ago


Why not A?
upvoted 1 times
  Wael216 7 months ago
Selected Answer: D
The use of a FIFO queue in Amazon SQS ensures that messages are processed in the order they are received.
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/95026-exam-aws-certified-solutions-architect-associate-saa-c03/
upvoted 3 times

  bamishr 8 months, 3 weeks ago


Selected Answer: D
Answer is D.
upvoted 2 times
Question #256 Topic 1

A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent
accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and
upload documents.

Which combination of actions should be taken to meet these requirements? (Choose two.)

A. Enable a read-only bucket ACL.

B. Enable versioning on the bucket.

C. Attach an IAM policy to the bucket.

D. Enable MFA Delete on the bucket.

E. Encrypt the bucket using AWS KMS.

Correct Answer: BD

Community vote distribution


BD (100%)

  TariqKipkemei 4 days, 23 hours ago


Selected Answer: BD
Prevent accidental deletion of the documents = Enable MFA Delete on the bucket
Ensure that all versions of the documents are available = Enable versioning on the bucket
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: BD
Options B & D are the correct answers.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: BD
B. allows multiple versions of objects in the S3 bucket to be stored. This ensures that all versions of the documents are available, even if
they are accidentally overwritten or deleted.

D. adds an extra layer of protection against accidental deletion of objects in the bucket. With MFA Delete enabled, a user would need to
provide an additional authentication factor to successfully delete objects from the bucket. This helps prevent accidental or unauthorized
deletions and provides an extra level of security for critical documents.

A. would restrict users from modifying or uploading documents. It would not meet the requirement of allowing users to download,
modify, and upload documents.

C. can control access permissions to the bucket, it does not specifically address the requirement of preventing accidental deletion or
ensuring availability of all versions of the documents.

E. Encryption focuses on data protection rather than versioning and deletion prevention.
upvoted 2 times
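
A quick Python (boto3) sketch of B + D. Note that MFA Delete can only be enabled with the bucket owner's root credentials and an MFA device, so the bucket name, MFA serial, and token below are placeholders.

import boto3

s3 = boto3.client("s3")
BUCKET = "document-review-bucket"  # placeholder bucket name

# B: enable versioning, and D: enable MFA Delete, in a single call.
# The MFA argument format is "<mfa-device-serial> <current-token>".
s3.put_bucket_versioning(
    Bucket=BUCKET,
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)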

  Bmarodi 4 months ago


Selected Answer: BD
Options B & D are the correct answers.
upvoted 1 times

  Wael216 7 months ago


Selected Answer: BD
no doubts
upvoted 2 times

  MinHyeok 7 months, 3 weeks ago


I dunno, whatever.
upvoted 3 times

  akdavsan 8 months, 2 weeks ago


Selected Answer: BD
B and D, of course.
upvoted 1 times
  LuckyAro 8 months, 2 weeks ago
Selected Answer: BD
B & D Definitely.
upvoted 1 times

  david76x 8 months, 2 weeks ago


Selected Answer: BD
B & D are correct
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: BD
B and D for sure guys
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: BD
https://www.examtopics.com/discussions/amazon/view/21969-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #257 Topic 1

A company is building a solution that will report Amazon EC2 Auto Scaling events across all the applications in an AWS account. The company
needs to use a serverless solution to store the EC2 Auto Scaling status data in Amazon S3. The company then will use the data in Amazon S3 to
provide near-real-time updates in a dashboard. The solution must not affect the speed of EC2 instance launches.

How should the company move the data to Amazon S3 to meet these requirements?

A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in
Amazon S3.

B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the
data in Amazon S3.

C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto
Scaling status data directly to Amazon S3.

D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2
Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.

Correct Answer: A

Community vote distribution


A (73%) C (27%)

  TariqKipkemei 4 days, 23 hours ago


Selected Answer: A
You can use metric streams to continually stream CloudWatch metrics to a destination of your choice, with near-real-time delivery and low
latency. Supported destinations include AWS destinations such as Amazon Simple Storage Service and several third-party service provider
destinations.
Main usage scenarios for CloudWatch metric streams: Data lake— Create a metric stream and direct it to an Amazon Kinesis Data Firehose
delivery stream that delivers your CloudWatch metrics to a data lake such as Amazon S3.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-
Streams.html#:~:text=CloudWatch%20metric%20streams
upvoted 1 times
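
To make option A concrete, here is a minimal Python (boto3) sketch that creates a CloudWatch metric stream scoped to the Auto Scaling namespace and points it at an existing Kinesis Data Firehose delivery stream that writes to S3. The stream name, ARNs, and IAM role are placeholders and assumed to exist.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_stream(
    Name="asg-status-to-s3",  # assumed metric stream name
    # Only stream Auto Scaling metrics so unrelated metrics do not land in S3.
    IncludeFilters=[{"Namespace": "AWS/AutoScaling"}],
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/asg-status",  # placeholder
    RoleArn="arn:aws:iam::123456789012:role/metric-stream-to-firehose",               # placeholder
    OutputFormat="json",
)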

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
This solution meets the requirements because it is serverless and does not affect the speed of EC2 instance launches. Amazon
CloudWatch metric streams can continuously stream CloudWatch metrics to destinations such as Amazon S3. Amazon Kinesis Data
Firehose can capture, transform, and deliver streaming data into data lakes, data stores, and analytics services. It can directly put the data
into Amazon S3, which can then be used for near-real-time updates in a dashboard.
upvoted 1 times

  Valder21 3 weeks, 5 days ago


Selected Answer: C
Kinesis is for data streams, not events. So, C.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
B. introduces unnecessary complexity and overhead for collecting and sending the EC2 Auto Scaling status data to S3. It is not the most
efficient serverless solution for this specific requirement.

C. would introduce delays in data updates, as it is not triggered in real-time. Additionally, it adds unnecessary overhead and complexity
compared to using a direct data stream.

D. introduces additional dependencies and management overhead. It may also impact the speed of EC2 instance launches, which is a
requirement that needs to be avoided.

Overall, option A provides a streamlined and serverless solution by leveraging CloudWatch metric streams and Kinesis Data Firehose to
efficiently capture and store the EC2 Auto Scaling status data in S3 without affecting the speed of EC2 instance launches.
upvoted 2 times

  markw92 3 months, 2 weeks ago


A: I was initially thinking D, but the requirement that the solution must not affect the speed of EC2 instance launches makes the difference (I read the question too quickly). A is the right choice.
upvoted 1 times
  Rahulbit34 4 months, 3 weeks ago
A, because of the near-real-time scenario.
upvoted 3 times

  UnluckyDucky 6 months, 3 weeks ago


Selected Answer: C
Both A and C are applicable - no doubt there.

C is more straightforward and to the point of the question imho.


upvoted 3 times

  UnluckyDucky 6 months, 3 weeks ago


Changing my answer to *A*, as the dashboard must provide near-real-time updates.

Unless the Lambda is configured to run every minute, which is not common with schedules, it is not considered near real time.
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: A
Serverless solution and near real time
upvoted 2 times

  Stanislav4907 7 months, 2 weeks ago


Selected Answer: A
Near real time eliminates C.
upvoted 1 times

  aakashkumar1999 7 months, 4 weeks ago


Selected Answer: A
Answer is A
upvoted 1 times

  devonwho 8 months ago


Selected Answer: A
You can use metric streams to continually stream CloudWatch metrics to a destination of your choice, with near-real-time delivery and low
latency. One of the use cases is Data Lake: create a metric stream and direct it to an Amazon Kinesis Data Firehose delivery stream that
delivers your CloudWatch metrics to a data lake such as Amazon S3.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
upvoted 2 times

  Stanislav4907 8 months ago


Selected Answer: A
Option C, using an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule to send the EC2 Auto Scaling status data
directly to Amazon S3, may not be the best choice because it may not provide real-time updates to the dashboard.

A schedule-based approach with an EventBridge rule and Lambda function may not be able to deliver the data in near real-time, as the
EC2 Auto Scaling status data is generated dynamically and may not always align with the schedule set by the EventBridge rule.

Additionally, using a schedule-based approach with EventBridge and Lambda also has the potential to create latency, as there may be a
delay between the time the data is generated and the time it is sent to S3.

In this scenario, using Amazon CloudWatch and Kinesis Data Firehose as described in Option A, provides a more reliable and near real-
time solution.
upvoted 1 times

  MikelH93 8 months ago


Selected Answer: A
A seems to be the right answer. Don't think C could be correct as it says "near real-time" and C is on schedule
upvoted 1 times

  KAUS2 8 months, 1 week ago


Selected Answer: C
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2
Auto Scaling status data directly to Amazon S3.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: A
A seems the right choice; the serverless keyword is confusing at first, but a CloudWatch metric stream is serverless too.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in
Amazon S3.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
C. Create an Amazon EventBridge rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2
Auto Scaling status data directly to Amazon S3.

This approach will use a serverless solution (AWS Lambda) which will not affect the speed of EC2 instance launches. It will use the
EventBridge rule to invoke the Lambda function on schedule to send the data to S3. This will meet the requirement of near-real-time
updates in a dashboard as well. The Lambda function can be triggered by CloudWatch events that are emitted when Auto Scaling events
occur. The function can then collect the necessary data and store it in S3. This direct sending of data to S3 will reduce the number of steps
and hence it is more efficient and cost-effective.
upvoted 2 times

  Aninina 8 months, 2 weeks ago


ChatGPT is not correct here
upvoted 3 times
Question #258 Topic 1

A company has an application that places hundreds of .csv files into an Amazon S3 bucket every hour. The files are 1 GB in size. Each time a file is
uploaded, the company needs to convert the file to Apache Parquet format and place the output file into an S3 bucket.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function to download the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket.
Invoke the Lambda function for each S3 PUT event.

B. Create an Apache Spark job to read the .csv files, convert the files to Parquet format, and place the output files in an S3 bucket. Create an
AWS Lambda function for each S3 PUT event to invoke the Spark job.

C. Create an AWS Glue table and an AWS Glue crawler for the S3 bucket where the application places the .csv files. Schedule an AWS Lambda
function to periodically use Amazon Athena to query the AWS Glue table, convert the query results into Parquet format, and place the output
files into an S3 bucket.

D. Create an AWS Glue extract, transform, and load (ETL) job to convert the .csv files to Parquet format and place the output files into an S3
bucket. Create an AWS Lambda function for each S3 PUT event to invoke the ETL job.

Correct Answer: A

Community vote distribution


D (85%) A (15%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: D
No, D should be correct.

"LEAST operational overhead" => Should you fully manage service like Glue instead of manually like the answer A.
upvoted 11 times

  TariqKipkemei Most Recent  4 days, 23 hours ago


Selected Answer: D
AWS Glue can run your extract, transform, and load (ETL) jobs as new data arrives. For example, you can configure AWS Glue to initiate
your ETL jobs to run as soon as new data becomes available in Amazon Simple Storage Service (S3).
Clearly you don't need a lambda function to initiate the ETL job.

https://aws.amazon.com/glue/#:~:text=to%20initiate%20your-,ETL,-jobs%20to%20run

Option A requires writing code to perform the file conversion.

In the exam, option D would be the best answer.


upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
This solution meets the requirements with the least operational overhead because AWS Glue is a fully managed ETL service that makes it
easy to move data between data stores. AWS Glue can read .csv files from an S3 bucket and write the data into Parquet format in another
S3 bucket. The AWS Lambda function can be triggered by an S3 PUT event when a new .csv file is uploaded, and it can start the AWS Glue
ETL job to convert the file to Parquet format. This solution does not require managing any servers or clusters, which reduces operational
overhead.
upvoted 1 times
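
For option D, the Lambda function that reacts to each S3 PUT event only needs to hand the new object's location to an existing Glue ETL job. A rough Python sketch follows; the job name and job arguments are assumptions.

import boto3

glue = boto3.client("glue")
GLUE_JOB_NAME = "csv-to-parquet"  # assumed Glue ETL job name

def lambda_handler(event, context):
    # One S3 PUT event can carry several records; start one job run per object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",     # assumed job parameter
                "--target_bucket": "parquet-output-bucket",  # assumed output bucket
            },
        )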

  cookieMr 3 months ago


D is correct
upvoted 1 times

  cookieMr 3 months ago


A. introduces significant operational overhead. This approach requires managing the Lambda, handling concurrency, and ensuring proper
error handling for large file sizes, which can be challenging.

B. adds unnecessary complexity and operational overhead. Managing the Spark job, handling scalability, and coordinating the Lambda
invocations for each file upload can be cumbersome.

C. introduces additional complexity and may not be the most efficient solution. It involves managing Glue resources, scheduling Lambda,
and querying data even when no new files are uploaded.

Option D leverages AWS Glue's ETL capabilities, allowing you to define and execute a data transformation job at scale. By invoking the ETL
job using an Lambda function for each S3 PUT event, you can ensure that files are efficiently converted to Parquet format without the
need for manual intervention. This approach minimizes operational overhead and provides a streamlined and scalable solution.
upvoted 3 times
  F629 3 months, 2 weeks ago
Selected Answer: A
Both A and D can work, but A is simpler and closer to the "least operational effort".
upvoted 1 times

  shanwford 5 months, 3 weeks ago


Selected Answer: D
The maximum size for a Lambda event payload (asynchronous invocation) is 256 KB, and converting 1 GB files in Lambda would also strain its memory, ephemeral storage, and timeout limits, so (A) is a poor fit. Glue is the AWS-recommended service for the Parquet transformation.
upvoted 2 times

  jennyka76 7 months, 3 weeks ago


ANS - D
https://aws.amazon.com/blogs/database/how-to-extract-transform-and-load-data-for-analytic-processing-using-aws-glue-part-2/
- READ ARTICLE -
upvoted 2 times

  aws4myself 8 months, 1 week ago


Here A is the correct answer. The reason here is the least operational overhead.
A ==> S3 - Lambda - S3
D ==> S3 - Lambda - Glue - S3

Also, Glue cannot convert on the fly automatically; you need to write some code there. If you write the same code in Lambda, it will do the same conversion and push the file to S3.

Lambda memory can be configured from 128 MB up to 10 GB, so it can handle this.

We also need to consider cost; Glue costs more. I hope many on this forum realize these differences.
upvoted 4 times

  nder 7 months, 1 week ago


Cost is not a factor. AWS Glue is a fully managed service; therefore, it has the least operational overhead.
upvoted 2 times

  LuckyAro 8 months ago


We also need to stay with the question, cost was not a consideration in the question.
upvoted 1 times

  JayBee65 8 months, 1 week ago


A is unlikely to work as Lambda may struggle with 1GB size: "< 64 MB, beyond which lambda is likely to hit memory caps", see
https://stackoverflow.com/questions/41504095/creating-a-parquet-file-on-aws-lambda-function
upvoted 2 times

  jainparag1 8 months, 1 week ago


Should be D, as Glue is a fully managed service and provides an ETL job for converting .csv files to Parquet off the shelf.
upvoted 1 times

  Joxtat 8 months, 2 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-
parquet.html
upvoted 1 times

  techhb 8 months, 2 weeks ago


AWS Glue is right solution here.
upvoted 1 times

  mp165 8 months, 2 weeks ago


Selected Answer: D
I am thinking D.

A says Lambda will download the .csv files, but to where? That seems manual.
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
I think A
upvoted 1 times

  bamishr 8 months, 3 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/83201-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #259 Topic 1

A company is implementing new data retention policies for all databases that run on Amazon RDS DB instances. The company must retain daily
backups for a minimum period of 2 years. The backups must be consistent and restorable.

Which solution should a solutions architect recommend to meet these requirements?

A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2
years after creation. Assign the RDS DB instances to the backup plan.

B. Configure a backup window for the RDS DB instances for daily snapshots. Assign a snapshot retention policy of 2 years to each RDS DB
instance. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule snapshot deletions.

C. Configure database transaction logs to be automatically backed up to Amazon CloudWatch Logs with an expiration period of 2 years.

D. Configure an AWS Database Migration Service (AWS DMS) replication task. Deploy a replication instance, and configure a change data
capture (CDC) task to stream database changes to Amazon S3 as the target. Configure S3 Lifecycle policies to delete the snapshots after 2
years.

Correct Answer: A

Community vote distribution


A (95%) 5%

  vijaykamal 2 days, 16 hours ago


Selected Answer: B
Here's why Option B is the best choice:

Backup Window: Configuring a backup window for daily snapshots ensures that consistent backups are taken at the specified time each
day. This helps maintain data integrity and consistency.

Snapshot Retention Policy: Assigning a snapshot retention policy of 2 years to each RDS DB instance ensures that the backups are
retained for the required duration.

Amazon Data Lifecycle Manager (Amazon DLM): Amazon DLM can be used to automate the management of EBS snapshots, including RDS
snapshots. You can configure Amazon DLM to schedule snapshot deletions, making it easier to manage the retention policy without
manual intervention.

Option A (AWS Backup) is primarily used for managing backups of resources that may not have built-in backup capabilities, but for
Amazon RDS, it's better to use the built-in snapshot capabilities and Amazon DLM for snapshot retention.
upvoted 1 times

  TariqKipkemei 4 days, 23 hours ago


Selected Answer: A
Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2
years after creation. Assign the RDS DB instances to the backup plan.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of
2 years after creation. Assign the RDS DB instances to the backup plan
upvoted 1 times

  animefan1 3 months ago


Selected Answer: A
Backups work with EBS, FSX, RDS. Its managed & can has vault option for more better control over backup retention
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
A. suggests using AWS Backup, a centralized backup management service, to retain RDS backups. A backup vault is created, and a backup
plan is defined with a daily schedule and a 2-year retention period for backups. RDS DB instances are assigned to this backup plan.

B. it does not address the requirement for consistent and restorable backups. Snapshots are point-in-time backups and may not provide
the desired level of consistency.

C. it is not designed to provide the backup and restore functionality required for databases. It does not ensure the backups are consistent
or provide an easy restore mechanism.
D. it does not address the requirement for daily backups and retention of consistent backups. It focuses more on replication and change
data capture rather than backup and restore.
upvoted 2 times
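
A minimal Python (boto3) sketch of option A: a vault, a daily plan with roughly 2 years (730 days) of retention, and a selection that assigns RDS instances by tag. The names, role ARN, and tag are assumptions.

import boto3

backup = boto3.client("backup")

backup.create_backup_vault(BackupVaultName="rds-retention-vault")  # assumed vault name

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-daily-2y",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "rds-retention-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 730},      # expire after ~2 years
        }],
    }
)

backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "all-rds",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",  # placeholder
        # Assign every RDS instance carrying this tag; the tag key/value are assumptions.
        "ListOfTags": [{"ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "daily-2y"}],
    },
)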
  markw92 3 months, 2 weeks ago
Why not B?
upvoted 2 times

  _deepsi_dee29 4 months, 1 week ago


Selected Answer: A
A is correct
upvoted 1 times

  antropaws 4 months, 1 week ago


Why not D?

Creating tasks for ongoing replication using AWS DMS: You can create an AWS DMS task that captures ongoing changes from the source
data store. You can do this capture while you are migrating your data. You can also create a task that captures ongoing changes after you
complete your initial (full-load) migration to a supported target data store. This process is called ongoing replication or change data
capture (CDC). AWS DMS uses this process when replicating ongoing changes from a source data store.
upvoted 1 times

  gold4otas 6 months, 1 week ago


Selected Answer: A
A. Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of
2 years after creation. Assign the RDS DB instances to the backup plan.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: A
A is right choice
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
AAAAAA
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
Correct answer is A
upvoted 2 times

  bamishr 8 months, 3 weeks ago


Selected Answer: A
Create a backup vault in AWS Backup to retain RDS backups. Create a new backup plan with a daily schedule and an expiration period of 2
years after creation. Assign the RDS DB instances to the backup plan.
upvoted 4 times
Question #260 Topic 1

A company’s compliance team needs to move its file shares to AWS. The shares run on a Windows Server SMB file share. A self-managed on-
premises Active Directory controls access to the files and folders.

The company wants to use Amazon FSx for Windows File Server as part of the solution. The company must ensure that the on-premises Active
Directory groups restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS. The
company has created an FSx for Windows File Server file system.

Which solution will meet these requirements?

A. Create an Active Directory Connector to connect to the Active Directory. Map the Active Directory groups to IAM groups to restrict access.

B. Assign a tag with a Restrict tag key and a Compliance tag value. Map the Active Directory groups to IAM groups to restrict access.

C. Create an IAM service-linked role that is linked directly to FSx for Windows File Server to restrict access.

D. Join the file system to the Active Directory to restrict access.

Correct Answer: D

Community vote distribution


D (85%) A (15%)

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D. Join the file system to the Active Directory to restrict access.

Joining the FSx for Windows File Server file system to the on-premises Active Directory will allow the company to use the existing Active
Directory groups to restrict access to the file shares, folders, and files after the move to AWS. This option allows the company to continue
using their existing access controls and management structure, making the transition to AWS more seamless.
upvoted 12 times
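
For option D, the join to the self-managed on-premises Active Directory is expressed in the file system's Windows configuration, which in practice is supplied when the file system is created. The Python (boto3) sketch below only shows where those parameters go; every value is a placeholder.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                    # GiB, assumed
    SubnetIds=["subnet-0abc1234def567890"],  # placeholder
    WindowsConfiguration={
        "ThroughputCapacity": 32,
        # Join the file system to the self-managed on-premises AD so the existing
        # AD groups keep controlling access to the SMB shares, folders, and files.
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",      # placeholder domain
            "UserName": "fsx-service-account",     # placeholder service account
            "Password": "REPLACE_ME",              # placeholder
            "DnsIps": ["10.0.0.10", "10.0.0.11"],  # placeholder on-premises DNS IPs
        },
    },
)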

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: D
This allows the on-premises Active Directory to manage permissions to the FSx file shares, meeting the key requirement to use existing AD
groups to control access after migrating to AWS.

Joining FSx to the AD domain allows the native file system permissions, users, and groups to be applied from Active Directory. Access is
handled seamlessly via the trust relationship between FSx and AD.

The other options would not leverage the existing AD identities and groups
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


The other options would not leverage the existing AD identities and groups:

A) AD Connector and IAM groups would require re-mapping AD groups to IAM, adding complexity. Native AD integration is simpler.

B) Tags and IAM groups also don't use native AD semantics.

C) Service-linked roles are not applicable for managing end user access.

So D is the correct option to meet the requirements using the native Active Directory integration built into FSx for Windows.
upvoted 1 times

  mtmayer 1 month, 4 weeks ago


Selected Answer: A
The AD is on premises... You need the connector.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
D. allows the file system to leverage the existing AD infrastructure for authentication and access control.

Option A is incorrect because mapping the AD groups to IAM groups is not applicable in this scenario. IAM is primarily used for managing
access to AWS resources, while the requirement is to integrate with the on-premises AD for access control.

Option B is incorrect because assigning a tag with a Restrict tag key and a Compliance tag value does not provide the necessary
integration with the on-premises AD for access control. Tags are used for organizing and categorizing resources and do not provide
authentication or access control mechanisms.

Option C is incorrect because creating an IAM service-linked role linked directly to FSx for Windows File Server does not integrate with the
on-premises AD. IAM roles are used within AWS for managing permissions and do not provide the necessary integration with external AD
systems.
upvoted 3 times
  Mia2009687 3 months ago
Selected Answer: D
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 1 times

  kraken21 6 months ago


Selected Answer: D
Other options are referring to IAM based control which is not possible. Existing AD should be used without IAM.
upvoted 1 times

  Abhineet9148232 6 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/blogs/storage/using-amazon-fsx-for-windows-file-server-with-an-on-premises-active-directory/
upvoted 2 times

  somsundar 6 months, 3 weeks ago


Answer D. Amazon FSx does not support Active Directory Connector.
upvoted 2 times

  Abhineet9148232 6 months, 3 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html
upvoted 3 times

  Yelizaveta 7 months, 2 weeks ago


Selected Answer: D
Note:
Amazon FSx does not support Active Directory Connector and Simple Active Directory.

https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 3 times

  aakashkumar1999 7 months, 4 weeks ago


Selected Answer: A
The answer will be AD Connector, so A; it creates a proxy to your on-premises AD that you can use to restrict access.
upvoted 2 times

  Stanislav4907 8 months ago


Selected Answer: D
Option D: Join the file system to the Active Directory to restrict access.

Joining the FSx for Windows File Server file system to the on-premises Active Directory allows the company to use the existing Active
Directory groups to restrict access to the file shares, folders, and files after the move to AWS. By joining the file system to the Active
Directory, the company can maintain the same access control as before the move, ensuring that the compliance team can maintain
compliance with the relevant regulations and standards.

Options A and B involve creating an Active Directory Connector or assigning a tag to map the Active Directory groups to IAM groups, but
these options do not allow for the use of the existing Active Directory groups to restrict access to the file shares in AWS.

Option C involves creating an IAM service-linked role linked directly to FSx for Windows File Server to restrict access, but this option does
not take advantage of the existing on-premises Active Directory and its access control.
upvoted 3 times

  KAUS2 8 months, 1 week ago


Selected Answer: A
A is correct
Use AD Connector if you only need to allow your on-premises users to log in to AWS applications and services with their Active Directory
credentials. You can also use AD Connector to join Amazon EC2 instances to your existing Active Directory domain.
Pls refer - https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html#adconnector
upvoted 3 times

  mbuck2023 3 months, 3 weeks ago


wrong, answer is D. Amazon FSx does not support Active Directory Connector and Simple Active Directory. See also
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/self-managed-AD.html.
upvoted 1 times

  techhb 8 months, 2 weeks ago


Going with D here
upvoted 1 times
  Aninina 8 months, 2 weeks ago
Selected Answer: D
D. Join the file system to the Active Directory to restrict access.

The best way to restrict access to the FSx for Windows File Server SMB compliance shares, folders, and files after the move to AWS is to
join the file system to the on-premises Active Directory. This will allow the company to continue using the Active Directory groups to
restrict access to the files and folders, without the need to create additional IAM groups or roles.

By joining the file system to the Active Directory, the company can continue to use the same access control mechanisms it already has in
place and the security configuration will not change.

Option A and B are not applicable to FSx for Windows File Server because it doesn't support the use of IAM groups or tags to restrict
access.

Option C is not appropriate in this case because FSx for Windows File Server does not support using IAM service-linked roles to restrict
access.
upvoted 4 times
Question #261 Topic 1

A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances
behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones.

The company wants to provide its customers with different versions of content based on the devices that the customers use to access the
website.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Configure Amazon CloudFront to cache multiple versions of the content.

B. Configure a host header in a Network Load Balancer to forward traffic to different instances.

C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.

D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to
different EC2 instances.

E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to
different EC2 instances.

Correct Answer: AC

Community vote distribution


AC (100%)

  Parsons Highly Voted  8 months, 2 weeks ago


Selected Answer: AC
A, C is correct.

NLB lister rule only supports Protocol & Port (Not host/based routing like ALB) => D, E is incorrect.
NLB just works layer 4 (TCP/UDP) instead of Layer 7 (HTTP) => B is incorrect.

After eliminating, AC should be the answer.


upvoted 9 times

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: AC
A. allows customers to receive the appropriate version of the content based on their location and device type.

C. By creating a Lambda@Edge, you can inspect the User-Agent header of incoming requests and determine the type of device being
used. Based on this information, you can customize the response and send the appropriate version of the content to the user.
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: AC
A. allows customers to receive the appropriate version of the content based on their location and device type.

C. By creating a Lambda@Edge, you can inspect the User-Agent header of incoming requests and determine the type of device being
used. Based on this information, you can customize the response and send the appropriate version of the content to the user.

B. does not address the requirement of serving different content versions based on device types.

D. & E. do not address the device-specific content requirement.

Therefore, options A and C are the correct combination of actions to meet the requirement of providing different versions of content
based on the devices that customers use to access the website.
upvoted 2 times

  Yadav_Sanjay 4 months, 2 weeks ago


Selected Answer: AC
NLB does not support host- or path-based routing.
upvoted 1 times

  omoakin 4 months, 3 weeks ago


AC
Configure Amazon CloudFront to cache multiple versions of the content.
Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
upvoted 1 times
  omoakin 4 months, 3 weeks ago
C
Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
upvoted 1 times

  GalileoEC2 6 months, 1 week ago


Using a Directory Connector to connect the on-premises Active Directory to AWS is one way to enable access to AWS resources, including
Amazon FSx for Windows File Server. However, joining the Amazon FSx for Windows File Server file system to the on-premises Active
Directory is a separate step that allows you to control access to the file shares using the same Active Directory groups that are used on-
premises.
upvoted 1 times

  LoXeras 6 months, 1 week ago


I guess this belongs to the previous question, #260.
upvoted 2 times

  wors 7 months, 3 weeks ago


So does this mean the entire architecture needs to move to Lambda in order to leverage Lambda@Edge? That doesn't make sense, as the question outlines the architecture already running on EC2, an Auto Scaling group, and an ELB.

Just looking for clarification in case I am missing something.


upvoted 1 times

  devonwho 8 months ago


Selected Answer: AC
AC are the correct answers.

For C:
IMPROVED USER EXPERIENCE
Lambda@Edge can help improve your users' experience with your websites and web applications across the world, by letting you
personalize content for them without sacrificing performance.

Real-time Image Transformation


You can customize your users' experience by transforming images on the fly based on the user characteristics. For example, you can
resize images based on the viewer's device type—mobile, desktop, or tablet. You can also cache the transformed images at CloudFront
Edge locations to further improve performance when delivering images.

https://aws.amazon.com/lambda/edge/
upvoted 2 times
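
Option C in practice is a small viewer-request function. A rough Python sketch follows; the path prefixes and the device-detection rule are assumptions, and the rewritten URIs pair with option A's caching of multiple content versions.

def lambda_handler(event, context):
    # CloudFront viewer-request event: rewrite the URI based on the User-Agent
    # header so mobile and desktop users receive different cached versions.
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    user_agent = ""
    if "user-agent" in headers:
        user_agent = headers["user-agent"][0]["value"].lower()

    if any(token in user_agent for token in ("mobile", "android", "iphone")):
        request["uri"] = "/mobile" + request["uri"]    # assumed mobile content path
    else:
        request["uri"] = "/desktop" + request["uri"]   # assumed desktop content path

    return request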

  mhmt4438 8 months, 2 weeks ago


Selected Answer: AC
Correct answer is A,C
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: AC
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.

Lambda@Edge allows you to run a Lambda function in response to specific CloudFront events, such as a viewer request, an origin request,
a response, or a viewer response.
upvoted 2 times

  Morinator 8 months, 3 weeks ago


Selected Answer: AC
https://www.examtopics.com/discussions/amazon/view/67881-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #262 Topic 1

A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache
cluster and an App VPC for the application’s Amazon EC2 instances. Both VPCs are in the us-east-1 Region.

The solutions architect must implement a solution to provide the application’s EC2 instances with access to the ElastiCache cluster.

Which solution will meet these requirements MOST cost-effectively?

A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule
for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.

B. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an
inbound rule for the ElastiCache cluster's security group to allow inbound connection from the application’s security group.

C. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule
for the peering connection’s security group to allow inbound connection from the application’s security group.

D. Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an
inbound rule for the Transit VPC’s security group to allow inbound connection from the application’s security group.

Correct Answer: A

Community vote distribution


A (100%)

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A. Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound
rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.

Creating a peering connection between the VPCs allows the application's EC2 instances to communicate with the ElastiCache cluster
directly and efficiently. This is the most cost-effective solution as it does not involve creating additional resources such as a Transit VPC,
and it does not incur additional costs for traffic passing through the Transit VPC. Additionally, it is also more secure as it allows you to
configure a more restrictive security group rule to allow inbound connection from only the application's security group.
upvoted 12 times
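
A Python (boto3) sketch of option A; all IDs, CIDR blocks, and the Redis port 6379 are placeholders. It creates and accepts the peering connection, adds a route on each side, and opens the cache cluster's security group only to the application's security group.

import boto3

ec2 = boto3.client("ec2")

APP_VPC, CACHE_VPC = "vpc-0aaa1111", "vpc-0bbb2222"    # placeholders
APP_RT, CACHE_RT = "rtb-0ccc3333", "rtb-0ddd4444"      # placeholder route tables
APP_CIDR, CACHE_CIDR = "10.1.0.0/16", "10.2.0.0/16"    # placeholder CIDR blocks
APP_SG, CACHE_SG = "sg-0eee5555", "sg-0fff6666"        # placeholder security groups

# 1. Peer the two VPCs (same account and Region, so we can accept immediately).
pcx = ec2.create_vpc_peering_connection(VpcId=APP_VPC, PeerVpcId=CACHE_VPC)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 2. Route each VPC's traffic for the other VPC through the peering connection.
ec2.create_route(RouteTableId=APP_RT, DestinationCidrBlock=CACHE_CIDR, VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId=CACHE_RT, DestinationCidrBlock=APP_CIDR, VpcPeeringConnectionId=pcx_id)

# 3. Allow only the application's security group into the ElastiCache security group.
ec2.authorize_security_group_ingress(
    GroupId=CACHE_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
                    "UserIdGroupPairs": [{"GroupId": APP_SG}]}],
)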

  TariqKipkemei Most Recent  4 days, 23 hours ago


Selected Answer: A
Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound
rule for the ElastiCache cluster’s security group to allow inbound connection from the application’s security group.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: A
Create a VPC peering connection between the Cache VPC and App VPC. This allows private IP connectivity between the VPCs.
Add route table entries in each VPC to route traffic destined to the other VPC via the peering connection. This enables network routing.
Configure security groups to allow inbound connections from the application instances to the ElastiCache cluster.
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
Creating a peering connection between the VPCs is a cost-effective way to establish connectivity. By adding a route table entry for the
peering connection in both VPCs, traffic can flow between them. Configuring an inbound rule in the ElastiCache cluster's security group
allows inbound connections from the application's security group, enabling access to the ElastiCache cluster from the EC2 instances in the
App VPC.

Option B suggests creating a Transit VPC, which adds unnecessary complexity and cost for this scenario.

Option C suggests configuring an inbound rule for the peering connection's security group, which is not necessary as the security group
for the ElastiCache cluster should be used to control inbound connections.

Option D suggests configuring an inbound rule for the Transit VPC's security group, which is not needed in this case and adds
unnecessary complexity.

Therefore, option A is the most cost-effective solution to provide the application's EC2 instances with access to the ElastiCache cluster.
upvoted 1 times

  smartegnine 3 months, 2 weeks ago


Selected Answer: A
A is correct.
1. A Transit VPC is used for more complex architectures with many-to-many VPC connectivity; for simple VPC-to-VPC access, a peering connection is enough.
2. To enable private IPv4 traffic between instances in peered VPCs, you must add a route to the route tables associated with the subnets for both instances.

So based on 1, B and D are out; based on 2, C is out.


upvoted 1 times

  wRhlH 3 months, 3 weeks ago


Why not C? Any explanation?
upvoted 1 times

  smartegnine 3 months, 2 weeks ago


The application reads from ElastiCache, not vice versa, so the inbound rule should be on the ElastiCache cluster's security group.
upvoted 2 times

  Cor5in 3 months, 1 week ago


Thank you Sir!
upvoted 1 times

  smartegnine 3 months, 2 weeks ago


To enable private IPv4 traffic between instances in peered VPCs, you must add a route to the route tables associated with the subnets
for both instances.

https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html
upvoted 1 times

  nder 7 months ago


Selected Answer: A
Cost Effectively!
upvoted 1 times
Question #263 Topic 1

A company is building an application that consists of several microservices. The company has decided to use container technologies to deploy its
software on AWS. The company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling. The company cannot
manage additional infrastructure.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.

B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones.

C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of
greater than or equal to 2.

D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of
greater than or equal to 2.

E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or
more replicas for each microservice.

Correct Answer: AD

Community vote distribution


AD (100%)

  TariqKipkemei 4 days, 23 hours ago


Selected Answer: AD
Company needs a solution that minimizes the amount of ongoing effort for maintenance and scaling = Serverless = ECS with Fargate.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: AD
ECS allows deploying and managing containers without having to provision the underlying infrastructure. This minimizes maintenance
effort.
Using Fargate launch type means ECS will handle provisioning and scaling the infrastructure automatically. This removes the management
overhead for the company.
Running multiple tasks and specifying desired count ≥ 2 will provide high availability across Availability Zones.
Together, ECS plus Fargate provide a fully managed container platform. The company doesn't need to provision or manage servers.
upvoted 1 times
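
A compact Python (boto3) sketch of A + D, assuming the task definition for one microservice is already registered; the cluster, service, subnet, and security-group names are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(clusterName="microservices")  # A: the ECS cluster

# D: a Fargate service with at least two tasks, with no EC2 instances to manage.
ecs.create_service(
    cluster="microservices",
    serviceName="orders-service",    # assumed microservice name
    taskDefinition="orders-task:1",  # assumed task definition, registered beforehand
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234", "subnet-0def5678"],  # placeholders in two AZs
            "securityGroups": ["sg-0123abcd"],                  # placeholder
            "assignPublicIp": "DISABLED",
        }
    },
)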

  cookieMr 3 months ago


Selected Answer: AD
Options B and E suggest deploying the Kubernetes control plane and worker nodes on EC2 instances, which would require managing the
infrastructure and add ongoing maintenance overhead, contrary to the requirement of minimizing effort.

Option C suggests using the Amazon EC2 launch type for ECS, which still requires managing EC2 instances and is not as cost-effective and
scalable as using Fargate.

Therefore, the combination of deploying an Amazon ECS cluster and an ECS service with a Fargate launch type (options A and D) is the
most suitable for minimizing maintenance and scaling effort without managing additional infrastructure.
upvoted 3 times

  LoXeras 6 months, 1 week ago


Selected Answer: AD
AWS Farget is server less solution to use on ECS: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
upvoted 2 times

  lambda15 6 months, 2 weeks ago


Why is C incorrect?
upvoted 1 times

  Julio98 6 months, 1 week ago


Because the question says "minimizes the amount of ongoing effort for maintenance and scaling", and with EC2 instances you need effort to maintain the infrastructure, unlike Fargate, which is serverless.
upvoted 2 times

  WherecanIstart 6 months, 3 weeks ago


Selected Answer: AD
AWS Fargate is a service that is fully managed by Amazon; it handles provisioning, configuration, and scaling. It is "serverless".
upvoted 1 times
  AlessandraSAA 6 months, 4 weeks ago
Selected Answer: AD
ECS has two launch types: EC2 (you maintain the infrastructure) and Fargate (serverless). Since the question asks for no additional infrastructure to manage, it should be Fargate.
upvoted 2 times

  devonwho 8 months ago


Selected Answer: AD
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon
EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.

https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 3 times

  Aninina 8 months, 2 weeks ago


A D is the correct answer
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: AD
A,D is correct answer
upvoted 2 times

  AHUI 8 months, 2 weeks ago


AD:
https://www.examtopics.com/discussions/amazon/view/60032-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  Morinator 8 months, 3 weeks ago


Selected Answer: AD
A, D: EC2 is out for this; cluster + Fargate is the right answer.
upvoted 3 times
Question #264 Topic 1

A company has a web application hosted over 10 Amazon EC2 instances with traffic directed by Amazon Route 53. The company occasionally
experiences a timeout error when attempting to browse the application. The networking team finds that some DNS queries return IP addresses of
unhealthy instances, resulting in the timeout error.

What should a solutions architect implement to overcome these timeout errors?

A. Create a Route 53 simple routing policy record for each EC2 instance. Associate a health check with each record.

B. Create a Route 53 failover routing policy record for each EC2 instance. Associate a health check with each record.

C. Create an Amazon CloudFront distribution with EC2 instances as its origin. Associate a health check with the EC2 instances.

D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the ALB from Route 53.

Correct Answer: D

Community vote distribution


D (64%) B (28%) 8%

  TariqKipkemei 4 days, 23 hours ago


Selected Answer: B
Clearly the question is all about Amazon Route 53, which has a failover routing policy used when you want to configure active-passive failover.
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
ALB performs health checks on the EC2 instances, so it will only route traffic to healthy instances. This avoids the timeout errors.
ALB provides load balancing across the instances, improving performance and availability.
Route 53 routes to the ALB DNS name, so you don't have to manage records for each EC2 instance.
This is a standard and robust architecture for public-facing web applications. The ALB acts as the entry point and handles health checks
and scaling.
upvoted 2 times

  jlteunissen 3 weeks, 2 days ago


Selected Answer: B
It is not clear from the question whether the 10 EC2s are running within the same region. ALB can only direct traffic within region, while
route 53 can route traffic to multiple locations, hence C and D are wrong.
upvoted 2 times

  slackbot 3 weeks, 3 days ago


I was looking at A, but D is indeed the best option, because the TTL of the records is usually at least 60 seconds (nobody sets it lower
unless testing something, because there is a charge per number of unique requests). An ALB health check interval can be set as low as desired, which
helps exclude the problematic EC2 instance faster than the DNS TTL expires.

  cookieMr 3 months ago


Selected Answer: D
By creating an ALB and configuring health checks, the architect ensures that only healthy instances receive traffic. The ALB periodically
checks the health of the EC2 instances based on the configured health check settings.

Routing traffic to the ALB from Route 53 ensures that DNS queries return the IP address of the ALB instead of individual instances. This
allows the ALB to distribute traffic only to healthy instances, avoiding timeouts caused by unhealthy instances.

A & B: While associating health checks with each record can help identify unhealthy instances, it does not provide automatic load
balancing and distribution of traffic to healthy instances.

C: While CloudFront can improve performance and availability, it is primarily a CDN and may not directly address the issue of load
balancing and distributing traffic to healthy instances.

Therefore, option D is the most appropriate solution to overcome the timeout errors by implementing an ALB with health checks and
routing traffic through Route 53.
upvoted 3 times

  joechen2023 3 months, 1 week ago


Selected Answer: C
I believe both C and D will work, but C seems less complex.
Hopefully somebody here who is more advanced (not an old student learning AWS like me) can explain why not C.
upvoted 2 times
  Abrar2022 4 months ago
Selected Answer: D
Option D allows for the creation of an Application Load Balancer which can detect unhealthy instances and redirect traffic away from
them.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: D
I vote d
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: D
Its D only
upvoted 1 times

  techhb 8 months, 2 weeks ago


Selected Answer: B
Why not B
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html#dns-failover-types-active-passive
upvoted 4 times

  techhb 8 months, 2 weeks ago


It's D; found the root cause:
Option B is not the best option to overcome these timeout errors because it is not designed to handle traffic directed by Amazon Route
53. Option B creates a failover routing policy record for each EC2 instance, which is designed to route traffic to a backup EC2 instance if
one of the EC2 instances becomes unhealthy. This is not ideal for routing traffic from Route 53 as it does not allow for the redirection of
traffic away from unhealthy instances. Option D would be the best choice as it allows for the creation of an Application Load Balancer
which can detect unhealthy instances and redirect traffic away from them.
upvoted 5 times

  F629 3 months, 2 weeks ago


I think the problem with the failover routing policy is that it always sends the requests to the same primary instance rather than spreading them across all
healthy instances.
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: D
D is correct
upvoted 2 times

  AHUI 8 months, 2 weeks ago


Ans: D
https://www.examtopics.com/discussions/amazon/view/83982-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
D. Create an Application Load Balancer (ALB) with a health check in front of the EC2 instances. Route to the ALB from Route 53.

An Application Load Balancer (ALB) allows you to distribute incoming traffic across multiple backend instances, and can automatically
route traffic to healthy instances while removing traffic from unhealthy instances. By using an ALB in front of the EC2 instances and
routing traffic to it from Route 53, the load balancer can perform health checks on the instances and only route traffic to healthy
instances, which should help to reduce or eliminate timeout errors caused by unhealthy instances.
upvoted 4 times
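To make the ALB-based approach concrete, here is a minimal boto3 sketch (all names, IDs, and ARNs are hypothetical) of a target group whose health check decides which instances receive traffic, plus a Route 53 alias record that points the site at the ALB instead of at individual instances:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
route53 = boto3.client("route53")

# Target group with an HTTP health check; only healthy targets receive traffic.
tg = elbv2.create_target_group(
    Name="web-tg",                      # hypothetical name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",      # hypothetical VPC
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=10,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
)["TargetGroups"][0]

# Register the existing EC2 instances with the target group.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],  # hypothetical instance IDs
)

# Route 53 alias record pointing at the ALB (instead of one record per instance).
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",       # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's canonical hosted zone ID (example value)
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",  # hypothetical ALB DNS name
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```

With this in place, DNS always resolves to the ALB, and the ALB's own health checks keep unhealthy instances out of rotation far faster than DNS TTLs would.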
Question #265 Topic 1

A solutions architect needs to design a highly available application consisting of web, application, and database tiers. HTTPS content delivery
should be as close to the edge as possible, with the least delivery time.

Which solution meets these requirements and is MOST secure?

A. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon
CloudFront to deliver HTTPS content using the public ALB as the origin.

B. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon
CloudFront to deliver HTTPS content using the EC2 instances as the origin.

C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure Amazon
CloudFront to deliver HTTPS content using the public ALB as the origin.

D. Configure a public Application Load Balancer with multiple redundant Amazon EC2 instances in public subnets. Configure Amazon
CloudFront to deliver HTTPS content using the EC2 instances as the origin.

Correct Answer: C

Community vote distribution


C (100%)

  Aninina Highly Voted  8 months, 2 weeks ago


C. Configure a public Application Load Balancer (ALB) with multiple redundant Amazon EC2 instances in private subnets. Configure
Amazon CloudFront to deliver HTTPS content using the public ALB as the origin.

This solution meets the requirements for a highly available application with web, application, and database tiers, as well as providing
edge-based content delivery. Additionally, it maximizes security by having the ALB in a private subnet, which limits direct access to the
web servers, while still being able to serve traffic over the Internet via the public ALB. This will ensure that the web servers are not
exposed to the public Internet, which reduces the attack surface and provides a secure way to access the application.
upvoted 12 times

  Guru4Cloud Most Recent  2 weeks, 6 days ago


Selected Answer: C
Keyword: Instances in private, ALB in public, point cloudfront to the public ALB
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
A. exposes the EC2 instances directly to the public internet, which may compromise security.

B. lacks a load balancer in the public subnet, which is required for efficient load distribution and high availability.

D. provides load balancing and HTTPS content delivery, it exposes the EC2 instances directly to the public internet, which may pose
security risks.

C. provides high availability, secure access through private subnets, and optimized HTTPS content delivery using CloudFront with a public
ALB as the origin.
upvoted 3 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
Answer is C
upvoted 2 times

  AHUI 8 months, 2 weeks ago


ans: C
https://www.examtopics.com/discussions/amazon/view/46401-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: C
Instances in private, ALB in public, point cloudfront to the public ALB
upvoted 4 times
Question #266 Topic 1

A company has a popular gaming platform running on AWS. The application is sensitive to latency because latency can impact the user
experience and introduce unfair advantages to some players. The application is deployed in every AWS Region. It runs on Amazon EC2 instances
that are part of Auto Scaling groups configured behind Application Load Balancers (ALBs). A solutions architect needs to implement a mechanism
to monitor the health of the application and redirect traffic to healthy endpoints.

Which solution meets these requirements?

A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional
endpoint in each Region. Add the ALB as the endpoint.

B. Create an Amazon CloudFront distribution and specify the ALB as the origin server. Configure the cache behavior to use origin cache
headers. Use AWS Lambda functions to optimize the traffic.

C. Create an Amazon CloudFront distribution and specify Amazon S3 as the origin server. Configure the cache behavior to use origin cache
headers. Use AWS Lambda functions to optimize the traffic.

D. Configure an Amazon DynamoDB database to serve as the data store for the application. Create a DynamoDB Accelerator (DAX) cluster to
act as the in-memory cache for DynamoDB hosting the application data.

Correct Answer: A

Community vote distribution


A (100%)

  Aninina Highly Voted  8 months, 2 weeks ago


Selected Answer: A
A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional
endpoint in each Region. Add the ALB as the endpoint.

AWS Global Accelerator directs traffic to the optimal healthy endpoint based on health checks, it can also route traffic to the closest
healthy endpoint based on geographic location of the client. By configuring an accelerator and attaching it to a Regional endpoint in each
Region, and adding the ALB as the endpoint, the solution will redirect traffic to healthy endpoints, improving the user experience by
reducing latency and ensuring that the application is running optimally. This solution will ensure that traffic is directed to the closest
healthy endpoint and will help to improve the overall user experience.
upvoted 14 times
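As a rough illustration of the accelerator → listener → endpoint group → ALB chain described above, here is a hedged boto3 sketch; the names and the ALB ARN are hypothetical:

```python
import boto3

# The Global Accelerator API is served from us-west-2 regardless of where endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-accelerator",    # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region; the Regional ALB is the endpoint.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/game-alb/abc123",  # hypothetical ALB ARN
        "Weight": 100,
        "ClientIPPreservationEnabled": True,
    }],
    HealthCheckProtocol="TCP",
    HealthCheckIntervalSeconds=10,
)
```

Repeating the endpoint-group call for each Region is what lets Global Accelerator steer users to the nearest healthy ALB.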

  Bhrino Highly Voted  7 months, 1 week ago


Selected Answer: A
Global Accelerator can be used for non-HTTP cases such as UDP, TCP, gaming, or VoIP.
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 2 days ago


Selected Answer: A
A. Configure an accelerator in AWS Global Accelerator. Add a listener for the port that the application listens on, and attach it to a Regional
endpoint in each Region. Add the ALB as the endpoint
upvoted 1 times

  bjexamprep 2 months ago


Is any answer relevant to the question?
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: A
B. While CloudFront can help with caching and content delivery, it does not provide the mechanism to monitor the health of the
application or perform traffic redirection based on health checks.

C. This configuration is suitable for static content delivery but does not address the health monitoring and traffic redirection requirements
of the application.

D. While this can enhance performance, it does not monitor the health of the application or redirect traffic based on health checks.

Therefore, option A is the most suitable solution as it leverages AWS Global Accelerator to monitor application health, route traffic to
healthy endpoints, and optimize the user experience while addressing latency concerns.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: A
Agree with A
upvoted 1 times

  michellemeloc 4 months, 2 weeks ago


Selected Answer: A
Delivery gaming content --> AWS GLOBAL ACCELERATOR
upvoted 5 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
Correct answer is A
upvoted 2 times

  AHUI 8 months, 2 weeks ago


A:
https://www.examtopics.com/discussions/amazon/view/46403-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  alanp 8 months, 2 weeks ago


A. When you have an Application Load Balancer or Network Load Balancer that includes multiple target groups, Global Accelerator
considers the load balancer endpoint to be healthy only if each target group behind the load balancer has at least one healthy target. If
any single target group for the load balancer has only unhealthy targets, Global Accelerator considers the endpoint to be unhealthy.
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoint-groups-health-check-options.html
upvoted 7 times

  Morinator 8 months, 3 weeks ago


Selected Answer: A
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoint-groups-health-check-options.html
upvoted 1 times
Question #267 Topic 1

A company has one million users that use its mobile app. The company must analyze the data usage in near-real time. The company also must
encrypt the data in near-real time and must store the data in a centralized location in Apache Parquet format for further processing.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics application to analyze the
data. Invoke an AWS Lambda function to send the data to the Kinesis Data Analytics application.

B. Create an Amazon Kinesis data stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the data. Invoke an AWS
Lambda function to send the data to the EMR cluster.

C. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon EMR cluster to analyze the
data.

D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data.

Correct Answer: D

Community vote distribution


D (100%)

  mhmt4438 Highly Voted  8 months, 2 weeks ago


Selected Answer: D
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data.

This solution will meet the requirements with the least operational overhead as it uses Amazon Kinesis Data Firehose, which is a fully
managed service that can automatically handle the data collection, data transformation, encryption, and data storage in near-real time.
Kinesis Data Firehose can automatically store the data in Amazon S3 in Apache Parquet format for further processing. Additionally, it
allows you to create an Amazon Kinesis Data Analytics application to analyze the data in near real-time, with no need to manage any
infrastructure or invoke any Lambda function. This way you can process a large amount of data with the least operational overhead.
upvoted 28 times

  antropaws 4 months, 1 week ago


https://aws.amazon.com/blogs/big-data/analyzing-apache-parquet-optimized-data-using-amazon-kinesis-data-firehose-amazon-
athena-and-amazon-redshift/
upvoted 1 times

  WherecanIstart 6 months, 3 weeks ago


Thanks for the explanation!
upvoted 1 times

  jainparag1 8 months, 1 week ago


Nicely explained. Thanks.
upvoted 1 times

  LuckyAro 8 months, 2 weeks ago


Apache Parquet format processing was not mentioned in the answer options. Strange.
upvoted 5 times

  TariqKipkemei Most Recent  3 days, 1 hour ago


Selected Answer: D
Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data
upvoted 1 times

  Guru4Cloud 2 weeks, 6 days ago


Selected Answer: D
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: D
A. requires invoking an Lambda to send the data to the analytics application. This introduces additional operational overhead and
complexity.
B. While EMR is a powerful tool for big data processing, it requires more operational management and configuration compared to Kinesis
Data Analytics.

C. introduces unnecessary complexity by involving EMR for data analysis when Kinesis Data Analytics can perform the analysis in a more
streamlined and automated manner.

Therefore, option D is the most suitable solution as it leverages Kinesis Data Firehose for data ingestion, stores the data in S3, and utilizes
Kinesis Data Analytics for near-real-time analysis, providing a low operational overhead solution for data usage analysis and encryption.
upvoted 2 times
  AHUI 8 months, 2 weeks ago
D:
https://www.examtopics.com/discussions/amazon/view/82022-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  Aninina 8 months, 2 weeks ago


Selected Answer: D
D. Create an Amazon Kinesis Data Firehose delivery stream to store the data in Amazon S3. Create an Amazon Kinesis Data Analytics
application to analyze the data.

Amazon Kinesis Data Firehose can automatically encrypt and store the data in Amazon S3 in Apache Parquet format for further
processing, which reduces the operational overhead. It also allows for near-real-time data analysis using Kinesis Data Analytics, which is a
fully managed service that makes it easy to analyze streaming data using SQL. This solution eliminates the need for setting up and
maintaining an EMR cluster, which would require more operational overhead.
upvoted 2 times
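Since the comments above mention Firehose converting records to Apache Parquet automatically, here is a minimal boto3 sketch of a delivery stream with record-format conversion enabled. The bucket, IAM role, KMS key, and Glue schema names are hypothetical; Firehose reads the Parquet schema from an existing Glue table.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="app-usage-stream",     # hypothetical name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",  # hypothetical role
        "BucketARN": "arn:aws:s3:::central-usage-data",                      # hypothetical bucket
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 128},
        # Encrypt objects at rest with a KMS key (hypothetical ARN).
        "EncryptionConfiguration": {
            "KMSEncryptionConfig": {"AWSKMSKeyARN": "arn:aws:kms:us-east-1:123456789012:key/1111-2222"}
        },
        # Convert incoming JSON records to Apache Parquet before writing to S3.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "DatabaseName": "usage_db",      # hypothetical Glue database
                "TableName": "usage_events",     # hypothetical Glue table holding the schema
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            },
        },
    },
)
```

A Kinesis Data Analytics application would then be pointed at the stream for the near-real-time analysis part of the answer.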
Question #268 Topic 1

A gaming company has a web application that displays scores. The application runs on Amazon EC2 instances behind an Application Load
Balancer. The application stores data in an Amazon RDS for MySQL database. Users are starting to experience long delays and interruptions that
are caused by database read performance. The company wants to improve the user experience while minimizing changes to the application’s
architecture.

What should a solutions architect do to meet these requirements?

A. Use Amazon ElastiCache in front of the database.

B. Use RDS Proxy between the application and the database.

C. Migrate the application from EC2 instances to AWS Lambda.

D. Migrate the database from Amazon RDS for MySQL to Amazon DynamoDB.

Correct Answer: A

Community vote distribution


B (54%) A (46%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: A
RDS Proxy is for handling too many connections, not for read performance.
upvoted 12 times

  vipyodha 3 months, 1 week ago


To use ElastiCache you need to make heavy code changes; also, ElastiCache does caching that can improve read performance but
will not provide scalability.
upvoted 2 times

  Yadav_Sanjay 4 months, 2 weeks ago


Can't use a cache as the scores get updated. If the data were static then we could definitely go with A. But here the score is dynamic...
upvoted 5 times

  rfelipem 4 months ago


Users are starting to experience long delays and interruptions caused by the "read performance" of the database... While the score
is dynamic, there is also read activity in the DB that is causing the delays and outages, and this can be improved with ElastiCache.

  kraken21 Highly Voted  6 months ago


Selected Answer: B
RDS Proxy will "improve the user experience while minimizing changes".
upvoted 10 times

  vijaykamal Most Recent  2 days, 15 hours ago


Selected Answer: B
Option A suggests using Amazon ElastiCache, which is a good solution for caching frequently accessed data but may require more
application changes compared to RDS Proxy.
upvoted 1 times

  TariqKipkemei 3 days, 1 hour ago


Selected Answer: A
Read performance = Amazon ElastiCache
DB connection timeouts = RDS Proxy
upvoted 1 times

  JKevin778 6 days, 10 hours ago


Selected Answer: A
“The application stores data in an Amazon RDS for MySQL database.”
There is only one database used in this case, so there is nowhere to use RDS Proxy.
So A.
upvoted 1 times

  LazyTs 3 weeks, 5 days ago


Selected Answer: A
It should be A, question said low performance due to "read" -> elasticache
upvoted 1 times
  oguzbeliren 1 month, 4 weeks ago
Answer is A:
B would also be an option, but in order to use RDS Proxy we need additional configuration on the server. The question specifically
asks us to avoid that.
upvoted 1 times

  ccmc 1 month, 4 weeks ago


ElastiCache requires manual coding and code-level changes as well; the architecture also changes. The answer should be RDS Proxy.
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: A
Implement caching mechanisms to reduce the need for frequent database reads. Use Amazon ElastiCache, a managed in-memory
caching service, to store frequently accessed data in-memory. This can significantly reduce the response time for frequently requested
data and lower the load on the database.
upvoted 1 times
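To illustrate the caching pattern described above, here is a lazy-loading (cache-aside) sketch in Python, assuming a Redis-compatible ElastiCache endpoint and the third-party `redis` and `pymysql` packages; all hosts, credentials, and table names are hypothetical:

```python
import json
import pymysql
import redis

# Hypothetical endpoints and credentials.
cache = redis.Redis(host="scores-cache.abc123.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="scores-db.abc123.us-east-1.rds.amazonaws.com",
                     user="app", password="secret", database="game")

def get_scoreboard(game_id: int) -> list:
    """Cache-aside read: try Redis first, fall back to MySQL and populate the cache."""
    key = f"scoreboard:{game_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    with db.cursor() as cur:
        cur.execute("SELECT player, score FROM scores WHERE game_id = %s "
                    "ORDER BY score DESC LIMIT 100", (game_id,))
        rows = cur.fetchall()

    # A short TTL keeps dynamic scores reasonably fresh while still absorbing most read traffic.
    cache.set(key, json.dumps(rows), ex=5)
    return rows
```

The debate in this thread is essentially about whether this kind of code change counts as "minimal"; the cache-aside pattern itself is what option A implies.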

  darekw 2 months ago


Question says: "long delays and interruptions that are caused by database read performance". So in my understanding improving reads
should help.
upvoted 1 times

  vini15 2 months, 2 weeks ago


I think it should be A.
RDS Proxy is only suitable for read-write database connections.
upvoted 1 times

  corruptbits 2 months, 2 weeks ago


Selected Answer: A
ElastiCache is best for read use cases; RDS Proxy is best for read and write use cases.
upvoted 2 times

  animefan1 3 months ago


Selected Answer: A
ElastiCache will improve read performance. The question asks for minimal changes to the architecture. Yes, implementing ElastiCache will require
effort, but that is on the code side, not the architecture.
upvoted 2 times

  cookieMr 3 months ago


Selected Answer: B
A. While ElastiCache can improve read performance by caching frequently accessed data, it requires changes to the application's
architecture. Additionally, it may not provide the same level of improvement in read performance as RDS Proxy, especially if the
application's database usage involves complex queries or frequent data updates.

C. While Lambda can offer benefits such as scalability and reduced operational overhead, it may not directly address the database read
performance issues. Migrating to Lambda would require significant changes to the application's architecture and codebase.

D. While DynamoDB is a scalable and high-performance NoSQL database, migrating from a relational database like MySQL to DynamoDB
would require significant changes to the application's data model and query patterns.

Therefore, option B is the most appropriate solution as it leverages RDS Proxy to optimize database connections and improve read
performance, minimizing changes to the application's architecture and providing a scalable and efficient solution for addressing the
database read performance issues.
upvoted 3 times

  twowind 3 months ago


Selected Answer: A
The problem has been shown to be long delays and outages caused by read performance, and it is clear that B is not appropriate. A:
although it will modify the code, it will not modify the architecture of the application.
upvoted 1 times

  jayce5 3 months, 2 weeks ago


Selected Answer: B
It is not clearly stated, but I believe that the game scores will be updated frequently.
The answer should be the RDS proxy, not the cache.
upvoted 2 times

  migo7 3 months, 3 weeks ago


Selected Answer: B
B is correct as it requires minimal changes; A is wrong because creating the cache will require writing manual code.
upvoted 1 times
Question #269 Topic 1

An ecommerce company has noticed performance degradation of its Amazon RDS based web application. The performance degradation is
attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem
with minimal changes to the existing web application.

What should the solutions architect recommend?

A. Export the data to Amazon DynamoDB and have the business analysts run their queries.

B. Load the data into Amazon ElastiCache and have the business analysts run their queries.

C. Create a read replica of the primary database and have the business analysts run their queries.

D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.

Correct Answer: C

Community vote distribution


C (100%)

  nileeka97 1 week ago


Selected Answer: C
C. Create a read replica of the primary database and have the business analysts run their queries
upvoted 1 times

  cookieMr 3 months ago


Selected Answer: C
A. While DynamoDB is a scalable NoSQL database, it requires changes to the application's data model and query patterns.

B. ElastiCache is an in-memory data store that can improve query performance, but it is primarily used for caching rather than running
complex queries.

D. Redshift is a powerful data warehousing solution, but migrating the data and adapting the queries to Redshift's columnar architecture
would require significant changes to the application and query logic.

Therefore, option C is the most appropriate recommendation as it leverages read replicas in RDS to offload read-only query traffic from
the primary database, allowing the business analysts to run their queries without impacting the performance of the web application. It
provides a scalable and efficient solution with minimal changes to the existing web application.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: C
C, no doubt.
upvoted 2 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
C is correct answer
upvoted 2 times

  Aninina 8 months, 2 weeks ago


Selected Answer: C
C. Create a read replica of the primary database and have the business analysts run their queries.

Creating a read replica of the primary RDS database will offload the read-only SQL queries from the primary database, which will help to
improve the performance of the web application. Read replicas are exact copies of the primary database that can be used to handle read-
only traffic, which will reduce the load on the primary database and improve the performance of the web application. This solution can be
implemented with minimal changes to the existing web application, as the business analysts can continue to run their queries on the read
replica without modifying the code.
upvoted 4 times
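For reference, creating such a replica is a single API call; a hedged boto3 sketch with hypothetical identifiers (the analysts would then point their tools at the replica's own endpoint rather than the primary's):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary instance; read-only analyst queries go here.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-analytics-replica",   # hypothetical replica name
    SourceDBInstanceIdentifier="webapp-db-primary",       # hypothetical primary instance
    DBInstanceClass="db.r6g.large",                       # assumed instance class
    PubliclyAccessible=False,
)

print(response["DBInstance"]["DBInstanceIdentifier"])
```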

  bamishr 8 months, 3 weeks ago


Selected Answer: C
Create a read replica of the primary database and have the business analysts run their queries.
upvoted 1 times
Question #270 Topic 1

A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the
data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.

Which solution meets these requirements?

A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.

B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.

C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.

D. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.

Correct Answer: A

Community vote distribution


A (100%)

  techhb Highly Voted  8 months, 2 weeks ago


Selected Answer: A
Here the keyword is "before": "the data is encrypted at rest before the data is uploaded to the S3 buckets."
upvoted 11 times
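A rough sketch of what "encrypt before upload" can look like in practice, using the third-party `cryptography` package for the client-side step and boto3 for the upload (boto3 talks to S3 over HTTPS, which covers encryption in transit). This is only an illustration, not the AWS Encryption SDK, and the bucket, key, and file names are hypothetical:

```python
import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")  # uses HTTPS endpoints, so data is also encrypted in transit

# In practice this key would come from AWS KMS or a secrets store, not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("app.log", "rb") as f:
    plaintext = f.read()

# The data is encrypted before it ever leaves the client.
ciphertext = cipher.encrypt(plaintext)

s3.put_object(
    Bucket="central-log-archive",        # hypothetical bucket
    Key="logs/2024/01/app.log.enc",
    Body=ciphertext,
)
```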

  TariqKipkemei Most Recent  3 days, 1 hour ago


Selected Answer: A
Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets
upvoted 1 times

  Guru4Cloud 3 weeks, 2 days ago


Selected Answer: A
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: A
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
upvoted 1 times

  Abobaloyi 3 months, 1 week ago


Selected Answer: A
The data must be encrypted before it is uploaded, which means the client needs to do it before uploading the data to S3.
upvoted 2 times

  datz 5 months, 3 weeks ago


Selected Answer: A
A, would meet requirements.
upvoted 1 times

  nder 7 months, 1 week ago


Selected Answer: A
Because the data must be encrypted while in transit
upvoted 2 times

  LuckyAro 8 months ago


Selected Answer: A
A is correct IMO
upvoted 1 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/53840-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times

  Aninina 8 months, 2 weeks ago


Selected Answer: A
A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
upvoted 2 times
  bamishr 8 months, 3 weeks ago
Selected Answer: A
Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets
upvoted 2 times

  Kesha 2 months ago


B. With server-side encryption, it automatically encrypts the data at rest using encryption keys managed by AWS.
upvoted 1 times
Question #271 Topic 1

A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is
reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective
solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are
complete.

What should the solutions architect do to meet these requirements?

A. Increase the minimum capacity for the Auto Scaling group.

B. Increase the maximum capacity for the Auto Scaling group.

C. Configure scheduled scaling to scale up to the desired compute level.

D. Change the scaling policy to add more EC2 instances during each scaling operation.

Correct Answer: C

Community vote distribution


C (100%)

  ManOnTheMoon Highly Voted  7 months, 3 weeks ago


GOOD LUCK EVERYONE :) YOU CAN DO THIS
upvoted 15 times

  david76x Highly Voted  8 months, 2 weeks ago


Selected Answer: C
C is correct. Goodluck everybody!
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: C
Configuring scheduled scaling actions allows the Auto Scaling group to scale up to the desired capacity at a scheduled time (1 AM in this
case) when the batch jobs start. This ensures the desired compute capacity is reached immediately.

The Auto Scaling group can then scale down based on metrics after the batch jobs complete.
upvoted 3 times

  hsinchang 2 months, 1 week ago


Selected Answer: C
The time is given, use scheduled for optimal cost
upvoted 1 times

  qacollin 5 months, 2 weeks ago


just scheduled my exam :)
upvoted 4 times

  awscerts023 7 months, 3 weeks ago


Reached here ! Did anyone schedule the real exam now ? How was it ?
upvoted 3 times

  pal40sg 7 months, 3 weeks ago


Thanks to everyone who contributed with answers :)
upvoted 3 times

  ProfXsamson 8 months ago


Selected Answer: C
C. I'm here at the end, leaving this here for posterity sake 02/01/2023.
upvoted 3 times

  dedline 8 months, 1 week ago


GL ALL!
upvoted 4 times

  mhmt4438 8 months, 2 weeks ago


Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/27868-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
  Aninina 8 months, 2 weeks ago
Selected Answer: C
C. Configure scheduled scaling to scale up to the desired compute level.

By configuring scheduled scaling, the solutions architect can set the Auto Scaling group to automatically scale up to the desired compute
level at a specific time (1AM) when the batch job starts and then automatically scale down after the job is complete. This will allow the
desired EC2 capacity to be reached quickly and also help in reducing the cost.
upvoted 4 times
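A minimal boto3 sketch of such scheduled actions, assuming the batch window is 1 AM UTC and that the group name and scale-down time are hypothetical; capacity is raised shortly before the jobs start and dropped again afterwards:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out to the known peak a few minutes before the 1 AM batch window.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",           # hypothetical group name
    ScheduledActionName="pre-warm-for-batch",
    Recurrence="55 0 * * *",                    # 00:55 UTC every day (assumed timezone)
    DesiredCapacity=20,
)

# Scale back in after the batch jobs are done (the time is an assumption).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-asg",
    ScheduledActionName="scale-down-after-batch",
    Recurrence="0 4 * * *",
    DesiredCapacity=2,
)
```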

  bamishr 8 months, 3 weeks ago


Selected Answer: C
Configure scheduled scaling to scale up to the desired compute level.
upvoted 1 times

  Morinator 8 months, 3 weeks ago


Selected Answer: C
predictable = scheduled scaling
upvoted 3 times
Question #272 Topic 1

A company serves a dynamic website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The website needs to
support multiple languages to serve customers around the world. The website’s architecture is running in the us-west-1 Region and is exhibiting
high request latency for users that are located in other parts of the world.

The website needs to serve requests quickly and efficiently regardless of a user’s location. However, the company does not want to recreate the
existing architecture across multiple Regions.

What should a solutions architect do to meet these requirements?

A. Replace the existing architecture with a website that is served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution
with the S3 bucket as the origin. Set the cache behavior settings to cache based on the Accept-Language request header.

B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to cache based on the Accept-
Language request header.

C. Create an Amazon API Gateway API that is integrated with the ALB. Configure the API to use the HTTP integration type. Set up an API
Gateway stage to enable the API cache based on the Accept-Language request header.

D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the EC2 instances
and the ALB behind an Amazon Route 53 record set with a geolocation routing policy.

Correct Answer: B

Community vote distribution


B (100%)

  Yechi Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Configuring caching based on the language of the viewer
If you want CloudFront to cache different versions of your objects based on the language specified in the request, configure CloudFront to
forward the Accept-Language header to your origin.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
upvoted 7 times
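A sketch of how that header-based caching could be wired up with boto3: create a cache policy that includes the Accept-Language header in the cache key (and forwards it to the ALB origin), then reference the returned policy ID from the distribution's cache behavior. The policy name and TTLs are hypothetical.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Cache policy that keys (and forwards) on the Accept-Language request header,
# so each language variant of a page is cached separately at the edge.
policy = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "cache-by-accept-language",   # hypothetical policy name
        "MinTTL": 0,
        "DefaultTTL": 300,
        "MaxTTL": 3600,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Accept-Language"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }
)["CachePolicy"]

# This Id would then be used as CachePolicyId in the distribution's
# DefaultCacheBehavior, with the existing public ALB configured as the origin.
print(policy["Id"])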

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: B
By caching content based on the Accept-Language request header, CloudFront can serve the appropriate version of the website to users
based on their language preferences. This solution allows the company to improve the website’s performance for users around the world
without having to recreate the existing architecture in multiple Regions.
upvoted 2 times

  A1975 2 months ago


Selected Answer: B
CloudFront allows you to customize cache behavior based on various request headers. By setting the cache behavior to cache based on
the Accept-Language request header, CloudFront can store and serve language-specific versions of the website content, reducing the
need to repeatedly fetch the content from the ALB for users with the same language preference.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-language
upvoted 1 times

  vherman 7 months ago


Selected Answer: B
B is correct
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
I think it's b
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the correct answer
upvoted 1 times
Question #273 Topic 1

A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery
(DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible
latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary.

Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?

A. Use an Amazon Aurora global database with a pilot light deployment.

B. Use an Amazon Aurora global database with a warm standby deployment.

C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment.

D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment.

Correct Answer: B

Community vote distribution


B (97%)

  nickolaj Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Option A is incorrect because while Amazon Aurora global database is a good solution for disaster recovery, pilot light deployment
provides only a minimalistic setup and would require manual intervention to make the DR Region fully operational, which increases the
recovery time.

Option B is a better choice than Option A as it provides a warm standby deployment, which is an automated and more scalable setup than
pilot light deployment. In this setup, the database is replicated to the DR Region, and the standby instance can be brought up quickly in
case of a disaster.

Option C is incorrect because Multi-AZ DB instances provide high availability, not disaster recovery.

Option D is a good choice for high availability, but it does not meet the requirement for DR in a different region with the least possible
latency.
upvoted 16 times

  Yechi Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Note: The difference between pilot light and warm standby can sometimes be difficult to understand. Both include an environment in your
DR Region with copies of your primary Region assets. The distinction is that pilot light cannot process requests without additional action
taken first, whereas warm standby can handle traffic (at reduced capacity levels) immediately. The pilot light approach requires you to
“turn on” servers, possibly deploy additional (non-core) infrastructure, and scale up, whereas warm standby only requires you to scale up
(everything is already deployed and running). Use your RTO and RPO needs to help you choose between these approaches.

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 14 times
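For the database half of that warm-standby setup, a hedged boto3 sketch of adding a secondary Region to an existing Aurora cluster via a global database; all identifiers are hypothetical, and the reduced-capacity application tier would be deployed separately in the DR Region:

```python
import boto3

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_dr = boto3.client("rds", region_name="us-west-2")

# Wrap the existing primary cluster in a global database.
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="ecommerce-global",      # hypothetical
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:ecommerce-primary",  # hypothetical ARN
)

# Add a secondary, continuously replicated cluster in the DR Region.
rds_dr.create_db_cluster(
    DBClusterIdentifier="ecommerce-dr",              # hypothetical
    Engine="aurora-mysql",
    GlobalClusterIdentifier="ecommerce-global",
)

# At least one instance so the DR cluster can serve reads and be promoted quickly.
rds_dr.create_db_instance(
    DBInstanceIdentifier="ecommerce-dr-1",
    DBClusterIdentifier="ecommerce-dr",
    DBInstanceClass="db.r6g.large",                  # assumed instance class
    Engine="aurora-mysql",
)
```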

  TariqKipkemei Most Recent  3 days ago


Selected Answer: B
The warm standby approach involves ensuring that there is a scaled down, but fully functional, copy of your production environment in
another Region.
With the pilot light approach, you replicate your data from one Region to another and provision a copy of your core workload
infrastructure. Resources required to support data replication and backup, such as databases and object storage, are always on. Other
elements, such as application servers, are loaded with application code and configurations, but are "switched off".
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: B
An Amazon Aurora global database with a warm standby deployment provides continuous replication from one AWS Region to another,
keeping the DR database up-to-date with minimal latency.
upvoted 1 times

  A1975 2 months ago


Selected Answer: B
In a Pilot Light scenario, only an EC2 Instance and a DB may be running. In Warm Standby, however, everything is running — in a much
smaller capacity. This means the load balancer, gateways, databases, all subnets, and everything else are ready to go on a moment's
notice.

With reference to the statement below, Option B is a better choice than Option A.


"The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary".
upvoted 1 times
  krisfromtw 7 months, 2 weeks ago
Selected Answer: D
should be D.
upvoted 1 times

  leoattf 7 months, 1 week ago


No, my friend. The question asks for deployment in another Region. Hence, it cannot be C or D.
The answer is B because it is global (different Regions) and Warm Standby has a faster RTO than Pilot Light.
upvoted 7 times
Question #274 Topic 1

A company runs an application on Amazon EC2 instances. The company needs to implement a disaster recovery (DR) solution for the application.
The DR solution needs to have a recovery time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible AWS
resources during normal operations.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS Lambda and custom scripts.

B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS CloudFormation.

C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary Region active at all times.

D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary Availability Zone active at all times.

Correct Answer: D

Community vote distribution


B (100%)

  NolaHOla Highly Voted  7 months, 2 weeks ago


Guys, sorry but I don't really have time to deepdive as my exam is soon. Based on chatGPT and my previous study the answer should be B
"Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS CloudFormation," would likely be the most suitable solution for the given
requirements.

This option allows for the creation of Amazon Machine Images (AMIs) to back up the EC2 instances, which can then be copied to a
secondary AWS region to provide disaster recovery capabilities. The infrastructure deployment in the secondary region can be automated
using AWS CloudFormation, which can help to reduce the amount of time and resources needed for deployment and management.
upvoted 7 times

  NBone 2 months, 1 week ago


please how do you use chatGPT to study for these questions?
upvoted 1 times

  vijaykamal Most Recent  2 days, 15 hours ago


Selected Answer: B
Option D suggests launching EC2 instances in a secondary Availability Zone (AZ), but AZs are not separate AWS Regions. While it provides
high availability within a Region, it doesn't offer geographic redundancy, which is essential for disaster recovery.
upvoted 1 times

  TariqKipkemei 3 days ago


Selected Answer: B
needs to use the fewest possible AWS resources during normal operations = backup & restore
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: B
Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure
deployment in the secondary Region by using AWS CloudFormation
upvoted 1 times

  AMYMY 3 weeks, 5 days ago


B SHOULD BE RIGHT
upvoted 1 times

  A1975 2 months ago


Selected Answer: B
Option A: Add complexity and management overhead.

Option B: Creating AMIs for backup and using AWS CloudFormation for infrastructure deployment in the secondary Region is a more
streamlined and automated approach. CloudFormation allows you to define and provision resources in a declarative manner, making it
easier to maintain and update your infrastructure. This solution is more operationally efficient compared to Option A.

Option C: could be expensive and not fully aligned with the requirement of using the fewest possible AWS resources during normal
operations.
Option D: might not be sufficient for meeting the DR requirements, as Availability Zones are still within the same AWS Region and might
be subject to the same regional-level failures.
upvoted 1 times
  NBone 2 months, 1 week ago
Please, I would really appreciate clarification on this question. The community has voted 100% that the right answer is B. However, option
D is shown as the correct answer. So who sets the correct answer? Which one should newcomers like myself believe: the community's,
or the other one (which I am guessing is set by the moderators)? Please help.
upvoted 1 times

  SimiTik 5 months, 1 week ago


While C may satisfy the requirement of using the fewest possible AWS resources during normal operations, it may not be the most operationally
efficient or cost-effective solution in the long term.
upvoted 2 times

  AlmeroSenior 7 months, 1 week ago


So weird: they have a product for this (AWS Elastic Disaster Recovery), but that option is not given.
upvoted 1 times

  Yechi 7 months, 2 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/zh_cn/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-
cloud.html#backup-and-restore
upvoted 3 times

  nickolaj 7 months, 2 weeks ago


Selected Answer: B
Option B would be the most operationally efficient solution for implementing a DR solution for the application, meeting the requirement
of an RTO of less than 4 hours and using the fewest possible AWS resources during normal operations.

By creating Amazon Machine Images (AMIs) to back up the EC2 instances and copying them to a secondary AWS Region, the company can
ensure that they have a reliable backup in the event of a disaster. By using AWS CloudFormation to automate infrastructure deployment in
the secondary Region, the company can minimize the amount of time and effort required to set up the DR solution.
upvoted 4 times
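To make the backup-and-restore flow concrete, a hedged sketch of the two moving parts: copying the AMI to the DR Region on a schedule, and, only during an actual recovery, standing the stack up there with CloudFormation. All names, IDs, and the template URL are hypothetical.

```python
import boto3

ec2_dr = boto3.client("ec2", region_name="us-west-2")
cfn_dr = boto3.client("cloudformation", region_name="us-west-2")

# 1) Regularly copy the latest AMI from the primary Region into the DR Region.
copy = ec2_dr.copy_image(
    Name="app-server-backup",
    SourceImageId="ami-0123456789abcdef0",   # hypothetical AMI in the primary Region
    SourceRegion="us-east-1",
)
dr_ami_id = copy["ImageId"]

# 2) During a disaster, deploy the pre-written template in the DR Region,
#    passing the copied AMI ID as a parameter. Nothing runs until then,
#    which keeps normal-operations cost close to zero.
cfn_dr.create_stack(
    StackName="app-dr-stack",                                       # hypothetical
    TemplateURL="https://s3.amazonaws.com/dr-templates/app.yaml",   # hypothetical template
    Parameters=[{"ParameterKey": "AmiId", "ParameterValue": dr_ami_id}],
    Capabilities=["CAPABILITY_IAM"],
)
```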

  Joan111edu 7 months, 2 weeks ago


Selected Answer: B
the answer should be B
--->recovery time objective (RTO) of less than 4 hours.

https://docs.aws.amazon.com/zh_cn/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-
cloud.html#backup-and-restore
upvoted 3 times
Question #275 Topic 1

A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The
instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during
work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs
well by mid-morning.

How should the scaling be changed to address the staff complaints and keep costs to a minimum?

A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.

C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.

D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.

Correct Answer: A

Community vote distribution


C (67%) A (33%)

  asoli Highly Voted  6 months, 2 weeks ago


Selected Answer: C
At first, I thought the answer is A. But it is C.

It seems that there is no information in the question about CPU or Memory usage.
So, we might think the answer is A. why? because what we need is to have the required (desired) number of instances. It already has
scheduled scaling that works well in this scenario. Scale down after working hours and scale up in working hours. So, it just needs to
adjust the desired number to start from 20 instances.

But here is the point it shows A is WRONG!!!


If it started with a desired capacity of 20 instances, it would keep that for the whole day. What if the load is reduced? We do not need to keep the 20
instances always. That 20 is the MAXIMUM number we need, not the DESIRED number. So it works against COST, which is the main objective of this
question.

So, the answer is C


upvoted 11 times

  mandragon 4 months, 3 weeks ago


If it stars with 20 instances it will not keep it all day. It will scale down based on demand. The scheduled action in Option A simply
ensures that there are enough instances running to handle the increased traffic when the day begins, while still allowing the Auto
Scaling group to scale up or down based on demand during the rest of the day.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/scale-your-group.html
upvoted 5 times

  TariqKipkemei Most Recent  3 days ago


To keep costs to a minimum target tracking is the best option.
For example the scaling metric is the average CPU utilization of the EC2 auto scaling instances, and their average during the day should
always be 80%. When CloudWatch detects that the average CPU utilization is beyond 80% at start of day, it will trigger the target tracking
policy to scale out the auto scaling group to meet this target utilization. Once everything is settled and the average CPU utilization has
gone below 80% at night, another scale in action will kick in and reduce the number of auto scaling instances in the auto scaling group.
upvoted 1 times

  TariqKipkemei 3 days ago


Option C is best
upvoted 1 times

  Ramdi1 1 week, 3 days ago


Selected Answer: A
I am going with A based on it stating "up to 20," so you already know the maximum they use, which is in a sense consistent. However, I can
see why people have put C. I think the questions need more clarification.
upvoted 1 times

  Uzbekistan 2 weeks, 3 days ago


Selected Answer: A
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
upvoted 1 times

  Uzbekistan 2 weeks, 3 days ago


CHATGPT says Answers is A
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
upvoted 1 times

  BrijMohan08 3 weeks, 5 days ago


Selected Answer: A
Scaling Out: In the morning when you schedule the AWS EC2 scaling to have a minimum and maximum of 20 instances, if the load on your
application increases beyond the current number of instances, AWS Auto Scaling will automatically launch new instances to meet the
demand up to the maximum of 20 instances.

Scaling In: As the load on your application decreases in the afternoon or night, AWS Auto Scaling will continuously monitor the health and
load of your instances. If the instances are underutilized and can be terminated without affecting your application's performance, AWS
Auto Scaling will automatically scale in by terminating excess instances,

Why not D? If you specify the min instance, AWS will always keep the minimum number of instances (20 in this case) running.
upvoted 2 times

  LazyTs 3 weeks, 5 days ago


It's A. C will not be fast enough with the sudden influx of users; if C were fast enough, the original scenario would already be good
enough, since 20 is already the maximum that is set to be reached during working hours (when CPU starts to spin up).

  kapalulz 1 month, 3 weeks ago


Selected Answer: C
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period
upvoted 1 times

  Mia2009687 2 months, 3 weeks ago


Selected Answer: C
I was in team A. But from the definition of desired capacity, it seems once we set it as 20, it will try to keep it as 20 which is not saving cost.

Desired capacity: Represents the initial capacity of the Auto Scaling group at the time of creation. An Auto Scaling group attempts to
maintain the desired capacity. It starts by launching the number of instances that are specified for the desired capacity, and maintains this
number of instances as long as there are no scaling policies or scheduled actions attached to the Auto Scaling group.
upvoted 2 times

  DrWatson 4 months ago


Selected Answer: A
https://docs.aws.amazon.com/autoscaling/ec2/userguide/consolidated-view-of-warm-up-and-cooldown-settings.html
DefaultCooldown
Only needed if you use simple scaling policies.
API operation: CreateAutoScalingGroup, UpdateAutoScalingGroup
The amount of time, in seconds, between one scaling activity ending and another one starting due to simple scaling policies. For more
information, see Scaling cooldowns for Amazon EC2 Auto Scaling (https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-
scaling-scaling-cooldowns.html)
Default: 300 seconds.
upvoted 1 times

  Konb 4 months, 1 week ago


Selected Answer: A
I think the "cost" part that talks against A is a catch. No information why the EC2s are slow - maybe it's not CPU?

On the other hand we know that "Auto Scaling group scales up to 20 instances during work hours". A seems to be the only option that
kinda satisfies requirements.
upvoted 1 times

  xmark443 4 months, 2 weeks ago


There may be days when the demand is lower, so scheduled scaling costs more than target tracking.
upvoted 1 times

  justhereforccna 4 months, 3 weeks ago


Selected Answer: A
Have to go with A on this one
upvoted 1 times

  kruasan 5 months ago


Selected Answer: C
This option will scale up capacity faster in the morning to improve performance, but will still allow capacity to scale down during off hours.
It achieves this as follows:
• A target tracking action scales based on a CPU utilization target. By triggering at a lower CPU threshold in the morning, the Auto Scaling
group will start scaling up sooner as traffic ramps up, launching instances before utilization gets too high and impacts performance.
• Decreasing the cooldown period allows Auto Scaling to scale more aggressively, launching more instances faster until the target is
reached. This speeds up the ramp-up of capacity.
• However, unlike a scheduled action to set a fixed minimum/maximum capacity, with target tracking the group can still scale down during
off hours based on demand. This helps minimize costs.
upvoted 2 times
  Dr_Chomp 5 months, 3 weeks ago
Selected Answer: A
I'm going with A - it tells us that 20 instances is the normal capacity during the work day - so scheduling that at the start of the work day
means you don't need to put load on the system to trigger scale-out. So this is like a warm start. Cool down has nothing to do with
anything and it doesn't mention anything about CPU/resources for target setting.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: C
How should the scaling be changed to address the staff complaints and keep costs to a minimum? "Option C" scaling based on metrics
and with the combination of reducing the cooldown the cost part is addressed.
upvoted 1 times

  dcp 6 months, 2 weeks ago


Selected Answer: A
I will go with A based on this "The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling
group scales up to 20 instances during work hours, but scales down to 2 instances overnight."
Setting the instances to 20 before the office hours start should address the issue.
upvoted 1 times

  kraken21 6 months ago


What about the cost part: "How should the scaling be changed to address the staff complaints and keep costs to a minimum?" By
scaling to 20 instances you are wasting instance cost. C is a better option.
upvoted 1 times
Question #276 Topic 1

A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance
is the application’s data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing
the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics
and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable
rate before leveling off.

What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Choose two.)

A. Configure storage Auto Scaling on the RDS for Oracle instance.

B. Migrate the database to Amazon Aurora to use Auto Scaling storage.

C. Configure an alarm on the RDS for Oracle instance for low free storage space.

D. Configure the Auto Scaling group to use the average CPU as the scaling metric.

E. Configure the Auto Scaling group to use the average free memory as the scaling metric.

Correct Answer: AC

Community vote distribution


AD (90%) 10%

  klayytech Highly Voted  6 months ago


Selected Answer: AD
A) Configure storage Auto Scaling on the RDS for Oracle instance.
= Makes sense. With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the
rest.

B) Migrate the database to Amazon Aurora to use Auto Scaling storage.


= Scenario specifies application's data layer uses Oracle-specific PL/SQL functions. This rules out migration to Aurora.

C) Configure an alarm on the RDS for Oracle instance for low free storage space.
= You could do this but what does it fix? Nothing. The CW notification isn't going to trigger anything.

D) Configure the Auto Scaling group to use the average CPU as the scaling metric.
= Makes sense. The CPU utilization is the precursor to the storage outage. When the ec2 instances are overloaded, the RDS instance
storage hits its limits, too.
upvoted 10 times
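
Building on the explanation above, here is a minimal boto3 sketch of option A; the instance identifier and the 2,000 GiB ceiling are placeholders, not values from the question. Option D would be a target-tracking policy on ASGAverageCPUUtilization, like the one sketched under the previous question.

import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables RDS storage autoscaling: RDS grows the
# allocated storage automatically, up to this ceiling, when free space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-oracle-db",  # placeholder identifier
    MaxAllocatedStorage=2000,                 # ceiling in GiB (placeholder)
    ApplyImmediately=True,
)
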

  vijaykamal Most Recent  2 days, 15 hours ago


Selected Answer: AD
Option B (Migrate the database to Amazon Aurora) may be a good long-term solution, but it involves database migration, which can be
complex and time-consuming. For immediate scalability and to address the storage issue, configuring storage Auto Scaling on the existing
RDS instance is a more immediate and straightforward solution.

Option C (Configure an alarm on the RDS for Oracle instance for low free storage space) is useful for monitoring, but it doesn't proactively
address the storage issue by automatically expanding storage as needed.

Option E (Configure the Auto Scaling group to use the average free memory as the scaling metric) is less common as a scaling metric for
EC2 instances compared to CPU utilization. While memory can be an important factor for application performance, CPU utilization is
typically a more commonly used metric for scaling decisions. It also doesn't directly address the RDS storage issue.
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: AD
A. By enabling storage Auto Scaling on the RDS for Oracle instance, it will automatically add more storage when the existing storage is
running out, ensuring the application's data layer can handle the increased data storage requirements.

D. By configuring the Auto Scaling group to use the average CPU utilization as the scaling metric, it can automatically add more EC2
instances to the Auto Scaling group when the CPU utilization exceeds a certain threshold. This will help handle the increased traffic and
workload on the EC2 instances in the multi-tier application.
upvoted 1 times

  A1975 2 months ago


Selected Answer: AD
A. By enabling storage Auto Scaling on the RDS for Oracle instance, it will automatically add more storage when the existing storage is
running out, ensuring the application's data layer can handle the increased data storage requirements.
D. By configuring the Auto Scaling group to use the average CPU utilization as the scaling metric, it can automatically add more EC2
instances to the Auto Scaling group when the CPU utilization exceeds a certain threshold. This will help handle the increased traffic and
workload on the EC2 instances in the multi-tier application.
upvoted 1 times
  kruasan 5 months ago
Selected Answer: AD
These options will allow the system to scale both the compute tier (EC2 instances) and the data tier (RDS storage) automatically as traffic
increases:
A. Storage Auto Scaling will allow the RDS for Oracle instance to automatically increase its allocated storage when free storage space gets
low. This ensures the database does not run out of capacity and can continue serving data to the application.
D. Configuring the EC2 Auto Scaling group to scale based on average CPU utilization will allow it to launch additional instances
automatically as traffic causes higher CPU levels across the instances. This scales the compute tier to handle increased demand.
upvoted 2 times

  kraken21 6 months ago


Selected Answer: AD
Auto scaling storage RDS will ease storage issues and migrating Oracle Pl/Sql to Aurora is cumbersome. Also Aurora has auto storage
scaling by default.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
upvoted 2 times

  Nel8 7 months ago


Selected Answer: BD
My answer is B & D...
B. Migrate the database to Amazon Aurora to use Auto Scaling Storage. --- Aurora storage is also self-healing. Data blocks and disks are
continuously scanned for errors and repaired automatically.
D. Configure the Auto Scaling group to use the average CPU as the scaling metric. -- Good choice.

I believe either A & C or B & D options will work.


upvoted 3 times

  FourOfAKind 7 months ago


In this question, you have Oracle DB, and Amazon Aurora is for MySQL/PostgreSQL. A and D are the correct choices.
upvoted 5 times

  dcp 6 months, 2 weeks ago


You can migrate Oracle PL/SQL to Aurora:
https://docs.aws.amazon.com/dms/latest/oracle-to-aurora-mysql-migration-playbook/chap-oracle-aurora-mysql.sql.html
upvoted 1 times

  dcp 6 months, 2 weeks ago


I still think A is the answer, because RDS for Oracle auto scaling once enabled it will automatically adjust the storage capacity.
upvoted 1 times

  Ja13 7 months, 1 week ago


Selected Answer: AD
a and d
upvoted 3 times

  KZM 7 months, 1 week ago


A and D.
upvoted 3 times

  GwonLEE 7 months, 1 week ago


Selected Answer: AD
a and d
upvoted 3 times

  LuckyAro 7 months, 1 week ago


Selected Answer: AD
A and D
upvoted 2 times

  Joan111edu 7 months, 2 weeks ago


Selected Answer: AD
https://www.examtopics.com/discussions/amazon/view/46534-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  ChrisG1454 7 months, 2 weeks ago


answer is A and D
upvoted 1 times

  ChrisG1454 7 months, 2 weeks ago


https://www.examtopics.com/discussions/amazon/view/46534-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  rrharris 7 months, 2 weeks ago


A and D are the Answers
upvoted 1 times
Question #277 Topic 1

A company provides an online service for posting video content and transcoding it for use by any mobile platform. The application architecture
uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that multiple Amazon EC2 Linux instances can access
the video content for processing. As the popularity of the service has grown over time, the storage costs have become too expensive.

Which storage solution is MOST cost-effective?

A. Use AWS Storage Gateway for files to store and process the video content.

B. Use AWS Storage Gateway for volumes to store and process the video content.

C. Use Amazon EFS for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon
EBS).

D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume
attached to the server for processing.

Correct Answer: A

Community vote distribution


D (76%) A (24%)

  bdp123 Highly Voted  7 months, 2 weeks ago


Selected Answer: D
Storage gateway is not used for storing content - only to transfer to the Cloud
upvoted 14 times

  kraken21 Highly Voted  6 months ago


Selected Answer: D
There is no on-prem/non Aws infrastructure to create a gateway. Also, EFS+EBS is more expensive that EFS and S3. So D is the best option.
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: D
Amazon S3 provides low-cost object storage for storing large amounts of unstructured data like videos. The videos can be stored in S3
durably and reliably.

For processing, the video files can be temporarily copied from S3 to an EBS volume attached to the EC2 instance. EBS provides low latency
block storage for high performance video processing.

Once processing is complete, the output can be stored back in S3.


upvoted 1 times
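
As a rough sketch of the option D workflow described above, the snippet below pulls a video from S3 onto EBS-backed scratch space, processes it, and writes the result back to S3. The bucket, keys, and local paths are placeholders, and transcode() is only a stand-in for the real processing step.

import boto3

s3 = boto3.client("s3")

BUCKET = "video-content-bucket"         # placeholder bucket name
SOURCE_KEY = "incoming/clip.mp4"        # placeholder object keys
OUTPUT_KEY = "processed/clip.mp4"
LOCAL_IN = "/mnt/scratch/clip.mp4"      # paths on the EBS-backed scratch volume
LOCAL_OUT = "/mnt/scratch/clip-out.mp4"


def transcode(src: str, dst: str) -> None:
    """Placeholder for the actual video-processing step."""
    ...


# 1. Copy the object from low-cost S3 storage to the EBS scratch volume.
s3.download_file(BUCKET, SOURCE_KEY, LOCAL_IN)

# 2. Process it locally on the instance.
transcode(LOCAL_IN, LOCAL_OUT)

# 3. Store the result back in S3 so the scratch space can be reclaimed.
s3.upload_file(LOCAL_OUT, BUCKET, OUTPUT_KEY)
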

  bjexamprep 1 month, 3 weeks ago


Selected Answer: D
The question doesn't give enough information. Well, quite a few AWS exam questions don't provide enough info.
Ideally, A could be the best answer if it mentions S3 as the backend of storage gateway. Because if it doesn't mention S3 as the backend,
that implies either Storage Gateway as the storage (which is impossible) or continuing to use EFS (also impossible).
D is not ideal, because it will introduce video download cost for downloading files from S3 to EBS temporary storage. But it is the best
option we have.
upvoted 1 times

  Undisputed 2 months ago


Selected Answer: D
A more cost-effective storage solution for this scenario would be Amazon Simple Storage Service (Amazon S3). Amazon S3 is an object
storage service that offers high scalability, durability, and availability at a lower cost compared to Amazon EFS. By using Amazon S3, you
only pay for the storage you use, and it is typically more cost-efficient for scenarios where data is accessed less frequently, such as video
storage for processing.
upvoted 1 times

  smartegnine 3 months, 2 weeks ago


Selected Answer: A
The result should be A.

Amazon Storage Gateway has 4 types: S3 File Gateway, FSx File Gateway, Tape Gateway, and Volume Gateway.

If no specific type is referenced, "file gateway" should default to the S3 File Gateway, which sends files over to S3, the most cost-effective storage in AWS.
Why not D? The reason is the last sentence: there are multiple EC2 servers processing the video, and an EBS volume can only attach to 1 EC2 instance
at a time, so if you use EBS you will need 1 EBS volume for each EC2 instance. This rules out D.
upvoted 1 times

  argl1995 3 months ago


We can use multi-attach feature of EBS to attach one EBS volume to multiple Ec2 instances
upvoted 2 times

  RainWhisper 3 months, 1 week ago


AWS Storage Gateway = extend storage to onprem
upvoted 1 times

  MostafaWardany 3 months, 3 weeks ago


Selected Answer: D
D: MOST cost-effective of these options = S3
upvoted 1 times

  omoakin 4 months, 1 week ago


CCCCCCCCCCCCCCC
upvoted 1 times

  kruasan 5 months ago


Selected Answer: D
he most cost-effective storage solution in this scenario would be:
D. Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume
attached to the server for processing.
This option provides the lowest-cost storage by using:
• Amazon S3 for large-scale, durable, and inexpensive storage of the video content. S3 storage costs are significantly lower than EFS.
• Amazon EBS only temporarily during processing. By mounting an EBS volume only when a video needs to be processed, and
unmounting it after, the time the content spends on the higher-cost EBS storage is minimized.
• The EBS volume can be sized to match the workload needs for active processing, keeping costs lower. The volume does not need to store
the entire video library long-term.
upvoted 1 times

  GalileoEC2 6 months, 1 week ago


Option A, which uses AWS Storage Gateway for files to store and process the video content, would be the most cost-effective solution.

With this approach, you would use an AWS Storage Gateway file gateway to access the video content stored in Amazon S3. The file
gateway presents a file interface to the EC2 instances, allowing them to access the video content as if it were stored on a local file system.
The video processing tasks can be performed on the EC2 instances, and the processed files can be stored back in S3.

This approach is cost-effective because it leverages the lower cost of Amazon S3 for storage while still allowing for easy access to the video
content from the EC2 instances using a file interface. Additionally, Storage Gateway provides caching capabilities that can further improve
performance by reducing the need to access S3 directly.
upvoted 1 times

  scs50 6 months, 2 weeks ago


Selected Answer: A
Amazon S3 File gateway is using S3 behind the scene.
https://docs.aws.amazon.com/filegateway/latest/files3/what-is-file-s3.html
upvoted 1 times

  CapJackSparrow 6 months, 3 weeks ago


Amazon S3 File Gateway

Amazon S3 File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS
and SMB file protocols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those files as objects
directly in Amazon S3. POSIX-style metadata, including ownership, permissions, and timestamps are durably stored in Amazon S3 in the
user-metadata of the object associated with the file. Once objects are transferred to S3, they can be managed as native S3 objects and
bucket policies such as lifecycle management and Cross-Region Replication (CRR), and can be applied directly to objects stored in your
bucket. Amazon S3 File Gateway also publishes audit logs for SMB file share user operations to Amazon CloudWatch.

Customers can use Amazon S3 File Gateway to back up on-premises file data as objects in Amazon S3 (including Microsoft SQL Server and
Oracle databases and logs), and for hybrid cloud workflows using data generated by on-premises applications for processing by AWS
services such as machine learning or big data analytics.
upvoted 1 times

  Brak 6 months, 4 weeks ago


Selected Answer: A
It can't be D, since there are multiple servers accessing the video files which rules out EBS. File Gateway provides a shared filesystem to
replace EFS, but uses S3 for storage to reduce costs.
upvoted 5 times

  KZM 7 months, 1 week ago


Using Amazon S3 for storing video content is the best way for cost-effectiveness, I think. But I am still confused about why the data is
moved to EBS.
upvoted 3 times

  KZM 7 months, 1 week ago


A better solution would be to use a transcoding service like Amazon Elastic Transcoder to process the video content directly from
Amazon S3. This would eliminate the need for storing the content on an EBS volume, reduce storage costs, and simplify the
architecture by removing the need for managing EBS volumes.
upvoted 2 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: A
A looks right . File Gateway is S3 , but exposes it as NFS/SMB . So no need for costly retrieval like option D , or C consuming expensive EBS .
upvoted 2 times

  AlmeroSenior 7 months, 1 week ago


A looks right . File Gateway is S3 , but exposes it as NFS/SMB . So no need for costly retrieval like option D , or C consuming expensive EBS .
upvoted 1 times

  NolaHOla 7 months, 1 week ago


Can someone please explain or provide information on why not C? If we go with option D, it states that we store the content in S3, which is
indeed cheaper, but then we move it to EBS for processing. How are multiple Linux instances going to process the same videos from EBS
when they can't read them simultaneously?
Whereas with option C, we keep the EFS, process from there, and then move the files to EBS for reading? That seems more logical to me.
upvoted 1 times

  asoli 6 months, 2 weeks ago


EFS has a lower cost than EBS in general. So, moving from EFS to EBS will not reduce cost
upvoted 1 times
Question #278 Topic 1

A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency
response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email
messages if any financial information is present in the employee data.

Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.

B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.

C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.

D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards
and share the dashboards with users.

E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon
Simple Notification Service (Amazon SNS) subscription.

Correct Answer: CD

Community vote distribution


BE (100%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


Selected Answer: BE
Data in hierarchies : Amazon DynamoDB
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.

Sensitive Info: Amazon Macie


E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an
Amazon Simple Notification Service (Amazon SNS) subscription.
upvoted 9 times

  gold4otas 6 months, 1 week ago


Can someone please provide explanation why options "B" & "C" are the correct options?
upvoted 1 times

  smartegnine 3 months, 2 weeks ago


C is only half a solution: once the event is sent to Lambda, what happens next? It should send an email, but the option does not say so.
upvoted 1 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: BE
B and E are the steps to meet all of the requirements.

B meets the need to store hierarchical employee data in DynamoDB for low latency queries at high traffic. DynamoDB can handle the
access patterns for hierarchical data. Exporting to S3 monthly provides an audit trail.

E sets up Macie to analyze sensitive data and integrate with EventBridge to trigger monthly SNS notifications when financial data is
present.
upvoted 1 times
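
For the option E wiring mentioned above, a hedged boto3 sketch is shown below: an EventBridge rule that matches Amazon Macie findings and forwards them to an SNS topic with email subscribers. The rule name and topic ARN are placeholders; a strictly monthly digest could instead be driven by a scheduled rule, which is not shown here.

import json

import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:macie-findings"  # placeholder ARN

# Rule that matches findings published by Macie to the default event bus.
events.put_rule(
    Name="macie-sensitive-data-findings",  # placeholder rule name
    EventPattern=json.dumps(
        {"source": ["aws.macie"], "detail-type": ["Macie Finding"]}
    ),
)

# Send matching events to the SNS topic, whose email subscribers get notified.
events.put_targets(
    Rule="macie-sensitive-data-findings",
    Targets=[{"Id": "sns-email", "Arn": TOPIC_ARN}],
)
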

  A1975 2 months ago


Selected Answer: BE
]B. Amazon DynamoDB is a fully managed NoSQL database service that provides low-latency, high-performance storage for hierarchical
data. It handle high-traffic queries and delivering fast responses to retrieve employee data efficiently.

E. Amazon Macie is a service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Integrating
Macie with Amazon EventBridge allows you to receive events whenever any financial information is identified in the employee data. By
using Amazon SNS, you can receive these notifications via email.
upvoted 1 times

  cesargalindo123 3 months, 1 week ago


AE
https://aws.amazon.com/es/blogs/big-data/query-hierarchical-data-models-within-amazon-redshift/
upvoted 1 times

  kruasan 5 months ago


Selected Answer: BE
, the combination of DynamoDB for fast data queries, S3 for durable storage and backups, Macie for sensitive data monitoring, and
EventBridge + SNS for email notifications satisfies all needs: fast query response, sensitive data protection, and monthly alerts. The
solutions architect should implement DynamoDB with export to S3, and configure Macie with integration to send SNS email notifications.
upvoted 1 times

  kruasan 5 months ago


Generally, for building a hierarchical relationship model, a graph database such as Amazon Neptune is a better choice. In some cases,
however, DynamoDB is a better choice for hierarchical data modeling because of its flexibility, security, performance, and scale.

https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-hierarchical-data-model/introduction.html
upvoted 2 times

  darn 5 months, 1 week ago


why Dynamo and not Redshift?
upvoted 2 times

  kruasan 5 months ago


3. Hierarchical data - DynamoDB supports hierarchical (nested) data structures well in a NoSQL data model. Defining hierarchical
employee data may be more complex in Redshift's columnar SQL data warehouse structure. DynamoDB is built around flexible data
schemas that can represent complex relationships.
4. Data export - Both DynamoDB and Redshift allow exporting data to S3, so that requirement could be met with either service.
However, overall DynamoDB is the better fit based on the points above regarding latency, scalability, and support for hierarchical data.
upvoted 4 times

  kruasan 5 months ago


1. Low latency - DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with single-digit
millisecond latency. Redshift is a data warehouse solution optimized for complex analytical queries, so query latency would typically be
higher. Since the requirements specify minimum latency for high-traffic queries, DynamoDB is better suited.
2. Scalability - DynamoDB is highly scalable, able to handle very high read and write throughput with no downtime. Redshift also scales,
but may experience some downtime during rescale operations. For a high-traffic application, DynamoDB's scalability and availability
are better matched.
upvoted 2 times

  PRASAD180 7 months, 1 week ago


BE is correct, 100%
upvoted 1 times

  KZM 7 months, 1 week ago


B and E
To send monthly email messages, an SNS service is required.
upvoted 2 times

  skiwili 7 months, 2 weeks ago


Selected Answer: BE
B and E
upvoted 3 times
Question #279 Topic 1

A company has an application that is backed by an Amazon DynamoDB table. The company’s compliance requirements specify that database
backups must be taken every month, must be available for 6 months, and must be retained for 7 years.

Which solution will meet these requirements?

A. Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the
backup to cold storage after 6 months. Set the retention period for each backup to 7 years.

B. Create a DynamoDB on-demand backup of the DynamoDB table on the first day of each month. Transition the backup to Amazon S3 Glacier
Flexible Retrieval after 6 months. Create an S3 Lifecycle policy to delete backups that are older than 7 years.

C. Use the AWS SDK to develop a script that creates an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that
runs the script on the first day of each month. Create a second script that will run on the second day of each month to transition DynamoDB
backups that are older than 6 months to cold storage and to delete backups that are older than 7 years.

D. Use the AWS CLI to create an on-demand backup of the DynamoDB table. Set up an Amazon EventBridge rule that runs the command on the
first day of each month with a cron expression. Specify in the command to transition the backups to cold storage after 6 months and to delete
the backups after 7 years.

Correct Answer: B

Community vote distribution


A (79%) B (21%)

  vijaykamal 2 days, 15 hours ago


Selected Answer: A
Option B mentions using Amazon S3 Glacier Flexible Retrieval, but DynamoDB doesn't natively support transitioning backups to Amazon
S3 Glacier. Options C and D involve custom scripts and EventBridge rules, which add complexity and may not be as reliable or efficient as
using AWS Backup for this purpose.
upvoted 1 times

  chanchal133 1 month ago


Selected Answer: A
A is right ans
upvoted 1 times

  MNotABot 2 months, 3 weeks ago


All except A are "On-demand"
upvoted 1 times

  narddrer 2 months, 3 weeks ago


Selected Answer: B
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
Using DynamoDB with AWS Backup, you can copy your on-demand backups across AWS accounts and Regions, add cost allocation tags to
on-demand backups, and transition on-demand backups to cold storage for lower costs. To use these advanced features, you must opt in
to AWS Backup.
upvoted 3 times

  kruasan 5 months ago


Selected Answer: A
This solution satisfies the requirements in the following ways:
• AWS Backup will automatically take full backups of the DynamoDB table on the schedule defined in the backup plan (the first of each
month).
• The lifecycle policy can transition backups to cold storage after 6 months, meeting that requirement.
• Setting a 7-year retention period in the backup plan will ensure each backup is retained for 7 years as required.
• AWS Backup manages the backup jobs and lifecycle policies, requiring no custom scripting or management.
upvoted 2 times
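
A minimal boto3 sketch of such a plan is below, assuming a vault named "Default" and approximating 6 months and 7 years in days; the DynamoDB table itself would still need to be assigned to the plan with a backup selection, which is not shown.

import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-monthly",               # placeholder name
        "Rules": [
            {
                "RuleName": "monthly-7-year-retention",
                "TargetBackupVaultName": "Default",         # placeholder vault
                "ScheduleExpression": "cron(0 5 1 * ? *)",  # 1st of each month
                "Lifecycle": {
                    "MoveToColdStorageAfterDays": 180,      # roughly 6 months
                    "DeleteAfterDays": 2555,                # roughly 7 years
                },
            }
        ],
    }
)
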

  TariqKipkemei 6 months, 1 week ago


Answer is A
upvoted 1 times

  mmustafa4455 6 months, 2 weeks ago


Selected Answer: A
The correct Answer is A
https://aws.amazon.com/blogs/database/set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 1 times

  mmustafa4455 6 months, 2 weeks ago


Its B.
https://aws.amazon.com/blogs/database/set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 2 times

  Wael216 7 months, 1 week ago


Selected Answer: A
A is the answer
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A is the answer.
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: A
A is the correct answer
upvoted 1 times

  rrharris 7 months, 2 weeks ago


A is the Answer

can be used to create backup schedules and retention policies for DynamoDB tables
upvoted 2 times

  kpato87 7 months, 2 weeks ago


Selected Answer: A
A. Create an AWS Backup plan to back up the DynamoDB table on the first day of each month. Specify a lifecycle policy that transitions the
backup to cold storage after 6 months. Set the retention period for each backup to 7 years.
upvoted 3 times
Question #280 Topic 1

A company is using Amazon CloudFront with its website. The company has enabled logging on the CloudFront distribution, and logs are saved in
one of the company’s Amazon S3 buckets. The company needs to perform advanced analyses on the logs and build visualizations.

What should a solutions architect do to meet these requirements?

A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.

B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon QuickSight.

C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with AWS Glue.

D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket. Visualize the results with Amazon
QuickSight.

Correct Answer: A

Community vote distribution


B (88%) 13%

  rrharris Highly Voted  7 months, 2 weeks ago


Answer is B - QuickSight is for creating data visualizations

https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: B
OptionB: Amazon Athena allows you to run standard SQL queries directly on the data stored in the S3 bucket.
Amazon QuickSight is a business intelligence (BI) service that allows you to create interactive and visual dashboards to analyze data. You
can connect Amazon QuickSight to Amazon Athena to visualize the results of your SQL queries from the CloudFront logs.
upvoted 1 times

  A1975 2 months ago


Selected Answer: B
OptionB: Amazon Athena allows you to run standard SQL queries directly on the data stored in the S3 bucket.
Amazon QuickSight is a business intelligence (BI) service that allows you to create interactive and visual dashboards to analyze data. You
can connect Amazon QuickSight to Amazon Athena to visualize the results of your SQL queries from the CloudFront logs.
upvoted 1 times

  ajay258 4 months, 2 weeks ago


Answer is B
upvoted 1 times

  FFO 5 months, 3 weeks ago


Selected Answer: B
Athena and Quicksight. Glue is for ETL transformation
upvoted 1 times

  TariqKipkemei 6 months, 1 week ago


Answer is B
Analysis on S3 = Athena
Visualizations = Quicksight
upvoted 1 times

  GalileoEC2 6 months, 1 week ago


Why the Hell A?
upvoted 1 times

  GalileoEC2 6 months, 2 weeks ago


Why A? As far as I know, Glue is not used for visualization.
upvoted 1 times

  Bhrino 7 months, 1 week ago


Selected Answer: B
B because athena can be used to analyse data in s3 buckets and AWS quicksight is literally used to create visual representation of data
upvoted 1 times
  LuckyAro 7 months, 1 week ago
Selected Answer: B
Using Athena to query the CloudFront logs in the S3 bucket and QuickSight to visualize the results is the best solution because it is cost-
effective, scalable, and requires no infrastructure setup. It also provides a robust solution that enables the company to perform advanced
analysis and build interactive visualizations without the need for a dedicated team of developers.
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: B
Yes B is the answer
upvoted 1 times

  obatunde 7 months, 2 weeks ago


Selected Answer: B
Correct answer should be B.
upvoted 1 times

  Namrash 7 months, 2 weeks ago


B is correct
upvoted 1 times

  kpato87 7 months, 2 weeks ago


Selected Answer: B
Amazon Athena can be used to analyze data in S3 buckets using standard SQL queries without requiring any data transformation. By
using Athena, a solutions architect can easily and efficiently query the CloudFront logs stored in the S3 bucket. The results of the queries
can be visualized using Amazon QuickSight, which provides powerful data visualization capabilities and easy-to-use dashboards. Together,
Athena and QuickSight provide a cost-effective and scalable solution to analyze CloudFront logs and build visualizations.
upvoted 4 times
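
To illustrate the Athena half of option B, here is a sketch with placeholder database, table, and result-bucket names; the table definition (and therefore the column names) would come from the standard CloudFront access-log DDL. QuickSight would then use Athena as its data source for the visualizations.

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "SELECT uri, COUNT(*) AS requests "
        "FROM cloudfront_logs "                     # placeholder table name
        "GROUP BY uri ORDER BY requests DESC LIMIT 20"
    ),
    QueryExecutionContext={"Database": "weblogs"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
print(response["QueryExecutionId"])
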

  Joan111edu 7 months, 2 weeks ago


Selected Answer: B
should be B
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/blogs/big-data/harmonize-query-and-visualize-data-from-various-providers-using-aws-glue-amazon-athena-and-
amazon-quicksight/
https://docs.aws.amazon.com/comprehend/latest/dg/tutorial-reviews-visualize.html
upvoted 2 times

  tellmenowwwww 7 months, 1 week ago


the attached file is related to B
upvoted 1 times
Question #281 Topic 1

A company runs a fleet of web servers using an Amazon RDS for PostgreSQL DB instance. After a routine compliance check, the company sets a
standard that requires a recovery point objective (RPO) of less than 1 second for all its production databases.

Which solution meets these requirements?

A. Enable a Multi-AZ deployment for the DB instance.

B. Enable auto scaling for the DB instance in one Availability Zone.

C. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone.

D. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC)
tasks.

Correct Answer: D

Community vote distribution


A (92%) 8%

  KZM Highly Voted  7 months, 1 week ago


A:
By using Multi-AZ deployment, the company can achieve an RPO of less than 1 second because the standby instance is always in sync with
the primary instance, ensuring that data changes are continuously replicated.
upvoted 8 times

  rrharris Highly Voted  7 months, 2 weeks ago


Correct Answer is A
upvoted 7 times

  A1975 Most Recent  1 month, 4 weeks ago


Selected Answer: A
Read Replicas:
Read Replicas are asynchronous and support read scalability.
They are used to improve performance.
Read Replicas can be in the same region or in a different region for disaster recovery purposes, but promotion involves manual intervention,
which means Read Replicas do not provide automatic failover and require DNS updates and application changes.

Multi-AZ:
Multi-AZ maintains a synchronous standby replica of the primary instance in a different Availability Zone within the same region.
Multi-AZ deployments provide high availability and automatic failover.

Option A is the better choice with respect to the statement below:


"the company sets a standard that requires a recovery point objective (RPO) of less than 1 second for all its production databases."
upvoted 2 times
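
A minimal sketch of option A, using a placeholder instance identifier: converting the existing instance to Multi-AZ keeps a synchronously replicated standby, which is what makes the sub-second RPO achievable.

import boto3

rds = boto3.client("rds")

# Convert the existing instance to a Multi-AZ deployment; RDS provisions a
# synchronously replicated standby in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-postgres",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
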

  narddrer 2 months, 3 weeks ago


Selected Answer: D
Option A doesn't provide data integrity; that is only achieved in option D using CDC.
upvoted 1 times

  FFO 5 months, 3 weeks ago


Selected Answer: A
Multi-AZ is used for DR. Every single change is synchronously replicated to a standby in another AZ. If we lose the main AZ, failover to the
standby is automatic (it keeps the same DNS name) and the standby becomes the new main DB.
upvoted 3 times

  TariqKipkemei 6 months, 1 week ago


Answer is A
High availability = Multi AZ
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: A
My vote is A
upvoted 1 times

  ManOnTheMoon 7 months, 1 week ago


Agree with A
upvoted 1 times
  LuckyAro 7 months, 1 week ago
Selected Answer: A
Multi-AZ uses synchronous replication from the master in "real time", and failover will be almost instant.
upvoted 2 times

  GwonLEE 7 months, 2 weeks ago


Selected Answer: A
correct is A
upvoted 1 times

  Namrash 7 months, 2 weeks ago


A should be correct
upvoted 2 times

  Joan111edu 7 months, 2 weeks ago


Selected Answer: A
should be A
upvoted 2 times
Question #282 Topic 1

A company runs a web application that is deployed on Amazon EC2 instances in the private subnet of a VPC. An Application Load Balancer (ALB)
that extends across the public subnets directs web traffic to the EC2 instances. The company wants to implement new security measures to
restrict inbound traffic from the ALB to the EC2 instances while preventing access from any other source inside or outside the private subnet of
the EC2 instances.

Which solution will meet these requirements?

A. Configure a route in a route table to direct traffic from the internet to the private IP addresses of the EC2 instances.

B. Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB.

C. Move the EC2 instances into the public subnet. Give the EC2 instances a set of Elastic IP addresses.

D. Configure the security group for the ALB to allow any TCP traffic on any port.

Correct Answer: C

Community vote distribution


B (100%)

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: B
Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB
upvoted 1 times

  awslearner7 2 months, 1 week ago


Can anybody explain the question?
upvoted 1 times

  Abrar2022 4 months ago


Read the discussion, that’s the whole point why examtopics picks the wrong answer. Follow most voted answer not examtopics answer
upvoted 4 times

  antropaws 4 months, 1 week ago


Selected Answer: B
It's very confusing that the system marks C as correct.
upvoted 1 times

  FFO 5 months, 3 weeks ago


Selected Answer: B
This is B. Question already tells us they only want ONLY traffic from the ALB.
upvoted 1 times

  TariqKipkemei 6 months, 1 week ago


Answer is B
upvoted 1 times

  GalileoEC2 6 months, 2 weeks ago


Why C? Another crazy answer. If I am concerned about security, why would I want to expose my EC2 instances to the public internet? It makes
no sense at all. Am I
correct about this? I also go with B.
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the correct answer.
upvoted 2 times

  kpato87 7 months, 2 weeks ago


Selected Answer: B
configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB. This ensures that
only the traffic originating from the ALB is allowed access to the EC2 instances in the private subnet, while denying any other traffic from
other sources. The other options do not provide a suitable solution to meet the stated requirements.
upvoted 3 times
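
A small sketch of option B with assumed security group IDs: the instance security group admits traffic only when its source is the ALB's security group, so nothing else inside or outside the private subnet can reach the instances directly.

import boto3

ec2 = boto3.client("ec2")

INSTANCE_SG = "sg-0instanceexample"  # placeholder security group IDs
ALB_SG = "sg-0albexample"

ec2.authorize_security_group_ingress(
    GroupId=INSTANCE_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Referencing the ALB security group (instead of a CIDR range)
            # restricts the source to traffic arriving through the ALB.
            "UserIdGroupPairs": [{"GroupId": ALB_SG}],
        }
    ],
)
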

  Bhawesh 7 months, 2 weeks ago


Selected Answer: B
B. Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB.
upvoted 3 times
Question #283 Topic 1

A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application
runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that
displays the simulation output and requires an SMB file system.

The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company
needs to migrate the applications to AWS without making code changes to either application.

Which solution will meet these requirements?

A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.

B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.

C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances.
Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.

D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances.
Configure Amazon FSx for NetApp ONTAP for storage.

Correct Answer: D

Community vote distribution


D (95%) 5%

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: D
Amazon FSx for NetApp ONTAP provides shared storage between Linux and Windows file systems.
upvoted 9 times

  rrharris Highly Voted  7 months, 2 weeks ago


Answer is D
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: D
The key requirements are:

• Simulation app runs on Linux, outputs data to NFS
• Visualization app runs on Windows, requires SMB file system
• Migrate apps to AWS without code changes
• Eliminate data duplication and inefficient resource usage
upvoted 1 times

  Abrar2022 4 months ago


For shared storage between Linux and windows you need to implement Amazon FSx for NetApp ONTAP
upvoted 2 times

  kruasan 5 months ago


Selected Answer: D
This solution satisfies the needs in the following ways:
• Amazon EC2 provides a seamless migration path for the existing server-based applications without code changes. The simulation app
can run on Linux EC2 instances and the visualization app on Windows EC2 instances.
• Amazon FSx for NetApp ONTAP provides highly performant file storage that is accessible via both NFS and SMB. This allows the
simulation app to write to NFS shares as currently designed, and the visualization app to access the same data via SMB.
• FSx for NetApp ONTAP ensures the data is synchronized and up to date across the file systems. This addresses the data duplication
issues of the current setup.
• Resources can be scaled efficiently since EC2 and FSx provide scalable compute and storage on demand.
upvoted 5 times

  kruasan 5 months ago


The other options would require more significant changes:
A. Migrating to Lambda would require re-architecting both applications and not meet the requirement to avoid code changes. S3 does
not provide file system access.
B. While ECS could run the apps without code changes, FSx File Gateway only provides S3 or EFS storage, neither of which offer both
NFS and SMB access. Data exchange would still be an issue.
C. Using SQS for data exchange between EC2 instances would require code changes to implement a messaging system rather than a
shared file system.
upvoted 1 times

  mr_kanchan 1 month, 3 weeks ago


How does the data duplication issue get addressed on selecting D ?
upvoted 1 times

  Reckless_Jas 1 month, 1 week ago


Maybe I'm wrong, but I feel like the data is duplicated between the two types of EC2 instances. Using FSx for ONTAP will
address this issue.
upvoted 1 times

  Wael216 7 months ago


Selected Answer: D
windows => FSX
we didn't mention containers => can't be ECS
upvoted 1 times

  everfly 7 months, 1 week ago


Selected Answer: D
Amazon FSx for NetApp ONTAP is a fully managed service that provides shared file storage built on NetApp’s popular ONTAP file system. It
supports NFS, SMB, and iSCSI protocols2 and also allows multi-protocol access to the same data
upvoted 1 times

  Yechi 7 months, 2 weeks ago


Selected Answer: D
Amazon FSx for NetApp ONTAP is a fully-managed shared storage service built on NetApp’s popular ONTAP file system. Amazon FSx for
NetApp ONTAP provides the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, and simplicity of a
fully managed AWS service, making it easier for customers to migrate on-premises applications that rely on NAS appliances to AWS. FSx
for ONTAP file systems are similar to on-premises NetApp clusters. Within each file system that you create, you also create one or more
storage virtual machines (SVMs). These are isolated file servers each with their own endpoints for NFS, SMB, and management access, as
well as authentication (for both administration and end-user data access). In turn, each SVM has one or more volumes which store your
data.
https://aws.amazon.com/de/blogs/storage/getting-started-cloud-file-storage-with-amazon-fsx-for-netapp-ontap-using-netapp-
management-tools/
upvoted 3 times

  zTopic 7 months, 2 weeks ago


Selected Answer: B
B is correct I believe
upvoted 1 times
Question #284 Topic 1

As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A
solutions architect needs to determine the most efficient way to obtain this report information.

Which solution meets these requirements?

A. Run a query with Amazon Athena to generate the report.

B. Create a report in Cost Explorer and download the report.

C. Access the bill details from the billing dashboard and download the bill.

D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).

Correct Answer: B

Community vote distribution


B (100%)

  Guru4Cloud 3 weeks, 4 days ago


Selected Answer: B
• Cost Explorer is an AWS service that allows you to view, analyze, and manage your AWS costs and usage. It provides a variety of reports
that you can use to track your costs, including a report of AWS billed items listed by user.
• Creating a report in Cost Explorer is a quick and easy way to get the information you need. You can customize the report to include the
specific data you want, and you can download the report in a variety of formats, including CSV, Excel, and PDF.
upvoted 2 times

  Guru4Cloud 3 weeks, 4 days ago


This is a trick question -
You need to know the differences between the billing services.
upvoted 1 times
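
For context, the data behind such a report can also be pulled programmatically through the Cost Explorer API. The sketch below assumes costs are tagged with a cost-allocation tag named "user"; the dates are placeholders, and the console report in option B remains the most efficient route.

import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "user"}],  # assumes a "user" tag is active
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
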

  DagsH 6 months, 2 weeks ago


Selected Answer: B
Cost Explorer looks at the usage pattern or history
upvoted 3 times

  WherecanIstart 6 months, 3 weeks ago


Selected Answer: B
Cost Explorer
upvoted 1 times

  pcops 7 months, 2 weeks ago


Answer is B
upvoted 2 times

  fulingyu288 7 months, 2 weeks ago


Selected Answer: B
Answer is B
upvoted 3 times

  rrharris 7 months, 2 weeks ago


Answer is B
upvoted 2 times
Question #285 Topic 1

A company hosts its static website by using Amazon S3. The company wants to add a contact form to its webpage. The contact form will have
dynamic server-side components for users to input their name, email address, phone number, and user message. The company anticipates that
there will be fewer than 100 site visits each month.

Which solution will meet these requirements MOST cost-effectively?

A. Host a dynamic contact form page in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES)
to connect to any third-party email provider.

B. Create an Amazon API Gateway endpoint with an AWS Lambda backend that makes a call to Amazon Simple Email Service (Amazon SES).

C. Convert the static webpage to dynamic by deploying Amazon Lightsail. Use client-side scripting to build the contact form. Integrate the
form with Amazon WorkMail.

D. Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-
side scripting to build the contact form. Integrate the form with Amazon WorkMail.

Correct Answer: B

Community vote distribution


B (88%) 12%

  obatunde Highly Voted  7 months, 1 week ago


Selected Answer: B
Correct answer is B. https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-
amazon-api-gateway-and-amazon-ses/
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: B
B is the most cost-effective solution for this use case.

The key requirements are:

• Static website hosted on S3
• Add a contact form with server-side processing
• Low traffic website (<100 visits per month)
upvoted 1 times

  rogerHS 2 months, 4 weeks ago


why not C
upvoted 2 times

  Guru4Cloud 3 weeks, 4 days ago


Option C uses Lightsail which incurs charges even at low usage. Not cost effective for low traffic sites.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
This solution is the most cost-efficient for the anticipated 100 monthly visits because:
• API Gateway charges are based on API calls. With only 100 visits, charges would be minimal.
• AWS Lambda provides compute time for the backend code in increments of 100ms, so charges would also be negligible for this
workload.
• Amazon SES is used only for sending emails from the submitted contact forms. SES has a generous free tier of 62,000 emails per month,
so there would be no charges for sending the contact emails.
• No EC2 instances or other infrastructure needs to be run and paid for.
upvoted 2 times
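
A hedged sketch of the Lambda backend from option B is below: API Gateway invokes the handler with the submitted form fields, and the handler relays them through Amazon SES. The sender and recipient addresses are placeholders and would need to be verified in SES.

import json

import boto3

ses = boto3.client("ses")

SENDER = "[email protected]"      # placeholder, must be verified in SES
RECIPIENT = "[email protected]"   # placeholder


def lambda_handler(event, context):
    form = json.loads(event.get("body") or "{}")
    body_text = (
        f"Name: {form.get('name')}\n"
        f"Email: {form.get('email')}\n"
        f"Phone: {form.get('phone')}\n"
        f"Message: {form.get('message')}\n"
    )
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT]},
        Message={
            "Subject": {"Data": "New contact form submission"},
            "Body": {"Text": {"Data": body_text}},
        },
    )
    return {"statusCode": 200, "body": json.dumps({"sent": True})}
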

  datz 5 months, 3 weeks ago


Selected Answer: B
B would be cheaper than option D,

Remember, only 100 site visits per month, so you are comparing an API GW used 100 times a month with a constantly running EC2...
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
Both api gateway and lambda are serverless so charges apply only on the 100 form submissions per month
upvoted 1 times
  bdp123 7 months, 1 week ago
Selected Answer: B
After looking at cost of Workmail compared to SES - probably 'B' is better
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: D
Create a t2 micro Amazon EC2 instance. Deploy a LAMP (Linux Apache MySQL, PHP/Perl/Python) stack to host the webpage (free open-
source). Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail. This solution will provide the
company with the necessary components to host the contact form page and integrate it with Amazon WorkMail at the lowest cost. Option
A requires the use of Amazon ECS, which is more expensive than EC2, and Option B requires the use of Amazon API Gateway, which is also
more expensive than EC2. Option C requires the use of Amazon Lightsail, which is more expensive than EC2.
https://aws.amazon.com/what-is/lamp-stack/
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Option D uses EC2 which has a higher monthly cost than serverless options. LAMP stack adds complexity for a simple contact form.
upvoted 1 times

  SkyZeroZx 5 months ago


3 million API Gateway requests == 3.50 USD (US East (Ohio))

Option B is cheaper.

https://aws.amazon.com/es/api-gateway/pricing/

https://aws.amazon.com/es/lambda/pricing/
upvoted 1 times

  Palanda 7 months, 1 week ago


Selected Answer: B
It's B
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B allows the company to create an API endpoint using AWS Lambda, which is a cost-effective and scalable solution for a contact form with
low traffic. The backend can make a call to Amazon SES to send email notifications, which simplifies the process and reduces complexity.
upvoted 1 times

  cloudbusting 7 months, 2 weeks ago


it is B : https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-
gateway-and-amazon-ses/
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
Using AWS Lambda with Amazon API Gateway - AWS Lambda
https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html
https://aws.amazon.com/lambda/faqs/
AWS Lambda FAQs
https://aws.amazon.com/lambda/faqs/
upvoted 1 times

  Guru4Cloud 3 weeks, 4 days ago


Option D uses EC2 which has a higher monthly cost than serverless options. LAMP stack adds complexity for a simple contact form.
upvoted 1 times
Question #286 Topic 1

A company has a static website that is hosted on Amazon CloudFront in front of Amazon S3. The static website uses a database backend. The
company notices that the website does not reflect updates that have been made in the website’s Git repository. The company checks the
continuous integration and continuous delivery (CI/CD) pipeline between the Git repository and Amazon S3. The company verifies that the
webhooks are configured properly and that the CI/CD pipeline is sending messages that indicate successful deployments.

A solutions architect needs to implement a solution that displays the updates on the website.

Which solution will meet these requirements?

A. Add an Application Load Balancer.

B. Add Amazon ElastiCache for Redis or Memcached to the database layer of the web application.

C. Invalidate the CloudFront cache.

D. Use AWS Certificate Manager (ACM) to validate the website’s SSL certificate.

Correct Answer: B

Community vote distribution


C (94%) 6%

  fulingyu288 Highly Voted  7 months, 2 weeks ago


Selected Answer: C
Invalidate the CloudFront cache: The solutions architect should invalidate the CloudFront cache to ensure that the latest version of the
website is being served to users.
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 4 days ago


Selected Answer: C
C. Invalidate the CloudFront cache
upvoted 1 times

  Damdom 1 month, 2 weeks ago


C. Invalidate the CloudFront cache.

Explanation:

Invalidate the CloudFront cache to ensure that the latest updates from the Git repository are reflected on the static website. When
updates are made to the website's Git repository and deployed to Amazon S3, the CloudFront cache may still be serving the old cached
content to users. By invalidating the CloudFront cache, you're instructing CloudFront to fetch fresh content from the origin (Amazon S3)
and serve it to users.
upvoted 3 times
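
A minimal sketch of option C, with a placeholder distribution ID: the invalidation makes CloudFront fetch the freshly deployed objects from the S3 origin on the next request instead of serving the stale cached copies.

import time

import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1EXAMPLEID",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
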

  riccardoto 1 month, 3 weeks ago


Selected Answer: C
C is the most reasonable choice, though the question is not well written - "The static website uses a database backend" does not make a
lot of sense to me.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
Since the static website is hosted behind CloudFront, updates made to the S3 bucket will not be visible on the site until the CloudFront
cache expires or is invalidated. By invalidating the CloudFront cache after deploying updates, the latest version in S3 will be pulled and the
updates will then appear on the live site.
upvoted 1 times

  RoroJ 4 months, 1 week ago


Isn't that C?
upvoted 2 times

  Namrash 7 months, 2 weeks ago


B should be the right one
upvoted 1 times

  Neorem 7 months, 2 weeks ago


Selected Answer: C
We need to create a CloudFront invalidation
upvoted 2 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: C
C. Invalidate the CloudFront cache.
The problem is the CF cache. After invalidating the CloudFront cache, CF will be forced to read the updated static page from S3, and the S3
changes will start being visible.
upvoted 3 times
Question #287 Topic 1

A company wants to migrate a Windows-based application from on premises to the AWS Cloud. The application has three tiers: an application tier,
a business tier, and a database tier with Microsoft SQL Server. The company wants to use specific features of SQL Server such as native backups
and Data Quality Services. The company also needs to share files for processing between the tiers.

How should a solutions architect design the architecture to meet these requirements?

A. Host all three tiers on Amazon EC2 instances. Use Amazon FSx File Gateway for file sharing between the tiers.

B. Host all three tiers on Amazon EC2 instances. Use Amazon FSx for Windows File Server for file sharing between the tiers.

C. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use Amazon Elastic File
System (Amazon EFS) for file sharing between the tiers.

D. Host the application tier and the business tier on Amazon EC2 instances. Host the database tier on Amazon RDS. Use a Provisioned IOPS
SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume for file sharing between the tiers.

Correct Answer: B

Community vote distribution


B (83%) C (17%)

  KZM Highly Voted  7 months, 1 week ago


It is B:
A: Incorrect> FSx file Gateway designed for low latency and efficient access to in-cloud FSx for Windows File Server file shares from your
on-premises facility.

B: Correct> This solution will allow the company to host all three tiers on Amazon EC2 instances while using Amazon FSx for Windows File
Server to provide Windows-based file sharing between the tiers. This will allow the company to use specific features of SQL Server, such as
native backups and Data Quality Services, while sharing files for processing between the tiers.

C: Incorrect> Currently, Amazon EFS supports the NFSv4.1 protocol and does not natively support the SMB protocol, and can't be used in
Windows instances yet.

D: Incorrect> Amazon EBS is a block-level storage solution that is typically used to store data at the operating system level, rather than for
file sharing between servers.
upvoted 8 times
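
To make option B concrete, a hedged provisioning sketch follows. Subnet, security group, and sizing values are placeholders, and a real deployment would also join the file system to Active Directory (managed or self-managed), which is omitted here.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,               # GiB, placeholder sizing
    SubnetIds=["subnet-0example"],     # placeholder subnet
    SecurityGroupIds=["sg-0example"],  # placeholder security group
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,      # MB/s, placeholder sizing
    },
)

The three EC2-hosted tiers would then mount the resulting SMB share (typically \\<file-system-dns-name>\share) to exchange files.
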

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: B
B. Host all three tiers on Amazon EC2 instances. Use Amazon FSx for Windows File Server for file sharing between the tiers.
upvoted 1 times

  Abrar2022 4 months ago


The question mentions Microsoft = windows
EFS is Linux
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
This design satisfies the needs in the following ways:
• Running all tiers on EC2 allows using SQL Server on EC2 with its native features like backups and Data Quality Services. SQL Server
cannot be run directly on RDS.
• Amazon FSx for Windows File Server provides fully managed Windows file storage with SMB access. This allows sharing files between the
Windows EC2 instances for all three tiers.
• FSx for Windows File Server has high performance, so it can handle file sharing needs between the tiers.
upvoted 1 times

  kruasan 5 months ago


The other options would not meet requirements:
A. FSx File Gateway only provides access to S3 or EFS storage. It cannot be used directly for Windows file sharing.
C. RDS cannot run SQL Server or its native tools. The database tier needs to run on EC2.
D. EBS volumes can only be attached to a single EC2 instance. They cannot be shared between tiers for file exchanges.
upvoted 1 times

  Netgear 1 week, 5 days ago


No, there is RDS for SQL Server.
https://aws.amazon.com/rds/sqlserver/
upvoted 1 times
  ManOnTheMoon 7 months, 1 week ago
Why not C?
upvoted 1 times

  KZM 7 months, 1 week ago


Currently, Amazon EFS supports the NFSv4.1 protocol and does not natively support the SMB protocol, and can't be used in Windows
instances yet.
upvoted 2 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: B
Yup, B. RDS will not work: Native Backup only to S3, and Data Quality is not supported, so all EC2.
https://aws.amazon.com/premiumsupport/knowledge-center/native-backup-rds-sql-server/ and https://www.sqlserver-dba.com/2021/07/aws-rds-sql-server-limitations.html
upvoted 2 times

  LuckyAro 7 months, 1 week ago


After further research, I concur that the correct answer is B. Native Back up and Data Quality not supported on RDS for Ms SQL
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
C.
Host the application tier and the business tier on Amazon EC2 instances.
Host the database tier on Amazon RDS.
Use Amazon Elastic File System (Amazon EFS) for file sharing between the tiers.

This solution allows the company to use specific features of SQL Server such as native backups and Data Quality Services, by hosting the
database tier on Amazon RDS. It also enables file sharing between the tiers using Amazon EFS, which is a fully managed, highly available,
and scalable file system. Amazon EFS provides shared access to files across multiple instances, which is important for processing files
between the tiers. Additionally, hosting the application and business tiers on Amazon EC2 instances provides the company with the
flexibility to configure and manage the environment according to their requirements.
upvoted 2 times

  rushi0611 4 months, 4 weeks ago


How are you going to connect EFS to Windows-based instances?
upvoted 1 times

  Yechi 7 months, 2 weeks ago


Selected Answer: B
Data Quality Services: If this feature is critical to your workload, consider choosing Amazon RDS Custom or Amazon EC2.
https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/comparison.html
upvoted 3 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: B
Correct Answer: B
upvoted 3 times
Question #288 Topic 1

A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. The
company must not make any changes to the application.

What should a solutions architect do to meet these requirements?

A. Create an Amazon S3 Standard bucket with access to the web servers.

B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin.

C. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.

D. Configure a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume to all web servers.

Correct Answer: A

Community vote distribution


C (100%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


Selected Answer: C
Since no code change is permitted, below choice makes sense for the unix server's file sharing:
C. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on all web servers.
upvoted 11 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
Since no code change is permitted, below choice makes sense for the unix server's file sharing:
upvoted 1 times

  callmejaja 2 months, 3 weeks ago


Selected Answer: C
Since no code change is permitted, below choice makes sense for the unix server's file sharing:
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: C
C is correct.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: C
This solution satisfies the needs in the following ways:
• EFS provides a fully managed elastic network file system that can be mounted on multiple EC2 instances concurrently.
• The EFS file system appears as a standard file system mount on the Linux web servers, requiring no application changes. The servers can
access shared files as if they were on local storage.
• EFS is highly available, durable, and scalable, providing a robust shared storage solution.
upvoted 1 times

  kruasan 5 months ago


The other options would require modifying the application or do not provide a standard file system:
A. S3 does not provide a standard file system mount or share. The application would need to be changed to access S3 storage.
B. CloudFront is a content delivery network and caching service. It does not provide a file system mount or share and would require
application changes.
D. EBS volumes can only attach to a single EC2 instance. They cannot be mounted by multiple servers concurrently and do not provide a
shared file system.
upvoted 2 times
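As a minimal sketch of the approach described above, assuming boto3, the following creates an EFS file system plus one mount target so the Linux web servers in that subnet can NFS-mount the share without application changes. The creation token, subnet, and security group IDs are hypothetical.

import boto3

efs = boto3.client("efs")

# Create the shared file system (the creation token makes the call idempotent)
fs = efs.create_file_system(
    CreationToken="shared-web-content",      # hypothetical token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# In practice, wait until the file system reports an "available" state, then
# create one mount target per subnet/AZ that hosts web servers
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",     # hypothetical subnet
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Each web server then mounts it, for example with the amazon-efs-utils helper:
#   sudo mount -t efs <FileSystemId>:/ /var/www/shared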

  Steve_4542636 7 months ago


Selected Answer: C
No application changes are allowed and EFS is compatible with Linux
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
C is the answer:
Create an Amazon Elastic File System (Amazon EFS) file system.
Mount the EFS file system on all web servers.
To meet the requirements of providing a shared file store for Linux-based web servers without making changes to the application, using
an Amazon EFS file system is the best solution.
Amazon EFS is a managed NFS file system service that provides shared access to files across multiple Linux-based instances, which makes
it suitable for this use case.

Amazon S3 is not ideal for this scenario since it is an object storage service and not a file system, and it requires additional tools or
libraries to mount the S3 bucket as a file system.

Amazon CloudFront can be used to improve content delivery performance but is not necessary for this requirement.

Additionally, Amazon EBS volumes can only be mounted to one instance at a time, so it is not suitable for sharing files across multiple
instances.
upvoted 2 times

  Karlos99 7 months ago


But what about aws ebs multi attach?
upvoted 2 times

  elearningtakai 6 months ago


Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances. EBS General
Purpose SSD (gp3) doesn't support Multi-Attach
upvoted 1 times
Question #289 Topic 1

A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account.

Which solution will meet these requirements in the MOST secure manner?

A. Apply an S3 bucket policy that grants read access to the S3 bucket.

B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.

C. Embed an access key and a secret key in the Lambda function’s code to grant the required IAM permissions for read access to the S3
bucket.

D. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.

Correct Answer: D

Community vote distribution


B (100%)

  antropaws 4 months, 1 week ago


Selected Answer: B
B is correct.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
This solution satisfies the needs in the most secure manner:
• An IAM role provides temporary credentials to the Lambda function to access AWS resources. The function does not have persistent
credentials.
• The IAM policy grants least privilege access by specifying read access only to the specific S3 bucket needed. Access is not granted to all
S3 buckets.
• If the Lambda function is compromised, the attacker would only gain access to the one specified S3 bucket. They would not receive
broad access to resources.
upvoted 2 times

  kruasan 5 months ago


The other options are less secure:
A. A bucket policy grants open access to a resource. It is a less granular way to provide access and grants more privilege than needed.
C. Embedding access keys in code is extremely insecure and against best practices. The keys provide full access and are at major risk of
compromise if the code leaks.
D. Granting access to all S3 buckets provides far too much privilege if only one bucket needs access. It greatly expands the impact if
compromised.
upvoted 1 times
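A minimal sketch of the least-privilege setup described above, assuming boto3; the role, policy, and bucket names are hypothetical. The execution role is assumable by Lambda and carries an inline policy allowing s3:GetObject on one bucket only.

import json

import boto3

iam = boto3.client("iam")

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="lambda-s3-read-role",                      # hypothetical name
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)

read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",     # one bucket only
    }],
}

iam.put_role_policy(
    RoleName="lambda-s3-read-role",
    PolicyName="read-one-bucket",
    PolicyDocument=json.dumps(read_policy),
)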

  Dr_Chomp 5 months, 3 weeks ago


Selected Answer: B
you don't want to grant access to all S3 buckets (which is answer D), only the one identified (so answer B)
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
B is only for one bucket and you want to use Role based security here.
upvoted 1 times

  Ja13 7 months, 1 week ago


Selected Answer: B
B: it says MOST secure manner, so grant read access to only the one bucket.
upvoted 1 times

  Joxtat 7 months, 1 week ago


Selected Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
upvoted 1 times

  kpato87 7 months, 2 weeks ago


Selected Answer: B
This is the most secure and recommended way to provide an AWS Lambda function with access to an S3 bucket. It involves creating an
IAM role that the Lambda function assumes, and attaching an IAM policy to the role that grants the necessary permissions to read from
the S3 bucket.
upvoted 3 times
  Joan111edu 7 months, 2 weeks ago
Selected Answer: B
B. Least of privilege
upvoted 2 times
Question #290 Topic 1

A company hosts a web application on multiple Amazon EC2 instances. The EC2 instances are in an Auto Scaling group that scales in response to
user demand. The company wants to optimize cost savings without making a long-term commitment.

Which EC2 instance purchasing option should a solutions architect recommend to meet these requirements?

A. Dedicated Instances only

B. On-Demand Instances only

C. A mix of On-Demand Instances and Spot Instances

D. A mix of On-Demand Instances and Reserved Instances

Correct Answer: B

Community vote distribution


C (79%) B (21%)

  Kt 1 week, 4 days ago


ExamTopics is not free anymore. Does anyone have free access?
upvoted 1 times

  Damdom 1 month, 2 weeks ago


Selected Answer: C
By combining On-Demand Instances for steady-state workloads or critical components and Spot Instances for less critical or burstable
workloads, you can achieve a balance between cost savings and performance. This strategy allows you to optimize costs without making a
long-term commitment, as Spot Instances provide cost savings without the need for upfront payments or long-term contracts.
upvoted 2 times

  Abrar2022 4 months ago


Selected Answer: C
It's about COST, not operational efficiency for this question.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: C
Autoscaling with ALB / scale up on demand using on demand and spot instance combination makes sense. Reserved will not fit the no-
long term commitment clause.
upvoted 1 times

  WherecanIstart 6 months, 1 week ago


Selected Answer: C
Without commitment....Spot instances
upvoted 1 times

  cegama543 6 months, 3 weeks ago


Selected Answer: B
If the company wants to optimize cost savings without making a long-term commitment, then using only On-Demand Instances may not
be the most cost-effective option. Spot Instances can be significantly cheaper than On-Demand Instances, but they come with the risk of
being interrupted if the Spot price increases above your bid price. If the company is willing to accept this risk, a mix of On-Demand
Instances and Spot Instances may be the best option to optimize cost savings while maintaining the desired level of scalability.

However, if the company wants the most predictable pricing and does not want to risk instance interruption, then using only On-Demand
Instances is a good choice. It ultimately depends on the company's priorities and risk tolerance.
upvoted 3 times

  Steve_4542636 7 months ago


Selected Answer: C
It's about COST, not operational efficiency for this question.
upvoted 1 times

  Samuel03 7 months, 1 week ago


Selected Answer: C
Should be C
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html
upvoted 1 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: C
C - web apps are mostly stateless, and an ASG supports a mix of On-Demand and Spot Instances; in fact, you can prioritize On-Demand before it uses Spot > https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-template-spot-instances.html
upvoted 1 times
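For readers who want to see what option C can look like in practice, here is a rough sketch, assuming boto3, of an Auto Scaling group that mixes On-Demand and Spot Instances. The launch template name, subnet IDs, instance types, and capacity numbers are all hypothetical.

import boto3

asg = boto3.client("autoscaling")

asg.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                       # hypothetical name
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0123456789aaa,subnet-0123456789bbb",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-template",     # hypothetical template
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                    # always-on baseline
            "OnDemandPercentageAboveBaseCapacity": 0,     # everything above is Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)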

  designmood22 7 months, 1 week ago


Selected Answer: C
Answer : C. A mix of On-Demand Instances and Spot Instances
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
To optimize cost savings without making a long-term commitment, a mix of On-Demand Instances and Spot Instances would be the best
EC2 instance purchasing option to recommend.
By combining On-Demand and Spot Instances, the company can take advantage of the cost savings offered by Spot Instances during
periods of low demand while maintaining the reliability and stability of On-Demand Instances during periods of high demand. This
provides a cost-effective solution that can scale with user demand without making a long-term commitment.
upvoted 1 times

  NolaHOla 7 months, 1 week ago


In this scenario, a mix of On-Demand Instances and Spot Instances is the most cost-effective option, as it can provide significant cost
savings while maintaining application availability. The Auto Scaling group can be configured to launch Spot Instances when the demand is
high and On-Demand Instances when demand is low or when Spot Instances are not available. This approach provides a balance between
cost savings and reliability.
upvoted 3 times

  minglu 7 months, 2 weeks ago


In my opinion, it is C, on demand instances and spot instances can be in a single auto scaling group.
upvoted 3 times
Question #291 Topic 1

A media company uses Amazon CloudFront for its publicly available streaming video content. The company wants to secure the video content
that is hosted in Amazon S3 by controlling who has access. Some of the company’s users are using a custom HTTP client that does not support
cookies. Some of the company’s users are unable to change the hardcoded URLs that they are using for access.

Which services or methods will meet these requirements with the LEAST impact to the users? (Choose two.)

A. Signed cookies

B. Signed URLs

C. AWS AppSync

D. JSON Web Token (JWT)

E. AWS Secrets Manager

Correct Answer: CE

Community vote distribution


AB (79%) 6%

  leoattf Highly Voted  7 months, 1 week ago


Selected Answer: AB
I thought that option A was totally wrong, because the question mentions "HTTP client does not support cookies". However it is right,
along with option B. Check the link bellow, first paragraph.
https://aws.amazon.com/blogs/media/secure-content-using-cloudfront-functions/
upvoted 15 times

  Steve_4542636 7 months ago


Thanks for this! What a tricky question. If the client doesn't support cookies, THEN they use the signed S3 Urls.
upvoted 5 times

  johnmcclane78 Highly Voted  7 months ago


B. Signed URLs - This method allows the media company to control who can access the video content by creating a time-limited URL with a
cryptographic signature. This URL can be distributed to the users who are unable to change the hardcoded URLs they are using for access,
and they can access the content without needing to support cookies.

D. JSON Web Token (JWT) - This method allows the media company to control who can access the video content by creating a secure token
that contains user authentication and authorization information. This token can be distributed to the users who are using a custom HTTP
client that does not support cookies. The users can include this token in their requests to access the content without needing to support
cookies.

Therefore, options B and D are the correct answers.

Option A (Signed cookies) would not work for users who are using a custom HTTP client that does not support cookies. Option C (AWS
AppSync) is not relevant to the requirement of securing video content. Option E (AWS Secrets Manager) is a service used for storing and
retrieving secrets, which is not relevant to the requirement of securing video content.
upvoted 12 times

  tabbyDolly Most Recent  2 weeks ago


AB - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: BD
B and D are the correct options for meeting the requirements with the least impact to users.

Signed URLs allow access to individual objects in Amazon S3 for a specified time period without requiring cookies. This allows the custom
HTTP client users to access content.

JSON Web Tokens (JWT) allow users to get temporary access tokens that can be passed in requests. This allows users with hardcoded URLs
to access content without updating URLs.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Not good: signed cookies require client support and may impact users, and AWS AppSync and Secrets Manager do not help address the specific access requirements.
Good: signed URLs and JWTs allow securing access to S3 content with minimal impact to users, meeting the requirements.
upvoted 1 times
  riccardoto 1 month, 3 weeks ago
Selected Answer: BD
I understand why many users here are voting AB, but in my opinion BD is more correct.
Using JWT or signed urls will work both for users that cannot use cookies or cannot change the url.
upvoted 1 times

  katetel 2 months, 2 weeks ago


Selected Answer: AB
it's correct
upvoted 1 times

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: CE
These are the right answers!
upvoted 2 times

  DrWatson 4 months ago


Selected Answer: AB
"Some of the company’s users" does not support cookies, then they'll use Signed URLs.
"Some of the company’s users" are unable to change the hardcoded URLs, then they'll use Signed cookies.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: AB
Signed cookies would allow the media company to authorize access to related content (like HLS video segments) with a single signature,
minimizing implementation overhead. This works for users that can support cookies.
Signed URLs would allow the media company to sign each URL individually to control access, supporting users that cannot use cookies. By
embedding the signature in the URL, existing hardcoded URLs would not need to change.
upvoted 1 times

  kruasan 5 months ago


C. AWS AppSync - This is for building data-driven apps with real-time and offline capabilities. It does not directly help with securing
streaming content.
D. JSON Web Token (JWT) - Although JWTs can be used for authorization, they would require the client to get a token and validate/check
access on the server for each request. This does not work for hardcoded URLs and minimizes impact.
E. AWS Secrets Manager - This service is for managing secrets, not for controlling access to resources. It would not meet the
requirements.
upvoted 1 times
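As an illustration of the signed-URL half of this discussion, here is a minimal sketch using botocore's CloudFrontSigner together with the cryptography package. The distribution domain, key pair ID, object path, and private key file are hypothetical, and the key pair is assumed to already be registered with CloudFront.

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the CloudFront policy with the private key matching the public key
    # registered in CloudFront (key file name is hypothetical)
    with open("cf_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)      # hypothetical key pair ID
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/intro.m3u8",         # hypothetical object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)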

  Shrestwt 5 months, 2 weeks ago


A. Signed cookies: CloudFront signed cookies allow you to control who can access your content when you don't want to change your
current URLs.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html

B. Signed URLs: This method allows the media company to control who can access the video content by creating a time-limited URL with a
cryptographic signature.
upvoted 1 times

  ahilan26 5 months, 3 weeks ago


Selected Answer: AB
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
upvoted 2 times

  CapJackSparrow 6 months, 3 weeks ago


Some of the company’s users are using a custom HTTP client that does not support cookies: Signed URLs.
Some of the company’s users are unable to change the hardcoded URLs that they are using for access: Signed cookies.
upvoted 5 times

  TungPham 7 months ago


Selected Answer: AB
https://aws.amazon.com/vi/blogs/media/awse-protecting-your-media-assets-with-token-authentication/
JSON Web Tokens (JWT) need to be used together with Lambda@Edge.
upvoted 3 times

  HaineHess 7 months ago


Selected Answer: BD
b d seems good
upvoted 1 times
  bdp123 7 months, 1 week ago
Selected Answer: AB
It says some users have a custom HTTP client that does not support cookies; those will use signed URLs, which take precedence over cookies.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html
upvoted 1 times

  FFO 5 months, 3 weeks ago


AB is wrong: your point that cookies are disabled eliminates the use of signed cookies, and the hardcoding eliminates the use of signed URLs, so AB is entirely eliminated. Read the article further, not just the first few lines, and then read up on signed URLs.
upvoted 1 times

  pagom 7 months, 1 week ago


Selected Answer: BD
B, D
A presigned URL uses a GET parameter; that is, authentication is performed with the query string (and a string containing a query string is a URI, not just a URL), so B can be the answer.
The JWT authentication method can use an HTTP header, which does not rely on cookies, so D can be the answer.
(Apologies if the wording is awkward; I am not a native English speaker.)
upvoted 1 times

  ChrisG1454 7 months, 1 week ago


Using Appsync is possible
https://stackoverflow.com/questions/48495338/how-to-upload-file-to-aws-s3-using-aws-appsync
upvoted 1 times
Question #292 Topic 1

A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the
data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data.

Which solutions will meet these requirements? (Choose two.)

A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data
Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.

B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the
data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.

C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to
Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.

D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the
data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.

E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the
data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.

Correct Answer: AB

Community vote distribution


AB (80%) AE (20%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: AB
OK, for B I did some research, https://docs.aws.amazon.com/glue/latest/dg/add-job-streaming.html

"You can create streaming extract, transform, and load (ETL) jobs that run continuously, consume data from streaming sources like
Amazon Kinesis Data Streams, Apache Kafka, and Amazon Managed Streaming for Apache Kafka (Amazon MSK). The jobs cleanse and
transform the data, and then load the results into Amazon S3 data lakes or JDBC data stores."
upvoted 5 times
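Both viable options end with Athena running standard SQL over the transformed objects in S3. Here is a small sketch of that final step, assuming boto3; the database, table, column names, and results location are hypothetical.

import boto3

athena = boto3.client("athena")

result = athena.start_query_execution(
    QueryString=(
        "SELECT sensor_id, AVG(temperature) AS avg_temp "
        "FROM streaming_db.transformed_readings "       # hypothetical table
        "GROUP BY sensor_id"
    ),
    QueryExecutionContext={"Database": "streaming_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
# The query runs asynchronously; poll get_query_execution / get_query_results
# with this ID to retrieve the output.
print(result["QueryExecutionId"])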

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: AB
A and B are correct.

A uses Kinesis Data Streams for streaming, Kinesis Data Analytics for transformation, Kinesis Data Firehose for writing to S3, and Athena
for SQL queries on S3 data.

B uses Amazon MSK for streaming, AWS Glue for transformation and writing to S3, and Athena for SQL queries on S3 data.
upvoted 1 times

  Diqian 1 month, 1 week ago


Why E is incorrect?
upvoted 1 times

  MrCloudy 5 months, 2 weeks ago


Selected Answer: AE
To transform real-time streaming data from multiple sources, write it to Amazon S3, and query the transformed data using SQL, the
company can use the following solutions: Amazon Kinesis Data Streams, Amazon Kinesis Data Analytics, and Amazon Kinesis Data
Firehose. The transformed data can be queried using Amazon Athena. Therefore, options A and E are the correct answers.

Option A is correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, Amazon Kinesis Data Analytics to
transform the data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the
transformed data in Amazon S3.

Option E is also correct because it uses Amazon Kinesis Data Streams to stream data from multiple sources, AWS Glue to transform the
data, and Amazon Kinesis Data Firehose to write the data to Amazon S3. Amazon Athena can be used to query the transformed data in
Amazon S3.
upvoted 3 times

  sand444 1 week, 1 day ago


Amazon Athena is not in option E
upvoted 1 times

  Paras043 5 months, 3 weeks ago


But how can you transform data using Kinesis Data Analytics?
upvoted 2 times

  luisgu 4 months, 3 weeks ago


See https://aws.amazon.com/kinesis/data-analytics/faqs/?nc=sn&loc=6
upvoted 1 times

  kraken21 6 months ago


Selected Answer: AB
DMS can move data from DBs to streaming services and cannot natively handle streaming data. Hence A.B makes sense. Also AWS
Glue/ETL can handle MSK streaming https://docs.aws.amazon.com/glue/latest/dg/add-job-streaming.html.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: AB
The solutions that meet the requirements of streaming real-time data, transforming the data before writing to S3, and querying the
transformed data using SQL are A and B.

Option C: This option is not ideal for streaming real-time data as AWS DMS is not optimized for real-time data ingestion.
Option D & E: These option are not recommended as the Amazon RDS query editor is not designed for querying data in S3, and it is not
efficient for running complex queries.
upvoted 3 times

  gold4otas 6 months, 1 week ago


Selected Answer: AB
The correct answers are options A & B
upvoted 1 times

  TungPham 7 months ago


Can the Amazon RDS query editor query the transformed data from Amazon S3?
I don't think so; please share a documentation link that shows it can.
upvoted 1 times

  ManOnTheMoon 7 months, 1 week ago


Why not A & D?
upvoted 1 times

  TungPham 7 months ago


Can the Amazon RDS query editor query the transformed data from Amazon S3?
I don't think so; please share a documentation link that shows it can.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: AB
A and B
upvoted 1 times

  designmood22 7 months, 1 week ago


Answer is : A & B
upvoted 1 times

  rrharris 7 months, 2 weeks ago


Answer is A and B
upvoted 2 times

  NolaHOla 7 months, 2 weeks ago


A and B
upvoted 2 times
Question #293 Topic 1

A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup
solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on
AWS is automatically and securely transferred.

Which solution meets these requirements?

A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball
S3 endpoint to provide local access to the data.

B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-
premises systems with local access to the data.

C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and
configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.

D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the
gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

Correct Answer: D

Community vote distribution


D (100%)
  Steve_4542636 Highly Voted  7 months ago
Selected Answer: D
The question states, "wants to maintain local access to all the data" This is storage gateway. Cached gateway stores only the frequently
accessed data locally which is not what the problem statement asks for.
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
@kruasan well explained
upvoted 1 times

  kruasan 5 months ago


Selected Answer: D
1. The company wants to maintain local access to all the data. Only stored volumes keep the complete dataset on-premises, providing low-
latency access. Cached volumes only cache a subset locally.
2. The company wants the data backed up on AWS. With stored volumes, periodic backups (snapshots) of the on-premises data are sent to
S3, providing durable and scalable backup storage.
3. The company wants the data transfer to AWS to be automatic and secure. Storage Gateway provides an encrypted connection between
the on-premises gateway and AWS storage. Backups to S3 are sent asynchronously and automatically based on the backup schedule
configured.
upvoted 2 times

  ChrisG1454 7 months, 1 week ago


Ans = D

https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html
upvoted 3 times

  Neha999 7 months, 2 weeks ago


D
https://www.examtopics.com/discussions/amazon/view/43725-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/storagegateway/faqs/#:~:text=In%20the%20cached%20mode%2C%20your,asynchronously%20backed%20up%20to%20AWS.
In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency
access.
In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously
backed up to AWS.
upvoted 2 times
Question #294 Topic 1

An application that is hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Traffic must not traverse the internet.

How should a solutions architect configure access to meet these requirements?

A. Create a private hosted zone by using Amazon Route 53.

B. Set up a gateway VPC endpoint for Amazon S3 in the VPC.

C. Configure the EC2 instances to use a NAT gateway to access the S3 bucket.

D. Establish an AWS Site-to-Site VPN connection between the VPC and the S3 bucket.

Correct Answer: B

Community vote distribution


B (100%)

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: B
The correct answer is B. Set up a gateway VPC endpoint for Amazon S3 in the VPC.

A gateway VPC endpoint is a private way for Amazon EC2 instances in a VPC to access AWS services, such as Amazon S3, without having to
go through the internet. This can help to improve security and performance.
upvoted 1 times
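A minimal sketch, assuming boto3, of creating the gateway endpoint so that S3 traffic from the VPC never traverses the internet. The region, VPC ID, and route table ID are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # routes to S3 are added here
)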

  Steve_4542636 7 months ago


Selected Answer: B
S3 and DynamoDB are the only services with Gateway endpoint options
upvoted 2 times

  ManOnTheMoon 7 months, 1 week ago


Agree with B
upvoted 1 times

  jennyka76 7 months, 1 week ago


ANSWER - B
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is correct
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: B
Bbbbbbbb
upvoted 3 times
Question #295 Topic 1

An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The
company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the
other two applications process the data.

Which solution will meet these requirements with the LEAST operational overhead?

A. Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application
requests.

B. Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the
requesting application.

C. Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom
dataset. Point each application to its respective S3 bucket.

D. Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom
dataset. Point each application to its respective DynamoDB table.

Correct Answer: B

Community vote distribution


B (88%) 6%

  fruto123 Highly Voted  7 months, 1 week ago


Selected Answer: B
B is the right answer and the proof is in this link.

https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-
s3/
upvoted 9 times

  Guru4Cloud 3 weeks, 5 days ago


This is so wrong
upvoted 1 times

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: B
Actually this is what Macie is best used for.
upvoted 8 times

  Abrar2022 Most Recent  4 months ago


Selected Answer: B
Store the data in an Amazon S3 bucket and using S3 Object Lambda to process and transform the data before returning it to the
requesting application. This approach allows the PII to be removed in real-time and without the need to create separate datasets or tables
for each application.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: A
@fruto123 and everyone that upvoted:

Is it plausible that S3 Object Lambda can process terabytes of data in 60 seconds? The same link you shared states that the maximum
duration for a Lambda function used by S3 Object Lambda is 60 seconds.

Answer is A.
upvoted 2 times

  antropaws 4 months, 1 week ago


Chat GPT:

Isn't just 60 seconds the maximum duration for a Lambda function used by S3 Object Lambda? How can it process terabytes of data in
60 seconds?

You are correct that the maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds.

Given the time constraint, it is not feasible to process terabytes of data within a single Lambda function execution.
S3 Object Lambda is designed for lightweight and real-time transformations rather than extensive processing of large datasets.

To handle terabytes of data, you would typically need to implement a distributed processing solution using services like Amazon EMR,
AWS Glue, or AWS Batch. These services are specifically designed to handle big data workloads and provide scalability and distributed
processing capabilities.

So, while S3 Object Lambda can be useful for lightweight processing tasks, it is not the appropriate tool for processing terabytes of
data within the execution time limits of a Lambda function.
upvoted 1 times

  Kp88 2 months ago


Terabytes is just the total storage; the Lambda only needs to process the object that an application actually requests. Think of it like scratching off your Social Security number before sharing a document with a third party.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
• Storing the raw data in S3 provides a durable, scalable data lake. S3 requires little ongoing management overhead.
• S3 Object Lambda can be used to filter and process the data on retrieval transparently. This minimizes operational overhead by avoiding
the need to preprocess and store multiple transformed copies of the data.
• Only one copy of the data needs to be stored and maintained in S3. S3 Object Lambda will transform the data on read based on the
requesting application.
• No additional applications or proxies need to be developed and managed to handle the data transformation. S3 Object Lambda provides
this functionality.
upvoted 2 times

  kruasan 5 months ago


Option A requires developing and managing a proxy app layer to handle data transformation, adding overhead.
Options C and D require preprocessing and storing multiple copies of the transformed data, adding storage and management
overhead.
Option B using S3 Object Lambda minimizes operational overhead by handling data transformation on read transparently using the
native S3 functionality. Only one raw data copy is stored in S3, with no additional applications required.
upvoted 1 times

  pagom 7 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/ko/blogs/korea/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-
from-s3/
upvoted 4 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the correct answer.
Amazon S3 Object Lambda allows you to add custom code to S3 GET requests, which means that you can modify the data before it is
returned to the requesting application. In this case, you can use S3 Object Lambda to remove the PII before the data is returned to the
two applications that do not need to process PII. This approach has the least operational overhead because it does not require creating
separate datasets or proxy application layers, and it allows you to maintain a single copy of the data in an S3 bucket.
upvoted 4 times
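To make the S3 Object Lambda idea concrete, here is a rough sketch of a handler that scrubs PII before the object is returned to the two applications that must not see it. The regex-based redact_pii helper is purely illustrative; real PII detection would be far more involved.

import re
import urllib.request

import boto3


def redact_pii(text):
    # Illustrative only: mask anything that looks like a US SSN
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "***-**-****", text)


def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object via the presigned URL that S3 supplies to the function
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    transformed = redact_pii(original)

    # Return the transformed object to the caller of the Object Lambda access point
    boto3.client("s3").write_get_object_response(
        Body=transformed.encode("utf-8"),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}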

  NolaHOla 7 months, 1 week ago


To meet the requirement of removing the PII before processing by two of the applications, it would be most efficient to use option B,
which involves storing the data in an Amazon S3 bucket and using S3 Object Lambda to process and transform the data before returning
it to the requesting application. This approach allows the PII to be removed in real-time and without the need to create separate datasets
or tables for each application. S3 Object Lambda can be configured to automatically remove PII from the data before it is sent to the non-
PII processing applications. This solution provides a cost-effective and scalable way to meet the requirement with the least operational
overhead.
upvoted 2 times

  minglu 7 months, 2 weeks ago


Selected Answer: B
I think it is B.
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: C
Looks like C is the correct answer
upvoted 2 times
Question #296 Topic 1

A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect
needs to create a new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development
VPC is 192.168.0.0/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering
connection to the development VPC.

What is the SMALLEST CIDR block that meets these requirements?

A. 10.0.1.0/32

B. 192.168.0.0/24

C. 192.168.1.0/32

D. 10.0.1.0/24

Correct Answer: B

Community vote distribution


D (100%)

  BrainOBrain Highly Voted  7 months, 2 weeks ago


Selected Answer: D
10.0.1.0/32 and 192.168.1.0/32 are too small for a VPC; a /32 network is only one host.
192.168.0.0/24 overlaps with the existing VPC.
upvoted 9 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
10.0.1.0/32 and 192.168.1.0/32 are too small for VPC, and /32 network is only 1 host
192.168.0.0/24 is overlapping with existing VPC
upvoted 1 times

  Abrar2022 4 months ago


Definitely D. The only valid VPC CIDR block that does not overlap with the development VPC CIDR block among the options. The other 2
CIDR block options are too small.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: D
D is correct.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: D
• Option A (10.0.1.0/32) is invalid - a /32 CIDR prefix is a host route, not a VPC range.
• Option B (192.168.0.0/24) overlaps the development VPC and so cannot be used.
• Option C (192.168.1.0/32) is invalid - a /32 CIDR prefix is a host route, not a VPC range.
• Option D (10.0.1.0/24) satisfies the non-overlapping CIDR requirement but is a larger block than needed. Since only two VPCs need to be
peered, a /24 block provides more addresses than necessary.
upvoted 3 times
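The overlap and size checks in the discussion above can be verified with Python's standard ipaddress module; a tiny sketch:

import ipaddress

dev_vpc = ipaddress.ip_network("192.168.0.0/24")

for candidate in ("10.0.1.0/32", "192.168.0.0/24", "192.168.1.0/32", "10.0.1.0/24"):
    net = ipaddress.ip_network(candidate)
    valid_size = 16 <= net.prefixlen <= 28      # allowed VPC CIDR block sizes
    print(candidate,
          "overlaps dev VPC:", dev_vpc.overlaps(net),
          "valid VPC size:", valid_size)

# Only 10.0.1.0/24 is both non-overlapping and an allowed VPC block size.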

  channn 6 months ago


Selected Answer: D
D is the only correct answer
upvoted 1 times

  r04dB10ck 6 months, 2 weeks ago


Selected Answer: D
only one valid with no overlap
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: D
A process-of-elimination solution here. The CIDR prefix length is the number of bits that are locked, so a /32 such as 10.0.1.0/32 means no usable range at all.
upvoted 2 times
  LuckyAro 7 months, 1 week ago
Selected Answer: D
Answer is D, 10.0.1.0/24.
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: D
Yes D is the answer
upvoted 1 times

  obatunde 7 months, 2 weeks ago


Selected Answer: D
Definitely D. It is the only valid VPC CIDR block that does not overlap with the development VPC CIDR block among the options.
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
The allowed block size is between a /28 netmask and /16 netmask.
The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.
https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
upvoted 4 times
Question #297 Topic 1

A company deploys an application on five Amazon EC2 instances. An Application Load Balancer (ALB) distributes traffic to the instances by using
a target group. The average CPU usage on each of the instances is below 10% most of the time, with occasional surges to 65%.

A solutions architect needs to implement a solution to automate the scalability of the application. The solution must optimize the cost of the
architecture and must ensure that the application has enough CPU resources when surges occur.

Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm that enters the ALARM state when the CPUUtilization metric is less than 20%. Create an AWS Lambda
function that the CloudWatch alarm invokes to terminate one of the EC2 instances in the ALB target group.

B. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set a
target tracking scaling policy that is based on the ASGAverageCPUUtilization metric. Set the minimum instances to 2, the desired capacity to
3, the maximum instances to 6, and the target value to 50%. Add the EC2 instances to the Auto Scaling group.

C. Create an EC2 Auto Scaling group. Select the existing ALB as the load balancer and the existing target group as the target group. Set the
minimum instances to 2, the desired capacity to 3, and the maximum instances to 6. Add the EC2 instances to the Auto Scaling group.

D. Create two Amazon CloudWatch alarms. Configure the first CloudWatch alarm to enter the ALARM state when the average CPUUtilization
metric is below 20%. Configure the second CloudWatch alarm to enter the ALARM state when the average CPUUtilization metric is above 50%.
Configure the alarms to publish to an Amazon Simple Notification Service (Amazon SNS) topic to send an email message. After receiving the
message, log in to decrease or increase the number of EC2 instances that are running.

Correct Answer: D

Community vote distribution


B (94%) 6%

  bdp123 Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Just create an auto scaling policy
upvoted 8 times

  vilagiri Most Recent  2 days, 10 hours ago


I picked B, but I am not 100% sure. The application is initially deployed on 5 instances, so what is the logic behind a 2/3/6 ASG? Is the minimum of 2 justified because utilization is only 10%? I know for sure I am not going to get this ASG question correct in the exam.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: B
The correct answer is B.

This solution will meet the requirements because it will:

Automate the scalability of the application by using EC2 Auto Scaling.


Optimize the cost of the architecture by only scaling the number of EC2 instances up when needed.
Ensure that the application has enough CPU resources when surges occur by setting the target value of the target tracking scaling policy
to 50%.
upvoted 1 times

  ajchi1980 3 months ago


Wrong answers: Options A, C, and D are not the most appropriate solutions:

Option A suggests creating a CloudWatch alarm to terminate an EC2 instance when CPU utilization is less than 20%. However, this
approach does not ensure that the application will have enough CPU resources during surges, as it only terminates instances when CPU
utilization is low, which may not meet the requirements.
Option C suggests creating an Auto Scaling group without any specific scaling policies or configurations. This approach does not address
the need for automated scaling based on CPU utilization, making it insufficient for the given requirements.
Option D suggests using CloudWatch alarms to send notifications via Amazon SNS and manually adjusting the number of instances based
on the received messages. This approach lacks automation and requires manual intervention, which does not optimize cost or meet the
requirement of automated scalability.
Therefore, Option B is the most appropriate solution in this case.
upvoted 1 times

  ajchi1980 3 months ago


Selected Answer: B
Explanation:
Option B leverages EC2 Auto Scaling, which is designed to automatically adjust the number of instances based on specified metrics. By
setting a target tracking scaling policy based on average CPU utilization, the Auto Scaling group can dynamically scale the number of
instances to maintain the desired level of CPU resources. The minimum instances of 2 ensure a minimum baseline capacity, while the
desired capacity of 3 ensures at least three instances are running even during normal traffic. The maximum instances of 6 cap the upper
limit to control costs.
upvoted 1 times
  RoroJ 4 months, 1 week ago
Selected Answer: D
Auto Scaling group must have an AMI for it.
upvoted 1 times

  th3k33n 4 months, 2 weeks ago


How can we set the max to 6 when the company is currently using 5 EC2 instances?
upvoted 1 times

  examtopictempacc 4 months, 2 weeks ago


In the scenario you provided, you're setting up an Auto Scaling group to manage the instances for you, and the settings (min 2, desired
3, max 6) are for the Auto Scaling group, not for your existing instances. When you integrate the instances into the Auto Scaling group,
you are effectively moving from a fixed instance count to a dynamic one that can range from 2 to 6 based on the demand.
The existing 5 instances can be included in the Auto Scaling group, but the group can reduce the number of instances if the load is low
(to the minimum specified, which is 2 in this case) and can also add more instances (up to a maximum of 6) if the load increases.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
Reasons:
• An Auto Scaling group will automatically scale the EC2 instances to match changes in demand. This optimizes cost by only running as
many instances as needed.
• A target tracking scaling policy monitors the ASGAverageCPUUtilization metric and scales to keep the average CPU around the 50%
target value. This ensures there are enough resources during CPU surges.
• The ALB and target group are reused, so the application architecture does not change. The Auto Scaling group is associated to the
existing load balancer setup.
• A minimum of 2 and maximum of 6 instances provides the ability to scale between 3 and 6 instances as needed based on demand.
• Costs are optimized by starting with only 3 instances (the desired capacity) and scaling up as needed. When CPU usage drops, instances
are terminated to match the desired capacity.
upvoted 2 times

  kruasan 5 months ago


Option A - terminates instances reactively based on low CPU and may not provide enough capacity during surges. Does not optimize
cost.
Option C - lacks a scaling policy so will not automatically adjust capacity based on changes in demand. Does not ensure enough
resources during surges.
Option D - requires manual intervention to scale capacity. Does not optimize cost or provide an automated solution.
upvoted 1 times
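A minimal sketch, assuming boto3, of the option B configuration discussed above; the group and policy names are hypothetical, and the Auto Scaling group is assumed to already be attached to the existing ALB target group.

import boto3

autoscaling = boto3.client("autoscaling")

# Capacity bounds from option B: min 2, desired 3, max 6
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",          # hypothetical group name
    MinSize=2,
    DesiredCapacity=3,
    MaxSize=6,
)

# Target tracking policy keeping average CPU around 50%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="cpu-target-50",              # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)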

  darn 5 months, 1 week ago


as you dig down the question, they get more and more bogus with less and less votes
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
B is my vote
upvoted 1 times

  KZM 7 months, 1 week ago


Based on the information given, the best solution is option"B".
Autoscaling group with target tracking scaling policy with min 2 instances, desired capacity to 3, and the maximum instances to 6.
upvoted 1 times

  Shrestwt 5 months, 2 weeks ago


But the company is using only 5 EC2 instances, so how can we set the maximum to 6?
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the correct solution because it allows for automatic scaling based on the average CPU utilization of the EC2 instances in the target
group. With the use of a target tracking scaling policy based on the ASGAverageCPUUtilization metric, the EC2 Auto Scaling group can
ensure that the target value of 50% is maintained while scaling the number of instances in the group up or down as needed. This will help
ensure that the application has enough CPU resources during surges without overprovisioning, thus optimizing the cost of the
architecture.
upvoted 1 times

  Babba 7 months, 1 week ago


Selected Answer: B
Should be B
upvoted 1 times
Question #298 Topic 1

A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an
Auto Scaling group and access an Amazon RDS DB instance.

The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A
solutions architect must update the design to use a second Availability Zone.

Which solution will make the application highly available?

A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability
Zones. Configure the DB instance with connections to each network.

B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across
both Availability Zones. Configure the DB instance with connections to each network.

C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability
Zones. Configure the DB instance for Multi-AZ deployment.

D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across
both Availability Zones. Configure the DB instance for Multi-AZ deployment.

Correct Answer: D

Community vote distribution


C (100%)

  bdp123 Highly Voted  7 months, 2 weeks ago


Selected Answer: C
A subnet must reside within a single Availability Zone.
https://aws.amazon.com/vpc/faqs/#:~:text=Can%20a%20subnet%20span%20Availability,within%20a%20single%20Availability%20Zone.
upvoted 9 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
This solution will ensure that the EC2 instances and the DB instance are not located in the same Availability Zone, which will improve the
availability of the application.
upvoted 1 times
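A short sketch, assuming boto3, of the two changes option C calls for: spreading the Auto Scaling group across subnets in two Availability Zones and converting the DB instance to a Multi-AZ deployment. All identifiers are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Attach a subnet from the second Availability Zone to the Auto Scaling group
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="critical-app-asg",               # hypothetical group
    VPCZoneIdentifier="subnet-0aaa111122223333a,subnet-0bbb444455556666b",
)

# Enable Multi-AZ on the RDS DB instance (applied during the next maintenance
# window unless ApplyImmediately is set)
rds.modify_db_instance(
    DBInstanceIdentifier="critical-app-db",                # hypothetical instance
    MultiAZ=True,
    ApplyImmediately=True,
)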

  zjcorpuz 2 months ago


a subnet only resides on a one AZ, it does not span to another AZ.
upvoted 2 times

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: C
D is completely wrong, because each subnet must reside entirely within one Availability Zone and cannot span zones. By launching AWS
resources in separate Availability Zones, you can protect your applications from the failure of a single Availability Zone.
upvoted 1 times

  Anmol_1010 3 months, 2 weeks ago


The key word here was extend.
upvoted 1 times

  GalileoEC2 6 months, 2 weeks ago


This discards B and D: Subnet basics. Each subnet must reside entirely within one Availability Zone and cannot span zones. By launching
AWS resources in separate Availability Zones, you can protect your applications from the failure of a single Availability Zone
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: C
a subnet is per AZ. a scaling group can span multiple AZs. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-
zone.html
upvoted 1 times

  KZM 7 months, 1 week ago


I think D.
Spanning a single subnet across both Availability Zones would let the instances access the DB instance in either zone without going over the public internet.
upvoted 2 times

  KZM 7 months, 1 week ago


Can span like that?
upvoted 1 times

  leoattf 7 months, 1 week ago


Nope. The answer is indeed C.
You cannot span like that. Check the link below:
"Each subnet must reside entirely within one Availability Zone and cannot span zones."
https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
upvoted 3 times

  KZM 7 months ago


Thanks, Leoattf for the link you shared.
upvoted 2 times

  KZM 7 months, 1 week ago


Sorry I think C is correct.
upvoted 1 times

  Babba 7 months, 1 week ago


Selected Answer: C
it's C
upvoted 1 times
Question #299 Topic 1

A research laboratory needs to process approximately 8 TB of data. The laboratory requires sub-millisecond latencies and a minimum throughput
of 6 GBps for the storage subsystem. Hundreds of Amazon EC2 instances that run Amazon Linux will distribute and process the data.

Which solution will meet the performance requirements?

A. Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to ALL. Import the raw data into the file system.
Mount the file system on the EC2 instances.

B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select
the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.

C. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent HDD storage. Select
the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.

D. Create an Amazon FSx for NetApp ONTAP file system. Set each volume’s tiering policy to NONE. Import the raw data into the file system.
Mount the file system on the EC2 instances.

Correct Answer: D

Community vote distribution


B (100%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Keyword here is a minimum throughput of 6 GBps. Only the FSx for Lustre with SSD option gives sub-millisecond response times and a throughput of 6 GBps or more.
B. Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select
the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances.

References:
https://aws.amazon.com/fsx/when-to-choose-fsx/
upvoted 9 times

  bdp123 Highly Voted  7 months, 2 weeks ago


Selected Answer: B
Create an Amazon S3 bucket to store the raw data. Create an Amazon FSx for Lustre file system that uses persistent SSD storage. Select the option to import data from and export data to Amazon S3. Mount the file system on the EC2 instances. Amazon FSx for Lustre uses SSD storage for sub-millisecond latencies and up to 6 GBps throughput, and can import data from and export data to Amazon S3. Additionally, the option to select persistent SSD storage ensures that the data is stored on disk and not lost if the file system is stopped.
system is stopped.
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: B
Amazon FSx for Lustre with SSD: Amazon FSx for Lustre is designed for high-performance, parallel file processing workloads. Choosing
SSD storage ensures fast I/O and meets the sub-millisecond latency requirement.
upvoted 1 times

  rolervengador 1 month ago


I vote for B.
upvoted 1 times

  Gooniegoogoo 3 months ago


So many of these are wrong; it's good we have people that vote so we can get to the right answer!!
upvoted 1 times

  kruasan 5 months ago


Selected Answer: B
• Amazon FSx for Lustre with SSD storage can provide up to 260 GB/s of aggregate throughput and sub-millisecond latencies needed for
this workload.
• Persistent SSD storage ensures data durability in the file system. Data is also exported to S3 for backup storage.
• The file system will import the initial 8 TB of raw data from S3, providing a fast storage tier for processing while retaining the data in S3.
• The file system is mounted to the EC2 compute instances to distribute processing.
• FSx for Lustre is optimized for high-performance computing workloads running on Linux, matching the EC2 environment.
upvoted 1 times
  kruasan 5 months ago
Option A - FSx for NetApp ONTAP with ALL tiering policy would not provide fast enough storage tier for sub-millisecond latency. HDD
tiers have higher latency.
Option C - FSx for Lustre with HDD storage would not provide the throughput, IOPS or low latency needed.
Option D - FSx for NetApp ONTAP with NONE tiering policy would require much more expensive SSD storage to meet requirements,
increasing cost.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
I vote B
upvoted 1 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: B
FSx for Lustre offers up to 1,000 MB/s per TiB provisioned, and we have 8 TB, which gives us about 8 GB/s. FSx for NetApp ONTAP appears to have a hard limit of around 4 GB/s.

https://aws.amazon.com/fsx/lustre/faqs/?nc=sn&loc=5
https://aws.amazon.com/fsx/netapp-ontap/faqs/
upvoted 3 times
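A quick back-of-the-envelope check of the throughput reasoning above, assuming the 125/250/500/1000 MB/s-per-TiB tiers offered for FSx for Lustre persistent SSD storage (tier values are an assumption here, not stated in the question):

storage_tib = 8
for per_tib_mbps in (125, 250, 500, 1000):
    total_gbps = storage_tib * per_tib_mbps / 1000
    print(f"{per_tib_mbps} MB/s per TiB -> {total_gbps:.1f} GB/s aggregate")

# Only the 1000 MB/s-per-TiB tier reaches the required 6 GB/s with 8 TiB
# provisioned, giving roughly 8 GB/s.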

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the best choice as it utilizes Amazon S3 for data storage, which is cost-effective and durable, and Amazon FSx for Lustre for high-
performance file storage, which provides the required sub-millisecond latencies and minimum throughput of 6 GBps. Additionally, the
option to import and export data to and from Amazon S3 makes it easier to manage and move data between the two services.
B is the best option as it meets the performance requirements for sub-millisecond latencies and a minimum throughput of 6 GBps.
upvoted 1 times

  everfly 7 months, 1 week ago


Selected Answer: B
Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system. It
can deliver sub-millisecond latencies and hundreds of gigabytes per second of throughput.
upvoted 3 times
Question #300 Topic 1

A company needs to migrate a legacy application from an on-premises data center to the AWS Cloud because of hardware capacity constraints.
The application runs 24 hours a day, 7 days a week. The application’s database storage continues to grow over time.

What should a solutions architect do to meet these requirements MOST cost-effectively?

A. Migrate the application layer to Amazon EC2 Spot Instances. Migrate the data storage layer to Amazon S3.

B. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon RDS On-Demand Instances.

C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.

D. Migrate the application layer to Amazon EC2 On-Demand Instances. Migrate the data storage layer to Amazon RDS Reserved Instances.

Correct Answer: C

Community vote distribution


C (85%) B (15%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: C
Amazon EC2 Reserved Instances allow for significant cost savings compared to On-Demand instances for long-running, steady-state
workloads like this one. Reserved Instances provide a capacity reservation, so the instances are guaranteed to be available for the
duration of the reservation period.

Amazon Aurora is a highly scalable, cloud-native relational database service that is designed to be compatible with MySQL and
PostgreSQL. It can automatically scale up to meet growing storage requirements, so it can accommodate the application's database
storage needs over time. By using Reserved Instances for Aurora, the cost savings will be significant over the long term.
upvoted 9 times

  NolaHOla Highly Voted  7 months, 2 weeks ago


Option B based on the fact that the DB storage will continue to grow, so on-demand will be a more suitable solution
upvoted 7 times

  NolaHOla 7 months, 1 week ago


Since the application's database storage is continuously growing over time, it may be difficult to estimate the appropriate size of the
Aurora cluster in advance, which is required when reserving Aurora.

In this case, it may be more cost-effective to use Amazon RDS On-Demand Instances for the data storage layer. With RDS On-Demand
Instances, you pay only for the capacity you use and you can easily scale up or down the storage as needed.
upvoted 4 times

  hristni0 4 months ago


Answer is C. From Aurora Reserved Instances documentation:
If you have a DB instance, and you need to scale it to larger capacity, your reserved DB instance is automatically applied to your
scaled DB instance. That is, your reserved DB instances are automatically applied across all DB instance class sizes. Size-flexible
reserved DB instances are available for DB instances with the same AWS Region and database engine.
upvoted 1 times

  Joxtat 7 months, 1 week ago


The Answer is C.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html
upvoted 1 times

  Wayne23Fang Most Recent  1 week, 3 days ago


My research concludes that, from a pure price point of view, Aurora Reserved is usually slightly more expensive than on-demand RDS, but RDS has less operational overhead. For the 24x7 nature of the workload I would vote C, but for pure cost-effectiveness, B is less costly.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: C
This option involves migrating the application layer to Amazon EC2 Reserved Instances and migrating the data storage layer to Amazon
Aurora Reserved Instances. Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to On-Demand Instance
pricing, making them a cost-effective choice for applications that have steady state or predictable usage. Similarly, Amazon Aurora
Reserved Instances provide a significant discount (up to 69%) compared to On-Demand Instance pricing.
upvoted 1 times

  ajchi1980 3 months ago


Selected Answer: C
To meet the requirements of migrating a legacy application from an on-premises data center to the AWS Cloud in a cost-effective manner,
the most suitable option would be:

C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.

Explanation:

Migrating the application layer to Amazon EC2 Reserved Instances allows you to reserve EC2 capacity in advance, providing cost savings
compared to On-Demand Instances. This is especially beneficial if the application runs 24/7.

Migrating the data storage layer to Amazon Aurora Reserved Instances provides cost optimization for the growing database storage
needs. Amazon Aurora is a fully managed relational database service that offers high performance, scalability, and cost efficiency.
upvoted 1 times

  TariqKipkemei 5 months, 2 weeks ago


Answer is C
upvoted 1 times

  QuangPham810 5 months, 2 weeks ago


Answer is C. Refer https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithReservedDBInstances.html =>
Size-flexible reserved DB instances
upvoted 1 times

  Abhineet9148232 6 months, 3 weeks ago


Selected Answer: C
C: With Aurora Serverless v2, each writer and reader has its own current capacity value, measured in ACUs. Aurora Serverless v2 scales a
writer or reader up to a higher capacity when its current capacity is too low to handle the load. It scales the writer or reader down to a
lower capacity when its current capacity is higher than needed.

This is sufficient to accommodate the growing data changes.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-
works.scaling
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: C
Typically Amazon RDS costs less than Aurora, but here the Aurora option is reserved.
upvoted 1 times

  ACasper 7 months ago


Answer C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithReservedDBInstances.html
Discounts for reserved DB instances are tied to instance type and AWS Region.
upvoted 1 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: C
Both RDS and RDS Aurora support storage auto scaling.
Aurora is more expensive than base RDS, but between B and C, the Aurora option is a reserved instance while the base RDS option is on demand. Also, it states the DB storage will grow, so there is no concern about a bigger DB instance (server), only the actual storage.
upvoted 1 times

  Joxtat 7 months, 1 week ago


Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html
upvoted 1 times

  Samuel03 7 months, 1 week ago


Selected Answer: B
I also think it is B. Otherwise there is no point in mentioning the growing storage requirements.
upvoted 2 times

  Americo32 7 months, 1 week ago


Selected Answer: B
Option B, based on the fact that the database storage will continue to grow; therefore, on-demand will be a more suitable solution.
upvoted 1 times

  Americo32 7 months, 1 week ago


Changing to option C. Important purchasing note: reserved instance pricing covers only the instance costs. Storage and I/O are still billed separately.
upvoted 1 times
  ManOnTheMoon 7 months, 1 week ago
Why not B?
upvoted 3 times

  skiwili 7 months, 2 weeks ago


Selected Answer: C
Ccccccc
upvoted 2 times
Question #301 Topic 1

A university research laboratory needs to migrate 30 TB of data from an on-premises Windows file server to Amazon FSx for Windows File Server.
The laboratory has a 1 Gbps network link that many other departments in the university share.

The laboratory wants to implement a data migration service that will maximize the performance of the data transfer. However, the laboratory
needs to be able to control the amount of bandwidth that the service uses to minimize the impact on other departments. The data migration must
take place within the next 5 days.

Which AWS solution will meet these requirements?

A. AWS Snowcone

B. Amazon FSx File Gateway

C. AWS DataSync

D. AWS Transfer Family

Correct Answer: C

Community vote distribution


C (100%)

  Michal_L_95 Highly Voted  6 months, 3 weeks ago


Selected Answer: C
Having read a bit, I assume that B (FSx File Gateway) requires a little more configuration than C (DataSync). From Stephane Maarek's course explanation of DataSync:
An online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises storage
systems and AWS Storage services, as well as between AWS Storage services.

You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx
for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
C. AWS DataSync
upvoted 1 times

  Nikki013 1 month ago


Selected Answer: C
https://aws.amazon.com/datasync/features/
upvoted 1 times

  jayce5 3 months, 2 weeks ago


Selected Answer: C
"Amazon FSx File Gateway" is for storing data, not for migrating. So the answer should be C.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: C
AWS DataSync is a data transfer service that can copy large amounts of data between on-premises storage and Amazon FSx for Windows
File Server at high speeds. It allows you to control the amount of bandwidth used during data transfer.
• DataSync uses agents at the source and destination to automatically copy files and file metadata over the network. This optimizes the
data transfer and minimizes the impact on your network bandwidth.
• DataSync allows you to schedule data transfers and configure transfer rates to suit your needs. You can transfer 30 TB within 5 days
while controlling bandwidth usage.
• DataSync can resume interrupted transfers and validate data to ensure integrity. It provides detailed monitoring and reporting on the
progress and performance of data transfers.
upvoted 3 times

  kruasan 5 months ago


Option A - AWS Snowcone is more suitable for physically transporting data when network bandwidth is limited. It would not complete
the transfer within 5 days.
Option B - Amazon FSx File Gateway only provides access to files stored in Amazon FSx and does not perform the actual data migration
from on-premises to FSx.
Option D - AWS Transfer Family is for transferring files over FTP, FTPS and SFTP. It may require scripting to transfer 30 TB and monitor
progress, and lacks bandwidth controls.
upvoted 2 times
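
To illustrate the bandwidth control discussed above, here is a minimal boto3 sketch that creates a DataSync task throttled to roughly 500 Mbps so the shared 1 Gbps link is not saturated; the location ARNs and the exact limit are placeholder assumptions.

    import boto3

    datasync = boto3.client("datasync")

    # Task between an on-premises location and an FSx for Windows location (placeholder ARNs),
    # capped at ~500 Mbps (62,500,000 bytes per second).
    task = datasync.create_task(
        SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-onprem-smb",
        DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-fsx-windows",
        Name="lab-30tb-migration",
        Options={"BytesPerSecond": 62500000},
    )

    # The limit can be raised later (e.g. overnight) without recreating the task.
    datasync.update_task(TaskArn=task["TaskArn"], Options={"BytesPerSecond": 125000000})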
  shanwford 5 months, 3 weeks ago
Selected Answer: C
Snowcone is too small and the delivery time is too long. With DataSync you can set bandwidth limits, so this is a fine solution.
upvoted 3 times

  MaxMa 6 months ago


Why not B?
upvoted 1 times

  Guru4Cloud 2 weeks, 2 days ago


Transferring would take much longer than the required 5 days.
upvoted 1 times

  AlessandraSAA 6 months, 4 weeks ago


A is not possible because Snowcone is just 8 TB and it takes 4-6 business days to deliver.
B - why can't it be this? https://aws.amazon.com/storagegateway/file/fsx/?
C - I don't really get this one.
D cannot be the answer because it is not compatible - https://aws.amazon.com/aws-transfer-family/
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: C
Voting C
upvoted 1 times

  Bhawesh 7 months, 1 week ago


Selected Answer: C
C. - DataSync is Correct.
A. Snowcone is incorrect. The question says the data migration must take place within the next 5 days. AWS says: if you order, you will receive the Snowcone device in approximately 4-6 days.
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
DataSync can be used to migrate data between on-premises Windows file servers and Amazon FSx for Windows File Server with its
compatibility for Windows file systems.

The laboratory needs to migrate a large amount of data (30 TB) within a relatively short timeframe (5 days) and limit the impact on other
departments' network traffic. Therefore, AWS DataSync can meet these requirements by providing fast and efficient data transfer with
network throttling capability to control bandwidth usage.
upvoted 3 times

  cloudbusting 7 months, 2 weeks ago


https://docs.aws.amazon.com/datasync/latest/userguide/configure-bandwidth.html
upvoted 2 times

  bdp123 7 months, 2 weeks ago


Selected Answer: C
https://aws.amazon.com/datasync/
upvoted 2 times
Question #302 Topic 1

A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures
video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket.
However, the videos are large in their raw format.

Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the
performance and scalability of the app while minimizing operational overhead.

Which combination of solutions will meet these requirements? (Choose two.)

A. Deploy Amazon CloudFront for content delivery and caching.

B. Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets.

C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.

D. Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching.

E. Deploy an Auto Scaling group of Amazon EC2 instances to convert the video files to more appropriate formats.

Correct Answer: A

Community vote distribution


A (53%) C (47%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


For Minimum operational overhead, the 2 options A,C should be correct.
A. Deploy Amazon CloudFront for content delivery and caching.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
upvoted 10 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
For Minimum operational overhead, the 2 options A,C should be correct.
A. Deploy Amazon CloudFront for content delivery and caching.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
upvoted 1 times

  Guru4Cloud 1 week ago


ExamTopics team, please fix this question; please allow selecting two answers.
upvoted 1 times

  jacob_ho 1 month ago


Elastic Transcoder has been deprecated, and AWS now encourages migrating to AWS Elemental MediaConvert:
https://aws.amazon.com/blogs/media/how-to-migrate-workflows-from-amazon-elastic-transcoder-to-aws-elemental-mediaconvert/
upvoted 1 times

  enc_0343 3 months ago


Selected Answer: A
AC is the correct answer
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: A
AC, the only possible answers.
upvoted 1 times

  Eden 4 months, 4 weeks ago


It says choose two so I chose AC
upvoted 1 times

  WherecanIstart 6 months, 2 weeks ago


Selected Answer: C
A & C are the right answers.
upvoted 2 times

  kampatra 6 months, 3 weeks ago


Selected Answer: A
Correct answer: AC
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: C
A and C. Transcoder does exactly what this needs.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: A
A and C. CloudFront has caching, which covers A.
upvoted 1 times

  wawaw3213 7 months, 1 week ago


Selected Answer: C
a and c
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: C
Both A and C - I was not able to choose both
https://aws.amazon.com/elastictranscoder/
upvoted 2 times

  Bhrino 7 months, 1 week ago


Selected Answer: C
A and C, because CloudFront would help the performance for content such as this, and Elastic Transcoder makes converting the videos for different devices almost seamless.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A & C.
A: Deploy Amazon CloudFront for content delivery and caching: Amazon CloudFront is a content delivery network (CDN) that can help
improve the performance and scalability of the app by caching content at edge locations, reducing latency, and improving the delivery of
video clips to users. CloudFront can also provide features such as DDoS protection, SSL/TLS encryption, and content compression to
optimize the delivery of video clips.

C: Use Amazon Elastic Transcoder to convert the video files to more appropriate formats: Amazon Elastic Transcoder is a service that can
help optimize the video format for mobile devices, reducing the size of the video files, and improving the playback performance. Elastic
Transcoder can also convert videos into multiple formats to support different devices and platforms.
upvoted 2 times
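
As a rough sketch of option C, the snippet below submits a job to an existing Elastic Transcoder pipeline; the pipeline ID, object keys, and preset ID are placeholders (a real job would use a preset suited to mobile playback).

    import boto3

    transcoder = boto3.client("elastictranscoder")

    # Transcode a raw upload into a mobile-friendly rendition (placeholder IDs and keys).
    response = transcoder.create_job(
        PipelineId="1111111111111-abcde1",
        Input={"Key": "raw/clip-12345.mov"},
        Outputs=[
            {
                "Key": "mobile/clip-12345.mp4",
                "PresetId": "1351620000001-000010",   # placeholder preset ID
            }
        ],
    )
    print(response["Job"]["Id"])

CloudFront would then sit in front of the output bucket to cache and deliver the transcoded clips.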

  Babba 7 months, 1 week ago


Selected Answer: A
Clearly A & C
upvoted 1 times

  jahmad0730 7 months, 2 weeks ago


Selected Answer: A
A and C
upvoted 1 times
Question #303 Topic 1

A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate
launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its
launch. However, the company wants to reduce costs when utilization decreases.

What should a solutions architect recommend?

A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.

B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.

C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

Correct Answer: D

Community vote distribution


D (100%)

  rrharris Highly Voted  7 months, 2 weeks ago


Answer is D - Auto-scaling with target tracking
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
Answer is D - Auto-scaling with target tracking
upvoted 1 times

  TariqKipkemei 5 months ago


Answer is D - Application Auto Scaling is a web service for developers and system administrators who need a solution for automatically
scaling their scalable resources for individual AWS services beyond Amazon EC2.
upvoted 3 times
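
For readers who want to see option D in practice, this is a minimal boto3 sketch that registers an ECS service with Application Auto Scaling and attaches a target tracking policy on average CPU utilization; the cluster/service names, capacity bounds, and target value are assumptions.

    import boto3

    aas = boto3.client("application-autoscaling")

    resource_id = "service/prod-cluster/web-service"   # placeholder cluster/service

    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=20,
    )

    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,   # keep average CPU around 60%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleInCooldown": 120,
            "ScaleOutCooldown": 60,
        },
    )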

  boxu03 6 months, 3 weeks ago


Selected Answer: D
should be D
upvoted 1 times

  Joxtat 7 months, 1 week ago


Selected Answer: D
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 3 times

  jahmad0730 7 months, 2 weeks ago


Selected Answer: D
Answer is D
upvoted 2 times

  Neha999 7 months, 2 weeks ago


D : auto-scaling with target tracking
upvoted 3 times
Question #304 Topic 1

A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and
forth between NFS file systems in the two Regions on a periodic basis.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS DataSync.

B. Use AWS Snowball devices.

C. Set up an SFTP server on Amazon EC2.

D. Use AWS Database Migration Service (AWS DMS).

Correct Answer: A

Community vote distribution


A (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: A
AWS DataSync is a fully managed data transfer service that simplifies moving large amounts of data between on-premises storage
systems and AWS services. It can also transfer data between different AWS services, including different AWS Regions. DataSync provides a
simple, scalable, and automated solution to transfer data, and it minimizes the operational overhead because it is fully managed by AWS.
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
Use AWS DataSync.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
• AWS DataSync is a data transfer service optimized for moving large amounts of data between NFS file systems. It can automatically copy
files and metadata between your NFS file systems in different AWS Regions.
• DataSync requires minimal setup and management. You deploy a source and destination agent, provide the source and destination
locations, and DataSync handles the actual data transfer efficiently in the background.
• DataSync can schedule and monitor data transfers to keep source and destination in sync with minimal overhead. It resumes interrupted
transfers and validates data integrity.
• DataSync optimizes data transfer performance across AWS's network infrastructure. It can achieve high throughput with minimal impact
to your operations.
upvoted 2 times

  kruasan 5 months ago


Option B - AWS Snowball requires physical devices to transfer data. This incurs overhead to transport devices and manually load/unload
data. It is not an online data transfer solution.
Option C - Setting up and managing an SFTP server would require provisioning EC2 instances, handling security groups, and writing
scripts to automate the data transfer - all of which demand more overhead than DataSync.
Option D - AWS Database Migration Service is designed for migrating databases, not general file system data. It would require
converting your NFS data into a database format, incurring additional overhead.
upvoted 1 times
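
A hedged sketch of how the two NFS file systems could be registered with DataSync and synced on a schedule; hostnames, agent ARNs, subdirectories, and the cron expression are all placeholders, and it assumes a DataSync agent can reach each NFS server.

    import boto3

    datasync = boto3.client("datasync")

    # NFS location in the primary Region (placeholder hostname, path, and agent ARN).
    src = datasync.create_location_nfs(
        ServerHostname="nfs-primary.example.internal",
        Subdirectory="/exports/app-data",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-primary"]},
    )

    # NFS location in the disaster recovery Region (also placeholders).
    dst = datasync.create_location_nfs(
        ServerHostname="nfs-dr.example.internal",
        Subdirectory="/exports/app-data",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:eu-west-1:111122223333:agent/agent-dr"]},
    )

    # Periodic sync task, e.g. nightly at 02:00 UTC.
    datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dst["LocationArn"],
        Name="periodic-dr-sync",
        Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
    )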

  ashu089 6 months, 1 week ago


Selected Answer: A
A only
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: A
Aaaaaa
upvoted 1 times

  NolaHOla 7 months, 2 weeks ago


A should be correct
upvoted 1 times
Question #305 Topic 1

A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use
SMB clients to access data. The solution must be fully managed.

Which AWS solution meets these requirements?

A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.

B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.

C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.

D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the
application server.

Correct Answer: C

Community vote distribution


C (100%)

  rrharris Highly Voted  7 months, 2 weeks ago


Answer is C - SMB = storage gateway or FSx
upvoted 5 times

  Neha999 Highly Voted  7 months, 2 weeks ago


C: Amazon FSx for Windows File Server file system
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
Answer is C - SMB = storage gateway or FSx
upvoted 1 times

  kruasan 5 months ago


Selected Answer: C
• Amazon FSx for Windows File Server provides a fully managed native Windows file system that can be accessed using the industry-
standard SMB protocol. This allows Windows clients like the gaming application to directly access file data.
• FSx for Windows File Server handles time-consuming file system administration tasks like provisioning, setup, maintenance, file share
management, backups, security, and software patching - reducing operational overhead.
• FSx for Windows File Server supports high file system throughput, IOPS, and consistent low latencies required for performance-sensitive
workloads. This makes it suitable for a gaming application.
• The file system can be directly attached to EC2 instances, providing a performant shared storage solution for the gaming servers.
upvoted 2 times

  kruasan 5 months ago


Option A - DataSync is for data transfer, not providing a shared file system. It cannot be mounted or directly accessed.
Option B - A self-managed EC2 file share would require manually installing, configuring and maintaining a Windows file system and
share. This demands significant overhead to operate.
Option D - Amazon S3 is object storage, not a native file system. The data in S3 would need to be converted/formatted to provide file
share access, adding complexity. S3 cannot be directly mounted or provide the performance of FSx.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: C
Amazon FSx for Windows File Server
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: C
I vote C since FSx supports SMB
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
AWS FSx for Windows File Server is a fully managed native Microsoft Windows file system that is accessible through the SMB protocol. It
provides features such as file system backups, integrated with Amazon S3, and Active Directory integration for user authentication and
access control. This solution allows for the use of SMB clients to access the data and is fully managed, eliminating the need for the
company to manage the underlying infrastructure.
upvoted 2 times
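
A minimal sketch of option C, assuming placeholder subnet, directory, and sizing values; SMB clients then map a share using the DNS name the API returns.

    import boto3

    fsx = boto3.client("fsx")

    response = fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageType="SSD",
        StorageCapacity=1024,                        # GiB, placeholder sizing
        SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
        WindowsConfiguration={
            "DeploymentType": "SINGLE_AZ_2",
            "ThroughputCapacity": 32,                # MB/s, placeholder
            "ActiveDirectoryId": "d-1234567890",     # placeholder AWS Managed Microsoft AD
        },
    )
    print(response["FileSystem"]["DNSName"])         # SMB clients map shares via this DNS name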
  Babba 7 months, 1 week ago
Selected Answer: C
C for me
upvoted 1 times
Question #306 Topic 1

A company wants to run an in-memory database for a latency-sensitive application that runs on Amazon EC2 instances. The application
processes more than 100,000 transactions each minute and requires high network throughput. A solutions architect needs to provide a cost-
effective network design that minimizes data transfer charges.

Which solution meets these requirements?

A. Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when
launching EC2 instances.

B. Launch all EC2 instances in different Availability Zones within the same AWS Region. Specify a placement group with partition strategy
when launching EC2 instances.

C. Deploy an Auto Scaling group to launch EC2 instances in different Availability Zones based on a network utilization target.

D. Deploy an Auto Scaling group with a step scaling policy to launch EC2 instances in different Availability Zones.

Correct Answer: A

Community vote distribution


A (100%)

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: A
Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when
launching EC2 instances
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
Reasons:
• Launching instances within a single AZ and using a cluster placement group provides the lowest network latency and highest bandwidth
between instances. This maximizes performance for an in-memory database and high-throughput application.
• Communications between instances in the same AZ and placement group are free, minimizing data transfer charges. Inter-AZ and public
IP traffic can incur charges.
• A cluster placement group enables the instances to be placed close together within the AZ, allowing the high network throughput
required. Partition groups span AZs, reducing bandwidth.
• Auto Scaling across zones could launch instances in AZs that increase data transfer charges. It may reduce network throughput,
impacting performance.
upvoted 3 times

  kruasan 5 months ago


In contrast:
Option B - A partition placement group spans AZs, reducing network bandwidth between instances and potentially increasing costs.
Option C - Auto Scaling alone does not guarantee the network throughput and cost controls required for this use case. Launching
across AZs could increase data transfer charges.
Option D - Step scaling policies determine how many instances to launch based on metrics alone. They lack control over network
connectivity and costs between instances after launch.
upvoted 2 times
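
A small sketch of option A with placeholder AMI, subnet, and instance values: create a cluster placement group and launch the instances into it within a single Availability Zone.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_placement_group(GroupName="in-memory-db-cluster", Strategy="cluster")

    # Launch all instances into one AZ (via a single-AZ subnet) and into the cluster placement group.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",             # placeholder AMI
        InstanceType="r6i.4xlarge",                  # placeholder memory-optimized type
        MinCount=4,
        MaxCount=4,
        SubnetId="subnet-0123456789abcdef0",         # placeholder subnet in the chosen AZ
        Placement={"GroupName": "in-memory-db-cluster"},
    )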

  NoinNothing 5 months, 2 weeks ago


Selected Answer: A
Cluster placement groups have low latency when the instances are in the same AZ and the same Region, so the answer is "A".
upvoted 2 times

  BeeKayEnn 6 months ago


The answer would be A. By selecting all the EC2 instances in the same Availability Zone, they will all be within the same data center, so the latency will be much lower than across Availability Zones.

Because all the nodes will be in the same Availability Zone (per placement groups with cluster mode), this provides the low-latency network performance.

Reference is below:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 2 times

  [Removed] 6 months ago


Selected Answer: A
A - Low latency, high net throughput
upvoted 1 times
  elearningtakai 6 months ago
Selected Answer: A
A placement group is a logical grouping of instances within a single Availability Zone, and it provides low-latency network connectivity
between instances. By launching all EC2 instances in the same Availability Zone and specifying a placement group with cluster strategy,
the application can take advantage of the high network throughput and low latency network connectivity that placement groups provide.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: A
Cluster placement groups improves throughput between the instances which means less EC2 instances would be needed thus reducing
costs.
upvoted 1 times

  maciekmaciek 7 months, 1 week ago


Selected Answer: A
A because Specify a placement group
upvoted 1 times

  KZM 7 months, 1 week ago


It is option A:
To achieve low latency, high throughput, and cost-effectiveness, the optimal solution is to launch EC2 instances as a placement group with
the cluster strategy within the same Availability Zone.
upvoted 2 times

  ManOnTheMoon 7 months, 1 week ago


Why not C?
upvoted 1 times

  Steve_4542636 7 months ago


You're thinking operational efficiency. The question asks for cost reduction.
upvoted 2 times

  rrharris 7 months, 2 weeks ago


Answer is A - Clustering
upvoted 2 times

  Neha999 7 months, 2 weeks ago


A : Cluster placement group
upvoted 4 times
Question #307 Topic 1

A company that primarily runs its application servers on premises has decided to migrate to AWS. The company wants to minimize its need to
scale its Internet Small Computer Systems Interface (iSCSI) storage on premises. The company wants only its recently accessed data to remain
stored locally.

Which AWS solution should the company use to meet these requirements?

A. Amazon S3 File Gateway

B. AWS Storage Gateway Tape Gateway

C. AWS Storage Gateway Volume Gateway stored volumes

D. AWS Storage Gateway Volume Gateway cached volumes

Correct Answer: A

Community vote distribution


D (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: D
AWS Storage Gateway Volume Gateway provides two configurations for connecting to iSCSI storage, namely, stored volumes and cached
volumes. The stored volume configuration stores the entire data set on-premises and asynchronously backs up the data to AWS. The
cached volume configuration stores recently accessed data on-premises, and the remaining data is stored in Amazon S3.

Since the company wants only its recently accessed data to remain stored locally, the cached volume configuration would be the most
appropriate. It allows the company to keep frequently accessed data on-premises and reduce the need for scaling its iSCSI storage while
still providing access to all data through the AWS cloud. This configuration also provides low-latency access to frequently accessed data
and cost-effective off-site backups for less frequently accessed data.
upvoted 18 times
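
As a rough illustration only: once a Volume Gateway has been deployed and activated in cached mode, a cached volume can be created and exposed as an iSCSI target roughly like this (every identifier below is a placeholder).

    import boto3

    sgw = boto3.client("storagegateway")

    response = sgw.create_cached_iscsi_volume(
        GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
        VolumeSizeInBytes=500 * 1024**3,          # 500 GiB volume, backed by S3 with a local cache
        TargetName="app-data",                    # becomes part of the iSCSI target name
        NetworkInterfaceId="10.0.1.25",           # gateway interface the iSCSI initiators connect to
        ClientToken="app-data-volume-001",        # idempotency token
    )
    print(response["TargetARN"])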

  smgsi Highly Voted  7 months, 2 weeks ago


Selected Answer: D
https://docs.amazonaws.cn/en_us/storagegateway/latest/vgw/StorageGatewayConcepts.html#storage-gateway-cached-concepts
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
The best AWS solution to meet the requirements is to use AWS Storage Gateway cached volumes (option D).

The key points:
• The company is migrating on-premises app servers to AWS.
• It wants to minimize scaling of its on-premises iSCSI storage.
• Only recently accessed data should remain on-premises.

AWS Storage Gateway cached volumes allow the company to connect their on-premises iSCSI storage to AWS cloud storage. The gateway stores
frequently accessed data locally in the cache for low-latency access, while older data is stored in AWS.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: D
• Volume Gateway cached volumes store entire datasets on S3, while keeping a portion of recently accessed data on your local storage as
a cache. This meets the goal of minimizing on-premises storage needs while keeping hot data local.
• The cache provides low-latency access to your frequently accessed data, while long-term retention of the entire dataset is provided
durable and cost-effective in S3.
• You get virtually unlimited storage on S3 for your infrequently accessed data, while controlling the amount of local storage used for
cache. This simplifies on-premises storage scaling.
• Volume Gateway cached volumes support iSCSI connections from on-premises application servers, allowing a seamless migration
experience. Servers access local cache and S3 storage volumes as iSCSI LUNs.
upvoted 3 times

  kruasan 5 months ago


In contrast:
Option A - S3 File Gateway only provides file interfaces (NFS/SMB) to data in S3. It does not support block storage or cache recently
accessed data locally.
Option B - Tape Gateway is designed for long-term backup and archiving to virtual tape cartridges on S3. It does not provide primary
storage volumes or local cache for low-latency access.
Option C - Volume Gateway stored volumes keep entire datasets locally, then asynchronously back them up to S3. This does not meet
the goal of minimizing on-premises storage needs.
upvoted 2 times
  Steve_4542636 7 months ago
Selected Answer: D
I vote D
upvoted 1 times

  ManOnTheMoon 7 months, 1 week ago


Agree with D
upvoted 1 times

  Babba 7 months, 1 week ago


Selected Answer: D
recently accessed data to remain stored locally - cached
upvoted 3 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: D
D. AWS Storage Gateway Volume Gateway cached volumes
upvoted 3 times

  bdp123 7 months, 2 weeks ago


Selected Answer: D
recently accessed data to remain stored locally - cached
upvoted 3 times
Question #308 Topic 1

A company has multiple AWS accounts that use consolidated billing. The company runs several active high performance Amazon RDS for Oracle
On-Demand DB instances for 90 days. The company’s finance team has access to AWS Trusted Advisor in the consolidated billing account and all
other AWS accounts.

The finance team needs to use the appropriate AWS account to access the Trusted Advisor check recommendations for RDS. The finance team
must review the appropriate Trusted Advisor check to reduce RDS costs.

Which combination of steps should the finance team take to meet these requirements? (Choose two.)

A. Use the Trusted Advisor recommendations from the account where the RDS instances are running.

B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time.

C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization.

D. Review the Trusted Advisor check for Amazon RDS Idle DB Instances.

E. Review the Trusted Advisor check for Amazon Redshift Reserved Node Optimization.

Correct Answer: AC

Community vote distribution


BD (77%) BC (23%)

  Nietzsche82 Highly Voted  7 months, 2 weeks ago


Selected Answer: BD
B&D
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 11 times

  MrAWSAssociate Most Recent  3 months, 1 week ago


Selected Answer: BD
B&D are correct !
upvoted 1 times

  kruasan 5 months ago


Selected Answer: BD
https://docs.aws.amazon.com/awssupport/latest/user/organizational-view.html
https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-rds-idle-dbs-instances
upvoted 1 times

  ErfanKh 5 months, 3 weeks ago


Selected Answer: BC
I think BC and ChatGPT as well
upvoted 2 times

  kraken21 6 months ago


Selected Answer: BD
B and D
upvoted 1 times

  Russs99 6 months, 1 week ago


Selected Answer: BC
Option A is not necessary, as the Trusted Advisor recommendations can be accessed from the consolidated billing account. Option D is not
relevant, as the check for idle DB instances is not specific to RDS instances. Option E is for Amazon Redshift, not RDS, and is therefore not
relevant.
upvoted 2 times

  kruasan 5 months ago


it is
Amazon RDS Idle DB Instances
Description
Checks the configuration of your Amazon Relational Database Service (Amazon RDS) for any database (DB) instances that appear to be
idle.

If a DB instance has not had a connection for a prolonged period of time, you can delete the instance to reduce costs. A DB instance is
considered idle if the instance hasn't had a connection in the past 7 days. If persistent storage is needed for data on the instance, you
can use lower-cost options such as taking and retaining a DB snapshot. Manually created DB snapshots are retained until you delete
them.
https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-rds-idle-dbs-instances
upvoted 1 times
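
For reference, the same check can also be read programmatically through the AWS Support API (available on Business or Enterprise support plans); the sketch below looks up the "Amazon RDS Idle DB Instances" check and prints its flagged resources.

    import boto3

    # The AWS Support API is served from us-east-1.
    support = boto3.client("support", region_name="us-east-1")

    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    idle_rds = next(c for c in checks if c["name"] == "Amazon RDS Idle DB Instances")

    result = support.describe_trusted_advisor_check_result(checkId=idle_rds["id"], language="en")
    for resource in result["result"]["flaggedResources"]:
        print(resource["metadata"])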
  Steve_4542636 7 months ago
Selected Answer: BD
I got with B and D
upvoted 2 times

  Michal_L_95 7 months, 1 week ago


Selected Answer: BC
I would go with B and C, as the company has been running for 90 days and option C is based on a 30-day report, which would mean there is higher potential for cost savings there than on idle instances.
upvoted 2 times

  Steve_4542636 7 months ago


C is stating "Reserved Instances" The question states they are using On Demand Instances. Reserved instances are reserved for less
money for 1 or 3 years.
upvoted 5 times

  Lalo 3 months, 4 weeks ago


In the scenario it says 90 days, therefore the correct answer is D, not C.
upvoted 1 times

  Michal_L_95 6 months, 3 weeks ago


Having read the question again, I agree with you.
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: BD
reduce costs - delete idle instances
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 3 times

  leoattf 7 months, 1 week ago


This same URL also says that there is an option which recommends the purchase of reserved nodes. So I think that C is the option instead of D, because they already use on-demand DB instances, so most probably there will not be idle instances. But if we replace them with reserved ones, we can indeed have some cost savings.
What are your thoughts on it?
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: BC
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time. This
option allows the finance team to see all RDS instance checks across all AWS accounts in one place. Since the company uses consolidated
billing, this account will have access to all of the AWS accounts' Trusted Advisor recommendations.

C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization. This check can help identify cost savings
opportunities for RDS by identifying instances that can be covered by Reserved Instances. This can result in significant savings on RDS
costs.
upvoted 1 times

  leoattf 7 months, 1 week ago


I also think it is B and C. I think that C is the option instead of D, because since they already use on-demand DB instances, most
probably there will not have idle instances. But if we replace them by reserved ones, we indeed can have some costs savings.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Option A is not recommended because the finance team may not have access to the AWS account where the RDS instances are
running. Even if they have access, it may not be practical to check each individual account for Trusted Advisor recommendations.

Option D is not the best choice because it only addresses the issue of idle instances and may not provide the most effective
recommendations to reduce RDS costs.

Option E is not relevant to this scenario since it is related to Amazon Redshift, not RDS.
upvoted 1 times

  jennyka76 7 months, 1 week ago


B&D
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 2 times

  skiwili 7 months, 2 weeks ago


Selected Answer: BD
B and D I believe
upvoted 4 times
Question #309 Topic 1

A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being
accessed or are rarely accessed.

Which solution will accomplish this goal with the LEAST operational overhead?

A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.

B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.

C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with
Amazon Athena.

D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon
CloudWatch Logs.

Correct Answer: D

Community vote distribution


A (95%) 5%

  kpato87 Highly Voted  7 months, 2 weeks ago


Selected Answer: A
S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity
trends, and recommendations to optimize costs. Storage Lens allows you to analyze object access patterns across all of your S3 buckets
and generate detailed metrics and reports.
upvoted 7 times

  Wayne23Fang Most Recent  3 weeks, 3 days ago


Selected Answer: D
Option A misses turning on monitoring. Server access logging can also help you learn about your customer base and understand your Amazon S3 bill. By default, Amazon S3 doesn't collect server access logs; when you enable logging, Amazon S3 delivers access logs for a source bucket to a target bucket that you choose.

I could not find S3 Storage Lens examples online showing how to use Lens to identify idle S3 buckets. Instead I found examples using S3 access logging. Hmm.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: A
S3 Storage Lens is a cloud-storage analytics feature that provides you with 29+ usage and activity metrics, including object count, size,
age, and access patterns. This data can help you understand how your data is being used and identify areas where you can optimize your
storage costs.
The S3 Storage Lens dashboard provides an interactive view of your storage usage and activity trends. This makes it easy to identify
buckets that are no longer being accessed or are rarely accessed.
The S3 Storage Lens dashboard is a fully managed service, so there is no need to set up or manage any additional infrastructure.
upvoted 1 times

  BigHammer 4 weeks ago


"S3 Storage Lens" seems to be the popular answer, however, where in Storage Lens can you see if a bucket/object is being USED? I see all
kinds of stats, but not that.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


https://aws.amazon.com/blogs/aws/s3-storage-lens/
upvoted 2 times

  kruasan 5 months ago


Selected Answer: A
The S3 Storage Lens dashboard provides visibility into storage metrics and activity patterns to help optimize storage costs. It shows
metrics like objects added, objects deleted, storage consumed, and requests. It can filter by bucket, prefix, and tag to analyze specific
subsets of data
upvoted 1 times

  kruasan 5 months ago


B) The standard S3 console dashboard provides basic info but would require manually analyzing metrics for each bucket. This does not
scale well and requires significant overhead.
C) Turning on the BucketSizeBytes metric and analyzing the data in Athena may provide insights but would require enabling metrics,
building Athena queries, and analyzing the results. This requires more operational effort than option A.
D) Enabling CloudTrail logging and monitoring the logs in CloudWatch Logs could provide access pattern data but would require
setting up CloudTrail, monitoring the logs, and analyzing the relevant info. This option has the highest operational overhead
upvoted 2 times
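
Storage Lens is mostly consumed through its dashboard, but as a small sketch, the configurations (including the default account-level dashboard) can be listed through the S3 Control API; the account ID is a placeholder.

    import boto3

    s3control = boto3.client("s3control")

    configs = s3control.list_storage_lens_configurations(AccountId="111122223333")
    for entry in configs["StorageLensConfigurationList"]:
        print(entry["Id"], entry["HomeRegion"], entry["IsEnabled"])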
  bdp123 7 months, 1 week ago
Selected Answer: A
https://aws.amazon.com/blogs/aws/s3-storage-lens/
upvoted 4 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
S3 Storage Lens provides a dashboard with advanced activity metrics that enable the identification of infrequently accessed and unused
buckets. This can help a solutions architect optimize storage costs without incurring additional operational overhead.
upvoted 3 times

  Babba 7 months, 1 week ago


Selected Answer: A
it looks like it's A
upvoted 2 times
Question #310 Topic 1

A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted
files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase
access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a
purchase is made, customers receive an S3 signed URL that allows access to the files.

The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers
and wants to maintain or improve performance.

What should a solutions architect do to meet these requirements?

A. Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue
to use S3 signed URLs for access control.

B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch
to CloudFront signed URLs for access control.

C. Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to
the closest Region. Continue to use S3 signed URLs for access control.

D. Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the
existing S3 bucket. Implement access control directly in the application.

Correct Answer: B

Community vote distribution


B (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: B
To reduce the cost associated with data transfers and maintain or improve performance, a solutions architect should use Amazon
CloudFront, a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with
low latency and high transfer speeds.

Deploying a CloudFront distribution with the existing S3 bucket as the origin will allow the company to serve the data to customers from
edge locations that are closer to them, reducing data transfer costs and improving performance.

Directing customer requests to the CloudFront URL and switching to CloudFront signed URLs for access control will enable customers to
access the data securely and efficiently.
upvoted 7 times
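
To show what "switch to CloudFront signed URLs" might look like in application code, here is a commonly used botocore pattern; the key pair ID, private key file, distribution domain, object path, and 24-hour expiry are placeholder assumptions, and it relies on the third-party rsa package.

    from datetime import datetime, timedelta

    import rsa
    from botocore.signers import CloudFrontSigner

    KEY_PAIR_ID = "K2JCJMDEHXQW5F"   # placeholder CloudFront public key ID

    def rsa_signer(message):
        # Sign with the private key matching the public key registered in CloudFront.
        with open("cloudfront_private_key.pem", "rb") as f:
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, "SHA-1")

    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

    signed_url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/datasets/sales-2023.parquet",   # placeholder URL
        date_less_than=datetime.utcnow() + timedelta(hours=24),
    )
    print(signed_url)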

  react97 Most Recent  1 day, 14 hours ago


Selected Answer: B
B.
1. Amazon CloudFront caches content at edge locations -- reducing the need for frequent data transfer from S3 bucket -- thus significantly
lowering data transfer costs (as compared to directly serving data from S3 bucket to customers in different regions)
2. CloudFront delivers content to users from the nearest edge location -- minimizing latency -- improves performance for customers

A - focus on accelerating uploads to S3 which may not necessarily improve the performance needed for serving datasets to customers
C - helps with redundancy and data availability but does not necessarily offer cost savings for data transfer.
D - complex to implement, does not address data transfer cost
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
upvoted 3 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: B
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL.
Switch to CloudFront signed URLs for access control.

https://www.examtopics.com/discussions/amazon/view/68990-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #311 Topic 1

A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes
must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency
and must minimize maintenance.

Which solution meets these requirements?

A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data
stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to pool messages from its own data
stream.

B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the
Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.

C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues
to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each
backend application server to use its own SQS queue.

D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon OpenSearch
Service cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application
servers to search for the messages from OpenSearch Service and process them accordingly.

Correct Answer: D

Community vote distribution


C (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: C
Quote types need to be separated: SNS message filtering can be used to publish messages to the appropriate SQS queue based on the
quote type, ensuring that quotes are separated by type.
Quotes must be responded to within 24 hours and must not get lost: SQS provides reliable and scalable queuing for messages, ensuring
that quotes will not get lost and can be processed in a timely manner. Additionally, each backend application server can use its own SQS
queue, ensuring that quotes are processed efficiently without any delay.
Operational efficiency and minimizing maintenance: Using a single SNS topic and multiple SQS queues is a scalable and cost-effective
approach, which can help to maximize operational efficiency and minimize maintenance. Additionally, SNS and SQS are fully managed
services, which means that the company will not need to worry about maintenance tasks such as software updates, hardware upgrades,
or scaling the infrastructure.
upvoted 9 times
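
A minimal sketch of the filtering piece of option C: one SNS topic, one SQS queue per quote type, and a filter policy on each subscription. The ARNs and the quote_type attribute name are assumptions, and the SQS queue policy that allows SNS to deliver messages is omitted.

    import json
    import boto3

    sns = boto3.client("sns")

    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quote-requests"     # placeholder
    AUTO_QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:auto-quotes"   # placeholder

    # Route only messages whose "quote_type" attribute equals "auto" to the auto-quotes queue.
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=AUTO_QUEUE_ARN,
        Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
    )

    # The web application publishes quote requests with the attribute the filter matches on.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"customer_id": "c-123", "details": "..."}),
        MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
    )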

  VIad Highly Voted  7 months, 2 weeks ago


C is the best option
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
• Create a single SNS topic.
• Subscribe separate SQS queues per quote type.
• Use SNS message filtering to send messages to the proper queue.
• Backend servers poll their respective SQS queue.

The key points:
• Quote requests must be processed within 24 hours without loss.
• Need to maximize efficiency and minimize maintenance.
• Requests are separated by quote type.
upvoted 1 times

  lexotan 5 months, 2 weeks ago


Selected Answer: C
These wrong answers from ExamTopics are getting me so frustrated. Which one is the correct answer then?
upvoted 5 times

  Steve_4542636 7 months ago


Selected Answer: C
This is the SNS fan-out technique where you will have one SNS service to many SQS services
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
upvoted 6 times
  UnluckyDucky 6 months, 2 weeks ago
SNS fan-out sends messages to all subscribers; this solution uses SNS filtering to publish each message only to the right SQS queue (not to all of them).
upvoted 1 times

  Yechi 7 months, 2 weeks ago


Selected Answer: C
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
upvoted 6 times
Question #312 Topic 1

A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon
EBS) data volumes attached to it. The application’s EC2 instance configuration and data need to be backed up nightly. The application also needs
to be recoverable in a different AWS Region.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different
Region.

B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EC2
instances as resources.

C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EBS
volumes as resources.

D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different
Availability Zone.

Correct Answer: C

Community vote distribution


B (89%) 11%

  khasport Highly Voted  7 months, 1 week ago


B is the answer. The requirement is "The application's EC2 instance configuration and data need to be backed up nightly", so we need to "add the application's EC2 instances as resources". This option will back up both the EC2 configuration and the data.
upvoted 10 times

  TungPham Highly Voted  7 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/vi/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI
that stores all parameters from the original EC2 instance except for two
upvoted 10 times
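
A hedged sketch of option B: a nightly backup rule with a cross-Region copy action, plus a selection that targets the application's EC2 instances (vault names, ARNs, the schedule, and the IAM role are placeholders).

    import boto3

    backup = boto3.client("backup", region_name="us-east-1")

    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "nightly-ec2-backup",
            "Rules": [
                {
                    "RuleName": "nightly",
                    "TargetBackupVaultName": "primary-vault",          # placeholder vault
                    "ScheduleExpression": "cron(0 3 * * ? *)",         # every night at 03:00 UTC
                    "CopyActions": [
                        {
                            # Placeholder vault in the recovery Region.
                            "DestinationBackupVaultArn": "arn:aws:backup:eu-west-1:111122223333:backup-vault:dr-vault"
                        }
                    ],
                }
            ],
        }
    )

    # Selecting the EC2 instances also covers their attached EBS volumes.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "app-ec2-instances",
            "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
            "Resources": ["arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0"],
        },
    )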

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: B
B is the most appropriate solution because it allows you to create a backup plan to automate the backup process of EC2 instances and EBS
volumes, and copy backups to another region. Additionally, you can add the application's EC2 instances as resources to ensure their
configuration and data are backed up nightly.
upvoted 1 times

  Geekboii 6 months ago


i would say B
upvoted 1 times


  AlmeroSenior 7 months, 1 week ago


Selected Answer: B
The AWS knowledge base states that if you select the EC2 instance, the associated EBS volumes will automatically be covered.

https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the most appropriate solution because it allows you to create a backup plan to automate the backup process of EC2 instances and EBS
volumes, and copy backups to another region. Additionally, you can add the application's EC2 instances as resources to ensure their
configuration and data are backed up nightly.
A and D involve writing custom Lambda functions to automate the snapshot process, which can be complex and require more
maintenance effort. Moreover, these options do not provide an integrated solution for managing backups and recovery, and copying
snapshots to another region.
Option C involves creating a backup plan with AWS Backup to perform backups for EBS volumes only. This approach would not back up
the EC2 instances and their configuration
upvoted 2 times

  Mia2009687 2 months, 3 weeks ago


The data is stored on the EBS volumes; the EC2 instance itself won't hold the data, so I think we need to "add the application’s EBS volumes as resources."
upvoted 1 times

  everfly 7 months, 1 week ago


Selected Answer: C
The application’s EC2 instance configuration and data are stored on EBS volume right?
upvoted 2 times

  Rehan33 7 months, 1 week ago


The data is stored on EBS volumes, so why are we not using EBS as the resource instead of EC2?
upvoted 1 times

  obatunde 7 months, 1 week ago


Because "The application’s EC2 instance configuration and data need to be backed up nightly"
upvoted 3 times

  fulingyu288 7 months, 2 weeks ago


Selected Answer: B
Use AWS Backup to create a backup plan that includes the EC2 instances, Amazon EBS snapshots, and any other resources needed for
recovery. The backup plan can be configured to run on a nightly schedule.
upvoted 1 times

  zTopic 7 months, 2 weeks ago


Selected Answer: B
The application’s EC2 instance configuration and data need to be backed up nightly >> B
upvoted 1 times

  NolaHOla 7 months, 2 weeks ago


But isn't the data needed to be backed up on the EBS ?
upvoted 1 times
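For reference, a minimal boto3 sketch of the consensus answer B: a nightly AWS Backup plan with a cross-Region copy action, with the application's EC2 instance assigned as the protected resource (vault names, ARNs, and the IAM role below are placeholders):

import boto3

backup = boto3.client("backup")

# Nightly rule that also copies every recovery point to a vault in another Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-ec2-plan",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
                    }
                ],
            }
        ],
    }
)

# Protect the EC2 instance itself; AWS Backup then also covers its attached EBS volumes.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "app-ec2-instances",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"],
    },
)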
Question #313 Topic 1

A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform
so that authorized users can watch the company’s content on their mobile devices.

What should a solutions architect recommend to meet these requirements?

A. Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.

B. Set up IPsec VPN between the mobile app and the AWS environment to stream content.

C. Use Amazon CloudFront. Provide signed URLs to stream content.

D. Set up AWS Client VPN between the mobile app and the AWS environment to stream content.

Correct Answer: C

Community vote distribution


C (100%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: C
Enough with CloudFront already.
upvoted 16 times

  TariqKipkemei 5 months ago


Hahaha..cloudfront too hyped :)
upvoted 1 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
Use Amazon CloudFront. Provide signed URLs to stream content.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: C
C is correct.
upvoted 1 times

  kprakashbehera 6 months, 3 weeks ago


Cloudfront is the correct solution.
upvoted 2 times

  datz 6 months, 1 week ago


Feel your pain :D hahaha
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers globally
with low latency and high transfer speeds. CloudFront supports signed URLs that provide authorized access to your content. This feature
allows the company to control who can access their content and for how long, providing a secure and scalable solution for millions of
users.
upvoted 3 times

  jennyka76 7 months, 1 week ago


C
https://www.amazonaws.cn/en/cloudfront/
upvoted 1 times
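For reference, a minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner, as in answer C (the key pair ID, private key file, and distribution domain below are placeholders):

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign with the private key matching the public key registered with CloudFront.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key pair ID

# The URL is valid for one hour; after that CloudFront rejects the request.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/episode1.m3u8",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)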
Question #314 Topic 1

A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the
database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type
in anticipation of more users in the future.

Which service should a solutions architect recommend?

A. Amazon Aurora MySQL

B. Amazon Aurora Serverless for MySQL

C. Amazon Redshift Spectrum

D. Amazon RDS for MySQL

Correct Answer: B

Community vote distribution


B (100%)

  cloudbusting Highly Voted  7 months, 2 weeks ago


"without selecting a particular instance type" = serverless
upvoted 15 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: B
B. Amazon Aurora Serverless for MySQL
upvoted 1 times

  Diqian 1 month, 1 week ago


What’s the difference between A and B. I think Aurora is serverless, isn’t it?
upvoted 1 times

  Valder21 3 weeks, 4 days ago


seems serverless is an option of amazon aurora. Not a very good naming scheme.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: B
With Aurora Serverless for MySQL, you don't need to select a particular instance type, as the service automatically scales up or down
based on the application's needs.
upvoted 4 times

  Srikanth0057 6 months, 4 weeks ago


Selected Answer: B
Bbbbbbb
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: B
https://aws.amazon.com/rds/aurora/serverless/
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
Amazon Aurora Serverless for MySQL is a fully managed, auto-scaling relational database service that scales up or down automatically
based on the application demand. This service provides all the capabilities of Amazon Aurora, such as high availability, durability, and
security, without requiring the customer to provision any database instances.

With Amazon Aurora Serverless for MySQL, the sales team can enjoy minimal downtime since the database is designed to automatically
scale to accommodate the increased traffic. Additionally, the service allows the customer to pay only for the capacity used, making it cost-
effective for infrequent access patterns.

Amazon RDS for MySQL could also be an option, but it requires the customer to select an instance type, and the database administrator
would need to monitor and adjust the instance size manually to accommodate the increasing traffic.
upvoted 2 times

  Drayen25 7 months, 2 weeks ago


Minimal downtime points directly to Aurora Serverless
upvoted 2 times
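For reference, a minimal boto3 sketch of answer B: an Aurora Serverless MySQL-compatible cluster where no instance type is chosen and capacity scales automatically (identifiers, credentials, and capacity bounds below are placeholders):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="sales-serverless-cluster",
    Engine="aurora-mysql",
    EngineMode="serverless",            # no instance class to select
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",  # placeholder; prefer Secrets Manager in practice
    ScalingConfiguration={
        "MinCapacity": 1,               # Aurora capacity units; grows as more users arrive
        "MaxCapacity": 16,
        "AutoPause": True,              # pause during idle periods (infrequent access)
        "SecondsUntilAutoPause": 1800,
    },
)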
Question #315 Topic 1

A company experienced a breach that affected several applications in its on-premises data center. The attacker took advantage of vulnerabilities
in the custom applications that were running on the servers. The company is now migrating its applications to run on Amazon EC2 instances. The
company wants to implement a solution that actively scans for vulnerabilities on the EC2 instances and sends a report that details the findings.

Which solution will meet these requirements?

A. Deploy AWS Shield to scan the EC2 instances for vulnerabilities. Create an AWS Lambda function to log any findings to AWS CloudTrail.

B. Deploy Amazon Macie and AWS Lambda functions to scan the EC2 instances for vulnerabilities. Log any findings to AWS CloudTrail.

C. Turn on Amazon GuardDuty. Deploy the GuardDuty agents to the EC2 instances. Configure an AWS Lambda function to automate the
generation and distribution of reports that detail the findings.

D. Turn on Amazon Inspector. Deploy the Amazon Inspector agent to the EC2 instances. Configure an AWS Lambda function to automate the
generation and distribution of reports that detail the findings.

Correct Answer: C

Community vote distribution


D (96%)

  siyam008 Highly Voted  7 months ago


Selected Answer: D
AWS Shield is for DDoS protection.
Amazon Macie is for discovering and protecting sensitive data.
Amazon GuardDuty is for intelligent threat detection to protect the AWS account.
Amazon Inspector is for automated security assessment, e.g. known vulnerabilities.
upvoted 30 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
Enable Amazon Inspector
Deploy Inspector agents to EC2 instances
Use Lambda to generate and distribute vulnerability reports
The key points:
• Migrate on-prem apps with vulnerabilities to EC2
• Need active scanning of EC2 instances for vulnerabilities
• Require reports on findings
upvoted 2 times

  kruasan 5 months ago


Selected Answer: D
Amazon Inspector:
• Performs active vulnerability scans of EC2 instances. It looks for software vulnerabilities, unintended network accessibility, and other
security issues.
• Requires installing an agent on EC2 instances to perform scans. The agent must be deployed to each instance.
• Provides scheduled scan reports detailing any findings of security risks or vulnerabilities. These reports can be used to patch or
remediate issues.
• Is best suited for proactively detecting security weaknesses and misconfigurations in your AWS environment.
upvoted 2 times

  kruasan 5 months ago


Amazon GuardDuty:
• Monitors for malicious activity like unusual API calls, unauthorized infrastructure deployments, or compromised EC2 instances. It uses
machine learning and behavioral analysis of logs.
• Does not require installing any agents. It relies on analyzing AWS CloudTrail, VPC Flow Logs, and DNS logs.
• Alerts you to any detected threats, suspicious activity or policy violations in your AWS accounts. These alerts warrant investigation but
may not always require remediation.
• Is focused on detecting active threats, unauthorized behavior, and signs of a compromise in your AWS environment.
• Can also detect some vulnerabilities and misconfigurations but coverage is not as broad as a dedicated service like Inspector.
upvoted 3 times

  datz 6 months, 1 week ago


Selected Answer: D
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances.
It is an automated security assessment service that checks the network exposure of your EC2 instances and the latest security state of applications running on them. It can auto-discover your AWS workloads and continuously scan for open loopholes and vulnerabilities.
upvoted 1 times
  shanwford 6 months, 2 weeks ago
Selected Answer: D
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances. Guard
Duty continuously monitors your entire AWS account via Cloud Trail, Flow Logs, DNS Logs as Input.
upvoted 1 times

  GalileoEC2 6 months, 2 weeks ago


Selected Answer: C
:) C is the correct
https://cloudkatha.com/amazon-guardduty-vs-inspector-which-one-should-you-use/
upvoted 1 times

  MssP 6 months, 1 week ago


Please read the link you sent: Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances. GuardDuty is a critical part of identifying threats; based on those findings you can set up automated preventive actions or remediations. So the answer is D.
upvoted 1 times

  GalileoEC2 6 months, 2 weeks ago


Selected Answer: C
https://cloudkatha.com/amazon-guardduty-vs-inspector-which-one-should-you-use/
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: D
Amazon Inspector is a security assessment service that helps to identify security vulnerabilities and compliance issues in applications
deployed on Amazon EC2 instances. It can be used to assess the security of applications that are deployed on Amazon EC2 instances,
including those that are custom-built.

To use Amazon Inspector, the Amazon Inspector agent must be installed on the EC2 instances that need to be assessed. The agent collects
data about the instances and sends it to Amazon Inspector for analysis. Amazon Inspector then generates a report that details any
security vulnerabilities that were found and provides guidance on how to remediate them.

By configuring an AWS Lambda function, the company can automate the generation and distribution of reports that detail the findings.
This means that reports can be generated and distributed as soon as vulnerabilities are detected, allowing the company to take action
quickly.
upvoted 1 times

  pbpally 7 months, 1 week ago


Selected Answer: D
I'm a little confused on how someone came up with C, it is definitely D.
upvoted 1 times

  obatunde 7 months, 1 week ago


Selected Answer: D
Amazon Inspector
upvoted 2 times

  obatunde 7 months, 1 week ago


Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities
and unintended network exposure. https://aws.amazon.com/inspector/features/?nc=sn&loc=2
upvoted 3 times

  Palanda 7 months, 1 week ago


Selected Answer: D
I think D
upvoted 1 times

  minglu 7 months, 2 weeks ago


Selected Answer: D
Inspector for EC2
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: D
Ddddddd
upvoted 1 times

  cloudbusting 7 months, 2 weeks ago


this is inspector = https://medium.com/aws-architech/use-case-aws-inspector-vs-guardduty-3662bf80767a
upvoted 3 times
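For reference, a minimal boto3 sketch of the Inspector portion of answer D: enabling Amazon Inspector EC2 scanning and pulling active findings that a reporting Lambda function could format and distribute (the account ID below is a placeholder):

import boto3

inspector = boto3.client("inspector2")

# Turn on Inspector EC2 scanning for this account.
inspector.enable(accountIds=["123456789012"], resourceTypes=["EC2"])

# Later, e.g. inside the reporting Lambda, pull active findings for the report.
findings = inspector.list_findings(
    filterCriteria={"findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}]},
    maxResults=50,
)
for finding in findings["findings"]:
    print(finding["severity"], finding["title"])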
Question #316 Topic 1

A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS)
queue. The company wants to reduce operational costs while maintaining its ability to process a growing number of messages that are added to
the queue.

What should a solutions architect recommend to meet these requirements?

A. Increase the size of the EC2 instance to process messages faster.

B. Use Amazon EventBridge to turn off the EC2 instance when the instance is underutilized.

C. Migrate the script on the EC2 instance to an AWS Lambda function with the appropriate runtime.

D. Use AWS Systems Manager Run Command to run the script on demand.

Correct Answer: A

Community vote distribution


C (88%) 13%

  kpato87 Highly Voted  7 months, 2 weeks ago


Selected Answer: C
By migrating the script to AWS Lambda, the company can take advantage of the auto-scaling feature of the service. AWS Lambda will
automatically scale resources to match the size of the workload. This means that the company will not have to worry about provisioning or
managing instances as the number of messages increases, resulting in lower operational costs
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
The key points are:
• Currently using an EC2 instance to poll SQS and process messages
• Want to reduce costs while handling growing message volume
By migrating the polling script to a Lambda function, the company can avoid the cost of running a dedicated EC2 instance. Lambda
functions scale automatically to handle message spikes. And Lambda billing is based on actual usage, resulting in cost savings versus
provisioned EC2 capacity.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: C
Lambda costs money only when it's processing, not when idle
upvoted 2 times

  ManOnTheMoon 7 months, 1 week ago


Agree with C
upvoted 1 times

  khasport 7 months, 1 week ago


the answer is C. With this option, you can reduce operational cost as the question mentioned
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
AWS Lambda is a serverless compute service that allows you to run your code without provisioning or managing servers. By migrating the
script to an AWS Lambda function, you can eliminate the need to maintain an EC2 instance, reducing operational costs. Additionally,
Lambda automatically scales to handle the increasing number of messages in the SQS queue.
upvoted 1 times

  zTopic 7 months, 2 weeks ago


Selected Answer: C
It Should be C.
Lambda allows you to execute code without provisioning or managing servers, so it is ideal for running scripts that poll for and process
messages in an Amazon SQS queue. The scaling of the Lambda function is automatic, and you only pay for the actual time it takes to
process the messages.
upvoted 3 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: D
To reduce the operational overhead, it should be:
D. Use AWS Systems Manager Run Command to run the script on demand.
upvoted 2 times

  lucdt4 4 months, 1 week ago


No - replace the EC2 instance with Lambda to reduce costs.
upvoted 1 times
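For reference, a minimal sketch of answer C: the polling script rewritten as a Lambda handler with the SQS queue configured as its event source, so no EC2 instance is needed and concurrency scales with queue depth (the processing logic below is a placeholder):

import json


def handler(event, context):
    # With an SQS event source mapping, Lambda delivers a batch of messages per invocation.
    for record in event["Records"]:
        message = json.loads(record["body"])
        process(message)
    # Returning normally marks the batch as successful, so SQS deletes the messages.


def process(message):
    # Placeholder for the existing script's business logic.
    print("processing", message)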
Question #317 Topic 1

A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is
deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon
Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces.

The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the
COTS application can use the data that the legacy application produces.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store
the processed data in Amazon Redshift.

B. Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron
schedule to store the output files in Amazon S3.

C. Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda
function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.

D. Use Amazon EventBridge to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract,
transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.

Correct Answer: A

Community vote distribution


A (90%) 10%

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: A
Create an AWS Glue ETL job to process the CSV files
Configure the job to run on a schedule
Output the transformed data to Amazon Redshift
The key points:
• Legacy app generates CSV files in S3
• New app requires data in Redshift or S3
• Need to transform CSV to support new app with minimal ops overhead
upvoted 1 times

  kraken21 6 months ago


Selected Answer: A
Glue is serverless and has less operational overhead than EMR, so A.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: A
A, AWS Glue is a fully managed ETL service that can extract data from various sources, transform it into the required format, and load it
into a target data store. In this case, the ETL job can be configured to read the CSV files from Amazon S3, transform the data into a format
that can be loaded into Amazon Redshift, and load it into an Amazon Redshift table.
B requires the development of a custom script to convert the CSV files to SQL files, which could be time-consuming and introduce
additional operational overhead. C, while using serverless technology, requires the additional use of DynamoDB to store the processed
data, which may not be necessary if the data is only needed in Amazon Redshift. D, while an option, is not the most efficient solution as it
requires the creation of an EMR cluster, which can be costly and complex to manage.
upvoted 4 times

  dcp 6 months, 2 weeks ago


Selected Answer: C
To meet the requirement with the least operational overhead, a serverless approach should be used. Among the options provided, option C
provides a serverless solution using AWS Lambda, S3, and DynamoDB. Therefore, the solution should be to create an AWS Lambda
function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an
extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
Option A is also a valid solution, but it may involve more operational overhead than Option C. With Option A, you would need to set up
and manage an AWS Glue job, which would require more setup time than creating an AWS Lambda function. Additionally, AWS Glue jobs
have a minimum execution time of 10 minutes, which may not be necessary or desirable for this use case. However, if the data processing
is particularly complex or requires a lot of data transformation, AWS Glue may be a more appropriate solution.
upvoted 1 times
  MssP 6 months, 1 week ago
Important point: The COTS performs complex SQL queries to analyze data in Amazon Redshift. If you use DynamoDB -> No SQL
querires. Option A makes more sense.
upvoted 3 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A would be the best solution as it involves the least operational overhead. With this solution, an AWS Glue ETL job is created to process the
.csv files and store the processed data directly in Amazon Redshift. This is a serverless approach that does not require any infrastructure to
be provisioned, configured, or maintained. AWS Glue provides a fully managed, pay-as-you-go ETL service that can be easily configured to
process data from S3 and load it into Amazon Redshift. This approach allows the legacy application to continue to produce data in the CSV
format that it currently uses, while providing the new COTS application with the ability to analyze the data using complex SQL queries.
upvoted 3 times

  jennyka76 7 months, 2 weeks ago


A
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
I AGREE AFTER READING LINK
upvoted 1 times

  cloudbusting 7 months, 2 weeks ago


A: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format.html
upvoted 1 times
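For reference, a minimal AWS Glue (PySpark) job sketch for answer A: reading the legacy CSV files from S3 and loading them into Amazon Redshift through a pre-created Glue connection (bucket names, the connection name, and table names below are placeholders):

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the legacy application's CSV output from S3.
csv_frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://legacy-app-output/csv/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Load into Redshift; Glue stages the data in S3 before the COPY.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=csv_frame,
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "public.legacy_data", "database": "analytics"},
    redshift_tmp_dir="s3://glue-temp-bucket/redshift-staging/",
)
job.commit()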
Question #318 Topic 1

A company recently migrated its entire IT environment to the AWS Cloud. The company discovers that users are provisioning oversized Amazon
EC2 instances and modifying security group rules without using the appropriate change control process. A solutions architect must devise a
strategy to track and audit these inventory and configuration changes.

Which actions should the solutions architect take to meet these requirements? (Choose two.)

A. Enable AWS CloudTrail and use it for auditing.

B. Use data lifecycle policies for the Amazon EC2 instances.

C. Enable AWS Trusted Advisor and reference the security dashboard.

D. Enable AWS Config and create rules for auditing and compliance purposes.

E. Restore previous resource configurations with an AWS CloudFormation template.

Correct Answer: AD

Community vote distribution


AD (91%) 9%

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: AD
A. Enable AWS CloudTrail and use it for auditing. CloudTrail provides event history of your AWS account activity, including actions taken
through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. By enabling CloudTrail, the company
can track user activity and changes to AWS resources, and monitor compliance with internal policies and external regulations.

D. Enable AWS Config and create rules for auditing and compliance purposes. AWS Config provides a detailed inventory of the AWS
resources in your account, and continuously records changes to the configurations of those resources. By creating rules in AWS Config,
the company can automate the evaluation of resource configurations against desired state, and receive alerts when configurations drift
from compliance.

Options B, C, and E are not directly relevant to the requirement of tracking and auditing inventory and configuration changes.
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: AD
A. Enable AWS CloudTrail and use it for auditing. CloudTrail provides event history of your AWS account activity, including actions taken
through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. By enabling CloudTrail, the company
can track user activity and changes to AWS resources, and monitor compliance with internal policies and external regulations.

D. Enable AWS Config and create rules for auditing and compliance purposes. AWS Config provides a detailed inventory of the AWS
resources in your account, and continuously records changes to the configurations of those resources. By creating rules in AWS Config,
the company can automate the evaluation of resource configurations against desired state, and receive alerts when configurations drift
from compliance.
upvoted 1 times

  mrsoa 2 months, 1 week ago


Selected Answer: CD
I am gonna go with CD
AWS CloudTrail is already enabled, so there is no need to enable it; for the auditing we are going to use AWS Config (answer D).

C because Trusted advisor checks the security groups


upvoted 1 times

  kruasan 5 months ago


Selected Answer: AD
A) Enable AWS CloudTrail and use it for auditing.
AWS CloudTrail provides a record of API calls and can be used to audit changes made to EC2 instances and security groups. By analyzing
CloudTrail logs, the solutions architect can track who provisioned oversized instances or modified security groups without proper
approval.
D) Enable AWS Config and create rules for auditing and compliance purposes.
AWS Config can record the configuration changes made to resources like EC2 instances and security groups. The solutions architect can
create AWS Config rules to monitor for non-compliant changes, like launching certain instance types or opening security group ports
without permission. AWS Config would alert on any violations of these rules.
upvoted 1 times

  kruasan 5 months ago


The other options would not fully meet the auditing and change tracking requirements:
B) Data lifecycle policies control when EC2 instances are backed up or deleted but do not audit configuration changes.
C) AWS Trusted Advisor security checks may detect some compliance violations after the fact but do not comprehensively log changes
like AWS CloudTrail and AWS Config do.
E) CloudFormation templates enable rollback but do not provide an audit trail of changes. The solutions architect would not know who
made unauthorized modifications in the first place.
upvoted 1 times

  skiwili 7 months, 2 weeks ago


Selected Answer: AD
Yes A and D
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


AGREE WITH ANSWER - A & D
CloudTrail and Config
upvoted 1 times

  Neha999 7 months, 2 weeks ago


CloudTrail and Config
upvoted 2 times
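For reference, a minimal boto3 sketch of the AWS Config half of answers A + D: an AWS managed rule that flags EC2 instances launched with anything other than approved instance types (the rule name and type list below are placeholders):

import json

import boto3

config = boto3.client("config")

# The managed rule DESIRED_INSTANCE_TYPE marks instances as non-compliant
# when they are not one of the listed types (e.g. oversized instances).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-ec2-instance-types",
        "Source": {"Owner": "AWS", "SourceIdentifier": "DESIRED_INSTANCE_TYPE"},
        "InputParameters": json.dumps({"instanceType": "t3.micro,t3.small,t3.medium"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)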
Question #319 Topic 1

A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage
the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a
solution that provides secure access to the EC2 instances.

Which solution will meet this requirement with the LEAST amount of administrative overhead?

A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.

B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.

C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.

D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.

Correct Answer: B

Community vote distribution


A (73%) 13% 13%

  Guru4Cloud 2 weeks, 2 days ago


Selected Answer: B
The key reasons why:

STS can generate short-lived credentials that provide temporary access to the EC2 instances for administering them.
The credentials can be generated on-demand each time access is needed, eliminating the risks of using permanent shared SSH keys.
No infrastructure like bastion hosts needs to be maintained.
The on-premises administrators can use the familiar SSH tools with the temporary keys.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: B
Using AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand is a secure and efficient way to provide access to
the EC2 instances without the need for shared SSH keys. STS is a fully managed service that can be used to generate temporary security
credentials, allowing systems administrators to connect to the EC2 instances without having to share SSH keys. The temporary credentials
can be generated on demand, reducing the administrative overhead associated with managing SSH access
upvoted 1 times

  ofinto 1 week, 2 days ago


Can you please provide documentation about generating a one-time SSH with STS?
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
AWS Systems Manager Session Manager provides secure shell access to EC2 instances without the need for SSH keys. It meets the security
requirement to remove shared SSH keys while minimizing administrative overhead.
upvoted 1 times

  Guru4Cloud 2 weeks, 2 days ago


If the systems administrators need to access the EC2 instances from an on-premises environment, using Session Manager may not be
the ideal solution.
upvoted 1 times

  kruasan 5 months ago


Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic
Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). You can use either an
interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure and
auditable node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager
also allows you to comply with corporate policies that require controlled access to managed nodes, strict security practices, and fully
auditable logs with node access details, while providing end users with simple one-click cross-platform access to your managed nodes.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 1 times

  kruasan 5 months ago


Who should use Session Manager?
Any AWS customer who wants to improve their security and audit posture, reduce operational overhead by centralizing access
control on managed nodes, and reduce inbound node access.

Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on
managed nodes, or allow connections to managed nodes that don't have a public IP address.
Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux,
macOS, and Windows Server managed nodes.

Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.
upvoted 1 times
  Stanislav4907 6 months, 3 weeks ago
Selected Answer: C
You guys seriously don't want to go to SSM Session Manager for every single EC2 instance. You have to create a solution, not use services meant for one-time access.
A bastion will give you the option to manage thousands of EC2 machines from one, plus you can use Ansible from it.
upvoted 2 times

  Zox42 6 months, 1 week ago


Question:" the company’s security team is mandating the removal of all shared keys", answer C can't be right because it says:"Allow
shared SSH access to a set of bastion instances".
upvoted 2 times

  UnluckyDucky 6 months, 2 weeks ago


Session Manager is the best practice and recommended way by Amazon to manage your instances.
Bastion hosts require remote access therefore exposing them to the internet.

The most secure way is definitely session manager therefore answer A is correct imho.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: A
I vote a
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
AWS Systems Manager Session Manager provides secure and auditable instance management without the need for any inbound
connections or open ports. It allows you to manage your instances through an interactive one-click browser-based shell or through the
AWS CLI. This means that you don't have to manage any SSH keys, and you don't have to worry about securing access to your instances as
access is controlled through IAM policies.
upvoted 3 times

  bdp123 7 months, 1 week ago


Selected Answer: A
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 2 times

  jahmad0730 7 months, 2 weeks ago


Selected Answer: A
Answer must be A
upvoted 2 times

  jennyka76 7 months, 2 weeks ago


ANSWER - A
AWS Session Manager is correct - it takes the least effort to access a Linux system from the AWS console, and you are already logged in to AWS, so there is no need for tokens or other steps; that is handled in the background by AWS. Makes sense.
upvoted 2 times

  cloudbusting 7 months, 2 weeks ago


Answer is A
upvoted 3 times

  zTopic 7 months, 2 weeks ago


Selected Answer: A
Answer is A
upvoted 2 times

  VIad 7 months, 2 weeks ago


Answer is A
Using AWS Systems Manager Session Manager to connect to the EC2 instances is a secure option as it eliminates the need for inbound
SSH ports and removes the requirement to manage SSH keys manually. It also provides a complete audit trail of user activity. This solution
requires no additional software to be installed on the EC2 instances.
upvoted 4 times
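For reference, a minimal boto3 sketch related to answer A: verifying which instances are registered with Systems Manager (the prerequisite for Session Manager) and starting a session without any SSH keys; the interactive shell itself is normally opened with the CLI command aws ssm start-session and its session-manager-plugin (the instance ID below is a placeholder):

import boto3

ssm = boto3.client("ssm")

# Instances whose SSM agent has registered can be reached through Session Manager.
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for info in page["InstanceInformationList"]:
        print(info["InstanceId"], info["PingStatus"], info["PlatformName"])

# Programmatic equivalent of "aws ssm start-session"; the CLI plugin handles the shell I/O.
session = ssm.start_session(Target="i-0123456789abcdef0")
print(session["SessionId"])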
Question #320 Topic 1

A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion
rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query
ingested data in near-real time.

Which solution provides near-real-time data querying that is scalable with minimal data loss?

A. Publish data to Amazon Kinesis Data Streams, Use Kinesis Data Analytics to query the data.

B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.

C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use
Amazon Athena to query the data.

D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to
the Redis channel to query the data.

Correct Answer: A

Community vote distribution


A (94%) 6%

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: A
A: is the solution for the company's requirements. Publishing data to Amazon Kinesis Data Streams can support ingestion rates as high as
1 MB/s and provide real-time data processing. Kinesis Data Analytics can query the ingested data in real-time with low latency, and the
solution can scale as needed to accommodate increases in ingestion rates or querying needs. This solution also ensures minimal data loss
in the event of an EC2 instance reboot, since Kinesis Data Streams durably persists records (24-hour retention by default, extendable to 7 days or longer).
upvoted 9 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
• Provide near-real-time data ingestion into Kinesis Data Streams with the ability to handle the 1 MB/s ingestion rate. Data would be stored
redundantly across shards.
• Enable near-real-time querying of the data using Kinesis Data Analytics. SQL queries can be run directly against the Kinesis data stream.
• Minimize data loss since data is replicated across shards. If an EC2 instance is rebooted, the data stream is still accessible.
• Scale seamlessly to handle varying ingestion and query rates.
upvoted 2 times

  Nikki013 1 month ago


Selected Answer: A
Answer is A as it will provide a more streamlined solution.
Using B (Firehose + Redshift) will involve sending the data to an S3 bucket first and then copying the data to Redshift which will take more
time.
https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
upvoted 1 times

  nublit 4 months, 1 week ago


Selected Answer: B
Amazon Kinesis Data Firehose can deliver data in real-time to Amazon Redshift, making it immediately available for queries. Amazon
Redshift, on the other hand, is a powerful data analytics service that allows fast and scalable querying of large volumes of data.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
• Provide near-real-time data ingestion into Kinesis Data Streams with the ability to handle the 1 MB/s ingestion rate. Data would be stored
redundantly across shards.
• Enable near-real-time querying of the data using Kinesis Data Analytics. SQL queries can be run directly against the Kinesis data stream.
• Minimize data loss since data is replicated across shards. If an EC2 instance is rebooted, the data stream is still accessible.
• Scale seamlessly to handle varying ingestion and query rates.
upvoted 2 times

  kruasan 5 months ago


The other options would not fully meet the requirements:
B) Kinesis Firehose + Redshift would introduce latency since data must be loaded from Firehose into Redshift before querying. Redshift
would lack real-time capabilities.
C) An EC2 instance store and Kinesis Firehose to S3 with Athena querying would risk data loss from instance store if an instance
reboots. Athena querying data in S3 also lacks real-time capabilities.
D) Using EBS storage, Kinesis Firehose to Redis and subscribing to Redis may provide near-real-time ingestion and querying but risks
data loss if an EBS volume or EC2 instance fails. Recovery requires re-hydrating data from a backup which impacts real-time needs.
upvoted 2 times

  joechen2023 3 months, 2 weeks ago


I voted A as well, although I'm not 100% sure why B is not correct; I just selected what seems to be the simplest solution between A and B.

The reason kruasan gave, "Redshift would lack real-time capabilities", is not entirely true - Redshift can do near-real-time streaming ingestion. Evidence:
https://aws.amazon.com/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


ANSWER - A
https://docs.aws.amazon.com/kinesisanalytics/latest/dev/what-is.html
upvoted 1 times

  cloudbusting 7 months, 2 weeks ago


near-real-time data querying = Kinesis analytics
upvoted 3 times

  zTopic 7 months, 2 weeks ago


Selected Answer: A
Answer is A
upvoted 1 times
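For reference, a minimal boto3 sketch of the producer side of answer A: the ingestion script publishes each JSON record to a Kinesis data stream instead of buffering it locally, so a reboot no longer loses in-flight data and Kinesis Data Analytics can query the stream in near-real time (the stream name and payload below are placeholders):

import json

import boto3

kinesis = boto3.client("kinesis")

record = {"source": "on-prem-feed-1", "event_time": "2023-09-01T12:00:00Z", "value": 42}

kinesis.put_record(
    StreamName="ingest-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["source"],   # spreads records across shards
)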
Question #321 Topic 1

What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?

A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.

B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.

C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.

D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

Correct Answer: D

Community vote distribution


D (100%)

  bdp123 Highly Voted  7 months, 2 weeks ago


Selected Answer: D
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/#:~:text=Solution%20overview
upvoted 6 times

  Grace83 6 months, 2 weeks ago


Thank you!
upvoted 1 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
The x-amz-server-side-encryption header is used to specify the encryption method that should be used to encrypt objects uploaded to an
Amazon S3 bucket. By updating the bucket policy to deny if the PutObject does not have this header set, the solutions architect can ensure
that all objects uploaded to the bucket are encrypted.
upvoted 2 times

  kruasan 5 months ago


To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to
encrypt the object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3.
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
upvoted 3 times

  kruasan 5 months ago


The other options would not enforce encryption:
A) Requiring an s3:x-amz-acl header does not mandate encryption. This header controls access permissions.
B) Requiring an s3:x-amz-acl header set to private also does not enforce encryption. It only enforces private access permissions.
C) Requiring an aws:SecureTransport header ensures uploads use SSL but does not specify that objects must be encrypted. Encryption
is not required when using SSL transport.
upvoted 2 times

  kruasan 5 months ago


Selected Answer: D
To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to
encrypt the object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3.
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
upvoted 1 times

  Sbbh 6 months, 1 week ago


Confusing question. It doesn't state clearly if the object needs to be encrypted at-rest or in-transit
upvoted 3 times

  Guru4Cloud 3 weeks, 5 days ago


That's true
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: D
I vote d
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: D
To ensure that all objects uploaded to an Amazon S3 bucket are encrypted, the solutions architect should update the bucket policy to deny
any PutObject requests that do not have an x-amz-server-side-encryption header set. This will prevent any objects from being uploaded to
the bucket unless they are encrypted using server-side encryption.
upvoted 3 times
  jennyka76 7 months, 2 weeks ago
answer - D
upvoted 1 times

  zTopic 7 months, 2 weeks ago


Selected Answer: D
Answer is D
upvoted 1 times

  Neorem 7 months, 2 weeks ago


Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html
upvoted 1 times
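For reference, a minimal boto3 sketch of answer D: a bucket policy that denies any PutObject request missing the x-amz-server-side-encryption header (the bucket name below is a placeholder):

import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            # Deny when the encryption header is absent from the request.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))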
Question #322 Topic 1

A solutions architect is designing a multi-tier application for a company. The application's users upload images from a mobile device. The
application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully.

The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the
original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application
tiers.

What should the solutions architect do to meet these requirements?

A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to
invoke the Lambda function.

B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the
user when thumbnail generation is complete.

C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for
thumbnail generation. Alert the user through an application message that the image was received.

D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application
to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push
notification after thumbnail generation is complete.

Correct Answer: C

Community vote distribution


C (88%) 12%

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: C
I've noticed there are a lot of questions about decoupling services and SQS is almost always the answer.
upvoted 14 times

  Neha999 Highly Voted  7 months, 2 weeks ago


D
SNS fan out
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
SQS is a fully managed message queuing service that can be used to decouple different parts of an application.
upvoted 1 times

  Zox42 6 months, 1 week ago


Selected Answer: C
Answers B and D alert the user when thumbnail generation is complete. Answer C alerts the user through an application message that the
image was received.
upvoted 3 times

  Sbbh 6 months, 1 week ago


B:
Use cases for Step Functions vary widely, from orchestrating serverless microservices, to building data-processing pipelines, to defining a
security-incident response. As mentioned above, Step Functions may be used for synchronous and asynchronous business processes.
upvoted 1 times

  AlessandraSAA 6 months, 4 weeks ago


why not B?
upvoted 4 times

  Wael216 7 months ago


Selected Answer: C
Creating an Amazon Simple Queue Service (SQS) message queue and placing messages on the queue for thumbnail generation can help
separate the image upload and thumbnail generation processes.
upvoted 1 times

  vindahake 7 months ago


C
The key here is "a faster response time to its users to notify them that the original image was received." i.e user needs to be notified when
image was received and not after thumbnail was created.
upvoted 2 times

  AlmeroSenior 7 months, 1 week ago


Selected Answer: C
A looks like the best way, but it's essentially replacing the mentioned app - that's not the ask.
upvoted 1 times

  Mickey321 7 months, 1 week ago


Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: C
C is the only one that makes sense
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
Use a custom AWS Lambda function to generate the thumbnail and alert the user. Lambda functions are well-suited for short-lived,
stateless operations like generating thumbnails, and they can be triggered by various events, including image uploads. By using Lambda,
the application can quickly confirm that the image was uploaded successfully and then asynchronously generate the thumbnail. When the
thumbnail is generated, the Lambda function can send a message to the user to confirm that the thumbnail is ready.

C proposes to use an Amazon Simple Queue Service (Amazon SQS) message queue to process image uploads and generate thumbnails.
SQS can help decouple the image upload process from the thumbnail generation process, which is helpful for asynchronous processing.
However, it may not be the most suitable option for quickly alerting the user that the image was received, as the user may have to wait
until the thumbnail is generated before receiving a notification.
upvoted 2 times

  Bhrino 7 months, 1 week ago


Selected Answer: A
This is A because SNS and SQS don't work here (thumbnail generation can take up to 60 seconds), and B is just more complex than A.
upvoted 1 times

  CapJackSparrow 6 months, 2 weeks ago


Does Lambda not time out after 15 seconds?
upvoted 1 times

  MssP 6 months, 1 week ago


15 min.
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


answer - c
upvoted 1 times

  rrharris 7 months, 2 weeks ago


Answer is C
upvoted 1 times

  zTopic 7 months, 2 weeks ago


Selected Answer: C
The solutions architect can use Amazon Simple Queue Service (SQS) to manage the messages and dispatch the requests in a scalable and
decoupled manner. Therefore, the correct answer is C.
upvoted 2 times
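For reference, a minimal boto3 sketch of answer C: after the original image is stored, the upload tier drops a message on an SQS queue for the thumbnail tier and immediately confirms receipt to the user (the queue URL and message fields below are placeholders):

import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnail-jobs"  # placeholder


def confirm_upload(bucket, key):
    # Hand the slow (up to 60 s) thumbnail work to the queue...
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": bucket, "key": key}),
    )
    # ...and respond to the user right away.
    return {"status": "received", "image": key}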
Question #323 Topic 1

A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over
HTTPS to indicate who attempted to access that particular entrance.

A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results
must be made available for the company’s security team to analyze.

Which system architecture should the solutions architect recommend?

A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the
results to an Amazon S3 bucket.

B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the
messages and save the results to an Amazon DynamoDB table.

C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the
messages and save the results to an Amazon DynamoDB table.

D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor
data can be written directly to an S3 bucket by way of the VPC endpoint.

Correct Answer: B

Community vote distribution


B (100%)

  kruasan Highly Voted  5 months ago


Selected Answer: B
- Option A would not provide high availability. A single EC2 instance is a single point of failure.
- Option B provides a scalable, highly available solution using serverless services. API Gateway and Lambda can scale automatically, and
DynamoDB provides a durable data store.
- Option C would expose the Lambda function directly to the public Internet, which is not a recommended architecture. API Gateway
provides an abstraction layer and additional features like access control.
- Option D requires configuring a VPN to AWS which adds complexity. It also saves the raw sensor data to S3, rather than processing it and
storing the results.
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: B
The correct answer is B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS
Lambda function to process the messages and save the results to an Amazon DynamoDB table.

Here are the reasons why:

API Gateway is a highly scalable and available service that can be used to create and expose RESTful APIs.
Lambda is a serverless compute service that can be used to process events and data.
DynamoDB is a NoSQL database that can be used to store data in a scalable and highly available way.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: B
I vote B
upvoted 1 times

  KZM 7 months, 1 week ago


It is option "B"
Option "B" can provide a system with highly scalable, fault-tolerant, and easy to manage.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: B
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table.
This option provides a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other
AWS services, making it easier to analyze and visualize the data for the security team.
upvoted 3 times

  zTopic 7 months, 2 weeks ago


Selected Answer: B
B is Correct
upvoted 3 times
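For reference, a minimal sketch of the Lambda portion of answer B: handling a badge-reader message proxied by API Gateway and persisting it to DynamoDB for the security team to analyze (the table and field names below are placeholders):

import json

import boto3

table = boto3.resource("dynamodb").Table("badge-scans")  # placeholder table name


def handler(event, context):
    # With a proxy integration, API Gateway passes the HTTPS request body as a string.
    scan = json.loads(event["body"])
    table.put_item(
        Item={
            "badge_id": scan["badge_id"],
            "entrance": scan["entrance"],
            "scanned_at": scan["timestamp"],
        }
    )
    return {"statusCode": 200, "body": json.dumps({"status": "recorded"})}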
Question #324 Topic 1

A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from
an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB)
of data.

The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency.

Which solution will meet these requirements with the LEAST amount of change to the company's existing infrastructure?

A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing
applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3
bucket that contains the files.

B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure
the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and
restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.

C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached
volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage
volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to
an Amazon EC2 instance.

D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume.
Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure
scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS)
volume and attach the EBS volume to an Amazon EC2 instance.

Correct Answer: C

Community vote distribution


D (74%) C (26%)

  Grace83 Highly Voted  6 months, 2 weeks ago


D is the correct answer

Volume Gateway CACHED vs STORED

Cached = stores a subset of frequently accessed data locally
Stored = retains the ENTIRE dataset ("all file types") in the on-premises data centre
upvoted 11 times

  netcj Most Recent  3 weeks, 1 day ago


Selected Answer: D
"users retain immediate access to all file types"
immediate cannot be cached -> D
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: D
dddddddd
upvoted 2 times

  alexandercamachop 4 months ago


Selected Answer: D
Correct answer is Volume Gateway Stored which keeps all data on premises.
To have immediate access to the data. Cached is for frequently accessed data only.
upvoted 1 times

  omoakin 4 months, 1 week ago


CCCCCCCCCCCCCCCC
upvoted 1 times

  lucdt4 4 months, 1 week ago


Selected Answer: D
D is the correct answer.
Volume Gateway CACHED vs STORED
Cached = stores recently accessed data locally
Stored = retains the ENTIRE dataset ("all file types") in the on-premises data centre
upvoted 1 times
  rushi0611 4 months, 4 weeks ago
Selected Answer: D
In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency
access.
In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously
backed up to AWS.
Reference: https://aws.amazon.com/storagegateway/faqs/
Good luck.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: D
It is stated the company wants to keep the data locally and have DR plan in cloud. It points directly to the volume gateway
upvoted 1 times

  UnluckyDucky 6 months, 2 weeks ago


Selected Answer: D
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "

D is the correct answer.


upvoted 2 times

  CapJackSparrow 6 months, 2 weeks ago


Selected Answer: C
all file types, NOT all files. Volume mode can not cache 100TBs.
upvoted 2 times

  eddie5049 4 months, 4 weeks ago


https://docs.aws.amazon.com/storagegateway/latest/vgw/StorageGatewayConcepts.html

Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored
volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
upvoted 1 times

  MssP 6 months, 1 week ago


All file types. Cached mode only keeps the most frequently or most recently accessed data. If you haven't accessed some file type for a long time, it will not be in the cache -> no immediate access.
upvoted 2 times

  WherecanIstart 6 months, 2 weeks ago


Selected Answer: D
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "

This points to stored volumes..


upvoted 1 times

  KAUS2 6 months, 3 weeks ago


Selected Answer: D
Option D is the right choice for this question . "The company wants to ensure that end users retain immediate access to all file types from
the on-premises systems "
- Cached volumes: low latency access to most recent data
- Stored volumes: entire dataset is on premise, scheduled backups to S3
Hence Volume Gateway stored volume is the apt choice.
upvoted 3 times

  bangfire 6 months, 3 weeks ago


Answer is C.

Option D is not the best solution because a Volume Gateway stored volume does not provide immediate access to all file types and would
require additional steps to retrieve data from Amazon S3, which can result in latency for end-users.
upvoted 2 times

  UnluckyDucky 6 months, 3 weeks ago


You're confusing cached mode with stored volume mode.
upvoted 1 times

  un1x 6 months, 3 weeks ago


Selected Answer: C
Answer is C.
why?
https://docs.aws.amazon.com/storagegateway/latest/vgw/StorageGatewayConcepts.html#storage-gateway-stored-volume-concepts
"Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored
volumes can support up to 32 volumes and a total volume storage of 512 TiB"

Option D states: "Provision an AWS Storage Gateway Volume Gateway stored *volume* with the same amount of disk space as the
existing file storage volume.".
Notice that it states volume and not volumes, which would be the only way to match the information that the question provides.
Initial question states that on-premise volume is 100s of TB in size.
Therefore, only logical and viable answer can be C.
Feel free to prove me wrong
upvoted 3 times

  eddie5049 4 months, 4 weeks ago


Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored
volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).

Why not configure multiple gateways to reach the hundreds of TB?


upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: D
Stored Volume Gateway will retain ALL data locally whereas Cached Volume Gateway retains frequently accessed data locally
upvoted 3 times

  KZM 7 months, 1 week ago


As per the given information, option 'C' can support the Company's requirements with the LEAST amount of change to the existing
infrastructure, I think.
https://aws.amazon.com/storagegateway/volume/
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: D
the " all file types" is confusing - does not say "all files" - also, hundreds of Terabytes is enormously large to maintain all files on-prem.
Cache volume is also low latency
upvoted 2 times
Question #325 Topic 1

A company is hosting a web application from an Amazon S3 bucket. The application uses Amazon Cognito as an identity provider to authenticate
users and return a JSON Web Token (JWT) that provides access to protected resources that are stored in another S3 bucket.

Upon deployment of the application, users report errors and are unable to access the protected content. A solutions architect must resolve this
issue by providing proper permissions so that users can access the protected content.

Which solution meets these requirements?

A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.

B. Update the S3 ACL to allow the application to access the protected content.

C. Redeploy the application to Amazon S3 to prevent eventually consistent reads in the S3 bucket from affecting the ability of users to access
the protected content.

D. Update the Amazon Cognito pool to use custom attribute mappings within the identity pool and grant users the proper permissions to
access the protected content.

Correct Answer: A

Community vote distribution


A (88%) 13%

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: A
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
upvoted 2 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: A
Services access other services via IAM Roles. Hence why updating AWS Cognito identity pool to assume proper IAM Role is the right
solution.
upvoted 1 times

  alexandercamachop 4 months ago


Selected Answer: A
To resolve the issue and provide proper permissions for users to access the protected content, the recommended solution is:

A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.

Explanation:

Amazon Cognito provides authentication and user management services for web and mobile applications.
In this scenario, the application is using Amazon Cognito as an identity provider to authenticate users and obtain JSON Web Tokens (JWTs).
The JWTs are used to access protected resources stored in another S3 bucket.
To grant users access to the protected content, the proper IAM role needs to be assumed by the identity pool in Amazon Cognito.
By updating the Amazon Cognito identity pool with the appropriate IAM role, users will be authorized to access the protected content in
the S3 bucket.
upvoted 3 times
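
As an illustrative sketch of the point above, attaching the IAM role that authenticated identities assume can be done with boto3. The identity pool ID and role ARNs are hypothetical; the role itself (with s3:GetObject on the protected bucket) is assumed to be created separately.

import boto3

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

# Map the identity pool's authenticated identities to an IAM role that
# grants read access to the protected content bucket.
cognito_identity.set_identity_pool_roles(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # hypothetical
    Roles={
        "authenticated": "arn:aws:iam::111122223333:role/ProtectedContentReadRole",   # hypothetical
        "unauthenticated": "arn:aws:iam::111122223333:role/DenyAllRole",               # hypothetical
    },
)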

  alexandercamachop 4 months ago


Option B is incorrect because updating the S3 ACL (Access Control List) will only affect the permissions of the application, not the users
accessing the content.

Option C is incorrect because redeploying the application to Amazon S3 will not resolve the issue related to user access permissions.

Option D is incorrect because updating custom attribute mappings in Amazon Cognito will not directly grant users the proper
permissions to access the protected content.
upvoted 2 times

  shanwford 5 months, 3 weeks ago


Selected Answer: A
Amazon Cognito identity pools assign your authenticated users a set of temporary, limited-privilege credentials to access your AWS
resources. The permissions for each user are controlled through IAM roles that you create.
https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html
upvoted 1 times
  Brak 6 months, 4 weeks ago
Selected Answer: D
A makes no sense - Cognito is not accessing the S3 resource. It just returns the JWT token that will be attached to the S3 request.

D is the right answer, using custom attributes that are added to the JWT and used to grant permissions in S3. See
https://docs.aws.amazon.com/cognito/latest/developerguide/using-attributes-for-access-control-policy-example.html for an example.
upvoted 2 times

  Abhineet9148232 6 months, 3 weeks ago


But even D requires setting up the permissions as a bucket policy (as shown in the shared example), which involves more overhead than
managing permissions attached to specific roles.
upvoted 2 times

  asoli 6 months, 2 weeks ago


A says "Identity Pool"
According to AWS: "With an identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3
and DynamoDB."
So, answer is A
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: A
Services access other services via IAM Roles.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A is the best solution as it directly addresses the issue of permissions and grants authenticated users the necessary IAM role to access the
protected content.

A suggests updating the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content. This is a valid
solution, as it would grant authenticated users the necessary permissions to access the protected content.
upvoted 4 times

  jennyka76 7 months, 2 weeks ago


ANSWER - A
https://docs.aws.amazon.com/cognito/latest/developerguide/tutorial-create-identity-pool.html
You have to create a custom role, such as a read-only role.
upvoted 4 times

  zTopic 7 months, 2 weeks ago


Selected Answer: A
Answer is A
upvoted 2 times
Question #326 Topic 1

An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3
APIs and overwrites if the same object is uploaded again. For the first 30 days after upload, the objects will be accessed frequently. The objects
will be used less frequently after 30 days, but the access patterns for each object will be inconsistent. The company must optimize its S3 storage
costs while maintaining high availability and resiliency of stored assets.

Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)

A. Move assets to S3 Intelligent-Tiering after 30 days.

B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads.

C. Configure an S3 Lifecycle policy to clean up expired object delete markers.

D. Move assets to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

E. Move assets to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

Correct Answer: AB

Community vote distribution


AB (57%) BD (37%) 7%

  Neha999 Highly Voted  7 months, 2 weeks ago


AB
A : Access Pattern for each object inconsistent, Infrequent Access
B : Deleting Incomplete Multipart Uploads to Lower Amazon S3 Costs
upvoted 14 times

  TungPham Highly Voted  7 months, 1 week ago


Selected Answer: AB
B because Abort Incomplete Multipart Uploads Using S3 Lifecycle => https://aws.amazon.com/blogs/aws-cloud-financial-
management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
A because The objects will be used less frequently after 30 days, but the access patterns for each object will be inconsistent => random
access => S3 Intelligent-Tiering
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: AB
A. Move assets to S3 Intelligent-Tiering after 30 days.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads.
upvoted 1 times

  vini15 2 months, 1 week ago


should be A and B
upvoted 1 times

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: BD
Option A has not been mentioned for resiliency in S3, check the page: https://docs.aws.amazon.com/AmazonS3/latest/userguide/disaster-
recovery-resiliency.html
Therefore, I am with B & D choices.
upvoted 1 times

  alexandercamachop 4 months ago


Selected Answer: AB
A. Move assets to S3 Intelligent-Tiering after 30 days.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads.

Explanation:

A. Moving assets to S3 Intelligent-Tiering after 30 days: This storage class automatically analyzes the access patterns of objects and moves
them between frequent access and infrequent access tiers. Since the objects will be accessed frequently for the first 30 days, storing them
in the frequent access tier during that period optimizes performance. After 30 days, when the access patterns become inconsistent, S3
Intelligent-Tiering will automatically move the objects to the infrequent access tier, reducing storage costs.

B. Configuring an S3 Lifecycle policy to clean up incomplete multipart uploads: Multipart uploads are used for large objects, and
incomplete multipart uploads can consume storage space if not cleaned up. By configuring an S3 Lifecycle policy to clean up incomplete
multipart uploads, unnecessary storage costs can be avoided.
upvoted 1 times
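
A minimal boto3 sketch of both actions in a single lifecycle configuration; the bucket name and rule IDs are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="image-assets-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                # Move objects to Intelligent-Tiering once the frequent-access window ends.
                "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
            },
            {
                "ID": "abort-incomplete-multipart",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Clean up parts of multipart uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
        ]
    },
)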
  antropaws 4 months, 1 week ago
Selected Answer: AD
AD.

B makes no sense because multipart uploads overwrite objects that are already uploaded. The question never says this is a problem.
upvoted 1 times

  VellaDevil 2 months, 3 weeks ago


The question says to optimize cost; if incomplete multipart uploads are not aborted, they still consume capacity in the S3 bucket and thus increase
unnecessary cost.
upvoted 1 times

  klayytech 6 months, 1 week ago


Selected Answer: AB
the following two actions to optimize S3 storage costs while maintaining high availability and resiliency of stored assets:

A. Move assets to S3 Intelligent-Tiering after 30 days. This will automatically move objects between two access tiers based on changing
access patterns and save costs by reducing the number of objects stored in the expensive tier.

B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads. This will help to reduce storage costs by removing incomplete
multipart uploads that are no longer needed.
upvoted 2 times

  datz 6 months, 1 week ago


Selected Answer: BD
B = Deleting incomplete uploads will lower S3 cost.

and D: as "For the first 30 days after upload, the objects will be accessed frequently"

Intelligent-Tiering monitors access, and if a file hasn't been accessed for 30 consecutive days it is moved to the Infrequent Access tier. So if somebody accessed a file 20
days after the upload, with Intelligent-Tiering that file would only move to the Infrequent Access tier after 50 days, which works against the
COST.

"S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent
Access tier and after 90 days of no access to the Archive Instant Access tier. For data that does not require immediate retrieval, you can set
up S3 Intelligent-Tiering to monitor and automatically move objects that aren’t accessed for 180 days or more to the Deep Archive Access
tier to realize up to 95% in storage cost savings."

https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 4 times

  datz 6 months, 1 week ago


Apologies D is wrong for sure lol

"S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed." and for the first 30 days data is
frequently accessed lol.

So best solution will be A - Amazon S3 Intelligent-Tiering


upvoted 2 times

  datz 6 months, 1 week ago


Sorry, disregard the comment above; we are choosing the solution that is needed after 30 days,

so this should be: Amazon S3 Standard-Infrequent Access (S3 Standard-IA)


upvoted 2 times

  MLCL 6 months, 2 weeks ago


Selected Answer: BD
Infrequent access is written in the question so it's BD
upvoted 1 times

  MssP 6 months, 1 week ago


It is not infrequent... it is LESS frequent. It could be a little less or much less (infrequent), but it is clear that the pattern is inconsistent -> A
upvoted 1 times

  asoli 6 months, 2 weeks ago


Selected Answer: AB
The answer is AB
A: "the access patterns for each object will be inconsistent" so Intelligent-Tiering works well for this assumption (even better than D. It
may put it in lower tiers based on access patterns that Standard-IA)
D: incomplete multipart is just a waste of resources
upvoted 2 times

  asoli 6 months, 2 weeks ago


I meant B: incomplete multipart is just a waste of resources
upvoted 1 times

  AlessandraSAA 6 months, 2 weeks ago


Selected Answer: AB
https://www.examtopics.com/discussions/amazon/view/84533-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times

  cenil 6 months, 2 weeks ago


AB, Unknown of changing access pattern
https://aws.amazon.com/s3/storage-classes/
upvoted 1 times

  houzuun 6 months, 2 weeks ago


Selected Answer: AB
I think B is obvious, and I chose A because the pattern is unpredictable
upvoted 2 times

  Maximus007 6 months, 3 weeks ago


B is clear
the choice might be between A and D
I vote for A - S3 Intelligent-Tiering will analyze patterns and decide properly
upvoted 1 times

  taehyeki 6 months, 3 weeks ago


Selected Answer: BD
I think B and D make more sense.
It doesn't matter which tier each object is moved to;
we only know objects are not accessed frequently after 30 days,
so I go with D.
upvoted 2 times

  Abhineet9148232 6 months, 3 weeks ago


Selected Answer: BD
S3-IA provides same low latency and high throughput performance of S3 Standard. Ideal for infrequent but high throughput access.

https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
upvoted 1 times
Question #327 Topic 1

A solutions architect must secure a VPC network that hosts Amazon EC2 instances. The EC2 instances contain highly sensitive data and run in a
private subnet. According to company policy, the EC2 instances that run in the VPC can access only approved third-party software repositories on
the internet for software product updates that use the third party’s URL. Other internet traffic must be blocked.

Which solution meets these requirements?

A. Update the route table for the private subnet to route the outbound traffic to an AWS Network Firewall firewall. Configure domain list rule
groups.

B. Set up an AWS WAF web ACL. Create a custom set of rules that filter traffic requests based on source and destination IP address range
sets.

C. Implement strict inbound security group rules. Configure an outbound rule that allows traffic only to the authorized software repositories on
the internet by specifying the URLs.

D. Configure an Application Load Balancer (ALB) in front of the EC2 instances. Direct all outbound traffic to the ALB. Use a URL-based rule
listener in the ALB’s target group for outbound access to the internet.

Correct Answer: A

Community vote distribution


A (83%) C (17%)

  Bhawesh Highly Voted  7 months, 1 week ago


Selected Answer: A
Correct Answer A. Send the outbound connection from EC2 to Network Firewall. In Network Firewall, create stateful outbound rules to
allow certain domains for software patch download and deny all other domains.

https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering
upvoted 9 times
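
For illustration, a hedged boto3 sketch of a stateful domain-list rule group that allows only an approved repository domain; the rule group name, capacity, and domain are hypothetical. The private subnet's route table then sends 0.0.0.0/0 to the firewall endpoint, and the firewall policy drops anything that does not match the allow list.

import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Stateful rule group that allowlists HTTP/HTTPS traffic to the approved
# third-party repository domains only.
nfw.create_rule_group(
    RuleGroupName="approved-repos-allowlist",  # hypothetical
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".repo.example.com"],         # hypothetical approved repository domain
                "TargetTypes": ["HTTP_HOST", "TLS_SNI"],  # match plain HTTP host header and TLS SNI
                "GeneratedRulesType": "ALLOWLIST",
            }
        }
    },
)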

  Guru4Cloud 3 weeks, 5 days ago


Option A uses a network firewall which is overkill for instance-level rules.
upvoted 1 times

  jennyka76 Highly Voted  7 months, 2 weeks ago


Answer - A
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-al1-al2-update-yum-without-internet/
upvoted 5 times

  asoli 6 months, 2 weeks ago


Although the answer is A, the link you provided here is not related to this question.
The information about "Network Firewall" and how it can help this issue is here:
https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering

(thanks to "@Bhawesh" to provide the link in their answer)


upvoted 3 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
Implement strict inbound security group rules
Configure an outbound security group rule to allow traffic only to the approved software repository URLs
The key points:

Highly sensitive EC2 instances in private subnet that can access only approved URLs
Other internet access must be blocked
Security groups act as a firewall at the instance level and can control both inbound and outbound traffic.
upvoted 1 times

  kelvintoys93 3 months, 2 weeks ago


Isn't a private subnet unable to reach the internet at all unless there is a NAT gateway?
upvoted 1 times

  UnluckyDucky 6 months, 3 weeks ago


Selected Answer: A
Can't use URLs in outbound rule of security groups. URL Filtering screams Firewall.
upvoted 4 times
  VeseljkoD 6 months, 3 weeks ago
Selected Answer: A
We can't specify a URL in an outbound rule of a security group. Create a free-tier AWS account and test it.
upvoted 2 times

  Leo301 6 months, 4 weeks ago


Selected Answer: C
C is my answer.
upvoted 1 times

  Brak 6 months, 4 weeks ago


It can't be C. You cannot use URLs in the outbound rules of a security group.
upvoted 3 times

  johnmcclane78 7 months ago


Option C is the best solution to meet the requirements of this scenario. Implementing strict inbound security group rules that only allow
traffic from approved sources can help secure the VPC network that hosts Amazon EC2 instances. Additionally, configuring an outbound
rule that allows traffic only to the authorized software repositories on the internet by specifying the URLs will ensure that only approved
third-party software repositories can be accessed from the EC2 instances. This solution does not require any additional AWS services and
can be implemented using VPC security groups.

Option A is not the best solution as it involves the use of AWS Network Firewall, which may introduce additional operational overhead.
While domain list rule groups can be used to block all internet traffic except for the approved third-party software repositories, this
solution is more complex than necessary for this scenario.
upvoted 2 times

  Steve_4542636 7 months ago


Selected Answer: C
In the security group, only allow inbound traffic originating from the VPC. Then only allow outbound traffic with a whitelisted IP address.
The question asks about blocking EC2 instances, which is best for security groups since those are at the EC2 instance level. A network
firewall is at the VPC level, which is not what the question is asking to protect.
upvoted 1 times

  Theodorz 7 months ago


Is Security Group able to allow a specific URL? According to
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html, I cannot find such description.
upvoted 2 times

  KZM 7 months, 1 week ago


I am confused; it seems both options A and C are valid solutions.
upvoted 3 times

  Mia2009687 2 months, 3 weeks ago


I think with C the instances are still in a private subnet. Even with a security group, they could not reach the public internet to download the software.
upvoted 1 times

  ruqui 4 months ago


C is not valid. Security groups can allow only traffic from specific ports and/or IPs, you can't use an URL. Correct answer is A
upvoted 1 times

  Zohx 7 months ago


Same here - why is C not a valid option?
upvoted 2 times

  Karlos99 7 months ago


And it is easier to do it at the level
upvoted 1 times

  Karlos99 7 months ago


And it is easier to do it at the VPC level
upvoted 1 times

  Karlos99 7 months ago


Because in this case, the session is initialized from inside
upvoted 1 times

  Neha999 7 months, 2 weeks ago


A as other options are controlling inbound traffic
upvoted 4 times
Question #328 Topic 1

A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the
website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer
(ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously.

The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.

What should a solutions architect recommend to ensure that all the requests are processed successfully?

A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.

B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances
based on network traffic.

C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic
for the API to handle.

D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive
requests from the website for later processing by the EC2 instances.

Correct Answer: D

Community vote distribution


D (60%) B (40%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: B
The auto-scaling would increase the rate at which sales requests are "processed", whereas a SQS will ensure messages don't get lost. If
you were at a fast food restaurant with a long line with 3 cash registers, would you want more cash registers or longer ropes to handle
longer lines? Same concept here.
upvoted 14 times

  rushi0611 4 months, 4 weeks ago


"ensure that all the requests are processed successfully?"
We want to ensure success, not speed. Even with auto scaling there is a chance a request fails, but not with SQS: if
processing fails, the message is returned to the queue and another consumer picks it up.
upvoted 5 times

  joechen2023 3 months, 2 weeks ago


As an architect, you cannot simply add more backend workers (that is an HR and management decision, not part of designing the
solution). So when demand surges, the only correct choice is to buffer the requests using SQS so that workers can take their time to process
them successfully.
upvoted 1 times

  lizzard812 6 months, 1 week ago


Hell true: I'd rather combine both options: SQS plus auto scaling bound to the length of the queue.
upvoted 7 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
D is correct.
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: D
D is correct.
upvoted 1 times

  Abhineet9148232 4 months, 4 weeks ago


Selected Answer: D
B doesn't fit because Auto Scaling alone does not guarantee that all requests will be processed successfully, which the question clearly
asks for.

D ensures that all messages are processed.


upvoted 4 times

  kruasan 5 months ago


Selected Answer: D
An SQS queue acts as a buffer between the frontend (website) and backend (API). Web requests can dump messages into the queue at a
high throughput, then the queue handles delivering those messages to the API at a controlled rate that it can sustain. This prevents the
API from being overwhelmed.
upvoted 1 times
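
A minimal sketch of that buffering pattern (the queue URL is a hypothetical placeholder): the website publishes sales requests to SQS, and the backend workers consume them at their own pace, so nothing is lost during a spike.

import boto3, json

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/sales-requests"  # hypothetical

# Producer side (front end / API): enqueue the sales request and return immediately.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"orderId": "1234", "sku": "ABC"}))

# Consumer side (backend worker): poll, process, and delete only on success,
# so a failed attempt makes the message visible again for another worker.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    order = json.loads(msg["Body"])   # replace with the real order-processing logic
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])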

  kruasan 5 months ago


Options A and B would help by scaling out more instances, however, this may not scale quickly enough and still risks overwhelming the
API. Caching parts of the dynamic content (option C) may help but does not provide the buffering mechanism that a queue does.
upvoted 1 times

  seifshendy99 5 months, 1 week ago


Selected Answer: D
D make sens
upvoted 1 times

  kraken21 6 months ago


Selected Answer: D
D makes more sense
upvoted 1 times

  kraken21 6 months ago


There is no clarity on what the asynchronous process is but D makes more sense if we want to process all requests successfully. The way
the question is worded it looks like the msgs->SQS>ELB/Ec2. This ensures that the messages are processed but may be delayed as the
load increases.
upvoted 1 times

  channn 6 months ago


Selected Answer: D
Although I agree B gives better performance, I choose D because the question asks to ensure that all the requests are processed
successfully.
upvoted 2 times

  klayytech 6 months ago


To ensure that all the requests are processed successfully, I would recommend adding an Amazon CloudFront distribution for the static
content and an Amazon CloudFront distribution for the dynamic content. This will help to reduce the load on the API and improve its
performance. You can also place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic. This will
help to ensure that you have enough capacity to handle the increase in traffic during events for the launch of new products.
upvoted 1 times

  AravindG 6 months ago


Selected Answer: D
The company is expecting a significant and sudden increase in the number of sales requests, and the keyword is "asynchronously". So I feel option D fits
here.
upvoted 1 times

  MssP 6 months, 1 week ago


Selected Answer: D
Critical here is "to ensure that all the requests". ALL REQUESTS, so it is only possible with a SQS. ASG can spend time to launch new
instances so any request can be lost.
upvoted 4 times

  andyto 6 months, 1 week ago


Selected Answer: D
I vote for D. "The company is expecting a significant and sudden increase in the number of sales requests". Sudden increase means ASG
might not be able to launch more EC2 instances fast enough when requests spike, and some requests would be lost.
upvoted 2 times

  asoli 6 months, 2 weeks ago


Selected Answer: D
The keyword here about the orders is "asynchronously". Orders are supposed to process asynchronously. So, it can be published in an SQS
and processed after that. Also, it ensures in a spike, there is no lost order.

In contrast, if you think the answer is B, the issue is the sudden spike. Maybe the auto-scaling is not acting fast enough and some orders
are lost. So, B is not correct.
upvoted 2 times

  harirkmusa 6 months, 4 weeks ago


Selected D
upvoted 1 times

  taehyeki 6 months, 4 weeks ago


Selected Answer: D
anwer d
upvoted 1 times
  KZM 7 months, 1 week ago
I think D.
It may be SQS as per the points,
>workers process sales requests asynchronously and
?the requests are processed successfully,
upvoted 3 times
Question #329 Topic 1

A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run
regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a
report of each instance’s patch status.

Which solution will meet these requirements?

A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance
on a regular schedule.

B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Session Manager to patch the EC2 instances on a regular schedule.

C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the
EC2 instances on a regular schedule.

D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.

Correct Answer: D

Community vote distribution


D (100%)

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: D
D is correct.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: D
Amazon Inspector is a security assessment service that automatically assesses applications for vulnerabilities or deviations from best
practices. It can be used to scan the EC2 instances for software vulnerabilities. AWS Systems Manager Patch Manager can be used to patch
the EC2 instances on a regular schedule. Together, these services can provide a solution that meets the requirements of running regular
security scans and patching EC2 instances on a regular schedule. Additionally, Patch Manager can provide a report of each instance’s
patch status.
upvoted 3 times
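
As a rough sketch under the assumption that Systems Manager already manages the fleet, enabling Inspector and creating a recurring patch window could look like this; the window name and cron schedule are hypothetical.

import boto3

# Turn on Amazon Inspector vulnerability scanning for EC2 in this account.
inspector = boto3.client("inspector2", region_name="us-east-1")
inspector.enable(resourceTypes=["EC2"])

# Create a weekly maintenance window that Patch Manager tasks can run in.
ssm = boto3.client("ssm", region_name="us-east-1")
window = ssm.create_maintenance_window(
    Name="weekly-patching",              # hypothetical
    Schedule="cron(0 2 ? * SUN *)",      # every Sunday at 02:00 UTC
    Duration=4,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)
print(window["WindowId"])
# A Run Command task using the AWS-RunPatchBaseline document is then registered
# against this window, and Patch Manager compliance reporting shows each
# instance's patch status.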

  Steve_4542636 7 months ago


Selected Answer: D
Inspector is for EC2 instances and the network accessibility of those instances.
https://portal.tutorialsdojo.com/forums/discussion/difference-between-security-hub-detective-and-inspector/
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: D
Amazon Inspector is a security assessment service that helps improve the security and compliance of applications deployed on Amazon
Web Services (AWS). It automatically assesses applications for vulnerabilities or deviations from best practices. Amazon Inspector can be
used to identify security issues and recommend fixes for them. It is an ideal solution for running regular security scans across a large fleet
of EC2 instances.

AWS Systems Manager Patch Manager is a service that helps you automate the process of patching Windows and Linux instances. It
provides a simple, automated way to patch your instances with the latest security patches and updates. Patch Manager helps you
maintain compliance with security policies and regulations by providing detailed reports on the patch status of your instances.
upvoted 1 times

  TungPham 7 months, 1 week ago


Selected Answer: D
Amazon Inspector for EC2
https://aws.amazon.com/vi/inspector/faqs/?nc1=f_ls
AWS Systems Manager Patch Manager automates the process of patching managed nodes with both security-related updates and
other types of updates.

http://webcache.googleusercontent.com/search?q=cache:FbFTc6XKycwJ:https://medium.com/aws-architech/use-case-aws-inspector-vs-
guardduty-3662bf80767a&hl=vi&gl=kr&strip=1&vwsrc=0
upvoted 2 times

  jennyka76 7 months, 2 weeks ago


answer - D
https://aws.amazon.com/inspector/faqs/
upvoted 1 times

  Neha999 7 months, 2 weeks ago


D as AWS Systems Manager Patch Manager can patch the EC2 instances.
upvoted 1 times
Question #330 Topic 1

A company is planning to store data on Amazon RDS DB instances. The company must encrypt the data at rest.

What should a solutions architect do to meet this requirement?

A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.

B. Create an encryption key. Store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.

C. Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.

D. Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.

Correct Answer: C

Community vote distribution


A (100%)

  antropaws 4 months, 1 week ago


OK, but why not B???
upvoted 1 times

  aaroncelestin 1 month, 1 week ago


KMS only generates and manages encryption keys. That's it. That's all it does. It's a foundational service that you, as well as other AWS
services (like Secrets Manager), use to encrypt or decrypt.

Secrets Manager stores actual secrets like passwords, pass phrases, and anything else you want encrypted. SM uses KMS to encrypt its
secrets, it would be circular to get an encryption key from KMS to use SM to encrypt the encryption key.
upvoted 1 times

  SkyZeroZx 5 months, 1 week ago


Selected Answer: A
ANSWER - A
upvoted 1 times

  datz 6 months, 1 week ago


Selected Answer: A
A for sure
upvoted 1 times

  PRASAD180 6 months, 4 weeks ago


A is 100% correct
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: A
Key Management Service. Secrets Manager is for database connection strings.
upvoted 3 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A is the correct solution to meet the requirement of encrypting the data at rest.

To encrypt data at rest in Amazon RDS, you can use the encryption feature of Amazon RDS, which uses AWS Key Management Service
(AWS KMS). With this feature, Amazon RDS encrypts each database instance with a unique key. This key is stored securely by AWS KMS. You
can manage your own keys or use the default AWS-managed keys. When you enable encryption for a DB instance, Amazon RDS encrypts
the underlying storage, including the automated backups, read replicas, and snapshots.
upvoted 2 times
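
A minimal sketch of option A (the instance identifier, credentials, and sizing below are hypothetical placeholders): create a KMS key and an RDS instance with encryption at rest enabled.

import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Create a customer managed key for the database (an AWS managed key also works).
key = kms.create_key(Description="RDS encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Encryption at rest must be set when the DB instance is created;
# it cannot simply be switched on for an existing unencrypted instance.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",        # hypothetical
    DBInstanceClass="db.m6g.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # placeholder only
    AllocatedStorage=100,
    StorageEncrypted=True,
    KmsKeyId=key_id,
)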

  bdp123 7 months, 1 week ago


Selected Answer: A
AWS Key Management Service (KMS) is used to manage the keys used to encrypt and decrypt the data.
upvoted 1 times

  pbpally 7 months, 1 week ago


Selected Answer: A
Option A
upvoted 1 times
  NolaHOla 7 months, 2 weeks ago
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances is the correct answer to encrypt the
data at rest in Amazon RDS DB instances.

Amazon RDS provides multiple options for encrypting data at rest. AWS Key Management Service (KMS) is used to manage the keys used
to encrypt and decrypt the data. Therefore, a solution architect should create a key in AWS KMS and enable encryption for the DB
instances to encrypt the data at rest.
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


ANSWER - A
https://docs.aws.amazon.com/whitepapers/latest/efs-encrypted-file-systems/managing-keys.html
upvoted 1 times

  Bhawesh 7 months, 2 weeks ago


Selected Answer: A
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.

https://www.examtopics.com/discussions/amazon/view/80753-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #331 Topic 1

A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15
Mbps and cannot exceed 70% utilization.

What should a solutions architect do to meet these requirements?

A. Use AWS Snowball.

B. Use AWS DataSync.

C. Use a secure VPN connection.

D. Use Amazon S3 Transfer Acceleration.

Correct Answer: A

Community vote distribution


A (82%) B (18%)

  kruasan Highly Voted  5 months ago


Selected Answer: A
Don't mix up Mbps (megabits per second) and MB/s (megabytes per second).
The proper calculation is:

15 Mbps x 70% = 10.5 Mbps; 10.5 Mbps x 86,400 seconds per day x 30 days / 8 bits per byte = 3,402,000 MB, or approximately 3.4 TB - far short of the 20 TB that must move within 30 days.


upvoted 6 times
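
The same arithmetic, written out as a quick sanity check:

# Maximum data that 70% of a 15 Mbps link can move in 30 days.
link_mbps = 15 * 0.7                                 # 10.5 megabits per second usable
seconds = 86_400 * 30                                # 30 days
total_megabits = link_mbps * seconds
total_terabytes = total_megabits / 8 / 1_000_000     # bits -> bytes -> TB (decimal)
print(round(total_terabytes, 2))                     # ~3.4 TB, far short of the 20 TB needed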

  Guru4Cloud Most Recent  2 weeks, 2 days ago


Selected Answer: A
° 15 Mbps bandwidth with 70% max utilization limits the effective bandwidth to 10.5 Mbps or 1.31 MB/s.
° 20 TB of data at 1.31 MB/s would take roughly 180 days to transfer over the network. ° This far exceeds the 30-day requirement.
° AWS Snowball provides a physical storage device that can be shipped to the data center. Up to 80 TB can be loaded onto a Snowball
device and shipped back to AWS.
This allows the 20 TB of data to be transferred much faster by shipping rather than over the limited network bandwidth.
° Snowball uses tamper-resistant enclosures and 256-bit encryption to keep the data secure during transit.
° The data can be imported into Amazon S3 or Amazon Glacier once the Snowball is received by AWS.
upvoted 1 times

  UnluckyDucky 6 months, 2 weeks ago


Selected Answer: B
10 MB/s x 86,400 seconds per day x 30 days = 25,920,000 MB or approximately 25.2 TB

That's how much you can transfer with a 10 Mbps link (roughly 70% of the 15 Mbps connection).

With a consistent connection of 8~ Mbps, and 30 days, you can upload 20 TB of data.

My math says B, my brain wants to go with A. Take your pick.


upvoted 3 times

  Zox42 6 months, 1 week ago


15 Mbps * 0.7 = 10.5 Mbps = 1.3125 MB/s, and 1.3125 * 86,400 * 30 = 3.402.000 MB
Answer A is correct.
upvoted 2 times

  hozy_ 2 months, 2 weeks ago


How can 15 * 0.7 be 1.3125 LMAO
upvoted 1 times

  hozy_ 2 months, 2 weeks ago


OMG it was Mbps! Not MBps. You are right! awesome!!!
upvoted 1 times

  Zox42 6 months, 1 week ago


3,402,000
upvoted 2 times

  Bilalazure 7 months, 1 week ago


Selected Answer: A
Aws snowball
upvoted 2 times
  PRASAD180 7 months, 1 week ago
A is 100% correct
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
AWS Snowball
upvoted 1 times

  pbpally 7 months, 1 week ago


Selected Answer: A
Option a
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


ANSWER - A
https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html
upvoted 1 times

  AWSSHA1 7 months, 2 weeks ago


Selected Answer: A
option A
upvoted 3 times
Question #332 Topic 1

A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can
be accessed only by authorized users. The files must be downloaded securely to the employees’ devices.

The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity.
Which solution will meet these requirements?

A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees’
IP addresses.

B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active
Directory. Configure AWS Client VPN.

C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.

D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS IAM Identity Center (AWS Single
Sign-On).

Correct Answer: B

Community vote distribution


B (86%) 14%

  BrijMohan08 3 weeks, 6 days ago


Selected Answer: C
Remember: The file server is running out of capacity.
upvoted 1 times

  SkyZeroZx 4 months, 3 weeks ago


Selected Answer: B
B is the correct answer
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: B
This solution addresses the need for secure access to confidential and sensitive files, as well as the increase in remote usage. Migrating
the files to Amazon FSx for Windows File Server provides a scalable, fully managed file storage solution in the AWS Cloud that is accessible
from on-premises and cloud environments. Integration with the on-premises Active Directory allows for a consistent user experience and
centralized access control. AWS Client VPN provides a secure and managed VPN solution that can be used by employees to access the files
securely.
upvoted 4 times
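
A hedged boto3 sketch of creating the FSx for Windows file system joined to the self-managed (on-premises) Active Directory; all identifiers, names, and credentials below are hypothetical placeholders, and the AWS Client VPN endpoint is assumed to be set up separately.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                       # GiB, sized above the on-premises file server
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical private subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allows SMB (TCP 445) from the Client VPN range
    WindowsConfiguration={
        "ThroughputCapacity": 32,
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",          # hypothetical on-premises domain
            "UserName": "fsx-service-account",         # hypothetical service account
            "Password": "REPLACE_ME",                  # placeholder only
            "DnsIps": ["10.0.0.10", "10.0.0.11"],      # on-premises AD DNS servers
        },
    },
)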

  LuckyAro 7 months, 1 week ago


Selected Answer: B
B is the best solution for the given requirements. It provides a secure way for employees to access confidential and sensitive files from
anywhere using AWS Client VPN. The Amazon FSx for Windows File Server file system is designed to provide native support for Windows
file system features such as NTFS permissions, Active Directory integration, and Distributed File System (DFS). This means that the
company can continue to use their on-premises Active Directory to manage user access to files.
upvoted 1 times

  Bilalazure 7 months, 1 week ago


B is the correct answer
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


Answer - B
1- https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
2- https://docs.aws.amazon.com/fsx/latest/WindowsGuide/managing-storage-capacity.html
upvoted 1 times

  Neha999 7 months, 2 weeks ago


B
Amazon FSx for Windows File Server file system
upvoted 2 times
Question #333 Topic 1

A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto
Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the
month-end financial calculation batch runs. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the
application.

What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?

A. Configure an Amazon CloudFront distribution in front of the ALB.

B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.

C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.

D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.

Correct Answer: C

Community vote distribution


C (100%)

  elearningtakai 6 months ago


Selected Answer: C
By configuring a scheduled scaling policy, the EC2 Auto Scaling group can proactively launch additional EC2 instances before the CPU
utilization peaks to 100%. This will ensure that the application can handle the workload during the month-end financial calculation batch,
and avoid any disruption or downtime.

Configuring a simple scaling policy based on CPU utilization or adding Amazon CloudFront distribution or Amazon ElastiCache will not
directly address the issue of handling the monthly peak workload.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: C
If the scaling were based on CPU or memory, it would require a certain amount of time above that threshold, 5 minutes for example. That
would mean the CPU would be at 100% for five minutes.
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
C: Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best option because it allows for the
proactive scaling of the EC2 instances before the monthly batch run begins. This will ensure that the application is able to handle the
increased workload without experiencing downtime. The scheduled scaling policy can be configured to increase the number of instances
in the Auto Scaling group a few hours before the batch run and then decrease the number of instances after the batch run is complete.
This will ensure that the resources are available when needed and not wasted when not needed.

The most appropriate solution to handle the increased workload during the monthly batch run and avoid downtime would be to configure
an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Scheduled scaling policies allow you to schedule EC2 instance scaling events in advance based on a specified time and date. You can
use this feature to plan for anticipated traffic spikes or seasonal changes in demand. By setting up scheduled scaling policies, you can
ensure that you have the right number of instances running at the right time, thereby optimizing performance and reducing costs.

To set up a scheduled scaling policy in EC2 Auto Scaling, you need to specify the following:

Start time and date: The date and time when the scaling event should begin.

Desired capacity: The number of instances that you want to have running after the scaling event.

Recurrence: The frequency with which the scaling event should occur. This can be a one-time event or a recurring event, such as daily or
weekly.
upvoted 1 times
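
A minimal boto3 sketch of that idea (group name, capacities, and cron expressions are hypothetical): scale out shortly before the month-end batch and scale back in afterwards.

import boto3

autoscaling = boto3.client("autoscaling")

# Plain cron has no "last day of month" field, so this fires at 23:30 UTC on
# days 28-31 as a simple approximation of "just before the 1st"; capacity
# stays raised over the final days of the month until the scale-in below.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",        # hypothetical
    ScheduledActionName="month-end-scale-out",
    Recurrence="30 23 28-31 * *",
    MinSize=6,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in once the batch has finished on the 1st.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 6 1 * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)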

  bdp123 7 months, 1 week ago


Selected Answer: C
C is the correct answer as traffic spike is known
upvoted 1 times
  jennyka76 7 months, 2 weeks ago
ANSWER - C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 2 times

  Neha999 7 months, 2 weeks ago


C as the schedule of traffic spike is known beforehand.
upvoted 1 times
Question #334 Topic 1

A company wants to give a customer the ability to use on-premises Microsoft Active Directory to download files that are stored in Amazon S3. The
customer’s application uses an SFTP client to download the files.

Which solution will meet these requirements with the LEAST operational overhead and no changes to the customer’s application?

A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.

B. Set up AWS Database Migration Service (AWS DMS) to synchronize the on-premises client with Amazon S3. Configure integrated Active
Directory authentication.

C. Set up AWS DataSync to synchronize between the on-premises location and the S3 location by using AWS IAM Identity Center (AWS Single
Sign-On).

D. Set up a Windows Amazon EC2 instance with SFTP to connect the on-premises client with Amazon S3. Integrate AWS Identity and Access
Management (IAM).

Correct Answer: B

Community vote distribution


A (100%)

  Steve_4542636 Highly Voted  7 months ago


Selected Answer: A
SFTP, FTP - think "Transfer" during test time
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Transfer family is used for SFTP
upvoted 1 times

  live_reply_developers 2 months, 1 week ago


SFTP -> transfer family
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: A
A no doubt. Why the system gives B as the correct answer?
upvoted 1 times

  lht 5 months ago


Selected Answer: A
just A
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
AWS Transfer Family
upvoted 2 times

  LuckyAro 7 months, 1 week ago


AWS Transfer Family is a fully managed service that allows customers to transfer files over SFTP, FTPS, and FTP directly into and out of
Amazon S3. It eliminates the need to manage any infrastructure for file transfer, which reduces operational overhead. Additionally, the
service can be configured to use an existing Active Directory for authentication, which means that no changes need to be made to the
customer's application.
upvoted 1 times
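
A hedged sketch of creating the SFTP endpoint; the directory ID is a hypothetical AWS Managed Microsoft AD or AD Connector that points at the on-premises domain.

import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# SFTP endpoint backed by S3, authenticating users against Active Directory
# through the directory registered in AWS Directory Service.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="AWS_DIRECTORY_SERVICE",
    IdentityProviderDetails={"DirectoryId": "d-1234567890"},  # hypothetical
)
print(server["ServerId"])
# AD groups are then granted access with create_access(), mapping each group
# to an IAM role and the S3 home directory it may download from.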

  bdp123 7 months, 1 week ago


Selected Answer: A
Transfer family is used for SFTP
upvoted 1 times

  TungPham 7 months, 2 weeks ago


Selected Answer: A
Using AWS Transfer Family gives the LEAST operational overhead,
and its SFTP support means no changes to the customer's application.
https://aws.amazon.com/vi/blogs/architecture/managed-file-transfer-using-aws-transfer-family-and-amazon-s3/
upvoted 2 times
  Bhawesh 7 months, 2 weeks ago
Selected Answer: A
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.

https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html
upvoted 3 times
Question #335 Topic 1

A company is experiencing sudden increases in demand. The company needs to provision large Amazon EC2 instances from an Amazon Machine
Image (AMI). The instances will run in an Auto Scaling group. The company needs a solution that provides minimum initialization latency to meet
the demand.

Which solution meets these requirements?

A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto
Scaling group.

B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace
the AMI in the Auto Scaling group with the new AMI.

C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that
modifies the AMI in the Auto Scaling group.

D. Use Amazon EventBridge to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an
event source in EventBridge.

Correct Answer: C

Community vote distribution


B (88%) 13%

  danielklein09 Highly Voted  4 months ago


Read the question 5 times, didn't understand a thing :(
upvoted 21 times

  Guru4Cloud 3 weeks, 5 days ago


Me too
upvoted 2 times

  kambarami Most Recent  1 week, 3 days ago


Please reword the question. I cannot understand a thing!
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: B
Enable EBS fast snapshot restore on a snapshot
Create an AMI from the snapshot
Replace the AMI used by the Auto Scaling group with this new AMI

The key points:

° Need to launch large EC2 instances quickly from an AMI in an Auto Scaling group
° Looking to minimize instance initialization latency
upvoted 1 times

  antropaws 4 months, 1 week ago


Selected Answer: B
B most def
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: B
B: "EBS fast snapshot restore": minimizes initialization latency. This is a good choice.
upvoted 2 times

  Zox42 6 months, 1 week ago


Selected Answer: B
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 2 times

  geekgirl22 7 months, 1 week ago


Keyword: minimize initialization latency == snapshot. A and B have snapshots in them, but B is the one that makes sense.
C has DLM, which can create AMIs on a lifecycle, but it does not address initialization latency or snapshots.
upvoted 3 times
  LuckyAro 7 months, 1 week ago
Selected Answer: B
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows for rapid restoration of EBS volumes from
snapshots. This reduces the time required to create an AMI from a snapshot, which is useful for quickly provisioning large Amazon EC2
instances.

Provisioning an AMI by using the fast snapshot restore feature is a fast and efficient way to create an AMI. Once the AMI is created, it can
be replaced in the Auto Scaling group without any downtime or disruption to running instances.
upvoted 1 times
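
An illustrative boto3 sketch of that flow (snapshot ID, Availability Zones, and AMI name are hypothetical): enable fast snapshot restore, then register an AMI backed by that snapshot.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pre-initialize the snapshot in the AZs the Auto Scaling group uses, so new
# volumes created from it deliver full performance immediately at launch.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],   # hypothetical
)

# Register an AMI whose root volume is backed by that snapshot, then swap it
# into the Auto Scaling group's launch template or launch configuration.
ami = ec2.register_image(
    Name="app-ami-fsr",                             # hypothetical
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    Architecture="x86_64",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": "snap-0123456789abcdef0"}}
    ],
)
print(ami["ImageId"])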

  bdp123 7 months, 1 week ago


Selected Answer: B
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to
quickly create a new Amazon Machine Image (AMI) from a snapshot, which can help reduce the
initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace
the AMI in the Auto Scaling group with the new AMI. This will ensure that new instances are launched from the updated AMI and are able
to meet the increased demand quickly.
upvoted 1 times

  TungPham 7 months, 2 weeks ago


Selected Answer: C
Provision an AMI by using the snapshot => not sure, because a snapshot only backs up an EBS volume while an AMI captures the whole instance image.
Replace the AMI in the Auto Scaling group with the new AMI => for what?

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html

Amazon Data Lifecycle Manager helps automate snapshot and AMI management
upvoted 2 times

  jennyka76 7 months, 2 weeks ago


agree with answer - B
upvoted 1 times

  kpato87 7 months, 2 weeks ago


Selected Answer: B
Option B is the most suitable solution for this use case, as it enables Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a
snapshot, which significantly reduces the time required for creating an AMI from the snapshot. The fast snapshot restore feature enables
Amazon EBS to pre-warm the EBS volumes associated with the snapshot, which reduces the time required to initialize the volumes when
launching instances from the AMI.
upvoted 2 times

  Neha999 7 months, 2 weeks ago


https://www.examtopics.com/discussions/amazon/view/82400-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times

  bdp123 7 months, 2 weeks ago


Selected Answer: B
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to
quickly create a new Amazon Machine Image (AMI) from a snapshot, which can help reduce the
initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace
the AMI in the Auto Scaling group with the new AMI. This will ensure that new instances are launched
from the updated AMI and are able to meet the increased demand quickly.
upvoted 4 times
Question #336 Topic 1

A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon
EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.

What should a solutions architect do to meet this requirement with the LEAST operational effort?

A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the
KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.

B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the
SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these
parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.

C. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon
EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict the access to the file on the file system so that
the application can read the file and that only super users can modify the file. Implement an AWS Lambda function that rotates the key in
Aurora every 14 days and writes new credentials into the file.

D. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application
uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS
Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.

Correct Answer: A

Community vote distribution


A (100%)

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: A
Use AWS Secrets Manager to store the Aurora credentials as a secret
Encrypt the secret with a KMS key
Configure 14 day automatic rotation for the secret
Associate the secret with the Aurora DB cluster
The key points:

Aurora MySQL credentials must be encrypted and rotated every 14 days


Want to minimize operational effort
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: A
AWS Secrets Manager allows you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their
lifecycle. With this service, you can automate the rotation of secrets, such as database credentials, on a schedule that you choose. The
solution allows you to create a new secret with the appropriate credentials and associate it with the Aurora DB cluster. You can then
configure a custom rotation period of 14 days to ensure that the credentials are automatically rotated every two weeks, as required by the
IT security guidelines. This approach requires the least amount of operational effort as it allows you to manage secrets centrally without
modifying your application code or infrastructure.
upvoted 3 times
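
A minimal sketch of option A (the secret name, KMS key alias, endpoint, and rotation Lambda ARN are hypothetical placeholders): store the Aurora credentials encrypted with the KMS key and turn on 14-day rotation.

import boto3, json

secretsmanager = boto3.client("secretsmanager")

# Store the Aurora credentials encrypted with a customer managed KMS key.
secret = secretsmanager.create_secret(
    Name="prod/aurora/app-credentials",                  # hypothetical
    KmsKeyId="alias/aurora-secrets-key",                 # hypothetical
    SecretString=json.dumps({
        "engine": "mysql",
        "host": "aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",  # hypothetical
        "username": "appuser",
        "password": "REPLACE_ME",                        # placeholder only
    }),
)

# Rotate automatically every 14 days using a rotation Lambda function
# (Secrets Manager provides rotation templates for Aurora MySQL).
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:AuroraRotation",  # hypothetical
    RotationRules={"AutomaticallyAfterDays": 14},
)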

  elearningtakai 6 months ago


Selected Answer: A
A: AWS Secrets Manager. Simply this supported rotate feature, and secure to store credentials instead of EFS or S3.
upvoted 1 times

  Steve_4542636 7 months ago


Selected Answer: A
Voting A
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
A proposes to create a new AWS KMS encryption key and use AWS Secrets Manager to create a new secret that uses the KMS key with the
appropriate credentials. Then, the secret will be associated with the Aurora DB cluster, and a custom rotation period of 14 days will be
configured. AWS Secrets Manager will automate the process of rotating the database credentials, which will reduce the operational effort
required to meet the IT security guidelines.
upvoted 1 times
  jennyka76 7 months, 2 weeks ago
Answer is A
To implement password rotation lifecycles, use AWS Secrets Manager. You can rotate, manage, and retrieve database credentials, API keys,
and other secrets throughout their lifecycle using Secrets Manager.
https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/
upvoted 3 times

  Neha999 7 months, 2 weeks ago


A
https://www.examtopics.com/discussions/amazon/view/59985-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #337 Topic 1

A company has deployed a web application on AWS. The company hosts the backend database on Amazon RDS for MySQL with a primary DB
instance and five read replicas to support scaling needs. The read replicas must lag no more than 1 second behind the primary DB instance. The
database routinely runs scheduled stored procedures.

As traffic on the website increases, the replicas experience additional lag during periods of peak load. A solutions architect must reduce the
replication lag as much as possible. The solutions architect must minimize changes to the application code and must minimize ongoing
operational overhead.

Which solution will meet these requirements?

A. Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace
the stored procedures with Aurora MySQL native functions.

B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the application to check the cache before the application
queries the database. Replace the stored procedures with AWS Lambda functions.

C. Migrate the database to a MySQL database that runs on Amazon EC2 instances. Choose large, compute optimized EC2 instances for all
replica nodes. Maintain the stored procedures on the EC2 instances.

D. Migrate the database to Amazon DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput,
and configure on-demand capacity scaling. Replace the stored procedures with DynamoDB streams.

Correct Answer: A

Community vote distribution


A (69%) B (31%)

  fkie4 Highly Voted  6 months, 3 weeks ago


i hate this kind of question
upvoted 23 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
Migrate the RDS MySQL database to Amazon Aurora MySQL
Use Aurora Replicas for read scaling instead of RDS read replicas
Configure Aurora Auto Scaling to handle load spikes
Replace stored procedures with Aurora MySQL native functions
upvoted 1 times
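As a rough sketch of the "Aurora Auto Scaling" piece of option A, replica scaling is configured through Application Auto Scaling. The cluster identifier and capacity limits below are assumptions, not values from the question.

import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target.
# "my-aurora-cluster" is a placeholder cluster identifier.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=2,
    MaxCapacity=15,
)

# Target-tracking policy: add or remove Aurora Replicas to keep the
# average reader CPU utilization around 60%.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)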

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: A
First, ElastiCache involves heavy changes to the application code. The question states that "the solutions architect must minimize changes
to the application code". Therefore B is not suitable, and A is more appropriate for the question's requirements.
upvoted 2 times

  aaroncelestin 1 month, 1 week ago


... but migrating their ENTIRE prod database and its replicas to a new platform is not a heavy change?
upvoted 1 times

  KMohsoe 4 months, 1 week ago


Selected Answer: B
Why not B? Please explain to me.
upvoted 2 times

  Terion 5 days, 18 hours ago


The cache wouldn't have the most up-to-date info, and the requirement is to lag no more than 1 second behind the primary DB.
upvoted 1 times

  asoli 6 months, 2 weeks ago


Selected Answer: A
Using a cache requires huge changes to the application. Several things need to change to put a cache in front of the DB. So,
option B is not correct.
Aurora will help reduce replication lag for the read replicas.
upvoted 4 times
  kaushald 6 months, 3 weeks ago
Option A is the most appropriate solution for reducing replication lag without significant changes to the application code and minimizing
ongoing operational overhead. Migrating the database to Amazon Aurora MySQL allows for improved replication performance and higher
scalability compared to Amazon RDS for MySQL. Aurora Replicas provide faster replication, reducing the replication lag, and Aurora Auto
Scaling ensures that there are enough Aurora Replicas to handle the incoming traffic. Additionally, Aurora MySQL native functions can
replace the stored procedures, reducing the load on the database and improving performance.

Option B is not the best solution since adding an ElastiCache for Redis cluster does not address the replication lag issue, and the cache
may not have the most up-to-date information. Additionally, replacing the stored procedures with AWS Lambda functions adds additional
complexity and may not improve performance.
upvoted 3 times

  taehyeki 6 months, 3 weeks ago


Selected Answer: B
a,b are confusing me..
i would like to go with b..
upvoted 1 times

  bangfire 6 months, 3 weeks ago


Option B is incorrect because it suggests using ElastiCache for Redis as a caching layer in front of the database, but this would not
necessarily reduce the replication lag on the read replicas. Additionally, it suggests replacing the stored procedures with AWS Lambda
functions, which may require significant changes to the application code.
upvoted 4 times

  lizzard812 6 months, 1 week ago


Yes and moreover Redis requires app refactoring which is a solid operational overhead
upvoted 1 times

  Nel8 7 months ago


Selected Answer: B
By using ElastiCache you avoid a lot of common issues you might encounter. ElastiCache is a database caching solution. ElastiCache Redis
supports failover and Multi-AZ, and above all it is well suited to sit in front of RDS.

Migrating the database, as in option A, requires operational overhead.


upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: A
Aurora can have up to 15 read replicas - much faster than RDS
https://aws.amazon.com/rds/aurora/
upvoted 4 times

  ChrisG1454 6 months, 4 weeks ago


" As a result, all Aurora Replicas return the same data for query results with minimal replica lag. This lag is usually much less than 100
milliseconds after the primary instance has written an update "

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 2 times

  ChrisG1454 6 months, 3 weeks ago


You can invoke an Amazon Lambda function from an Amazon Aurora MySQL-Compatible Edition DB cluster with the "native
function"....

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times

  jennyka76 7 months, 2 weeks ago


Answer - A
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PostgreSQL.Replication.ReadReplicas.html
---------------------------------------------------------------------------------------
You can scale reads for your Amazon RDS for PostgreSQL DB instance by adding read replicas to the instance. As with other Amazon RDS
database engines, RDS for PostgreSQL uses the native replication mechanisms of PostgreSQL to keep read replicas up to date with
changes on the source DB. For general information about read replicas and Amazon RDS, see Working with read replicas.
upvoted 3 times
Question #338 Topic 1

A solutions architect must create a disaster recovery (DR) plan for a high-volume software as a service (SaaS) platform. All data for the platform
is stored in an Amazon Aurora MySQL DB cluster.

The DR plan must replicate data to a secondary AWS Region.

Which solution will meet these requirements MOST cost-effectively?

A. Use MySQL binary log replication to an Aurora cluster in the secondary Region. Provision one DB instance for the Aurora cluster in the
secondary Region.

B. Set up an Aurora global database for the DB cluster. When setup is complete, remove the DB instance from the secondary Region.

C. Use AWS Database Migration Service (AWS DMS) to continuously replicate data to an Aurora cluster in the secondary Region. Remove the
DB instance from the secondary Region.

D. Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region.

Correct Answer: D

Community vote distribution


D (48%) B (24%) 14% 14%

  jennyka76 Highly Voted  7 months, 2 weeks ago


Answer - A
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
-----------------------------------------------------------------------------
Before you begin
Before you can create an Aurora MySQL DB cluster that is a cross-Region read replica, you must turn on binary logging on your source
Aurora MySQL DB cluster. Cross-region replication for Aurora MySQL uses MySQL binary replication to replay changes on the cross-Region
read replica DB cluster.
upvoted 8 times

  leoattf 7 months, 1 week ago


On this same URL you provided, there is a note highlighted, stating the following:
"Replication from the primary DB cluster to all secondaries is handled by the Aurora storage layer rather than by the database engine,
so lag time for replicating changes is minimal—typically, less than 1 second. Keeping the database engine out of the replication process
means that the database engine is dedicated to processing workloads. It also means that you don't need to configure or manage the
Aurora MySQL binlog (binary logging) replication."

So, answer should be A


upvoted 2 times

  leoattf 7 months, 1 week ago


Correction: So, answer should be D
upvoted 1 times

  ChrisG1454 6 months, 4 weeks ago


The question states " The DR plan must replicate data to a "secondary" AWS Region."

In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL:

Aurora MySQL DB clusters in different AWS Regions.

You can replicate data across multiple Regions by using an Aurora global database. For details, see High availability across AWS Regions
with Aurora global databases

You can create an Aurora read replica of an Aurora MySQL DB cluster in a different AWS Region, by using MySQL binary log (binlog)
replication. Each cluster can have up to five read replicas created this way, each in a different Region.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times

  ChrisG1454 6 months, 4 weeks ago


The question is asking for the most cost-effective solution.
Aurora global databases are more expensive.

https://aws.amazon.com/rds/aurora/pricing/
upvoted 1 times
  luisgu Highly Voted  4 months, 1 week ago
Selected Answer: B
MOST cost-effective --> B
See section "Creating a headless Aurora DB cluster in a secondary Region" on the link
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
"Although an Aurora global database requires at least one secondary Aurora DB cluster in a different AWS Region than the primary, you
can use a headless configuration for the secondary cluster. A headless secondary Aurora DB cluster is one without a DB instance. This type
of configuration can lower expenses for an Aurora global database. In an Aurora DB cluster, compute and storage are decoupled. Without
the DB instance, you're not charged for compute, only for storage. If it's set up correctly, a headless secondary's storage volume is kept in-
sync with the primary Aurora DB cluster."
upvoted 5 times

  bsbs1234 1 week ago


Upvoted your message, but I still think D is correct, because the question is about designing a DR plan. In case of DR, option B would require creating an
instance in the DR Region manually.
upvoted 1 times

  vini15 Most Recent  2 months, 1 week ago


should be B for most cost effective solution.
see the link - Achieve cost-effective multi-Region resiliency with Amazon Aurora Global Database headless clusters
https://aws.amazon.com/blogs/database/achieve-cost-effective-multi-region-resiliency-with-amazon-aurora-global-database-headless-
clusters/
upvoted 1 times

  Abhineet9148232 6 months, 3 weeks ago


Selected Answer: D
D: With Amazon Aurora Global Database, you pay for replicated write I/Os between the primary Region and each secondary Region (in this
case 1).

Not A because it achieves the same, would be equally costly and adds overhead.
upvoted 2 times
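A minimal boto3 sketch of option D, assuming made-up cluster identifiers and Regions: promote the existing cluster to a global database, then add a secondary cluster with one DB instance in the DR Region (dropping that instance would give the "headless" setup described for option B).

import boto3

# Promote an existing Aurora cluster to a global database (primary Region).
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="saas-global",  # placeholder
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:saas-primary",
)

# Add a secondary cluster in the DR Region and give it one DB instance (option D).
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_cluster(
    DBClusterIdentifier="saas-dr",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="saas-global",
)
rds_dr.create_db_instance(
    DBInstanceIdentifier="saas-dr-instance-1",
    DBClusterIdentifier="saas-dr",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)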

  [Removed] 7 months ago


Selected Answer: C
CCCCCC
upvoted 3 times

  Steve_4542636 7 months ago


Selected Answer: D
I think Amazon is looking for D here. I don't think A is intended, because that would require knowledge of MySQL, which isn't what they are
testing us on. Not option C, because the question states a large volume; if the volume were low, then DMS would be better. This question is
not a good question.
upvoted 3 times

  fkie4 6 months, 3 weeks ago


Very true. Amazon wants everyone to use AWS, so why would they push a MySQL-native approach?
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: D
D provides automatic replication
upvoted 3 times

  LuckyAro 7 months, 1 week ago


D provides automatic replication to a secondary Region through the Aurora global database feature. This feature provides automatic
replication of data across AWS Regions, with the ability to control and configure the replication process. By specifying a minimum of one
DB instance in the secondary Region, you can ensure that your secondary database is always available and up-to-date, allowing for quick
failover in the event of a disaster.
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: D
Actually, I change my answer to 'D' because of the following:
An Aurora DB cluster can contain up to 15 Aurora Replicas. The Aurora Replicas can be distributed across the Availability Zones that a DB
cluster spans WITHIN an AWS Region.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
You can replicate data across multiple Regions by using an Aurora global database
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: A
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.MySQL.html Global database is for specific
versions - they did not tell us the version
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
upvoted 1 times
  doodledreads 7 months, 1 week ago
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

Check out the part "Recovery from Region-wide outages".


upvoted 1 times

  zTopic 7 months, 2 weeks ago


Selected Answer: A
Answer is A
upvoted 2 times
Question #339 Topic 1

A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance.
Management says the application must be made more secure with the least amount of programming effort.

What should a solutions architect do to meet these requirements?

A. Use AWS Key Management Service (AWS KMS) to create keys. Configure the application to load the database credentials from AWS KMS.
Enable automatic key rotation.

B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secret
Manager.

C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS
for MySQL database using Secrets Manager.

D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter
Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the
application user in the RDS for MySQL database using Parameter Store.

Correct Answer: D

Community vote distribution


C (100%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


Selected Answer: C
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure
the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in
the RDS for MySQL database using Secrets Manager.

https://www.examtopics.com/discussions/amazon/view/46483-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 8 times

  cloudbusting Highly Voted  7 months, 2 weeks ago


Parameter Store does not provide automatic credential rotation.
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: C
Store the RDS credentials in Secrets Manager
Configure the application to retrieve the credentials from Secrets Manager
Use Secrets Manager's built-in rotation to rotate the RDS credentials automatically
upvoted 1 times

  Hades2231 1 month ago


Selected Answer: C
Secrets Manager can handle the rotation, so no need for Lambda to rotate the keys.
upvoted 1 times

  chen0305_099 1 month, 1 week ago


WHY NOT B ?
upvoted 1 times

  StacyY 1 month, 2 weeks ago


B, we need lambda for password rotation, confirmed!
upvoted 1 times

  Nikki013 1 month ago


It is not needed for certain RDS engine types, including MySQL, as Secrets Manager has built-in rotation capabilities for them:
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: C
If you need to store DB credentials, use AWS Secrets Manager. Systems Manager Parameter Store is better suited for configuration values (no built-in
rotation).
upvoted 1 times
  AlessandraSAA 6 months, 4 weeks ago
why it's not A?
upvoted 4 times

  MssP 6 months, 1 week ago


It is asking for credentials, not for encryption keys.
upvoted 4 times

  PoisonBlack 5 months ago


So credentials rotation is secrets manager and key rotation is KMS?
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: C
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: C
C is a valid solution for securing the custom application with the least amount of programming effort. It involves creating credentials on
the RDS for MySQL database for the application user and storing them in AWS Secrets Manager. The application can then be configured to
load the database credentials from Secrets Manager. Additionally, the solution includes setting up a credentials rotation schedule for the
application user in the RDS for MySQL database using Secrets Manager, which will automatically rotate the credentials at a specified
interval without requiring any programming effort.
upvoted 2 times
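A minimal sketch of the application-side change implied by option C: fetch the current credentials from Secrets Manager at connect time instead of embedding them. The secret name is a placeholder, and pymysql is only one example of a MySQL client.

import json
import boto3
import pymysql  # any MySQL client works; pymysql is just an example

def get_db_connection(secret_name="prod/app/mysql"):  # placeholder secret name
    """Fetch current credentials from Secrets Manager and open a connection."""
    secrets = boto3.client("secretsmanager")
    secret = json.loads(secrets.get_secret_value(SecretId=secret_name)["SecretString"])

    # Because the secret is fetched on every (re)connect, the application keeps
    # working after Secrets Manager rotates the password on its schedule.
    return pymysql.connect(
        host=secret["host"],
        user=secret["username"],
        password=secret["password"],
        database=secret.get("dbname", "appdb"),
    )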

  bdp123 7 months, 1 week ago


Selected Answer: C
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
upvoted 2 times

  jennyka76 7 months, 2 weeks ago


Answer - C
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 3 times
Question #340 Topic 1

A media company hosts its website on AWS. The website application’s architecture includes a fleet of Amazon EC2 instances behind an
Application Load Balancer (ALB) and a database that is hosted on Amazon Aurora. The company’s cybersecurity team reports that the application
is vulnerable to SQL injection.

How should the company resolve this issue?

A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.

B. Create an ALB listener rule to reply to SQL injections with a fixed response.

C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.

D. Set up Amazon Inspector to block all SQL injection attempts automatically.

Correct Answer: C

Community vote distribution


A (100%)

  Bhawesh Highly Voted  7 months, 2 weeks ago


Selected Answer: A
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.

SQL Injection - AWS WAF


DDoS - AWS Shield
upvoted 16 times

  jennyka76 Highly Voted  7 months, 2 weeks ago


Answer - A
https://aws.amazon.com/premiumsupport/knowledge-center/waf-block-common-
attacks/#:~:text=To%20protect%20your%20applications%20against,%2C%20query%20string%2C%20or%20URI.
-----------------------------------------------------------------------------------------------------------------------
Protect against SQL injection and cross-site scripting
To protect your applications against SQL injection and cross-site scripting (XSS) attacks, use the built-in SQL injection and cross-site
scripting engines. Remember that attacks can be performed on different parts of the HTTP request, such as the HTTP header, query string,
or URI. Configure the AWS WAF rules to inspect different parts of the HTTP request against the built-in mitigation engines.
upvoted 6 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
° Use AWS WAF in front of the Application Load Balancer
° Configure appropriate WAF web ACLs to detect and block SQL injection patterns
The key points:
° Website hosted on EC2 behind an ALB with Aurora database
° Application is vulnerable to SQL injection attacks
° AWS WAF is designed to detect and block SQL injection and other common web exploits. It can be placed in front of the ALB to inspect all
incoming requests. WAF rules can identify malicious SQL patterns and block them.
upvoted 1 times

  KMohsoe 4 months, 1 week ago


Selected Answer: A
SQL injection -> WAF
upvoted 1 times

  lexotan 5 months, 2 weeks ago


Selected Answer: A
WAF is the right one
upvoted 1 times

  akram_akram 5 months, 3 weeks ago


Selected Answer: A
SQL Injection - AWS WAF
DDoS - AWS Shield
upvoted 1 times

  movva12 6 months, 1 week ago


Answer C - Shield Advanced (WAF + Firewall Manager)
upvoted 1 times
  fkie4 6 months, 3 weeks ago
Selected Answer: A
It is A. I am happy to see Amazon give out easy points like this...
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
AWS WAF is a managed service that protects web applications from common web exploits that could affect application availability,
compromise security, or consume excessive resources. AWS WAF enables customers to create custom rules that block common attack
patterns, such as SQL injection attacks.

By using AWS WAF in front of the ALB and associating the appropriate web ACLs with AWS WAF, the company can protect its website
application from SQL injection attacks. AWS WAF will inspect incoming traffic to the website application and block requests that match the
defined SQL injection patterns in the web ACLs. This will help to prevent SQL injection attacks from reaching the application, thereby
improving the overall security posture of the application.
upvoted 2 times

  LuckyAro 7 months, 1 week ago


B, C, and D are not the best solutions for this issue. Replying to SQL injections with a fixed response
(B) is not a recommended approach as it does not actually fix the vulnerability, but only masks the issue. Subscribing to AWS Shield
Advanced
(C) is useful to protect against DDoS attacks but does not protect against SQL injection vulnerabilities. Amazon Inspector
(D) is a vulnerability assessment tool and can identify vulnerabilities but cannot block attacks in real-time.
upvoted 2 times
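A minimal boto3 sketch of option A, assuming placeholder names and ARNs: create a REGIONAL web ACL that includes the AWS managed SQL injection rule group, then associate it with the ALB.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL with the AWS managed SQL injection (SQLi) rule group.
acl = wafv2.create_web_acl(
    Name="app-web-acl",    # placeholder
    Scope="REGIONAL",      # REGIONAL scope is required for an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "AWS-AWSManagedRulesSQLiRuleSet",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "SQLiRuleSet",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AppWebAcl",
    },
)

# Attach the web ACL to the Application Load Balancer (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)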

  pbpally 7 months, 1 week ago


Selected Answer: A
Bhawesh answers it perfectly, so I'm avoiding redundancy, but I agree on it being A.
upvoted 2 times
Question #341 Topic 1

A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon
QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to
enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.

B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce
column-level access control. Use Amazon S3 as the data source in QuickSight.

C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-
level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.

D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level
access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.

Correct Answer: C

Community vote distribution


D (100%)

  K0nAn Highly Voted  7 months, 2 weeks ago


Selected Answer: D
This solution leverages AWS Lake Formation to ingest data from the Aurora MySQL database into the S3 data lake, while enforcing
column-level access control for QuickSight users. Lake Formation can be used to create and manage the data lake's metadata and enforce
security and governance policies, including column-level access control. This solution then uses Amazon Athena as the data source in
QuickSight to query the data in the S3 data lake. This solution minimizes operational overhead by leveraging AWS services to manage and
secure the data, and by using a standard query service (Amazon Athena) to provide a SQL interface to the data.
upvoted 6 times

  jennyka76 Highly Voted  7 months, 2 weeks ago


Answer - D
https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with-amazon-quicksight-and-aws-lake-formation/
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
Use a Lake Formation blueprint to ingest data from the Aurora database into the S3 data lake
Leverage Lake Formation to enforce column-level access control for the marketing team
Use Amazon Athena as the data source in QuickSight
The key points:

Need to join S3 data lake data with Aurora MySQL data


Require column-level access controls for marketing team in QuickSight
Minimize operational overhead
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: D
Using a Lake Formation blueprint to ingest the data from the database to the S3 data lake, using Lake Formation to enforce column-level
access control for the QuickSight users, and using Amazon Athena as the data source in QuickSight. This solution requires the least
operational overhead as it utilizes the features provided by AWS Lake Formation to enforce column-level authorization, which simplifies
the process and reduces the need for additional configuration and maintenance.
upvoted 3 times
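A minimal sketch of the column-level grant in option D, with made-up role, database, table, and column names: Lake Formation limits the marketing principal to a subset of columns, which Athena and QuickSight then respect.

import boto3

lf = boto3.client("lakeformation")

# Allow the marketing role to SELECT only two columns of the ingested table.
# Role, database, table, and column names are placeholders.
lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/marketing-analysts"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_lake",
            "Name": "orders",
            "ColumnNames": ["order_date", "region"],
        }
    },
    Permissions=["SELECT"],
)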

  Bhawesh 7 months, 2 weeks ago


Selected Answer: D
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level
access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.

https://www.examtopics.com/discussions/amazon/view/80865-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #342 Topic 1

A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling
group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to
provision the capacity 30 minutes before the jobs run.

Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to
analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group’s
desired capacity.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target
value for the metric to 60%.

B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum
capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.

C. Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU
utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.

D. Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group
reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.

Correct Answer: C

Community vote distribution


C (66%) B (28%) 6%

  fkie4 Highly Voted  6 months, 3 weeks ago


Selected Answer: C
B is NOT correct. the question said "The company does not have the resources to analyze the required capacity trends for the Auto Scaling
group counts.".
answer B said "Set the appropriate desired capacity, minimum capacity, and maximum capacity".
how can someone set desired capacity if he has no resources to analyze the required capacity.
Read carefully Amigo
upvoted 8 times

  omoakin 4 months, 1 week ago


scheduled scaling....
upvoted 2 times

  ealpuche 4 months, 3 weeks ago


But you can make a vague estimation according to the resources used; you don't need to make machine learning models to do that.
You only need common sense.
upvoted 1 times

  bsbs1234 Most Recent  1 week ago


Should be C. The question does not say how long the job will run, so we don't know when to set the end time in a scheduled policy.
upvoted 1 times

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: C
C is correct!
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: C
If the baseline CPU utilization is 60%, that's enough information to predict some aspects of future usage. So the keyword is
"predictive": scaling judged by past usage.
upvoted 1 times

  omoakin 4 months, 1 week ago


BBBBBBBBBBBBB
upvoted 1 times

  ealpuche 4 months, 3 weeks ago


Selected Answer: B
B.
you can make a vague estimation according to the resources used; you don't need to make machine-learning models to do that. You only
need common sense.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: C
Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in
traffic flows.

Predictive scaling is well suited for situations where you have:

Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends

Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis

Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times
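A minimal boto3 sketch of option C, assuming a placeholder Auto Scaling group name: the 30-minute pre-launch maps to a SchedulingBufferTime of 1800 seconds.

import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling on CPU with a 60% target, launching capacity
# 30 minutes (1800 seconds) ahead of the forecasted need.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-jobs-asg",  # placeholder
    PolicyName="weekly-batch-predictive-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 60.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",    # forecast and actually scale, not forecast only
        "SchedulingBufferTime": 1800,  # pre-launch instances 30 minutes early
    },
)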

  neverdie 6 months, 2 weeks ago


Selected Answer: B
A scheduled scaling policy allows you to set up specific times for your Auto Scaling group to scale out or scale in. By creating a scheduled
scaling policy for the Auto Scaling group, you can set the appropriate desired capacity, minimum capacity, and maximum capacity, and set
the recurrence to weekly. You can then set the start time to 30 minutes before the batch jobs run, ensuring that the required capacity is
provisioned before the jobs run.

Option C, creating a predictive scaling policy for the Auto Scaling group, is not necessary in this scenario since the company does not have
the resources to analyze the required capacity trends for the Auto Scaling group counts. This would require analyzing the required
capacity trends for the Auto Scaling group counts to determine the appropriate scaling policy.
upvoted 3 times

  [Removed] 6 months ago


(typo above) C is correct..
upvoted 1 times

  [Removed] 6 months ago


B is correct. "Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch.",
meaning the company does not have to analyze the capacity trends themselves.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times

  MssP 6 months, 1 week ago


Look at fkie4 comment... no way to know desired capacity!!! -> B not correct
upvoted 1 times

  Lalo 3 months, 3 weeks ago


The text says:
1. "A transaction processing company has weekly scripted batch jobs" - there is a schedule.
2. "The company does not have the resources to analyze the required capacity trends for the Auto Scaling group" - so do not use predictive scaling.
The answer is B.
upvoted 1 times

  MLCL 6 months, 2 weeks ago


Selected Answer: C
The second part of the question invalidates option B: they don't know how to determine the capacity requirements and need something to do it for them,
therefore C.
upvoted 1 times

  asoli 6 months, 2 weeks ago


Selected Answer: C
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using
predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only
dynamic scaling, which is reactive in nature.
upvoted 2 times

  WherecanIstart 6 months, 2 weeks ago


Selected Answer: C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 3 times

  UnluckyDucky 6 months, 3 weeks ago


Selected Answer: B
"The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts"
Using predictive schedule seems appropriate here, however the question says the company doesn't have the resources to analyze this,
even though forecast does it for you using ML.

The job runs weekly therefore the easiest way to achieve this with the LEAST operational overhead, seems to be scheduled scaling.

Both solutions achieve the goal, B imho does it better, considering the limitations.

Predictive Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
Scheduled Scaling:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 2 times
  samcloudaws 6 months, 4 weeks ago
Selected Answer: B
Scheduled scaling seems like the simplest way to solve this.
upvoted 3 times

  Steve_4542636 7 months ago


Selected Answer: C
"The company needs to provision the capacity 30 minutes before the jobs run." This means the ASG group needs to scale BEFORE the CPU
utilization hits 60%. Dynamic scaling only responds to a scaling metric setup such as average CPU utilization at 60% for 5 minutes. The
forecasting option is automatic, however, it does require some time for it to be effective since it needs the EC2 utilization in the past to
predict the future.
upvoted 2 times

  nder 7 months ago


Selected Answer: A
Dynamic Scaling policy is the least operational overhead.
upvoted 1 times

  dpmahendra 7 months, 1 week ago


B Scheduled scaling
upvoted 2 times

  dpmahendra 7 months, 1 week ago


C: Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in
traffic flows.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
This solution automates the capacity provisioning process based on the actual workload, without requiring any manual intervention. With
dynamic scaling, the Auto Scaling group will automatically adjust the number of instances based on the actual workload. The target value
for the CPU utilization metric is set to 60%, which is the baseline CPU utilization that is noted on each run, indicating that this is a
reasonable level of utilization for the workload. This solution does not require any scheduling or forecasting, reducing the operational
overhead.
upvoted 1 times

  MssP 6 months, 1 week ago


What about provisioning capacity 30 minutes before? Only B and C do that, no?
upvoted 1 times
Question #343 Topic 1

A solutions architect is designing a company’s disaster recovery (DR) architecture. The company has a MySQL database that runs on an Amazon
EC2 instance in a private subnet with scheduled backup. The DR design needs to include multiple AWS Regions.

Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the MySQL database to multiple EC2 instances. Configure a standby EC2 instance in the DR Region. Turn on replication.

B. Migrate the MySQL database to Amazon RDS. Use a Multi-AZ deployment. Turn on read replication for the primary DB instance in the
different Availability Zones.

C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster in the primary Region. Host the secondary
DB cluster in the DR Region.

D. Store the scheduled backup of the MySQL database in an Amazon S3 bucket that is configured for S3 Cross-Region Replication (CRR). Use
the data backup to restore the database in the DR Region.

Correct Answer: B

Community vote distribution


C (100%)

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: C
An Amazon Aurora global database can span and replicate databases across multiple AWS Regions, and it is also compatible with MySQL.
upvoted 1 times

  GalileoEC2 6 months, 1 week ago


C. Why B? B is Multi-AZ within one Region; C is multi-Region, as requested.
upvoted 1 times

  lucdt4 4 months, 1 week ago


" The DR design needs to include multiple AWS Regions."
with the requirement "DR SITE multiple AWS region" -> B is wrong, because it deploy multy AZ (this is not multi region)
upvoted 1 times

  AlessandraSAA 6 months, 3 weeks ago


Selected Answer: C
A. Multiple EC2 instances to be configured and updated manually in case of DR.
B. Amazon RDS=Multi-AZ while it asks to be multi-region
C. correct, see comment from LuckyAro
D. Manual process to start the DR, therefore same limitation as answer A
upvoted 4 times

  KZM 7 months, 1 week ago


An Amazon Aurora global database can span and replicate databases across multiple AWS Regions, and it is also compatible with MySQL.
upvoted 3 times

  LuckyAro 7 months, 1 week ago


C: Migrate MySQL database to an Amazon Aurora global database is the best solution because it requires minimal operational overhead.
Aurora is a managed service that provides automatic failover, so standby instances do not need to be manually configured. The primary
DB cluster can be hosted in the primary Region, and the secondary DB cluster can be hosted in the DR Region. This approach ensures that
the data is always available and up-to-date in multiple Regions, without requiring significant manual intervention.
upvoted 3 times

  LuckyAro 7 months, 1 week ago


With dynamic scaling, the Auto Scaling group will automatically adjust the number of instances based on the actual workload. The target
value for the CPU utilization metric is set to 60%, which is the baseline CPU utilization that is noted on each run, indicating that this is a
reasonable level of utilization for the workload. This solution does not require any scheduling or forecasting, reducing the operational
overhead.
upvoted 1 times

  LuckyAro 7 months, 1 week ago


Sorry, Posted right answer to the wrong question, mistakenly clicked the next question, sorry.
upvoted 4 times

  geekgirl22 7 months, 1 week ago


C is the answer, as RDS Multi-AZ is only multi-zone, not multi-Region.
upvoted 1 times
  bdp123 7 months, 1 week ago
Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times

  SMAZ 7 months, 1 week ago


C
Option A has operational overhead, whereas option C does not.
upvoted 1 times

  alexman 7 months, 1 week ago


Selected Answer: C
C mentions multiple regions. Option B is within the same region
upvoted 3 times

  jennyka76 7 months, 1 week ago


ANSWER - B ?? NOT SURE
upvoted 1 times
Question #344 Topic 1

A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse
messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as
large as 50 MB.

Which solution will meet these requirements with the FEWEST changes to the code?

A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.

B. Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.

C. Change the limit in Amazon SQS to handle messages that are larger than 256 KB.

D. Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location
in the messages.

Correct Answer: A

Community vote distribution


A (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: A
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.

Amazon SQS has a limit of 256 KB for the size of messages. To handle messages larger than 256 KB, the Amazon SQS Extended Client
Library for Java can be used. This library allows messages larger than 256 KB to be stored in Amazon S3 and provides a way to retrieve and
process them. Using this solution, the application code can remain largely unchanged while still being able to process messages up to 50
MB in size.
upvoted 7 times
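The Extended Client Library itself is Java, but the pattern it implements (payload in S3, small pointer in the SQS message) can be sketched with plain boto3 to show what happens behind the scenes. The bucket, queue URL, and helper names below are made up.

import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "large-sqs-payloads"  # placeholder bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/parser-queue"  # placeholder

def send_large_message(payload: bytes):
    """Store the payload in S3 and send only a small pointer through SQS."""
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )

def receive_large_message():
    """Read the pointer from SQS and fetch the real payload from S3."""
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", [])
    for msg in messages:
        pointer = json.loads(msg["Body"])
        body = s3.get_object(Bucket=pointer["s3_bucket"], Key=pointer["s3_key"])["Body"].read()
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return body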

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
The SQS Extended Client Library enables storing large payloads in S3 while referencing them via SQS. The application code can stay almost
entirely unchanged - it sends and receives SQS messages normally. The library transparently routes the large payloads to S3 behind
the scenes.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: A
Quote "The Amazon SQS Extended Client Library for Java enables you to manage Amazon SQS message payloads with Amazon S3." and
"An extension to the Amazon SQS client that enables sending and receiving messages up to 2GB via Amazon S3." at
https://github.com/awslabs/amazon-sqs-java-extended-client-lib
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: A
Amazon SQS has a limit of 256 KB for the size of messages.

To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used.
upvoted 1 times

  gold4otas 6 months ago


The Amazon SQS Extended Client Library for Java enables you to publish messages that are greater than the current SQS limit of 256 KB,
up to a maximum of 2 GB.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: A
https://github.com/awslabs/amazon-sqs-java-extended-client-lib
upvoted 3 times

  Arathore 7 months, 1 week ago


Selected Answer: A
To send messages larger than 256 KiB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an
Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
upvoted 4 times
  Neha999 7 months, 1 week ago
A
For messages > 256 KB, use Amazon SQS Extended Client Library for Java
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-messages.html
upvoted 4 times
Question #345 Topic 1

A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization
techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users.
The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company's user
base grows while providing the lowest login latency possible.

Which solution will meet these requirements MOST cost-effectively?

A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application
globally.

B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load
Balancer to serve the web application globally.

C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web
application globally.

D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic
Beanstalk to serve the web application globally.

Correct Answer: A

Community vote distribution


A (100%)

  Lonojack Highly Voted  7 months, 1 week ago


Selected Answer: A
CloudFront=globally
Lambda@edge = Authorization/ Latency
Cognito=Authentication for Web apps
upvoted 8 times

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: A
Amazon Cognito is a serverless authentication service that can be used to easily add user sign-up and authentication to web and mobile
apps. It is a good choice for this scenario because it is scalable and can handle a small number of users without any additional costs.

Lambda@Edge is a serverless compute service that can be used to run code at the edge of the AWS network. It is a good choice for this
scenario because it can be used to perform authorization checks at the edge, which can improve the login latency.

Amazon CloudFront is a content delivery network (CDN) that can be used to serve web content globally. It is a good choice for this
scenario because it can cache web content closer to users, which can improve the performance of the web application.
upvoted 1 times

  antropaws 4 months ago


Selected Answer: A
A is perfect.
upvoted 1 times

  kraken21 6 months ago


Selected Answer: A
Lambda@Edge for authorization
https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-
cloudfront/
upvoted 2 times

  LuckyAro 7 months, 1 week ago


Selected Answer: A
Amazon CloudFront is a global content delivery network (CDN) service that can securely deliver web content, videos, and APIs at scale. It
integrates with Cognito for authentication and with Lambda@Edge for authorization, making it an ideal choice for serving web content
globally.

Lambda@Edge is a service that lets you run AWS Lambda functions globally closer to users, providing lower latency and faster response
times. It can also handle authorization logic at the edge to secure content in CloudFront. For this scenario, Lambda@Edge can provide
authorization for the web application while leveraging the low-latency benefit of running at the edge.
upvoted 2 times
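A minimal sketch of the Lambda@Edge piece of option A: a viewer-request handler that rejects requests without an Authorization header before CloudFront contacts the origin. A real deployment would verify the Cognito-issued JWT rather than only checking that a header exists.

# Lambda@Edge viewer-request handler (attached to the CloudFront distribution).
# This only checks that an Authorization header is present; a real deployment
# would verify the Cognito-issued JWT signature and claims here.

def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    if "authorization" not in headers:
        # Short-circuit at the edge: return 401 without contacting the origin.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}],
            },
        }

    # Token present: let CloudFront continue to the origin.
    return request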
  bdp123 7 months, 1 week ago
Selected Answer: A
CloudFront to serve globally
upvoted 1 times

  SMAZ 7 months, 1 week ago


A
Amazon Cognito for authentication and Lambda@Edge for authorization; Amazon CloudFront serves the web application globally and
provides low-latency content delivery.
upvoted 3 times
Question #346 Topic 1

A company has an aging network-attached storage (NAS) array in its data center. The NAS array presents SMB shares and NFS shares to client
workstations. The company does not want to purchase a new NAS array. The company also does not want to incur the cost of renewing the NAS
array’s support contract. Some of the data is accessed frequently, but much of the data is inactive.

A solutions architect needs to implement a solution that migrates the data to Amazon S3, uses S3 Lifecycle policies, and maintains the same look
and feel for the client workstations. The solutions architect has identified AWS Storage Gateway as part of the solution.

Which type of storage gateway should the solutions architect provision to meet these requirements?

A. Volume Gateway

B. Tape Gateway

C. Amazon FSx File Gateway

D. Amazon S3 File Gateway

Correct Answer: C

Community vote distribution


D (100%)

  LuckyAro Highly Voted  7 months, 1 week ago


Selected Answer: D
Amazon S3 File Gateway provides on-premises applications with access to virtually unlimited cloud storage using NFS and SMB file
interfaces. It seamlessly moves frequently accessed data to a low-latency cache while storing colder data in Amazon S3, using S3 Lifecycle
policies to transition data between storage classes over time.

In this case, the company's aging NAS array can be replaced with an Amazon S3 File Gateway that presents the same NFS and SMB shares
to the client workstations. The data can then be migrated to Amazon S3 and managed using S3 Lifecycle policies
upvoted 5 times
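A minimal sketch of the S3 Lifecycle piece of the solution, with a made-up bucket name and day thresholds: once the data lands in S3 through the File Gateway, inactive objects are transitioned to cheaper storage classes.

import boto3

s3 = boto3.client("s3")

# Move file-share data that has gone cold to cheaper storage classes.
# Bucket name and day thresholds are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="nas-migration-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-inactive-files",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER_IR"},
            ],
        }]
    },
)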

  Guru4Cloud Most Recent  3 weeks, 5 days ago


Selected Answer: D
It provides an easy way to lift-and-shift file data from the existing NAS to Amazon S3. The S3 File Gateway presents SMB and NFS file
shares that client workstations can access just like the NAS shares.
Behind the scenes, it moves the file data to S3 storage, storing it durably and cost-effectively.
S3 Lifecycle policies can be used to transition less frequently accessed data to lower-cost S3 storage tiers like S3 Glacier.
From the client workstation perspective, access to files feels seamless and unchanged after migration to S3. The S3 File Gateway handles
the underlying data transfers.
It is a simple, low-cost gateway option tailored for basic file share migration use cases.
upvoted 1 times

  james2033 2 months, 2 weeks ago


Selected Answer: D
- Volume Gateway: https://aws.amazon.com/storagegateway/volume/ (Remove A, related iSCSI)

- Tape Gateway https://aws.amazon.com/storagegateway/vtl/ (Remove B)

- Amazon FSx File Gateway https://aws.amazon.com/storagegateway/file/fsx/ (C)

- Why not choose C? Because the solution needs to work with Amazon S3. (Answer D is the correct answer.)
https://aws.amazon.com/storagegateway/file/s3/
upvoted 1 times

  siyam008 7 months ago


Selected Answer: D
https://aws.amazon.com/blogs/storage/how-to-create-smb-file-shares-with-aws-storage-gateway-using-hyper-v/
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: D
https://aws.amazon.com/about-aws/whats-new/2018/06/aws-storage-gateway-adds-smb-support-to-store-objects-in-amazon-s3/
upvoted 2 times

  everfly 7 months, 1 week ago


Selected Answer: D
Amazon S3 File Gateway provides a file interface to objects stored in S3. It can be used for a file-based interface with S3, which allows the
company to migrate their NAS array data to S3 while maintaining the same look and feel for client workstations. Amazon S3 File Gateway
supports SMB and NFS protocols, which will allow clients to continue to access the data using these protocols. Additionally, Amazon S3
Lifecycle policies can be used to automate the movement of data to lower-cost storage tiers, reducing the storage cost of inactive data.
upvoted 3 times
Question #347 Topic 1

A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular
instance family and various instance sizes based on the current needs of the company.

The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance
family and sizes in the next 6 months based on application popularity and usage.

Which solution will meet these requirements MOST cost-effectively?

A. Compute Savings Plan

B. EC2 Instance Savings Plan

C. Zonal Reserved Instances

D. Standard Reserved Instances

Correct Answer: D

Community vote distribution


A (72%) B (25%)

  AlmeroSenior Highly Voted  7 months, 1 week ago


Selected Answer: A
Read carefully, guys. They need to be able to change the instance FAMILY, and although the EC2 Instance Savings Plan has a higher discount, it is clearly
documented as not being allowed:

EC2 Instance Savings Plans provide savings up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family
in a chosen AWS Region (for example, M5 in Virginia). These plans automatically apply to usage regardless of size (for example, m5.xlarge,
m5.2xlarge, etc.), OS (for example, Windows, Linux, etc.), and tenancy (Host, Dedicated, Default) within the specified family in a Region.
upvoted 12 times

  FFO 5 months, 1 week ago


Savings Plans are a flexible pricing model that offer low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage, in exchange for a
commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for a Savings Plan, you will
be charged the discounted Savings Plans price for your usage up to your commitment.
The company wants savings over the next 3 years but wants to change the instance type in 6 months. This invalidates A
upvoted 2 times

  FFO 5 months, 1 week ago


Disregard! found more information:
We recommend Savings Plans (over Reserved Instances). Like Reserved Instances, Savings Plans offer lower prices (up to 72%
savings compared to On-Demand Instance pricing). In addition, Savings Plans offer you the flexibility to change your usage as your
needs evolve. For example, with Compute Savings Plans, lower prices will automatically apply when you change from C4 to C6g
instances, shift a workload from EU (Ireland) to EU (London), or move a workload from Amazon EC2 to AWS Fargate or AWS Lambda.
https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/
upvoted 1 times

  Wayne23Fang Most Recent  3 weeks, 2 days ago


Selected Answer: A
D is not right. Standard Reserved Instances should be Convertible Reserved Instances if you need additional flexibility, such as the
ability to use different instance families or operating systems.
upvoted 1 times

  Guru4Cloud 3 weeks, 5 days ago


Selected Answer: B
The key factors are:

Need to maximize cost savings over 3 years


Ability to change instance family and sizes in 6 months
Standardized on a particular instance family for now
upvoted 2 times

  Kiki_Pass 2 months ago


Why not C? Can do with Convertible Reserved Instance
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-types.html
upvoted 1 times

  ITV2021 2 months, 2 weeks ago


Selected Answer: A
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 1 times

  Mia2009687 2 months, 3 weeks ago


Selected Answer: A
EC2 Instance Savings Plan cannot change the family.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 1 times

  mattcl 3 months, 2 weeks ago


Answer D: You can use Standard Reserved Instances when you know that you need a specific instance type.
upvoted 1 times

  kruasan 5 months ago


Selected Answer: A
Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute
workloads. Compute Savings Plans provide lower prices on Amazon EC2 instance usage regardless of instance family, size, OS, tenancy, or
AWS Region. This also applies to AWS Fargate and AWS Lambda usage. SageMaker Savings Plans provide you with lower prices for your
Amazon SageMaker instance usage, regardless of your instance family, size, component, or AWS Region.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 2 times

  kruasan 5 months ago


With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to
c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to
receive the discounted rate provided by your EC2 Instance Savings Plan.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 1 times

  kruasan 5 months ago


The company needs to be able to change the instance family and sizes in the next 6 months based on application popularity and
usage.
Therefore EC2 Instance Savings Plan prerequisites are not fulfilled
upvoted 1 times

  SkyZeroZx 5 months, 1 week ago


Selected Answer: B
EC2 Instance Savings Plan
upvoted 1 times

  lexotan 5 months, 1 week ago


Selected Answer: D
Why not D? You can change instance types and classes.
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: A
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 3 times

  everfly 7 months, 1 week ago


Selected Answer: A
Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2
instance usage regardless of instance family, size, AZ, Region, OS or tenancy, and also apply to Fargate or Lambda usage.
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for commitment to usage of individual
instance families in a Region
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 4 times

  doodledreads 7 months, 1 week ago


Selected Answer: A
Compute Savings Plans are the most flexible: they let you change instance types, whereas EC2 Instance Savings Plans offer the best savings.
upvoted 2 times

  Yechi 7 months, 1 week ago


Selected Answer: B
With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to
c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to
receive the discounted rate provided by your EC2 Instance Savings Plan.
upvoted 3 times

  everfly 7 months, 1 week ago


Selected Answer: B
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for commitment to usage of individual
instance families in a Region (e.g. M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that
region regardless of AZ, size, OS or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between instances
within a family in that region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically
benefit from the Savings Plan prices.
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 3 times
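As a practical companion to the discussion above, the Cost Explorer API can generate purchase recommendations for both plan types, which makes the flexibility-versus-discount trade-off easy to compare against real usage. The following is a minimal sketch, not a definitive implementation: it assumes the account has enough usage history for Cost Explorer to analyze and that the caller holds the ce:GetSavingsPlansPurchaseRecommendation permission; all output values depend on the account.

```python
import boto3

# Cost Explorer (service name "ce") exposes Savings Plans purchase recommendations.
ce = boto3.client("ce", region_name="us-east-1")

def recommend(plan_type: str):
    """Return (estimated savings %, hourly commitment) for a 3-year, no-upfront plan."""
    resp = ce.get_savings_plans_purchase_recommendation(
        SavingsPlansType=plan_type,          # "COMPUTE_SP" or "EC2_INSTANCE_SP"
        TermInYears="THREE_YEARS",
        PaymentOption="NO_UPFRONT",
        LookbackPeriodInDays="THIRTY_DAYS",
    )
    rec = resp.get("SavingsPlansPurchaseRecommendation", {})
    summary = rec.get("SavingsPlansPurchaseRecommendationSummary", {})
    return summary.get("EstimatedSavingsPercentage"), summary.get("HourlyCommitmentToPurchase")

for plan in ("COMPUTE_SP", "EC2_INSTANCE_SP"):
    pct, hourly = recommend(plan)
    print(f"{plan}: ~{pct}% estimated savings for a ${hourly}/hour commitment")
```

The EC2_INSTANCE_SP figure is usually higher, but only the COMPUTE_SP commitment keeps its discount when the instance family changes, which is why A fits this question.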
Question #348 Topic 1

A company collects data from a large number of participants who use wearable devices. The company stores the data in an Amazon DynamoDB
table and uses applications to analyze the data. The data workload is constant and predictable. The company wants to stay at or below its
forecasted budget for DynamoDB.

Which solution will meet these requirements MOST cost-effectively?

A. Use provisioned mode and DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). Reserve capacity for the forecasted workload.

B. Use provisioned mode. Specify the read capacity units (RCUs) and write capacity units (WCUs).

C. Use on-demand mode. Set the read capacity units (RCUs) and write capacity units (WCUs) high enough to accommodate changes in the
workload.

D. Use on-demand mode. Specify the read capacity units (RCUs) and write capacity units (WCUs) with reserved capacity.

Correct Answer: A

Community vote distribution


B (76%) A (24%)

  Guru4Cloud 3 weeks, 6 days ago


Selected Answer: A
Option B lacks the cost benefits of Standard-IA.

Option C uses more expensive on-demand pricing.

Option D does not actually allow reserving capacity with on-demand mode.

So option A leverages provisioned mode, Standard-IA, and reserved capacity to meet the requirements in a cost-optimal way.
upvoted 1 times

  MrAWSAssociate 3 months, 1 week ago


Selected Answer: A
A is correct!
upvoted 1 times

  MrAWSAssociate 3 months, 1 week ago


Sorry, A will not work, since Reserved Capacity can only be used with DynamoDB Standard table class. So, B is right for this case.
upvoted 1 times

  UNGMAN 6 months, 2 weeks ago


Selected Answer: B
Predictable..
upvoted 3 times

  kayodea25 6 months, 3 weeks ago


Option C is the most cost-effective solution for this scenario. In on-demand mode, DynamoDB automatically scales up or down based on
the current workload, so the company only pays for the capacity it uses. By setting the RCUs and WCUs high enough to accommodate
changes in the workload, the company can ensure that it always has the necessary capacity without overprovisioning and incurring
unnecessary costs. Since the workload is constant and predictable, using provisioned mode with reserved capacity (Options A and D) may
result in paying for unused capacity during periods of low demand. Option B, using provisioned mode without reserved capacity, may
result in throttling during periods of high demand if the provisioned capacity is not sufficient to handle the workload.
upvoted 2 times

  Bofi 6 months, 2 weeks ago


Kayode olode..lol
upvoted 1 times

  boxu03 6 months, 3 weeks ago


you forgot "The data workload is constant and predictable", should be B
upvoted 2 times

  Steve_4542636 7 months ago


"The data workload is constant and predictable."
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html

"With provisioned capacity you pay for the provision of read and write capacity units for your DynamoDB tables. Whereas with DynamoDB
on-demand you pay per request for the data reads and writes that your application performs on your tables."
upvoted 1 times
  Charly0710 7 months ago
Selected Answer: B
The data workload is constant and predictable, so on-demand mode isn't the right fit.
DynamoDB Standard-IA is not necessary in this context.
upvoted 1 times

  Lonojack 7 months, 1 week ago


Selected Answer: B
The problem with (A) is “Standard-Infrequent Access”. The question says the company has applications that analyze the data continuously, so an infrequent-access table class is a poor fit.
That’s why the correct answer is (B).
upvoted 3 times

  bdp123 7 months, 1 week ago


Selected Answer: A
workload is constant
upvoted 2 times

  Lonojack 7 months, 1 week ago


The problem with (A) is “Standard-Infrequent Access”.
The question says the company has to analyze the data, so the correct answer is (B).
upvoted 2 times

  Samuel03 7 months, 1 week ago


Selected Answer: B
As the numbers are already known
upvoted 2 times

  everfly 7 months, 1 week ago


Selected Answer: B
The data workload is constant and predictable.
upvoted 4 times
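To make the provisioned-mode answer concrete, here is a minimal boto3 sketch that creates a table with explicit read and write capacity units sized for a steady workload. The table name, key schema, and throughput figures are hypothetical; reserved capacity itself is purchased separately (for example, through the DynamoDB console) and simply discounts provisioned capacity that is already running.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Provisioned mode: capacity is declared up front, which suits a constant,
# predictable workload and can be covered by reserved capacity for extra savings.
# "WearableReadings" and the throughput figures below are illustrative only.
dynamodb.create_table(
    TableName="WearableReadings",
    AttributeDefinitions=[
        {"AttributeName": "ParticipantId", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "ParticipantId", "KeyType": "HASH"},
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},
    ],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,   # sized from the forecasted steady read rate
        "WriteCapacityUnits": 250,  # sized from the forecasted steady write rate
    },
)

# Wait until the table is active before the applications start writing to it.
dynamodb.get_waiter("table_exists").wait(TableName="WearableReadings")
```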
Question #349 Topic 1

A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an
AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the
database with the acquiring company’s AWS account in ap-southeast-3.

What should a solutions architect do to meet these requirements?

A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company’s
AWS account.

B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring
company’s AWS account.

C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company’s AWS account to the KMS key alias.
Share the snapshot with the acquiring company's AWS account.

D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3
bucket policy to allow access from the acquiring company’s AWS account.

Correct Answer: B

Community vote distribution


B (100%)

  Vuuu 2 months ago


Selected Answer: B
B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring
company’s AWS account.
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Create a database snapshot of the encrypted database. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with
the acquiring company’s AWS account.
upvoted 1 times

  Abrar2022 3 months, 3 weeks ago


Selected Answer: B
A. - "So let me get this straight, with the current company the data is protected and encrypted. However, for the acquiring company the
data is unencrypted? How is that fair?"

C - Wouldn't recommend this option, because using a different AWS managed KMS key will not allow the acquiring company's AWS
account to access the encrypted data.

D. - Don't risk it for a biscuit and get fired!!!! - by downloading the database snapshot and uploading it to an Amazon S3 bucket. This will
increase the risk of data leakage or loss of confidentiality during the transfer process.

B - CORRECT
upvoted 3 times

  SkyZeroZx 5 months ago


Selected Answer: B
To securely share a backup of the database with the acquiring company's AWS account in the same Region, a solutions architect should
create a database snapshot, add the acquiring company's AWS account to the AWS KMS key policy, and share the snapshot with the
acquiring company's AWS account.

Option A, creating an unencrypted snapshot, is not recommended as it will compromise the confidentiality of the data. Option C, creating
a snapshot that uses a different AWS managed KMS key, does not provide any additional security and will unnecessarily complicate the
solution. Option D, downloading the database snapshot and uploading it to an S3 bucket, is not secure as it can expose the data during
transit.

Therefore, the correct option is B: Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the
snapshot with the acquiring company's AWS account.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: B
Option B is the correct answer.
Option A is not recommended because copying the snapshot to a new unencrypted snapshot will compromise the confidentiality of the
data.
Option C is not recommended because using a different AWS managed KMS key will not allow the acquiring company's AWS account to
access the encrypted data.
Option D is not recommended because downloading the database snapshot and uploading it to an Amazon S3 bucket will increase the
risk of data leakage or loss of confidentiality during the transfer process.
upvoted 1 times
  Steve_4542636 7 months ago
Selected Answer: B
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 1 times

  geekgirl22 7 months, 1 week ago


It is C, you have to create a new key. Read below
You can't share a snapshot that's encrypted with the default AWS KMS key. You must create a custom AWS KMS key instead. To share an
encrypted Aurora DB cluster snapshot:

1. Create a custom AWS KMS key.
2. Add the target account to the custom AWS KMS key.
3. Create a copy of the DB cluster snapshot using the custom AWS KMS key. Then, share the newly copied snapshot with the target account.
4. Copy the shared DB cluster snapshot from the target account.
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
upvoted 1 times

  KZM 7 months, 1 week ago


Yes, as per the given information "The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed
key", it may not be the default AWS KMS key.
upvoted 1 times

  KZM 7 months, 1 week ago


Yes, can't share a snapshot that's encrypted with the default AWS KMS key.
But as per the given information "The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed
key", it may not be the default AWS KMS key.
upvoted 3 times

  enzomv 6 months, 3 weeks ago


I agree with KZM.
It is B.
There's no need to create another custom AWS KMS key.
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
Give target account access to the custom AWS KMS key within the source account
1. Log in to the source account, and go to the AWS KMS console in the same Region as the DB cluster snapshot.
2. Select Customer-managed keys from the navigation pane.
3. Select your custom AWS KMS key (ALREADY CREATED)
4. From the Other AWS accounts section, select Add another AWS account, and then enter the AWS account number of your
target account.

Then:
Copy and share the DB cluster snapshot
upvoted 2 times

  leoattf 7 months, 1 week ago


I also thought straight away that it could be C; however, the question mentions that the database is already encrypted with a KMS
customer managed key. So B could be right, since it already has a custom key, not the default KMS key.
What do you think?
upvoted 3 times

  enzomv 6 months, 3 weeks ago


It is B.
There's no need to create another custom AWS KMS key.
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
Give target account access to the custom AWS KMS key within the source account
1. Log in to the source account, and go to the AWS KMS console in the same Region as the DB cluster snapshot.
2. Select Customer-managed keys from the navigation pane.
3. Select your custom AWS KMS key (ALREADY CREATED)
4. From the Other AWS accounts section, select Add another AWS account, and then enter the AWS account number of your target
account.

Then:
Copy and share the DB cluster snapshot
upvoted 1 times

  nyx12345 7 months, 1 week ago


Is it bad that in answer B the acquiring company is using the same KMS key? Should a new KMS key not be used?
upvoted 2 times

  geekgirl22 7 months, 1 week ago


Yes, you are right, read my comment above.
upvoted 1 times

  bsbs1234 1 week ago


I think I would agree with you if option C said to use a new "customer managed key" instead of an AWS managed key.
upvoted 1 times

  bdp123 7 months, 1 week ago


Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
upvoted 2 times

  jennyka76 7 months, 1 week ago


ANSWER - B
upvoted 1 times
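For reference, the accepted approach can be scripted roughly as shown below. This is a sketch under stated assumptions: the snapshot already exists, the key ARN, snapshot identifier, and account ID are placeholders, and the action list is a typical minimum for letting another account copy and restore an encrypted snapshot; always merge into the existing key policy rather than overwrite it.

```python
import json
import boto3

kms = boto3.client("kms", region_name="ap-southeast-3")
rds = boto3.client("rds", region_name="ap-southeast-3")

KEY_ID = "arn:aws:kms:ap-southeast-3:111122223333:key/EXAMPLE-KEY-ID"  # placeholder
SNAPSHOT_ID = "confidential-db-final-snapshot"                         # placeholder
ACQUIRER_ACCOUNT = "444455556666"                                      # placeholder

# Step 1: let the acquiring account use the existing customer managed key.
# Fetch the current policy and append a statement instead of replacing it wholesale.
policy = json.loads(kms.get_key_policy(KeyId=KEY_ID, PolicyName="default")["Policy"])
policy["Statement"].append(
    {
        "Sid": "AllowAcquirerToUseKey",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACQUIRER_ACCOUNT}:root"},
        "Action": [
            "kms:Decrypt",
            "kms:DescribeKey",
            "kms:CreateGrant",
            "kms:GenerateDataKey*",
            "kms:ReEncrypt*",
        ],
        "Resource": "*",
    }
)
kms.put_key_policy(KeyId=KEY_ID, PolicyName="default", Policy=json.dumps(policy))

# Step 2: share the encrypted Aurora cluster snapshot with the acquiring account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier=SNAPSHOT_ID,
    AttributeName="restore",
    ValuesToAdd=[ACQUIRER_ACCOUNT],
)
```

The acquiring account can then copy the shared snapshot (re-encrypting it with its own key) and restore it on its side.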
Question #350 Topic 1

A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance in the us-east-1 Region to store customer transactions.
The company needs high availability and automatic recovery for the DB instance.

The company must also run reports on the RDS database several times a year. The report process causes transactions to take longer than usual to
post to the customers’ accounts. The company needs a solution that will improve the performance of the report process.

Which combination of steps will meet these requirements? (Choose two.)

A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment.

B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment in another Availability Zone.

C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica.

D. Migrate the database to RDS Custom.

E. Use RDS Proxy to limit reporting requests to the maintenance window.

Correct Answer: AC

Community vote distribution


AC (100%)
  elearningtakai Highly Voted  6 months ago
A and C are the correct choices.
B. It will not help improve the performance of the report process.
D. Migrating to RDS Custom does not address the issue of high availability and automatic recovery.
E. RDS Proxy can help with scalability and high availability but it does not address the issue of performance for the report process. Limiting
the reporting requests to the maintenance window will not provide the required availability and recovery for the DB instance.
upvoted 5 times

  Guru4Cloud Most Recent  3 weeks, 6 days ago


Selected Answer: AC
The correct answers are A and C.

A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment. This will provide high availability and automatic
recovery for the DB instance. If the primary DB instance fails, the standby DB instance will automatically become the primary DB instance.
This will ensure that the database is always available.

C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica. This will
improve the performance of the report process by offloading the read traffic from the primary DB instance to the read replica. The read
replica is kept up to date through asynchronous replication, so reports can run against current data without slowing transaction processing on the primary.
upvoted 1 times

  elearningtakai 6 months ago


Selected Answer: AC
A and C.
upvoted 2 times

  WherecanIstart 6 months, 2 weeks ago


Selected Answer: AC
Options A & C...
upvoted 3 times

  KZM 7 months, 1 week ago


Options A+C
upvoted 2 times

  bdp123 7 months, 1 week ago


Selected Answer: AC
https://medium.com/awesome-cloud/aws-difference-between-multi-az-and-read-replicas-in-amazon-rds-60fe848ef53a
upvoted 2 times

  jennyka76 7 months, 1 week ago


ANSWER - A & C
upvoted 3 times
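A minimal boto3 sketch of the two chosen steps, using a placeholder instance identifier and Availability Zone; read replicas on RDS for SQL Server require an edition that supports them, so treat this as an illustration of the API calls rather than a drop-in script.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

PRIMARY_ID = "sqlserver-transactions"  # placeholder instance identifier

# Step A: convert the Single-AZ instance to Multi-AZ for high availability
# and automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier=PRIMARY_ID,
    MultiAZ=True,
    ApplyImmediately=True,  # or defer the change to the next maintenance window
)

# Step C: create a read replica in a different Availability Zone and point the
# reporting jobs at its endpoint so they stop slowing down customer transactions.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier=f"{PRIMARY_ID}-reports",
    SourceDBInstanceIdentifier=PRIMARY_ID,
    AvailabilityZone="us-east-1b",  # any AZ other than the primary's
)
print("Reporting replica:", replica["DBInstance"]["DBInstanceIdentifier"])
```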
