AWS Certified DevOps Engineer Professional Practice Exams

AWS Certified DevOps Engineer Professional Practice Test 1 - Results

Question 1: Incorrect

A company is re-architecting its monolithic system to a serverless application in AWS to save on cost. The
deployment of the succeeding new version of the application must be initially rolled out to a small number of
users first for testing before the full release. If the post-hook tests fail, there should be an easy way to roll back
the deployment. The DevOps Engineer was assigned to design an efficient deployment setup that mitigates any
unnecessary outage that impacts their production environment.

As a DevOps Engineer, how should you satisfy this requirement? (Select TWO.)

Set up a canary deployment in Amazon API Gateway that routes 20% of the incoming traffic to the canary
release. Promote the canary release to production once the initial tests have passed.

(Correct)

Launch a Network Load Balancer with an Amazon API Gateway private integration. Attach two target
groups to the load balancer. Configure the first target group with the current version and the second target
group with the new version. Configure the load balancer to route 20% of the incoming traffic to the new
version and once it becomes stable, detach the first target group from the load balancer.

Create a new record in Route 53 with a Failover routing policy. Configure the primary record to route
20% of incoming traffic to the new version and set the secondary record to route the rest of the traffic to
the current version. Once the new version stabilizes, update the primary record to route all traffic to the
new version.

Set up one AWS Lambda Function Alias that points to both the current and new versions. Route 20% of
incoming traffic to the new version and once it is considered stable, update the alias to route all traffic to
the new version.

(Correct)

Launch an Application Load Balancer with an Amazon API Gateway private integration. Attach a single
target group to the load balancer and select the "Canary" routing option which will automatically route
incoming traffic to the new version.

Explanation

You can create one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a
specific Lambda function version. Each alias has a unique ARN. An alias can only point to a function version,
not to another alias. You can update an alias to point to a new version of the function. Event sources such as
Amazon S3 invoke your Lambda function. These event sources maintain a mapping that identifies the function
to invoke when events occur. If you specify a Lambda function alias in the mapping configuration, you don't
need to update the mapping when the function version changes. In a resource policy, you can grant permissions
for event sources to use your Lambda function. If you specify an alias ARN in the policy, you don't need to
update the policy when the function version changes.
Use routing configuration on an alias to send a portion of traffic to a second function version. For example, you
can reduce the risk of deploying a new version by configuring the alias to send most of the traffic to the existing
version, and only a small percentage of traffic to the new version. You can point an alias to a maximum of two
Lambda function versions.
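
As a rough illustration of the weighted alias routing described above, the boto3 sketch below shifts 20% of traffic to a new function version and later promotes it. The function name ratings-api, the alias name live, and the version numbers are hypothetical and not taken from the scenario.

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep the alias pointed at the current version while shifting 20% of
# invocations to the new version (weighted alias routing).
lambda_client.update_alias(
    FunctionName="ratings-api",   # hypothetical function name
    Name="live",                  # hypothetical alias name
    FunctionVersion="1",          # current (stable) version
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.20}},  # 20% to version 2
)

# Once the new version is considered stable, promote it to receive all traffic.
lambda_client.update_alias(
    FunctionName="ratings-api",
    Name="live",
    FunctionVersion="2",
    RoutingConfig={"AdditionalVersionWeights": {}},  # remove the weighted split
)
```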

In API Gateway, you create a canary release deployment when deploying the API with canary settings as an
additional input to the deployment creation operation.

You can also create a canary release deployment from an existing non-canary deployment by making
a stage:update request to add the canary settings on the stage.

When creating a non-canary release deployment, you can specify a non-existing stage name. API Gateway
creates one if the specified stage does not exist. However, you cannot specify any non-existing stage name when
creating a canary release deployment. You will get an error and API Gateway will not create any canary release
deployment.
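
For reference, a minimal boto3 sketch of the API Gateway canary flow might look like the following. The REST API ID and stage name are hypothetical, and the patch operations shown for promotion are one possible form; see the canary-release documentation linked below for the exact steps.

```python
import boto3

apigw = boto3.client("apigateway")

# Deploy a new API version as a canary that receives 20% of incoming traffic
# on the existing "prod" stage.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",       # hypothetical REST API ID
    stageName="prod",
    canarySettings={
        "percentTraffic": 20.0,    # route 20% of requests to the canary deployment
        "useStageCache": False,
    },
)

# After the initial tests pass, promote the canary so the stage serves the new
# deployment for all traffic, then drop the canary settings.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "copy", "from": "/canarySettings/deploymentId", "path": "/deploymentId"},
        {"op": "remove", "path": "/canarySettings"},
    ],
)
```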

Hence, the correct answers are:

- Set up one AWS Lambda Function Alias that points to both the current and new versions. Route 20% of
incoming traffic to the new version and once it is considered stable, update the alias to route all traffic to
the new version.

- Set up a canary deployment in Amazon API Gateway that routes 20% of the incoming traffic to the
canary release. Promote the canary release to production once the initial tests have passed.

The option that says: Launch an Application Load Balancer with an Amazon API Gateway private
integration. Attach a single target group to the load balancer and select the "Canary" routing option
which will automatically route incoming traffic to the new version is incorrect because there is no Canary
routing option in an Application Load Balancer.

The option that says: Launch a Network Load Balancer with an Amazon API Gateway private integration.
Attach two target groups to the load balancer. Configure the first target group with the current version
and the second target group with the new version. Configure the load balancer to route 20% of the
incoming traffic to the new version and once it becomes stable, detach the first target group from the load
balancer is incorrect because the Network Load Balancer does not support weighted target groups, unlike the
Application Load Balancer.

The option that says: Create a new record in Route 53 with a Failover routing
policy. Configure the primary record to route 20% of incoming traffic to the new version and set the
secondary record to route the rest of the traffic to the current version. Once the new version stabilizes,
update the primary record to route all traffic to the new version is incorrect because the failover routing
policy simply lets you route traffic to a resource when the resource is healthy, or to a different resource when the
first resource is unhealthy. This type of routing is not an appropriate setup. A better solution is to use a canary
release deployment in API Gateway to deploy the serverless application.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html

Check out this Amazon API Gateway Cheat Sheet:

https://tutorialsdojo.com/amazon-api-gateway/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 2: Correct

An insurance firm has its on-premises data center integrated with its VPC in AWS Cloud via a Direct Connect
connection. All of their financial records are stored on-premises and then uploaded to an Amazon S3 bucket for
backup purposes. There is a new mandate by the IT Security team that only a trusted group of administrators can
change the objects in the bucket including the object permissions. The team instructed a DevOps Engineer to
develop a solution that provides near real-time, automated checks to monitor their data.

How should the Engineer satisfy this requirement?

Set up Amazon S3 data events in CloudTrail to track any object changes. Create a rule to run your
Lambda function in response to an Amazon S3 data event that checks if the user who initiated the change
is an administrator.

(Correct)

Create a new AWS Config rule with a Periodic trigger type that queries the Amazon S3 Server Access
Logs for changes on a regular basis. Configure the rule to check if the user who initiated the change is an
administrator.

Create a new AWS Config rule with a Configuration changes trigger type that tracks any changes to the
Amazon S3 bucket-level permissions. Modify the rule to use Step Functions to check if the user who
initiated the change is an administrator.

Use Amazon CloudWatch to monitor any changes to the Amazon S3 object-level and bucket-level
permissions. Create a custom metric that checks if the user who initiated the change is an administrator.

Explanation

When an event occurs in your account, CloudTrail evaluates whether the event matches the settings for your
trails. Only events that match your trail settings are delivered to your Amazon S3 bucket and Amazon
CloudWatch Logs log group.

You can configure your trails to log the following:

Data events: These events provide insight into the resource operations performed on or within a resource. These
are also known as data plane operations.

Management events: Management events provide insight into management operations that are performed on
resources in your AWS account. These are also known as control plane operations. Management events can also
include non-API events that occur in your account. For example, when a user logs in to your account, CloudTrail
logs the ConsoleLogin event.
You can configure multiple trails differently so that the trails process and log only the events that you specify.
For example, one trail can log read-only data and management events so that all read-only events are delivered
to one S3 bucket. Another trail can log only write-only data and management events, so that all write-only events
are delivered to a separate S3 bucket.

You can also configure your trails to have one trail log and deliver all management events to one S3 bucket and
configure another trail to log and deliver all data events to another S3 bucket.

Data events provide insight into the resource operations performed on or within a resource. These are also
known as data plane operations. Data events are often high-volume activities.

Example data events include:

-Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations)

-AWS Lambda function execution activity (the Invoke API)

Data events are disabled by default when you create a trail. To record CloudTrail data events, you must
explicitly add the supported resources or resource types for which you want to collect activity to a trail.
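
A minimal boto3 sketch of enabling S3 data events on an existing trail is shown below; the trail and bucket names are hypothetical. The resulting events can then be matched by a CloudWatch Events rule that invokes the Lambda function which checks the identity of the caller.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Enable S3 data (object-level) events on an existing trail so that every
# write to the backup bucket is recorded.
cloudtrail.put_event_selectors(
    TrailName="financial-records-trail",           # hypothetical trail name
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",          # capture PutObject, DeleteObject, PutObjectAcl, etc.
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # The trailing slash scopes logging to all objects in the bucket.
                    "Values": ["arn:aws:s3:::financial-records-backup/"],
                }
            ],
        }
    ],
)
```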

Hence, the correct answer is: Set up Amazon S3 data events in CloudTrail to track any object changes.
Create a rule to run your Lambda function in response to an Amazon S3 data event that checks if the user
who initiated the change is an administrator.

The option that says: Create a new AWS Config rule with a Periodic trigger type that queries the Amazon
S3 Server Access Logs for changes on a regular basis. Configure the rule to check if the user who initiated
the change is an administrator is incorrect because Amazon S3 server access logging only provides detailed
records for requests made to the S3 bucket and not for IAM policy changes. Moreover, the
AWS Config rule will only run on a specific schedule which means that it won't provide near real-time
monitoring in comparison to Amazon S3 data events.

The option that says: Use Amazon CloudWatch to monitor any changes to the Amazon S3 object-level and
bucket-level permissions. Create a custom metric that checks if the user who initiated the change is an
administrator is incorrect because CloudWatch is not a suitable service to use in monitoring your configuration
changes. Moreover, you can't directly create a custom metric that automatically checks if the user, who initiated
the change, is indeed an administrator.

The option that says: Create a new AWS Config rule with a Configuration changes trigger type that tracks
any changes to the Amazon S3 bucket-level permissions. Modify the rule to use Step Functions to check if the
user who initiated the change is an administrator is incorrect because you can't directly use Step Functions in
conjunction with AWS Config. Although this type of trigger will cause AWS Config to run evaluations whenever
there is a change in Amazon S3, using Amazon S3 data events is still the preferred way to track the
changes in near real-time.

References:

https://aws.amazon.com/blogs/aws/cloudtrail-update-capture-and-process-amazon-s3-object-level-api-activity/

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#example-logging-all-S3-objects

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html

Check out this AWS CloudTrail Cheat Sheet:

https://tutorialsdojo.com/aws-cloudtrail/

Question 3: Incorrect

You have created a mobile app that allows users to rate and review hotels they have stayed in. The app is hosted
on Docker containers deployed on Amazon ECS. You need to run validation scripts for new deployments to
determine if the application is working as expected before allowing production traffic to flow to the new app
version. You also want to configure automatic rollback if the validation is not successful.

Which of the following options should you choose to implement this validation?

Create your validation scripts in AWS Lambda and define them on the AfterAllowTraffic lifecycle hook
of the AppSpec.yaml file. The functions can validate the deployment using production traffic and rollback
if the tests fail.

Create your validation scripts in AWS Lambda and define them on the AfterAllowTestTraffic lifecycle
hook of the AppSpec.yaml file. The functions can validate the deployment using the test traffic and
rollback if the tests fail.

(Correct)

Create your validation scripts in AWS Lambda and define them on the AfterInstall lifecycle hook of
the AppSpec.yaml file. The functions can validate the deployment after installing the new version and
rollback if the tests fail.

Create your validation scripts in AWS Lambda and define them on the BeforeAllowTraffic lifecycle hook
of the AppSpec.yaml file. The functions can validate the deployment before allowing production traffic
and rollback if the tests fail.

Explanation

You can use a Lambda function to validate part of the deployment of an updated Amazon ECS application.
During an Amazon ECS deployment with validation tests, CodeDeploy can be configured to use a load balancer
with two target groups: one production traffic listener and one test traffic listener. To add a validation test, you
first implement the test in a Lambda function. Next, in your deployment AppSpec file, you specify the Lambda
function for the lifecycle hook you want to test. If a validation test fails, the deployment stops, it is rolled back,
and marked as failed. If the test succeeds, the deployment continues to the next deployment lifecycle event or
hook.

The content in the 'hooks' section of the AppSpec file varies depending on the compute platform for your
deployment. The 'hooks' section for an EC2/On-Premises deployment contains mappings that link deployment
lifecycle event hooks to one or more scripts. The 'hooks' section for a Lambda or an Amazon ECS deployment
specifies Lambda validation functions to run during a deployment lifecycle event. If an event hook is not
present, no operation is executed for that event.

When the deployment starts, the deployment lifecycle events start to execute one at a time. Some lifecycle
events are hooks that only execute Lambda functions specified in the AppSpec file. An AWS Lambda hook is
one Lambda function specified with a string on a new line after the name of the lifecycle event. On the
AfterAllowTestTraffic hook, you can specify Lambda functions that can validate the deployment using the test
traffic. For example, a Lambda function can serve traffic to the test listener and track metrics from the
replacement task set. If rollbacks are configured, you can configure a CloudWatch alarm that triggers a rollback
when the validation test in your Lambda function fails. After the validation tests are complete, one of the
following occurs:

-If validation fails and rollbacks are configured, the deployment status is marked Failed, and components return
to their state when the deployment started.

-If validation fails and rollbacks are not configured, the deployment status is marked Failed, and components
remain in their current state.

-If validation succeeds, the deployment continues to the BeforeAllowTraffic hook.
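
The following is a minimal sketch, in Python, of what an AfterAllowTestTraffic validation function could look like; the test endpoint URL is hypothetical. The hook reports its result back to CodeDeploy with put_lifecycle_event_hook_execution_status, which is what drives the rollback behavior described above.

```python
import boto3
import urllib.request

codedeploy = boto3.client("codedeploy")

# Hypothetical endpoint exposed through the load balancer's test traffic listener.
TEST_ENDPOINT = "http://my-test-listener.example.com:8080/health"

def handler(event, context):
    """Validation hook invoked by CodeDeploy on AfterAllowTestTraffic."""
    status = "Succeeded"
    try:
        # Hit the replacement task set through the test listener and check the response.
        with urllib.request.urlopen(TEST_ENDPOINT, timeout=5) as resp:
            if resp.status != 200:
                status = "Failed"
    except Exception:
        status = "Failed"

    # Report the result back to CodeDeploy; a "Failed" status triggers a rollback
    # when automatic rollbacks are configured on the deployment group.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```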

Hence, the correct answer is: Create your validation scripts in AWS Lambda and define them on
the AfterAllowTestTraffic lifecycle hook of the AppSpec.yaml file. The functions can validate the
deployment using the test traffic and rollback if the tests fail.

The option that says: Create your validation scripts in AWS Lambda and define them on the AfterInstall
lifecycle hook of the AppSpec.yaml file. The functions can validate the deployment after installing the new
version and rollback if the tests fail is incorrect because in the AfterInstall lifecycle hook, the new task version
is not yet attached to the test listener of the Application Load Balancer; therefore, no traffic is flowing to this
cluster yet.

The option that says: Create your validation scripts in AWS Lambda and define them on
the BeforeAllowTraffic lifecycle hook of the AppSpec.yaml file. The functions can validate the deployment
before allowing production traffic and rollback if the tests fail is incorrect because in the BeforeAllowTraffic
lifecycle hook, validation has already succeeded, which defeats its purpose. You have to use this hook to
perform additional actions before allowing production traffic to flow to the new task version.

The option that says: Create your validation scripts in AWS Lambda and define them on
the AfterAllowTraffic lifecycle hook of the AppSpec.yaml file. The functions can validate the deployment
using production traffic and rollback if the tests fail is incorrect because in the AfterAllowTraffic lifecycle
hook, production traffic is rerouted from the old task set to the new task set. You want to verify the
application before opening it to production traffic and not after.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-ecs

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens

https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorial-ecs-deployment-with-hooks.html

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Question 4: Correct

You are working as a DevOps Engineer in a leading aerospace engineering company that has a hybrid cloud
architecture that connects its on-premises data center with AWS via Direct Connect Gateway. There is a new
requirement in which you have to implement an automated OS patching solution for all of the Windows servers
hosted on-premises as well as in AWS Cloud. The AWS Systems Manager service should be utilized to
automate the patching of your servers.

Which combination of steps should you set up to satisfy this requirement? (Select TWO.)

Download and install the SSM Agent on the hybrid servers by using the activation codes and activation
IDs that you obtained. Register the servers or virtual machines on-premises to the AWS Systems Manager
service. In the SSM console, the hybrid instances will show with an mi- prefix. Apply the patches using
the Systems Manager Patch Manager.

(Correct)

Set up several IAM service roles for AWS Systems Manager to enable the service to execute the
STS AssumeRoleWithSAML operation. Allow the generation of service tokens by registering the IAM role.
Use the service role to perform the managed-instance activation.

Set up a single IAM service role for AWS Systems Manager to enable the service to execute the
STS AssumeRole operation. Allow the generation of service tokens by registering the IAM role. Use the
service role to perform the managed-instance activation.

(Correct)

Download and install the SSM Agent on the hybrid servers by using the activation codes and activation
IDs that you obtained. Register the servers or virtual machines on-premises to the AWS Systems Manager
service. In the SSM console, the hybrid instances will show with an i- prefix. Apply the patches using the
Systems Manager State Manager.

Download and install the SSM Agent on the hybrid servers by using the activation codes and activation
IDs that you obtained. Register the servers or virtual machines on-premises to the AWS Systems Manager
service. In the SSM console, the hybrid instances will show with an i- prefix. Apply the patches using the
Systems Manager Patch Manager.

Explanation

A hybrid environment includes on-premises servers and virtual machines (VMs) that have been configured for
use with Systems Manager, including VMs in other cloud environments. After following the steps below, the
users who have been granted permissions by the AWS account administrator can use AWS Systems Manager to
configure and manage their organization's on-premises servers and virtual machines (VMs).
To configure your hybrid servers and VMs for AWS Systems Manager, follow these steps:

1. Complete General Systems Manager Setup Steps
2. Create an IAM Service Role for a Hybrid Environment
3. Install a TLS certificate on On-Premises Servers and VMs
4. Create a Managed-Instance Activation for a Hybrid Environment
5. Install SSM Agent for a Hybrid Environment (Windows)
6. Install SSM Agent for a Hybrid Environment (Linux)
7. (Optional) Enable the Advanced-Instances Tier

Configuring your hybrid environment for Systems Manager enables you to do the following:

-Create a consistent and secure way to remotely manage your hybrid workloads from one location using the
same tools or scripts.

-Centralize access control for actions that can be performed on your servers and VMs by using AWS Identity
and Access Management (IAM).

-Centralize auditing and your view into the actions performed on your servers and VMs by recording all actions
in AWS CloudTrail.

-Centralize monitoring by configuring CloudWatch Events and Amazon SNS to send notifications about service
execution success.

After you finish configuring your servers and VMs for Systems Manager, your hybrid machines are listed in the
AWS Management Console and described as managed instances. Amazon EC2 instances configured for
Systems Manager are also described as managed instances. In the console, however, the IDs of your hybrid
instances are distinguished from Amazon EC2 instances with the prefix "mi-". Amazon EC2 instance IDs use the
prefix "i-".

Servers and virtual machines (VMs) in a hybrid environment require an IAM role to communicate with the
Systems Manager service. The role grants AssumeRole trust to the Systems Manager service. You only need to
create the service role for a hybrid environment once for each AWS account.
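
A minimal boto3 sketch of creating the service role and the managed-instance activation might look like the following; the role name, description, and registration limit are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Service role that Systems Manager assumes on behalf of the on-premises servers.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ssm.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SSMServiceRoleForHybrid",            # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="SSMServiceRoleForHybrid",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Managed-instance activation: the returned code and ID are used when
# registering the SSM Agent on each on-premises Windows server.
activation = ssm.create_activation(
    Description="Hybrid Windows servers",
    IamRole="SSMServiceRoleForHybrid",
    RegistrationLimit=50,                          # hypothetical number of servers
)
print(activation["ActivationId"], activation["ActivationCode"])
```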

Hence, the correct answers are:

- Set up a single IAM service role for AWS Systems Manager to enable the service to execute the
STS AssumeRole operation. Allow the generation of service tokens by registering the IAM role. Use the
service role to perform the managed-instance activation.

- Download and install the SSM Agent on the hybrid servers by using the activation codes and activation
IDs that you obtained. Register the servers or virtual machines on-premises to the AWS Systems Manager
service. In the SSM console, the hybrid instances will show with an mi- prefix. Apply the patches using the
Systems Manager Patch Manager.

The option that says: Set up several IAM service roles for AWS Systems Manager to enable the service to
execute the STS AssumeRoleWithSAML operation. Allow the generation of service tokens by registering the
IAM role. Use the service role to perform the managed-instance activation is incorrect because you have to
execute the AssumeRole operation instead and not the AssumeRoleWithSAML operation. Moreover, you only need
to set up a single IAM service role.

The option that says: Download and install the SSM Agent on the hybrid servers by using the activation
codes and activation IDs that you obtained. Register the servers or virtual machines on-premises to the
AWS Systems Manager service. In the SSM console, the hybrid instances will show with an mi- prefix.
Apply the patches using the Systems Manager Patch Manager is incorrect because the hybrid instances will
show with an mi- prefix in the SSM console and not with an i- prefix.

The option that says: Download and install the SSM Agent on the hybrid servers by using the activation
codes and activation IDs that you obtained. Register the servers or virtual machines on-premises to the
AWS Systems Manager service. In the SSM console, the hybrid instances will show with an i- prefix.
Apply the patches using the Systems Manager State Manager is incorrect because the AWS Systems
Manager State Manager is just a secure and scalable configuration management service that automates the
process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define. You have to apply the
patches using the Systems Manager Patch Manager instead. In addition, the hybrid instances will show with
an mi- prefix and not with an i- prefix.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-service-role.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 5: Incorrect

A leading game development company is planning to host its latest video game on AWS. It is expected that there
will be millions of users around the globe that will play the game. The architecture should allow players to send
or receive data on the backend in real-time for a better gaming experience. The application libraries and user
data of the game must also comply with the data residency requirement wherein all files must remain in the same
region. CodeCommit, CodeBuild, CodeDeploy, and CodePipeline should be utilized to build the CI/CD process.

As a DevOps Engineer, which of the following is the MOST suitable and efficient solution that you should
implement to satisfy this requirement?

Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the source on multiple
AWS regions. Configure the repository of the pipeline for each region to trigger the build and deployment
actions whenever there is a new code update. The pipeline will automatically store output files on a
default artifact bucket on each region.

Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the source in your
primary AWS region. Configure the repository to trigger the build and deployment actions whenever there
is a new code update. Using the AWS Management Console, set up the pipeline to use cross-AZ actions
that will run the build and deployment actions to other Availability Zones. The pipeline will automatically
store output files on a default artifact bucket on each AZ.

Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the source in your
primary AWS region. Set a CodeBuild test action to run the automated unit and integration tests.
Configure the repository to trigger the build and deployment actions whenever there is a new code update.
Using the AWS Management Console, set up the pipeline to use cross-region actions that will run the
build and deployment actions to other regions. Manually configure the pipeline to automatically store
output files on a single S3 bucket.

Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the source in your
primary AWS region. Configure the repository to trigger the build and deployment actions whenever there
is a new code update. Using the AWS Management Console, set up the pipeline to use cross-region
actions that will run the build and deployment actions to other regions. The pipeline will automatically
store output files on a default artifact bucket on each region.

(Correct)

Explanation

AWS CodePipeline includes a number of actions that help you configure build, test, and deploy resources for
your automated release process. You can add actions to your pipeline that are in an AWS Region different from
your pipeline. When an AWS service is the provider for an action, and this action type/provider type are in a
different AWS Region from your pipeline, this is a cross-region action. Certain action types in CodePipeline
may only be available in certain AWS Regions. Also note that there may be AWS Regions where an action type is
available, but a specific AWS provider for that action type is not available.

You can use the console, AWS CLI, or AWS CloudFormation to add cross-region actions in pipelines. If you
use the console to create a pipeline or cross-region actions, default artifact buckets are configured by
CodePipeline in the Regions where you have actions. When you use the AWS CLI, AWS CloudFormation, or an
SDK to create a pipeline or cross-region actions, you provide the artifact bucket for each Region where you have
actions. You must create the artifact bucket and encryption key in the same AWS Region as the cross-region
action and in the same account as your pipeline.
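
As an illustration of the per-Region artifact stores, the fragment below shows how the pipeline structure passed to create_pipeline (or update_pipeline) could declare them when using an SDK; the pipeline name, Regions, bucket names, and account ID are hypothetical.

```python
# Fragment of the pipeline structure passed to codepipeline.create_pipeline() or
# update_pipeline() when cross-region actions are used: one artifact store per
# Region that hosts an action.
pipeline_fragment = {
    "name": "game-backend-pipeline",
    "artifactStores": {
        "us-east-1": {                       # the pipeline's home Region
            "type": "S3",
            "location": "codepipeline-artifacts-us-east-1-111111111111",
        },
        "eu-west-1": {                       # Region of a cross-region deploy action
            "type": "S3",
            "location": "codepipeline-artifacts-eu-west-1-111111111111",
        },
    },
    # "roleArn" and "stages" are omitted here; the deploy action targeting
    # eu-west-1 sets its "region" field so CodePipeline replicates the input
    # artifacts to that Region's artifact bucket.
}
```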

You cannot create cross-region actions for the following action types: source actions, third-party actions, and
custom actions. When a pipeline includes a cross-region action as part of a stage, CodePipeline replicates only
the input artifacts of the cross-region action from the pipeline Region to the action's Region. The pipeline Region
and the Region where your CloudWatch Events change detection resources are maintained remain the same. The
Region where your pipeline is hosted does not change.

Hence, the correct answer is: Create a new pipeline using AWS CodePipeline and a CodeCommit repository
as the source in your primary AWS region. Configure the repository to trigger the build and deployment
actions whenever there is a new code update. Using the AWS Management Console, set up the pipeline to
use cross-region actions that will run the build and deployment actions to other regions. The pipeline will
automatically store output files on a default artifact bucket on each region.

The option that says: Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the
source on multiple AWS regions. Configure the repository of the pipeline for each region to trigger the
build and deployment actions whenever there is a new code update. The pipeline will automatically store
output files on a default artifact bucket on each region is incorrect because this deployment setup is not
efficient since you have to maintain several pipelines in multiple AWS regions. A better solution would be to
simply configure cross-region actions to build and deploy the application on multiple regions.
The option that says: Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the
source in your primary AWS region. Configure the repository to trigger the build and deployment actions
whenever there is a new code update. Using the AWS Management Console, set up the pipeline to use
cross-AZ actions that will run the build and deployment actions to other Availability Zones. The pipeline
will automatically store output files on a default artifact bucket on each AZ is incorrect because there is no
such thing as cross-AZ actions but only cross-region actions. You can build, test, and deploy resources to other
AWS regions by adding cross-region actions in your pipeline.

The option that says: Create a new pipeline using AWS CodePipeline and a CodeCommit repository as the
source in your primary AWS region. Set a CodeBuild test action to run the automated unit and
integration tests. Configure the repository to trigger the build and deployment actions whenever there is a
new code update. Using the AWS Management Console, set up the pipeline to use cross-region actions that
will run the build and deployment actions to other regions. Manually configure the pipeline to
automatically store output files on a single S3 bucket is incorrect because this will violate the data residency
requirement. Take note that there is a requirement that the application libraries and user data of the game must
remain in the same region.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-cross-region.html

https://aws.amazon.com/getting-started/projects/set-up-ci-cd-pipeline/

https://aws.amazon.com/about-aws/whats-new/2018/11/aws-codepipeline-now-supports-cross-region-actions/

https://aws.amazon.com/blogs/devops/building-a-ci-cd-pipeline-for-multi-region-deployment-with-aws-codepipeline/

Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Question 6: Correct

A company is developing a serverless application that uses AWS Lambda, AWS SAM, and Amazon API
Gateway. There is a requirement to fully automate the backend Lambda deployment in such a way that the
deployment will automatically run whenever a new commit is pushed to an AWS CodeCommit repository. There
should also be a separate environment pipeline for TEST and PROD environments. In addition, the TEST
environment should be the only one that allows automatic deployment.

As a DevOps Engineer, how can you satisfy these requirements?

Create a new pipeline using AWS CodePipeline and a new CodeCommit repository for each environment.
Configure CodePipeline to retrieve the application source code from the appropriate repository. Deploy
the Lambda functions with AWS CloudFormation by creating a deployment step in the pipeline.

Set up two pipelines using AWS CodePipeline for TEST and PROD environments. In the PROD pipeline,
set up a manual approval step for application deployment. Set up separate CodeCommit repositories for
each environment and configure each pipeline to retrieve the source code from CodeCommit. Deploy the
Lambda functions with AWS CloudFormation by setting up a deployment step in the pipeline.

Set up two pipelines using AWS CodePipeline for TEST and PROD environments. Configure the PROD
pipeline to have a manual approval step. Create a CodeCommit repository with a branch for each
environment and configure the pipeline to retrieve the source code from CodeCommit according to its
branch. Deploy the Lambda functions with AWS CloudFormation by setting up a deployment step in the
pipeline.

(Correct)

Set up two pipelines using AWS CodePipeline for TEST and PROD environments. On both pipelines, set
up a manual approval step for application deployment. Set up separate CodeCommit repositories for each
environment and configure each pipeline to retrieve the source code from CodeCommit. Deploy the
Lambda functions with AWS CloudFormation by setting up a deployment step in the pipeline.

Explanation

In AWS CodePipeline, you can add an approval action to a stage in a pipeline at the point where you want the
pipeline execution to stop so that someone with the required AWS Identity and Access Management permissions
can approve or reject the action.

If the action is approved, the pipeline execution resumes. If the action is rejected—or if no one approves or
rejects the action within seven days of the pipeline reaching the action and stopping—the result is the same as an
action failing, and the pipeline execution does not continue.

You might use manual approvals for these reasons:

- You want someone to perform a code review or change management review before a revision is allowed into
the next stage of a pipeline.

- You want someone to perform manual quality assurance testing on the latest version of an application, or to
confirm the integrity of a build artifact, before it is released.

- You want someone to review new or updated text before it is published to a company website.
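
For illustration, a manual approval action is declared as a stage in the pipeline structure, and a reviewer's decision is recorded with put_approval_result. The sketch below uses hypothetical pipeline, stage, and action names, and the token placeholder stands in for the value CodePipeline supplies with the approval request.

```python
import boto3

# Stage definition fragment for the PROD pipeline: a manual approval gate placed
# before the CloudFormation deploy action.
approval_stage = {
    "name": "ApproveProdDeploy",
    "actions": [{
        "name": "ManualApproval",
        "actionTypeId": {
            "category": "Approval",
            "owner": "AWS",
            "provider": "Manual",
            "version": "1",
        },
        "runOrder": 1,
    }],
}

# A reviewer (or automation acting on their behalf) records the decision.
codepipeline = boto3.client("codepipeline")
codepipeline.put_approval_result(
    pipelineName="serverless-backend-prod",        # hypothetical pipeline name
    stageName="ApproveProdDeploy",
    actionName="ManualApproval",
    result={"summary": "Reviewed change set", "status": "Approved"},
    token="approval-token-from-notification",      # placeholder; supplied by CodePipeline
)
```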

Hence, the correct answer is: Set up two pipelines using AWS CodePipeline for TEST and PROD
environments. Configure the PROD pipeline to have a manual approval step. Create a CodeCommit
repository with a branch for each environment and configure the pipeline to retrieve the source code from
CodeCommit according to its branch. Deploy the Lambda functions with AWS CloudFormation by setting up
a deployment step in the pipeline.

The option that says: Create a new pipeline using AWS CodePipeline and a new CodeCommit repository for
each environment. Configure CodePipeline to retrieve the application source code from the appropriate
repository. Deploy the Lambda functions with AWS CloudFormation by creating a deployment step in the
pipeline is incorrect because you should add a manual approval step on the PROD pipeline as mentioned in the
requirements of the scenario.

The option that says: Set up two pipelines using AWS CodePipeline for TEST and PROD environments. In the
PROD pipeline, set up a manual approval step for application deployment. Set up separate CodeCommit
repositories for each environment and configure each pipeline to retrieve the source code from CodeCommit.
Deploy the Lambda functions with AWS CloudFormation by setting up a deployment step in the pipeline is
incorrect because you don't need to create separate CodeCommit repositories for the two environments. You just
need to create two different branches from a single repository.

The option that says: Set up two pipelines using AWS CodePipeline for TEST and PROD environments. On
both pipelines, set up a manual approval step for application deployment. Set up separate CodeCommit
repositories for each environment and configure each pipeline to retrieve the source code from CodeCommit.
Deploy the Lambda functions with AWS CloudFormation by setting up a deployment step in the pipeline is
incorrect because you should add the manual approval step on the PROD pipeline only, excluding the TEST
pipeline. Moreover, you don't need to create separate CodeCommit repositories for the two environments. You
just need to create two different branches from a single repository.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html

Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 7: Correct

A commercial bank has a hybrid cloud architecture in AWS where its online banking platform is hosted. The
CTO instructed its Lead DevOps Engineer to implement a suitable deployment set up that minimizes the impact
on their production environment. The CI/CD process should be configured as follows:

- A new fleet of Amazon EC2 instances should be automatically launched first before the actual production
deployment. The additional instances will serve traffic during the deployment.

- All available EC2 instances across various Availability Zones must be load-balanced and must automatically
heal if they become impaired due to an underlying hardware failure in Amazon EC2.

- At least half of the incoming traffic must be rerouted to the new application version that is hosted to the new
instances.

- The deployment should be considered successful if traffic is rerouted to at least half of the available EC2
instances.

- All temporary files must be deleted before routing traffic to the new fleet of instances. Ensure that any other
files that were automatically generated during the deployment process are removed.
- To reduce costs, the EC2 instances that host the old version in the deployment group must be terminated
immediately.

What should the Engineer do to satisfy these requirements?

Launch an Application Load Balancer and use a blue/green deployment for releasing new application
versions. Associate the Auto Scaling group and the Application Load Balancer target group with the
deployment group. Create a custom deployment configuration for the deployment group in CodeDeploy
with minimum healthy hosts defined as 50% and configure it to also terminate the original instances in the
Auto Scaling group after deployment. Use the BeforeAllowTraffic hook within appspec.yml to
purge the temporary files.

(Correct)

Launch an Application Load Balancer and use in-place deployment for releasing new application
versions. Use the CodeDeployDefault.OneAtATime as the deployment configuration and associate the Auto
Scaling group with the deployment group. Configure AWS CodeDeploy to terminate all EC2 instances in
the original Auto Scaling group and use the AllowTraffic hook within the appspec.yml configuration file
to purge the temporary files.

Launch an Application Load Balancer and use an in-place deployment for releasing new application
versions. Associate the Auto Scaling group and Application Load Balancer target group with the
deployment group. In CodeDeploy, use the CodeDeployDefault.AllAtOnce as a
deployment configuration and add a configuration to terminate the original instances in the Auto Scaling
group after the deployment. Use the BlockTraffic hook within appspec.yml to purge the temporary files.

Launch an Application Load Balancer and use a blue/green deployment for releasing new application
versions. Associate the Auto Scaling group and the Application Load Balancer target group with the
deployment group. In CodeDeploy, use the CodeDeployDefault.HalfAtATime as the deployment
configuration and configure it to terminate the original instances in the Auto Scaling group after
deployment. Use the BlockTraffic hook within appspec.yml to purge the temporary files.

Explanation

The content in the 'hooks' section of the AppSpec file varies, depending on the compute platform for your
deployment. The 'hooks' section for an EC2/On-Premises deployment contains mappings that link deployment
lifecycle event hooks to one or more scripts. The 'hooks' section for a Lambda or an Amazon ECS deployment
specifies Lambda validation functions to run during a deployment lifecycle event. If an event hook is not
present, no operation is executed for that event. This section is required only if you are running scripts or
Lambda validation functions as part of the deployment.

An EC2/On-Premises deployment hook is executed once per deployment to an instance. You can specify one or
more scripts to run in a hook. Each hook for a lifecycle event is specified with a string on a separate line. Here
are descriptions of the hooks available for use in your AppSpec file.

ApplicationStop – This deployment lifecycle event occurs even before the application revision is downloaded.
You can specify scripts for this event to gracefully stop the application or remove currently installed packages in
preparation for a deployment. The AppSpec file and scripts used for this deployment lifecycle event are from the
previous successfully deployed application revision.
DownloadBundle – During this deployment lifecycle event, the CodeDeploy agent copies the application revision
files to a temporary location:

/opt/codedeploy-agent/deployment-root/deployment-group-id/deployment-id/deployment-archive folder on
Amazon Linux, Ubuntu Server, and RHEL Amazon EC2 instances.

C:\ProgramData\Amazon\CodeDeploy\deployment-group-id\deployment-id\deployment-archive folder on
Windows Server Amazon EC2 instances.

This event is reserved for the CodeDeploy agent and cannot be used to run scripts.

BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and
creating a backup of the current version.

Install – During this deployment lifecycle event, the CodeDeploy agent copies the revision files from the
temporary location to the final destination folder. This event is reserved for the CodeDeploy agent and cannot be
used to run scripts.

AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or
changing file permissions.

ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped
during ApplicationStop.

ValidateService – This is the last deployment lifecycle event. It is used to verify the deployment was completed
successfully.

BeforeBlockTraffic – You can use this deployment lifecycle event to run tasks on instances before they are
deregistered from a load balancer.

BlockTraffic – During this deployment lifecycle event, internet traffic is blocked from accessing instances that
are currently serving traffic. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.

AfterBlockTraffic – You can use this deployment lifecycle event to run tasks on instances after they are
deregistered from a load balancer.

BeforeAllowTraffic – You can use this deployment lifecycle event to run tasks on instances before they are
registered with a load balancer.

AllowTraffic – During this deployment lifecycle event, internet traffic is allowed to access instances after a
deployment. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.

AfterAllowTraffic – You can use this deployment lifecycle event to run tasks on instances after they are
registered with a load balancer.

Hence, the correct answer is: Launch an Application Load Balancer and use a blue/green deployment for
releasing new application versions. Associate the Auto Scaling group and the Application Load Balancer
target group with the deployment group. Create a custom deployment configuration for the deployment
group in CodeDeploy with minimum healthy hosts defined as 50% and configure it to also terminate the
original instances in the Auto Scaling group after deployment. Use the BeforeAllowTraffic hook
within appspec.yml to purge the temporary files.
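
A minimal boto3 sketch of the custom deployment configuration and the blue/green deployment group described above might look like the following; the application, deployment group, Auto Scaling group, target group, and role names are hypothetical.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Custom deployment configuration: the deployment is considered successful as
# long as at least half of the instances receive the rerouted traffic.
codedeploy.create_deployment_config(
    deploymentConfigName="Custom.HalfHealthy",       # hypothetical name
    minimumHealthyHosts={"type": "FLEET_PERCENT", "value": 50},
)

# Blue/green deployment group that copies the Auto Scaling group for the
# replacement fleet and terminates the original (blue) instances right away.
codedeploy.create_deployment_group(
    applicationName="online-banking",                # hypothetical application
    deploymentGroupName="banking-prod",
    serviceRoleArn="arn:aws:iam::111111111111:role/CodeDeployServiceRole",
    deploymentConfigName="Custom.HalfHealthy",
    autoScalingGroups=["banking-asg"],
    loadBalancerInfo={"targetGroupInfoList": [{"name": "banking-alb-tg"}]},
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 0,
        },
    },
)
```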

The option that says: Launch an Application Load Balancer and use in-place deployment for releasing new
application versions. Use the CodeDeployDefault.OneAtATime as the deployment configuration and associate
the Auto Scaling group with the deployment group. Configure AWS CodeDeploy to terminate all EC2
instances in the original Auto Scaling group and use the AllowTraffic hook within
the appspec.yml configuration file to purge the temporary files is incorrect because you should use blue/green
deployment instead of in-place. In addition, the AllowTraffic event just allows the incoming traffic to the
instances after a deployment. This event is reserved for the CodeDeploy agent and cannot be used to run scripts.

The option that says: Launch an Application Load Balancer and use a blue/green deployment for releasing
new application versions. Associate the Auto Scaling group and the Application Load Balancer target
group with the deployment group. In CodeDeploy, use the CodeDeployDefault.HalfAtATime as the
deployment configuration and configure it to terminate the original instances in the Auto Scaling group
after deployment. Use the BlockTraffic hook within appspec.yml to purge the temporary files is incorrect
because the BlockTraffic event is reserved for the CodeDeploy agent and cannot be used to run custom scripts
such as deleting the temporary files.

The option that says: Launch an Application Load Balancer and use an in-place deployment for releasing
new application versions. Associate the Auto Scaling group and Application Load Balancer target group
with the deployment group. In CodeDeploy, use the CodeDeployDefault.AllAtOnce as a deployment
configuration and add a configuration to terminate the original instances in the Auto Scaling group after
the deployment. Use the BlockTraffic hook within appspec.yml to purge the temporary files is incorrect
because you should use a blue/green deployment instead of in-place. It is also incorrect to use
the CodeDeployDefault.AllAtOnce deployment configuration as this attempts to deploy the application revision to
as many instances as possible at once.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-deployments.html#troubleshooting-deployments-lifecycle-event-failures

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Question 8: Incorrect

A fast-growing company has multiple AWS accounts which are consolidated using AWS Organizations and they
expect to add new accounts soon. As the DevOps engineer, you were instructed to design a centralized logging
solution to deliver all of their VPC Flow Logs and CloudWatch Logs across all of their sub-accounts to their
dedicated Audit account for compliance purposes. The logs should also be properly indexed in order to perform
search, retrieval, and analysis.
Which of the following is the MOST suitable solution that you should implement to meet the above
requirements?

In the Audit account, launch a new Lambda function which will push all of the required logs to a self-
hosted ElasticSearch cluster in a large EC2 instance. Integrate the Lambda function to a CloudWatch
subscription filter to collect all of the logs from the sub-accounts and stream them to the Lambda function
deployed in the Audit account.

In the Audit account, create an Amazon SQS queue that will push all logs to an Amazon ES cluster. Use a
CloudWatch subscription filter to stream both VPC Flow Logs and CloudWatch Logs from their sub-
accounts to the SQS queue in the Audit account.

In the Audit account, create a new stream in Kinesis Data Streams and a Lambda function that acts as an
event handler to send all of the logs to the Amazon ES cluster. Create a CloudWatch subscription filter
and use Kinesis Data Streams to stream all of the VPC Flow Logs and CloudWatch Logs from the sub-
accounts to the Kinesis data stream in the Audit account.

(Correct)

In the Audit account, launch a new Lambda function which will send all VPC Flow Logs and CloudWatch
Logs to an Amazon ES cluster. Use a CloudWatch subscription filter in the sub-accounts to stream all of
the logs to the Lambda function in the Audit account.

Explanation

You can load streaming data into your Amazon Elasticsearch Service domain from many different sources in
AWS. Some sources, like Amazon Kinesis Data Firehose and Amazon CloudWatch Logs, have built-in support
for Amazon ES. Others, like Amazon S3, Amazon Kinesis Data Streams, and Amazon DynamoDB, use AWS
Lambda functions as event handlers. The Lambda functions respond to new data by processing it and streaming
it to your domain.

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it
delivered to other services such as an Amazon Kinesis stream, Amazon Kinesis Data Firehose stream, or AWS
Lambda for custom processing, analysis, or loading to other systems. To begin subscribing to log events, create
the receiving source, such as a Kinesis stream, where the events will be delivered. A subscription filter defines
the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information
about where to send matching log events to.

You can collaborate with an owner of a different AWS account and receive their log events on your AWS
resources, such as an Amazon Kinesis stream (this is known as cross-account data sharing). For example, this log
event data can be read from a centralized Amazon Kinesis stream to perform custom processing and analysis.
Custom processing is especially useful when you collaborate and analyze data across many accounts. For
example, a company's information security group might want to analyze data for real-time intrusion detection or
anomalous behaviors so it could conduct an audit of accounts in all divisions in the company by collecting their
federated production logs for central processing.

A real-time stream of event data across those accounts can be assembled and delivered to the information
security groups who can use Kinesis to attach the data to their existing security analytic systems. Kinesis streams
are currently the only resource supported as a destination for cross-account subscriptions.
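
A rough boto3 sketch of the cross-account wiring might look like the following; the destination name, stream and role ARNs, account IDs, and log group name are hypothetical.

```python
import json
import boto3

# --- In the Audit account: expose the Kinesis stream as a CloudWatch Logs destination.
logs_audit = boto3.client("logs", region_name="us-east-1")

destination = logs_audit.put_destination(
    destinationName="central-log-destination",       # hypothetical
    targetArn="arn:aws:kinesis:us-east-1:999999999999:stream/central-logs",
    roleArn="arn:aws:iam::999999999999:role/CWLtoKinesisRole",
)

# Allow the sub-accounts to create subscription filters against this destination.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["111111111111", "222222222222"]},   # sub-account IDs
        "Action": "logs:PutSubscriptionFilter",
        "Resource": destination["destination"]["arn"],
    }],
}
logs_audit.put_destination_policy(
    destinationName="central-log-destination",
    accessPolicy=json.dumps(policy),
)

# --- In each sub-account: stream a log group (e.g. VPC Flow Logs) to the destination.
logs_sub = boto3.client("logs", region_name="us-east-1")
logs_sub.put_subscription_filter(
    logGroupName="/vpc/flow-logs",                    # hypothetical log group
    filterName="to-audit-account",
    filterPattern="",                                 # empty pattern forwards every event
    destinationArn=destination["destination"]["arn"],
)
```
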
Hence, the correct solution is: In the Audit account, create a new stream in Kinesis Data Streams and a
Lambda function that acts as an event handler to send all of the logs to the Amazon ES cluster. Create a
CloudWatch subscription filter and use Kinesis Data Streams to stream all of the VPC Flow Logs and
CloudWatch Logs from the sub-accounts to the Kinesis data stream in the Audit account.

The option that says: In the Audit account, launch a new Lambda function which will send all VPC
Flow Logs and CloudWatch Logs to an Amazon ES cluster. Use a CloudWatch subscription filter in the
sub-accounts to stream all of the logs to the Lambda function in the Audit account is incorrect. Launching a
Kinesis data stream is a more suitable option than just a Lambda function that will accept the logs from other
accounts and send them to Amazon ES.

The option that says: In the Audit account, create an Amazon SQS queue that will push all logs to an
Amazon ES cluster. Use a CloudWatch subscription filter to stream both VPC Flow Logs and
CloudWatch Logs from their sub-accounts to the SQS queue in the Audit account is incorrect because the
CloudWatch subscription filter doesn't directly support SQS. You should use a Kinesis Data Stream, Kinesis
Firehose or Lambda function.

The option that says: In the Audit account, launch a new Lambda function which will push all of the
required logs to a self-hosted ElasticSearch cluster in a large EC2 instance. Integrate the Lambda function
to a CloudWatch subscription filter to collect all of the logs from the sub-accounts and stream them to the
Lambda function deployed in the Audit account is incorrect. It is better to use Amazon ES instead of
launching a self-hosted ElasticSearch cluster. Launching a Kinesis data stream is a more suitable option than just
a Lambda function that will accept the logs from other accounts and send them to Amazon ES.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions.html

https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 9: Incorrect

You have developed a web portal for your analytics team that they can use to query and visualize patterns of the
collected data from the customer's click behavior. The portal relies on data processed by an Amazon EMR
cluster which consists of an Auto Scaling group of EC2 instances that you've provisioned on AWS. After 3
months of operation, you checked AWS Trusted Advisor, which recommends that you terminate some EMR
instances that were underutilized. Upon checking the EMR cluster, you realized that you forgot to define scale-down parameters, which contributed to high billing costs. To prevent this from happening in the future, you want to be notified of AWS Trusted Advisor recommendations as frequently as possible.

Which of the following solutions can help you achieve this requirement? (Select THREE.)

Enable the built-in Trusted Advisor notification feature to automatically receive notification emails which
includes the summary of savings estimates along with Trusted Advisor check results.

Use CloudWatch Events to monitor Trusted Advisor checks and set a trigger to send an email using SES
to notify you about the results of the check.

Use CloudWatch Events to monitor Trusted Advisor checks and set a trigger to send an email using SNS
to notify you about the results of the check.

(Correct)

Write a Lambda function that runs daily to refresh AWS Trusted Advisor changes via API and send
results to CloudWatch Logs. Create a CloudWatch Log metric and have it send an alarm notification
when it is triggered.

(Correct)

Enable the built-in CloudWatch Events notification feature that scans changes in the Trusted Advisor
check result. Send notifications automatically to the account owner.

Write a Lambda function that runs daily to refresh AWS Trusted Advisor via API and then publish a
message to an SNS Topic to notify the subscribers based on the results.

(Correct)

Explanation

You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor
checks. Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when a
check status changes to the value you specify in a rule. Depending on the type of status change, you might want
to send notifications, capture status information, take corrective action, initiate events, or take other actions. You
can select the following types of targets when using CloudWatch Events as a part of your Trusted Advisor
workflow:

- AWS Lambda functions

- Amazon Kinesis streams

- Amazon Simple Queue Service queues

- Built-in targets (CloudWatch alarm actions)

- Amazon Simple Notification Service topics

The following are some use cases:

- Use a Lambda function to pass a notification to a Slack channel when check status changes.

- Push data about checks to a Kinesis stream to support comprehensive, real-time status monitoring.

You can also configure a CloudWatch Logs metric filter with a CloudWatch alarm that sends a notification whenever it is triggered. Doing so enables you to respond quickly based on the logs collected by CloudWatch Logs. CloudWatch uses Amazon Simple Notification Service (SNS) to send an email.
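As a rough sketch, the CloudWatch Events rule and SNS target might be created with boto3 along these lines (the topic ARN and rule name are hypothetical, and the event pattern follows the documented Trusted Advisor event format, so verify it against the current documentation):

import json
import boto3

# Trusted Advisor events are emitted in us-east-1.
events = boto3.client("events", region_name="us-east-1")
sns_topic_arn = "arn:aws:sns:us-east-1:111111111111:ta-check-alerts"  # hypothetical topic

# Rule that matches Trusted Advisor check item refresh events.
events.put_rule(
    Name="trusted-advisor-check-status",
    EventPattern=json.dumps({
        "source": ["aws.trustedadvisor"],
        "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
    }),
    State="ENABLED",
)

# Send matching events to the SNS topic; the topic's access policy must
# allow events.amazonaws.com to publish to it.
events.put_targets(
    Rule="trusted-advisor-check-status",
    Targets=[{"Id": "notify-ops", "Arn": sns_topic_arn}],
)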

Hence, the correct answers are:

- Write a Lambda function that runs daily to refresh AWS Trusted Advisor via API and then publish a
message to an SNS Topic to notify the subscribers based on the results.

- Write a Lambda function that runs daily to refresh AWS Trusted Advisor changes via API and send
results to CloudWatch Logs. Create a CloudWatch Log metric and have it send an alarm notification
when it is triggered.

- Use CloudWatch Events to monitor Trusted Advisor checks and set a trigger to send an email using SNS
to notify you about the results of the check.

The option that says: Use CloudWatch Events to monitor Trusted Advisor checks and set a trigger to send
an email using SES to notify you about the results of the check is only partially correct since the integration for sending email notifications should use SNS, not SES.

The option that says: Enable the built-in CloudWatch Events notification feature that scans changes in the
Trusted Advisor check results. Send notifications automatically to the account owner is incorrect because
there is no such built-in notification feature in CloudWatch Events for AWS Trusted Advisor in the first place. You need to manually configure the rules and have them trigger notifications to you.

The option that says: Enable the built-in Trusted Advisor notification feature to automatically receive
notification emails which include the summary of savings estimates along with Trusted Advisor check
results is incorrect. Although this is a viable solution, the notification is only sent on a weekly basis, so significant costs could already have been incurred by the time you are notified.

References:

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html

https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html

https://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 10: Incorrect

A leading commercial bank has its online banking application hosted in AWS. It uses an encrypted Amazon S3
bucket to store the confidential files of its customers. Their DevOps team has configured federated access to a
particular Active Directory user group from their on-premises network to allow access to the S3 bucket. For
audit purposes, there is a new requirement to automatically detect any policy changes that are related to the
restricted federated access of the bucket and to have the ability to revert any accidental changes made by the
administrators.
Which of the following options provides the FASTEST way to detect configuration changes?

Set up an AWS Config rule with a configuration change trigger that will detect any changes in the S3
bucket configuration and which will also invoke an AWS Systems Manager Automation document with a
Lambda function that will revert any changes.

(Correct)

Using Amazon CloudWatch Events, integrate an Event Bus with AWS CloudTrail API in order to trigger
an AWS Lambda function that will detect and revert any particular changes.

Integrate CloudWatch Events and a Lambda function to create a scheduled job that runs every hour to
scan the IAM policy attached to the federated access role. Configure the function to detect as well as
revert any recent changes made in the current configuration.

Set up an AWS Config rule with a periodic trigger that runs every hour which will detect any changes in
the S3 bucket configuration. Associate a Lambda function in the rule that will revert any recent changes
made in the bucket.

Explanation

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This
includes how the resources are related to one another and how they were configured in the past so that you can
see how the configurations and relationships change over time.

When you add a rule to your account, you can specify when you want AWS Config to run the rule; this is called
a trigger. AWS Config evaluates your resource configurations against the rule when the trigger occurs.

There are two types of triggers:

1. Configuration changes
2. Periodic

If you choose both configuration changes and periodic triggers, AWS Config invokes your Lambda function
when it detects a configuration change and also at the frequency that you specify.

Configuration changes

AWS Config runs evaluations for the rule when certain types of resources are created, changed, or deleted. You
choose which resources trigger the evaluation by defining the rule's scope. The scope can include the following:

- One or more resource types

- A combination of a resource type and a resource ID

- A combination of a tag key and value

- When any recorded resource is created, updated, or deleted


AWS Config runs the evaluation when it detects a change to a resource that matches the rule's scope. You can
use the scope to constrain which resources trigger evaluations. Otherwise, evaluations are triggered when any
recorded resource changes.

Periodic

AWS Config runs evaluations for the rule at a frequency that you choose (for example, every 24 hours).
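A minimal boto3 sketch of a custom rule with a configuration change trigger, assuming a hypothetical rule name and evaluation Lambda function, could look like this:

import boto3

config = boto3.client("config")

# Custom rule evaluated by a Lambda function whenever a recorded S3 bucket
# configuration item changes (a "Configuration changes" trigger).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-federated-access-policy-check",   # hypothetical name
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        "Source": {
            "Owner": "CUSTOM_LAMBDA",
            "SourceIdentifier": "arn:aws:lambda:us-east-1:111111111111:function:check-bucket-policy",
            "SourceDetails": [{
                "EventSource": "aws.config",
                "MessageType": "ConfigurationItemChangeNotification",
            }],
        },
    }
)

# A remediation action (for example, an SSM Automation document that reverts
# the change) can then be attached with config.put_remediation_configurations().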

Hence, the correct answer is: Set up an AWS Config rule with a configuration change trigger that will
detect any changes in the S3 bucket configuration and which will also invoke an AWS Systems Manager
Automation document with a Lambda function that will revert any changes.

The option that says: Using Amazon CloudWatch Events, integrate an Event Bus with AWS CloudTrail
API in order to trigger an AWS Lambda function that will detect and revert any particular changes is
incorrect. Although you can track all changes to your configuration using CloudTrail API, it would be difficult
to integrate it with CloudWatch Events in order to monitor the changes. There is no direct way of integrating
these two services and you have to create a custom mapping in order for this to work.

The option that says: Integrate CloudWatch Events and a Lambda function to create a scheduled job that
runs every hour to scan the IAM policy attached to the federated access role. Configure the function to
detect as well as revert any recent changes made in the current configuration is incorrect. Although this
solution may work, there would be a significant delay since the Lambda function only runs every hour. If, say, the new S3 bucket configuration was applied at 12:05 PM, the change would only be detected at 1:00 PM.
Moreover, this entails a lot of overhead since you have to develop a custom function that will scan your IAM
policies.

The option that says: Set up an AWS Config rule with a periodic trigger that runs every hour which will
detect any changes in the S3 bucket configuration. Associate a Lambda function in the rule that will revert
any recent changes made in the bucket is incorrect. Although this may work, this is not the fastest way of
detecting a change in your resource configurations in AWS. Since the rule is using a periodic trigger, the rule
will run every hour and not in near real-time, unlike the Configuration changes trigger. If, say, a new configuration was applied at 12:01 PM, the change would only be detected at 1:00 PM after the rule has run.

References:

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 11: Incorrect

A company is hosting their high-frequency trading application in AWS which serves millions of investors
around the globe. The application is hosted in an Auto Scaling Group of EC2 instances behind an Application
Load Balancer with an Amazon DynamoDB database. The architecture was deployed using a CloudFormation
template with a Route 53 record. A recent production deployment caused system degradation and an outage, costing the company a significant monetary loss due to the application's unavailability.
As a result, the company instructed their DevOps engineer to implement an efficient strategy for deploying
updates to their web application with the ability to perform an immediate rollback of the stack. All deployments
should maintain the normal number of active EC2 instances to keep the performance of the application.

Which of the following should the DevOps engineer implement to satisfy these requirements?

Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the CloudFormation template to use the AutoScalingReplacingUpdate policy. Set the WillReplace property to false. Also, specify the AutoScalingRollingUpdate policy to update instances that are in an Auto Scaling group in batches.

Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the CloudFormation template to use the AutoScalingRollingUpdate policy. Update the required properties for the new Auto Scaling group policy. Set the MinSuccessfulInstancesPercent property, as well as its corresponding WaitOnResourceSignals and PauseTime properties.

Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the CloudFormation template to use the AutoScalingReplacingUpdate policy. Update the required properties for the new Auto Scaling group policy. Set the WillReplace property to true.

(Correct)

Configure the UpdatePolicy of the AWS::AutoScaling::DeploymentUpdates resource in the CloudFormation template to use the AutoScalingReplacingUpdate policy. Update the required properties for the new Auto Scaling group policy. Set the WillReplace property to false.

Explanation

If you plan to launch an Auto Scaling group of EC2 instances, you can configure
the AWS::AutoScaling::AutoScalingGroup resource type reference in your CloudFormation template to define an
Amazon EC2 Auto Scaling group with the specified name and attributes. To configure Amazon EC2 instances
launched as part of the group, you can specify a launch template or a launch configuration. It is recommended
that you use a launch template to make sure that you can use the latest features of Amazon EC2, such as T2
Unlimited instances.

You can add an UpdatePolicy attribute to your Auto Scaling group to perform rolling updates (or replace the
group) when a change has been made to the group.

To specify how AWS CloudFormation handles replacement updates for an Auto Scaling group, use the AutoScalingReplacingUpdate policy. This policy enables you to specify whether AWS CloudFormation replaces an Auto Scaling group with a new one or replaces only the instances in the Auto Scaling group.
During replacement, AWS CloudFormation retains the old group until it finishes creating the new one. If the
update fails, AWS CloudFormation can roll back to the old Auto Scaling group and delete the new Auto Scaling
group. While AWS CloudFormation creates the new group, it doesn't detach or attach any instances. After
successfully creating the new Auto Scaling group, AWS CloudFormation deletes the old Auto Scaling group
during the cleanup process.
When you set the WillReplace parameter, remember to specify a matching CreationPolicy. If the minimum
number of instances (specified by the MinSuccessfulInstancesPercent property) doesn't signal success within the
Timeout period (specified in the CreationPolicy policy), the replacement update fails, and AWS CloudFormation
rolls back to the old Auto Scaling group.
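As a sketch, the relevant fragment of the template could look like the following, rendered here as JSON built from a Python dictionary; the logical ID, capacity values, and launch template are hypothetical, and other required properties of the group are omitted:

import json

# Auto Scaling group fragment with a replacing-update policy and a matching
# creation policy so a failed replacement rolls back to the old group.
asg_fragment = {
    "WebServerGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            "MinSize": "4",
            "MaxSize": "8",
            "LaunchTemplate": {
                "LaunchTemplateId": "lt-0123456789abcdef0",
                "Version": "1",
            },
        },
        "UpdatePolicy": {
            "AutoScalingReplacingUpdate": {"WillReplace": True}
        },
        "CreationPolicy": {
            "AutoScalingCreationPolicy": {"MinSuccessfulInstancesPercent": 100},
            "ResourceSignal": {"Count": "4", "Timeout": "PT15M"},
        },
    }
}

print(json.dumps(asg_fragment, indent=2))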

Hence, the correct answer is: Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource
in the CloudFormation template to use the AutoScalingReplacingUpdate policy. Update the required
properties for the new Auto Scaling group policy. Set the WillReplace property to true.

The option that says: Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the
CloudFormation template to use the AutoScalingReplacingUpdate policy. Set the WillReplace property to
false. Also, specify the AutoScalingRollingUpdate policy to update instances that are in an Auto Scaling
group in batches is incorrect because if both the AutoScalingReplacingUpdate and AutoScalingRollingUpdate policies are specified, setting the WillReplace property to true gives AutoScalingReplacingUpdate precedence. Since this property is set to false here, the AutoScalingRollingUpdate policy takes precedence instead, and a rolling update does not maintain the full number of active EC2 instances during deployment.

The option that says: Configure the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the
CloudFormation template to use the AutoScalingRollingUpdate policy. Update the required properties for
the new Auto Scaling group policy. Set the MinSuccessfulInstancesPercent property, as well as its corresponding WaitOnResourceSignals and PauseTime properties is incorrect because this type of deployment will affect
the existing compute capacity of your application. The rolling update doesn't maintain the total number of active
EC2 instances during deployment. A better solution is to use the AutoScalingReplacingUpdate policy instead,
which will create a separate Auto Scaling group and is able to perform an immediate rollback of the stack in the
event of an update failure.

The option that says: Configure the UpdatePolicy of the AWS::AutoScaling::DeploymentUpdates resource in the
CloudFormation template to use the AutoScalingReplacingUpdate policy. Update the required properties for
the new Auto Scaling group policy. Set the WillReplace property to false is incorrect because there is
no AWS::AutoScaling::DeploymentUpdates resource. You have to use
the AWS::AutoScaling::AutoscalingGroup resource instead and set the WillReplace property to true.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-
attributes-updatepolicy-replacingupdate

https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Question 12: Incorrect

An application is hosted in an Auto Scaling group of Amazon EC2 instances with public IP addresses in a public
subnet. The instances are configured with a user data script which fetches and installs the required system
dependencies of the application from the Internet upon launch. A change was recently introduced to prohibit any
Internet access from these instances to improve the security but after its implementation, the instances could not
get the external dependencies anymore. Upon investigation, all instances are properly running but the hosted
application is not starting up completely due to the incomplete installation.

Which of the following is the MOST secure solution to solve this issue and also ensure that the instances do not
have public Internet access?

Download all of the external application dependencies from the public Internet and then store them to an
S3 bucket. Set up a VPC endpoint for the S3 bucket and then assign an IAM instance profile to the
instances in order to allow them to fetch the required dependencies from the bucket.

(Correct)

Use a NAT gateway to disallow any traffic to the VPC which originated from the public Internet. Deploy
the Amazon EC2 instances to a private subnet then set the subnet's route table to use the NAT gateway as
its default route.

Set up a brand new security group for the Amazon EC2 instances. Use a whitelist configuration to only
allow outbound traffic to the site where all of the application dependencies are hosted. Delete the security
group rule once the installation is complete. Use AWS Config to monitor the compliance.

Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP addresses on each of them.
Run a custom shell script to disassociate the Elastic IP addresses after the application has been
successfully installed and is running properly.

Explanation

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint
services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS
Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with
resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components
that allow communication between instances in your VPC and services without imposing availability risks or
bandwidth constraints on your network traffic.

There are two types of VPC endpoints: interface endpoints and gateway endpoints. You can create the type of
VPC endpoint required by the supported service. S3 and DynamoDB are using Gateway endpoints while most of
the services are using Interface endpoints.

You can use an S3 bucket to store the required dependencies and then set up a VPC Endpoint to allow your EC2
instances to access the data without having to traverse the public Internet.
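A minimal boto3 sketch of creating such an endpoint, assuming hypothetical VPC and route table IDs, might look like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3, attached to the route table of the private subnet
# that hosts the instances.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# The instances then reach the bucket through the endpoint using the
# permissions of their IAM instance profile (for example, s3:GetObject on
# the dependencies bucket); no Internet gateway or NAT device is required.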

Hence, the correct answer is the option that says: Download all of the external application dependencies from
the public Internet and then store them to an S3 bucket. Set up a VPC endpoint for the S3 bucket and
then assign an IAM instance profile to the instances in order to allow them to fetch the required
dependencies from the bucket.
The option that says: Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP
addresses on each of them. Run a custom shell script to disassociate the Elastic IP addresses after the
application has been successfully installed and is running properly is incorrect because it is possible that the
custom shell script may fail and the disassociation of the Elastic IP addresses might not be fully implemented
which will allow the EC2 instances to access the Internet.

The option that says: Use a NAT gateway to disallow any traffic to the VPC which originated from the
public Internet. Deploy the Amazon EC2 instances to a private subnet then set the subnet's route table to
use the NAT gateway as its default route is incorrect because although a NAT Gateway can safeguard the
instances from any incoming traffic that were initiated from the Internet, it still permits them to send outgoing
requests externally.

The option that says: Set up a brand new security group for the Amazon EC2 instances. Use a whitelist
configuration to only allow outbound traffic to the site where all of the application dependencies are
hosted. Delete the security group rule once the installation is complete. Use AWS Config to monitor the
compliance is incorrect because this solution has a high operational overhead since the actions are done
manually. This is susceptible to human error such as in the event that the DevOps team forgets to delete the
security group. The use of AWS Config will just monitor and inform you about the security violation but it won't
do anything to remediate the issue.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/amazon-vpc/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 13: Correct

A leading media company has recently moved its .NET web application from its on-premises network to AWS
Elastic Beanstalk for easier deployment and management. An Amazon S3 bucket stores static contents such as
PDF files, images, videos, and the likes. An Amazon DynamoDB table stores all of the data of the application.
The company has recently launched a series of global marketing campaigns that resulted in unpredictable spikes
of incoming traffic. Upon checking, the Operations team discovered that over 80% of the traffic is just duplicate
read requests.

As a DevOps Engineer, how can you improve the application's performance for its users around the world?

Cache the images stored in the S3 bucket using the AWS Elemental MediaPackage. Set up a distributed
cache layer using an ElastiCache for Memcached Cluster to serve the repeated read requests on the web
application.

Use Lambda@Edge to cache the images stored in the S3 bucket. Use DynamoDB Accelerator (DAX) to
cache repeated read requests on the web application.

Cache the images stored in the S3 bucket using the AWS Elemental MediaStore. Set up a distributed
cache layer using an ElastiCache for Redis Cluster to serve the repeated read requests on the web
application.

Create a CloudFront web distribution to cache images stored in the S3 bucket. Use DynamoDB
Accelerator (DAX) to cache repeated read requests on the web application.

(Correct)

Explanation

CloudFront speeds up the distribution of your static and dynamic content (HTML, .css, .js, and image files) by
routing each user request through the AWS backbone network to the edge location that can best serve your
content. When you create a CloudFront distribution, you tell CloudFront which origin servers to get your files from when users request the files through your website or application, and you specify the details about how to track and manage content delivery.

Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times can
be measured in single-digit milliseconds. However, there are certain use cases that require response times in
microseconds. For these use cases, DynamoDB Accelerator (DAX) delivers fast response times for accessing
eventually consistent data.

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance
for demanding applications. DAX addresses three core scenarios:

- As an in-memory cache, DAX reduces the response times of eventually consistent read workloads by an order
of magnitude, from single-digit milliseconds to microseconds.

- DAX reduces operational and application complexity by providing a managed service that is API-compatible
with DynamoDB. Therefore, it requires only minimal functional changes to use with an existing application.

- For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings
by reducing the need to over-provision read capacity units. This is especially beneficial for applications that
require repeated reads for individual keys.
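As a rough illustration, a DAX cluster could be provisioned with boto3 along these lines (the cluster name, node type, IAM role, and subnet group are hypothetical):

import boto3

dax = boto3.client("dax", region_name="us-east-1")

# Provision a three-node DAX cluster in front of the DynamoDB table.
dax.create_cluster(
    ClusterName="webapp-cache",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::111111111111:role/DAXServiceRole",
    SubnetGroupName="webapp-dax-subnets",
)

# The application then points its DynamoDB client at the cluster endpoint
# (for example, via the amazon-dax-client library) so that repeated reads
# are served from the in-memory cache instead of the table.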

Hence, the correct answer is: Create a CloudFront web distribution to cache images stored in the S3 bucket.
Use DynamoDB Accelerator (DAX) to cache repeated read requests on the web application.

The option that says: Cache the images stored in the S3 bucket using the AWS Elemental MediaStore. Set
up a distributed cache layer using an ElastiCache for Redis Cluster to serve the repeated read requests on
the web application is incorrect because the AWS Elemental MediaStore is not a dedicated CDN service unlike
CloudFront. AWS Elemental MediaStore is a video origination and storage service that offers the performance,
consistency, and low latency required to deliver live video content combined with the security and durability
Amazon offers across its services.

The option that says: Cache the images stored in the S3 bucket using the AWS Elemental MediaPackage.
Set up a distributed cache layer using an ElastiCache for Memcached Cluster to serve the repeated read
requests on the web application is incorrect because the AWS Elemental MediaPackage is primarily used for
videos and not for photos. Moreover, CloudFront is a more suitable service to use as a Content Delivery
Network. AWS Elemental MediaPackage is a video origination and just-in-time (JIT) packaging service that
allows video providers to securely and reliably deliver live streaming content at scale.

The option that says: Use Lambda@Edge to cache the images stored in the S3 bucket. Use DynamoDB
Accelerator (DAX) to cache repeated read requests on the web application is incorrect because
Lambda@Edge is primarily used to run code closer to users of your application in order to improve application
performance and reduce latency. Serving static content using Lambda@Edge is not a suitable use case.

References:

https://aws.amazon.com/dynamodb/dax/

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating-console.html

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/

https://tutorialsdojo.com/amazon-cloudfront/

Question 14: Incorrect

A popular e-commerce website which has customers across the globe is hosted in the us-east-1 AWS region with
a backup site in the us-west-1 region. Due to an unexpected regional outage in the us-east-1 region, the company
initiated their disaster recovery plan and turned on the backup site. However, they discovered that the actual
failover still entails several hours of manual effort to prepare and switch over the database. They also noticed
that the database is missing up to three hours of data transactions when the regional outage happened.

Which of the following solutions should the DevOps engineer implement to improve the RTO and RPO of the website for the cross-region failover?

Use an ECS cluster to host a custom python script that calls the RDS API to create a snapshot of the
database, create a cross-region snapshot copy, and restore the database instance from a snapshot in the
backup region. Create a scheduled job using CloudWatch Events that triggers the Lambda function to
snapshot a database instance every hour. Set up an SNS topic that will receive published messages about
AWS-initiated RDS events from Trusted Advisor that will trigger the function to create a cross-region
snapshot copy. During failover, restore the database from a snapshot in the backup region.

Configure an Amazon RDS Multi-AZ Deployment configuration and place the standby instance in the us-
west-1 region. Set up the RDS option group to enable multi-region availability for native automation of
cross-region recovery as well as for continuous data replication. Set up a notification system using
Amazon SNS which is integrated with AWS Health API to monitor RDS-related systems events and
notify the Operations team. In the actual failover where the primary database instance is down, RDS will
automatically make the standby instance in the backup region as the primary instance.

Create a snapshot every hour using Amazon RDS scheduled instance lifecycle events which will also
allow you to monitor specific RDS events. Perform a cross-region snapshot copy into the us-west-1
backup region once the SnapshotCreateComplete event occurred. Create an Amazon CloudWatch Alert
which will trigger an action to restore the Amazon RDS database snapshot in the backup region when the
CPU Utilization metric of the RDS instance in CloudWatch falls to 0% for more than 15 minutes.

Use Step Functions with 2 Lambda functions that call the RDS API to create a snapshot of the database,
create a cross-region snapshot copy, and restore the database instance from a snapshot in the backup
region. Use CloudWatch Events to trigger the function to take a database snapshot every hour. Set up an
SNS topic that will receive published messages from AWS Health API, RDS availability and other events
that will trigger the Lambda function to create a cross-region snapshot copy. During failover, configure the Lambda function to restore the database from a snapshot in the backup region.

(Correct)

Explanation

When you copy a snapshot to an AWS Region that is different from the source snapshot's AWS Region, the first
copy is a full snapshot copy, even if you copy an incremental snapshot. A full snapshot copy contains all of the
data and metadata required to restore the DB instance. After the first snapshot copy, you can copy incremental
snapshots of the same DB instance to the same destination region within the same AWS account.

An incremental snapshot contains only the data that has changed after the most recent snapshot of the same DB
instance. Incremental snapshot copying is faster and results in lower storage costs than full snapshot copying.
Incremental snapshot copying across AWS Regions is supported for both unencrypted and encrypted snapshots.
For shared snapshots, copying incremental snapshots is not supported. For shared snapshots, all of the copies are
full snapshots, even within the same region.

Depending on the AWS Regions involved and the amount of data to be copied, a cross-region snapshot copy can
take hours to complete. In some cases, there might be a large number of cross-region snapshot copy requests
from a given source AWS Region. In these cases, Amazon RDS might put new cross-region copy requests from
that source AWS Region into a queue until some in-progress copies complete. No progress information is
displayed about copy requests while they are in the queue. Progress information is displayed when the copy
starts.

Take note that when you copy a source snapshot that is a snapshot copy, the copy isn't incremental because the
snapshot copy doesn't include the required metadata for incremental copies.
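A simplified sketch of the snapshot-and-copy step, assuming hypothetical database and account identifiers, might look like this:

import datetime
import boto3

def take_and_copy_snapshot(event, context):
    """Snapshot the primary database, then copy the snapshot to us-west-1.

    In the proposed solution these steps would be split across separate
    Lambda functions orchestrated by Step Functions, since waiting for the
    snapshot to become available can exceed a single Lambda invocation.
    """
    source = boto3.client("rds", region_name="us-east-1")
    target = boto3.client("rds", region_name="us-west-1")

    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M")
    snapshot_id = f"trading-db-{stamp}"   # hypothetical identifiers

    source.create_db_snapshot(
        DBInstanceIdentifier="trading-db",
        DBSnapshotIdentifier=snapshot_id,
    )
    source.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)

    # Cross-region copies must reference the source snapshot by its ARN.
    target.copy_db_snapshot(
        SourceDBSnapshotIdentifier=f"arn:aws:rds:us-east-1:111111111111:snapshot:{snapshot_id}",
        TargetDBSnapshotIdentifier=snapshot_id,
        SourceRegion="us-east-1",
    )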

Hence, the correct solution is: Use Step Functions with 2 Lambda functions that call the RDS API to create
a snapshot of the database, create a cross-region snapshot copy, and restore the database instance from a
snapshot in the backup region. Use CloudWatch Events to trigger the function to take a database
snapshot every hour. Set up an SNS topic that will receive published messages from AWS Health API,
RDS availability and other events that will trigger the Lambda function to create a cross-region snapshot
copy. During failover, configure the Lambda function to restore the database from a snapshot in the
backup region.

The option that says: Configure an Amazon RDS Multi-AZ Deployment configuration and place the
standby instance in the us-west-1 region. Set up the RDS option group to enable multi-region availability
for native automation of cross-region recovery as well as for continuous data replication. Set up a
notification system using Amazon SNS which is integrated with AWS Health API to monitor RDS-related
systems events and notify the Operations team. In the actual failover where the primary database instance
is down, RDS will automatically make the standby instance in the backup region as the primary
instance is incorrect because the standby instance of an Amazon RDS Multi-AZ database can only be placed in
the same AWS region where the primary instance is hosted. Thus, you cannot fail over to the standby instance as a replacement for your primary instance in another region. A better solution would be to set up a cross-region
snapshot copy from the primary to the backup region. Another solution would be to use Read Replicas since
these can be placed in another AWS region.

The option that says: Create a snapshot every hour using Amazon RDS scheduled instance lifecycle events
which will also allow you to monitor specific RDS events. Perform a cross-region snapshot copy into the
us-west-1 backup region once the SnapshotCreateComplete event occurred. Create an Amazon CloudWatch
Alert which will trigger an action to restore the Amazon RDS database snapshot in the backup region
when the CPU Utilization metric of the RDS instance in CloudWatch falls to 0% for more than 15
minutes is incorrect because you cannot create a snapshot using the Amazon RDS scheduled instance lifecycle
events.

The option that says: Use an ECS cluster to host a custom python script that calls the RDS API to create a
snapshot of the database, create a cross-region snapshot copy, and restore the database instance from a
snapshot in the backup region. Create a scheduled job using CloudWatch Events that triggers the
Lambda function to snapshot a database instance every hour. Set up an SNS topic that will receive
published messages about AWS-initiated RDS events from Trusted Advisor that will trigger the function
to create a cross-region snapshot copy. During failover, restore the database from a snapshot in the
backup region is incorrect because the AWS Trusted Advisor doesn't provide any information regarding AWS-
initiated RDS events. You should use the AWS Health API instead. Moreover, it is not necessary to provision an
ECS cluster just to host a custom python program when you can simply use Lambda functions instead.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
USER_CopySnapshot.html#USER_CopySnapshot.AcrossRegions

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
AuroraMySQL.Replication.CrossRegion.html

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 15: Correct

A government agency recently decided to modernize its network infrastructure using AWS. They are developing
a solution to store confidential files containing Personally Identifiable Information (PII) and other sensitive
financial records of its citizens. All data in the storage solution must be encrypted both at rest and in transit. In
addition, all of its data must also be replicated in two locations that are at least 450 miles apart from each other.

As a DevOps Engineer, what solution should you implement to meet these requirements?
Set up primary and secondary S3 buckets in two separate AWS Regions that are at least 450 miles apart.
Create a bucket policy to enforce access to the buckets only through HTTPS and enforce S3-Managed
Keys (SSE-S3) encryption on all objects uploaded to the bucket. Enable cross-region replication (CRR)
between the two buckets.

(Correct)

Set up primary and secondary S3 buckets in two separate Availability Zones that are at least 450 miles
apart. Create a bucket policy to enforce access to the buckets only through HTTPS and enforce AWS
KMS encryption on all objects uploaded to the bucket. Enable Transfer Acceleration between the two
buckets. Set up a KMS Customer Master Key (CMK) in the primary region for encrypting objects.

Set up primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 450
miles apart. Create an IAM role to enforce access to the buckets only through HTTPS. Set up a bucket
policy to enforce Amazon S3-Managed Keys (SSE-S3) encryption on all objects uploaded to the bucket.
Enable cross-region replication (CRR) between the two buckets.

Set up primary and secondary S3 buckets in two separate Availability Zones that are at least 450 miles
apart. Create a bucket policy to enforce access to the buckets only through HTTPS and enforce S3 SSE-C
encryption on all objects uploaded to the bucket. Enable cross-region replication (CRR) between the two
buckets.

Explanation

Availability Zones give customers the ability to operate production applications and databases that are more
highly available, fault-tolerant, and scalable than would be possible from a single data center. AWS maintains
multiple AZs around the world and more zones are added at a fast pace. Each AZ can be multiple data centers
(typically 3), and at full scale can be hundreds of thousands of servers. They are fully isolated partitions of the
AWS Global Infrastructure. With their own power infrastructure, the AZs are physically separated by a
meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.

All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro
fiber providing high-throughput, low-latency networking between AZs. The network performance is sufficient to
accomplish synchronous replication between AZs. AWS Availability Zones are also powerful tools for helping
build highly available applications. AZs make partitioning applications about as easy as it can be. If an
application is partitioned across AZs, companies are better isolated and protected from issues such as lightning
strikes, tornadoes, earthquakes and more.

By default, Amazon S3 allows both HTTP and HTTPS requests. To comply with the s3-bucket-ssl-requests-
only rule, confirm that your bucket policies explicitly deny access to HTTP requests. Bucket policies that allow
HTTPS requests without explicitly denying HTTP requests might not comply with the rule.

To determine HTTP or HTTPS requests in a bucket policy, use a condition that checks for the
key "aws:SecureTransport". When this key is true, this means that the request is sent through HTTPS. To be
sure to comply with the s3-bucket-ssl-requests-only rule, create a bucket policy that explicitly denies access
when the request meets the condition "aws:SecureTransport": "false". This policy explicitly denies access to
HTTP requests.
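As an illustration, the bucket policy and default encryption could be applied with boto3 roughly as follows (the bucket name is hypothetical):

import json
import boto3

s3 = boto3.client("s3")
bucket = "tutorialsdojo-citizen-records"   # hypothetical bucket name

# Deny every request that does not arrive over HTTPS.
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)

# Encrypt all new objects at rest with S3-Managed Keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Cross-region replication to the secondary bucket is then enabled with
# put_bucket_replication() after versioning is turned on for both buckets.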
In this scenario, you should use AWS Regions since AZs are located within only 100 km (60 miles) of each other, which cannot satisfy the 450-mile separation requirement. Within each AWS Region, S3 operates in a minimum of three AZs, each separated by miles to
protect against local events like fires, floods et cetera. Take note that you can't launch an AZ-based S3 bucket.

Hence, the correct answer is: Set up primary and secondary S3 buckets in two separate AWS Regions that
are at least 450 miles apart. Create a bucket policy to enforce access to the buckets only through HTTPS
and enforce S3-Managed Keys (SSE-S3) encryption on all objects uploaded to the bucket. Enable cross-
region replication (CRR) between the two buckets.

The option that says: Set up primary and secondary S3 buckets in two separate Availability Zones that are
at least 450 miles apart. Create a bucket policy to enforce access to the buckets only through HTTPS and
enforce S3 SSE-C encryption on all objects uploaded to the bucket. Enable cross-region replication (CRR)
between the two buckets is incorrect because you can't create Amazon S3 buckets in two separate Availability
Zones since this is a regional service.

The option that says: Set up primary and secondary Amazon S3 buckets in two separate AWS Regions that
are at least 450 miles apart. Create an IAM role to enforce access to the buckets only through HTTPS. Set
up a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) encryption on all objects uploaded to
the bucket. Enable cross-region replication (CRR) between the two buckets is incorrect because you have to
use the bucket policy to enforce access to the bucket using HTTPS only and not an IAM role.

The option that says: Set up primary and secondary S3 buckets in two separate Availability Zones that are
at least 450 miles apart. Create a bucket policy to enforce access to the buckets only through HTTPS and
enforce AWS KMS encryption on all objects uploaded to the bucket. Enable Transfer Acceleration
between the two buckets. Set up a KMS Customer Master Key (CMK) in the primary region for
encrypting objects is incorrect because you have to enable Cross-Region replication and not Transfer
Acceleration. This feature simply enables fast, easy, and secure transfers of files over long distances between
your client and an S3 bucket but not data replication.

References:

https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-ssl-requests-only.html

https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/

https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/amazon-s3/

Question 16: Correct

Your API servers are hosted on a cluster of EC2 instances deployed using Elastic Beanstalk. You need to deploy
a new version and also change the instance type of the instances from m4.large to m4.xlarge. Since this is a
production workload already operating at 80% capacity, you want to ensure that the workload on the current
EC2 instances does not increase while doing the deployment.

Which of the following deployment policies should you implement to incur the LEAST amount of cost?

Use Rolling with additional batch as deployment policy to maintain the capacity of the cluster during the
deployment.

(Correct)

Use Rolling as the deployment policy to maintain the capacity of the cluster during the deployment.

Use All at once as deployment policy to deploy the new version with complete capacity.

Use Immutable as deployment policy to deploy the new version with complete capacity.

Explanation

When a configuration change requires replacing instances, Elastic Beanstalk can perform the update in batches
to avoid downtime while the change is propagated. During a rolling update, capacity is only reduced by the size
of a single batch, which you can configure. With rolling deployments, Elastic Beanstalk splits the environment's
EC2 instances into batches and deploys the new version of the application to one batch at a time, leaving the rest
of the instances in the environment running the old version of the application. During a rolling deployment,
some instances serve requests with the old version of the application, while instances in completed batches serve
other requests with the new version.

To maintain full capacity during deployments, you can configure your environment to launch a new batch of
instances before taking any instances out of service. This option is known as a rolling deployment with an
additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances.

Refer to the table below for the characteristics of each deployment method as well as the amount of time it takes
to do the deployment, as seen in the Deploy Time column:

Hence, the correct answer in this scenario is: Use Rolling with additional batch as deployment policy to
maintain the capacity of the cluster during the deployment.
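A minimal sketch of switching an environment to this policy with boto3, assuming a hypothetical environment name and batch size, might look like this:

import boto3

eb = boto3.client("elasticbeanstalk")

# Switch the environment's deployment policy to Rolling with additional batch.
eb.update_environment(
    EnvironmentName="api-prod",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy",
         "Value": "RollingWithAdditionalBatch"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType",
         "Value": "Percentage"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSize",
         "Value": "25"},
    ],
)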

The option that says: Use Rolling as the deployment policy to maintain the capacity of the cluster during
the deployment is incorrect because this type will remove a batch of instances from the cluster while deploying
the new instances. Hence, this will increase the load on the remaining EC2 instances. A better solution to implement
in this scenario is to use Rolling with additional batch.

The option that says: Use All at once as deployment policy to deploy the new version with complete
capacity is incorrect because this type will cause a brief downtime during deployment. Hence, this is not ideal
for critical production applications.

The option that says: Use Immutable as deployment policy to deploy the new version with complete
capacity is incorrect. Although this type can maintain the compute capacity of the cluster, it entails a significant
amount of cost since a new batch of instances will be launched.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 17: Correct

A DevOps engineer has been tasked to implement a reliable solution to maintain all of their Windows and Linux
servers both in AWS and in their on-premises data center. There should be a system that allows them to easily update the operating systems of their servers and apply the core application patches with minimum management overhead. The patches must be consistent across all levels in order to meet the company’s security compliance requirements.

Which of the following is the MOST suitable solution that you should implement?

Store the login credentials of each Linux and Windows servers on the AWS Systems Manager Parameter
Store. Use Systems Manager Resource Groups to set up one group for your Linux servers and another one
for your Windows servers. Remotely login, run, and deploy the patch updates to all of your servers using
the credentials stored in the Systems Manager Parameter Store and through the use of the Systems
Manager Run Command.

Configure and install the AWS OpsWorks agent on all of your EC2 instances in your VPC and your on-
premises servers. Set up an OpsWorks stack with separate layers for each OS then fetch a recipe from the
Chef supermarket site (supermarket.chef.io) to automate the execution of the patch commands for each
layer during maintenance windows.

Configure and install AWS Systems Manager agent on all of the EC2 instances in your VPC as well as
your physical servers on-premises. Use the Systems Manager Patch Manager service and specify the
required Systems Manager Resource Groups for your hybrid architecture. Utilize a preconfigured patch
baseline and then run scheduled patch updates during maintenance windows.

(Correct)

Develop a custom python script to install the latest OS patches on the Linux servers. Set up a scheduled
job to automatically run this script using the cron scheduler on Linux servers. Enable Windows Update in
order to automatically patch Windows servers or set up a scheduled task using Windows Task Scheduler
to periodically run the python script.

Explanation

AWS Systems Manager Patch Manager automates the process of patching managed instances with both
security-related and other types of updates. You can use the Patch Manager to apply patches for both operating
systems and applications. (On Windows Server, application support is limited to updates for Microsoft
applications.) You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines
(VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat
Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux
2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all
missing patches.

Patch Manager uses patch baselines, which include rules for auto-approving patches within
days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis
by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches
individually or to large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines
themselves when you create or update them.

A resource group is a collection of AWS resources that are all in the same AWS Region and that match criteria
provided in a query. You build queries in the AWS Resource Groups (Resource Groups) console or pass them as
arguments to Resource Groups commands in the AWS CLI.

With AWS Resource Groups, you can create a custom console that organizes and consolidates information based
on criteria that you specify in tags. After you add resources to a group you created in Resource Groups, use
AWS Systems Manager tools such as Automation to simplify management tasks on your resource group. You
can also use the resource groups you create as the basis for viewing monitoring and configuration insights in
Systems Manager.
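For example, a patching run against a hypothetical group of tagged managed instances might be triggered like this with boto3:

import boto3

ssm = boto3.client("ssm")

# Run the predefined AWS-RunPatchBaseline document against every managed
# instance (EC2 or on-premises) carrying the hypothetical Patch Group tag.
# In practice this command is registered as a maintenance-window task so
# that it runs on the approved schedule.
ssm.send_command(
    Targets=[{"Key": "tag:Patch Group", "Values": ["hybrid-servers"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="10%",
    MaxErrors="5%",
)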

Hence, the correct answer is: Configure and install AWS Systems Manager agent on all of the EC2
instances in your VPC as well as your physical servers on-premises. Use the Systems Manager Patch
Manager service and specify the required Systems Manager Resource Groups for your hybrid
architecture. Utilize a preconfigured patch baseline and then run scheduled patch updates during
maintenance windows.

The option which uses an AWS OpsWorks agent is incorrect because the OpsWorks service is primarily used
for application deployment and not for applying application patches or upgrading the operating systems of your
servers.

The option which uses a custom python script is incorrect because this solution entails a high management
overhead since you need to develop a new script and maintain a number of cron schedulers in your Linux servers
and Windows Task Scheduler jobs on your Windows servers.

The option which uses the AWS Systems Manager Parameter Store is incorrect because this is not a suitable
service to use to handle the patching activities of your servers. You have to use AWS Systems Manager Patch
Manager instead.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-resource-groups.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 18: Incorrect

A retail company is planning to migrate its on-premises data center to AWS to scale its infrastructure and reach
more customers. Their multitier web applications will be moved to the cloud and will use a variety of AWS
services, IAM policies, and custom network configuration. The requirements can be changed anytime by their
Solutions Architect, which means there will be a lot of modifications to the AWS components being deployed.
CloudFormation will be used to automate, launch, and version-control the new cloud environment in AWS in
various regions.

Which of the following is the MOST recommended way to set up CloudFormation in this scenario?

Prepare a single master CloudFormation template containing all logical parts of the architecture. Store the
CloudFormation resource outputs to a DynamoDB table that will be used by the template. Upload and
manage the template in AWS CodeCommit.

Prepare multiple separate CloudFormation templates for each logical part of the architecture. Store the
CloudFormation resource outputs to AWS Systems Manager Parameter Store. Upload and manage the
templates in AWS CodeCommit.

Prepare a single master CloudFormation template containing all logical parts of the architecture. Upload
and maintain the template in AWS CodeCommit.

Prepare multiple separate CloudFormation templates for each logical part of the architecture. Use cross-
stack references to export resources from one AWS CloudFormation stack to another and maintain the
templates in AWS CodeCommit.

(Correct)

Explanation

When you organize your AWS resources based on lifecycle and ownership, you might want to build a stack that
uses resources that are in another stack. You can hard-code values or use input parameters to pass resource
names and IDs. However, these methods can make templates difficult to reuse or can increase the overhead to
get a stack running. Instead, use cross-stack references to export resources from a stack so that other stacks can
use them. Stacks can use the exported resources by calling them using the Fn::ImportValue function.

For example, you might have a network stack that includes a VPC, a security group, and a subnet. You want all
public web applications to use these resources. By exporting the resources, you allow all stacks with public web
applications to use them.

To export resources from one AWS CloudFormation stack to another, create a cross-stack reference. Cross-stack
references let you use a layered or service-oriented architecture. Instead of including all resources in a single
stack, you create related AWS resources in separate stacks; then, you can refer to required resource outputs from
other stacks. By restricting cross-stack references to outputs, you control the parts of a stack that are referenced
by other stacks.

For example, you might have a network stack with a VPC, a security group, and a subnet for
public web applications, and a separate public web application stack. To ensure that the web applications use the
security group and subnet from the network stack, you create a cross-stack reference that allows the web
application stack to reference resource outputs from the network stack. With a cross-stack reference, owners of
the web application stacks don't need to create or maintain networking rules or assets.

To create a cross-stack reference, use the Export output field to flag the value of a resource output for export.
Then, use the Fn::ImportValue intrinsic function to import the value.
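As a sketch, the exporting and importing template fragments might look like the following, shown here as JSON built from Python dictionaries with hypothetical resource names:

import json

# Network stack: export the security group ID under a well-known name.
network_outputs = {
    "Outputs": {
        "WebSecurityGroupId": {
            "Value": {"Ref": "WebSecurityGroup"},   # hypothetical resource
            "Export": {"Name": "NetworkStack-WebSecurityGroupId"},
        }
    }
}

# Application stack: import the exported value instead of hard-coding it.
app_resources = {
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
                "SecurityGroupIds": [
                    {"Fn::ImportValue": "NetworkStack-WebSecurityGroupId"}
                ],
            },
        }
    }
}

print(json.dumps(network_outputs, indent=2))
print(json.dumps(app_resources, indent=2))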

Hence, the correct answer is: Prepare multiple separate CloudFormation templates for each logical part of
the architecture. Use cross-stack references to export resources from one AWS CloudFormation stack to
another and maintain the templates in AWS CodeCommit.
The option that says: Prepare a single master CloudFormation template containing all logical parts of the
architecture. Store the CloudFormation resource outputs to a DynamoDB table that will be used by the
template. Upload and manage the template in AWS CodeCommit is incorrect because it is better to use
multiple separate CloudFormation templates to handle each logical part of the architecture, considering that you
are deploying multitier web applications that use a variety of AWS services, IAM policies, and custom network
configuration. This will provide better management of each part of your architecture. In addition, you can simply
use cross-stack references in CloudFormation instead of storing the resource outputs in a DynamoDB table.

The option that says: Prepare a single master CloudFormation template containing all logical parts of the
architecture. Upload and maintain the template in AWS CodeCommit is incorrect because, just as
mentioned above, it is better to use multiple separate CloudFormation templates to handle each logical part of
the architecture.

The option that says: Prepare multiple separate CloudFormation templates for each logical part of the
architecture. Store the CloudFormation resource outputs to AWS Systems Manager Parameter Store.
Upload and manage the templates in AWS CodeCommit is incorrect because it is better to handle each
logical part of the architecture on a separate CloudFormation template for easier management. Although you can
integrate AWS Systems Manager Parameter Store with CloudFormation, this service is more suitable to store
data such as passwords, database strings, and license codes as parameter values but not resource outputs. You
should create a cross-stack reference to export resources from one AWS CloudFormation stack to another.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#cross-stack

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-reference-resource/

AWS CloudFormation - Templates, Stacks, Change Sets:

https://youtu.be/9Xpuprxg7aY

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Check out this AWS CodeCommit Cheat Sheet:

https://tutorialsdojo.com/aws-codecommit/

Question 19: Correct

A leading pharmaceutical company has several web applications and GraphQL APIs hosted in a fleet of Amazon
EC2 instances. There is a private EC2 instance with no Internet access that needs to upload a new object to a
private Amazon S3 bucket. However, you are getting an HTTP 403: Access Denied error whenever you try to
upload the object from the instance.

What are the possible causes for this issue? (Select TWO.)

The bucket has a lifecycle policy that automatically moves data to the S3 Intelligent-Tiering storage class.

Versioning is enabled in the Amazon S3 bucket.

The Amazon S3 bucket policy is not properly configured.

(Correct)

The Amazon Virtual Private Cloud (Amazon VPC) endpoint policy is not properly configured.

(Correct)

The Cross-Region replication (CRR) feature, which is used to copy objects across Amazon S3 buckets in
different AWS Regions, is enabled in the bucket.

Explanation

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint
services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS
Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with
resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components
that allow communication between instances in your VPC and services without imposing availability risks or
bandwidth constraints on your network traffic.

There are two types of VPC endpoints: interface endpoints and gateway endpoints. You should create the type
of VPC endpoint required by the supported service.

If an IAM user has permission with s3:PutObject action on
an Amazon Simple Storage Service (Amazon S3) bucket and got an HTTP 403: Access Denied error when
uploading an object, then there might be an issue with the bucket or VPC endpoint policy.

If the IAM user has the correct user permissions to upload to the bucket, then check the following policies for
any settings that might be preventing the uploads:

- IAM user permission to s3:PutObjectAcl

- Conditions in the bucket policy

- Access allowed by an Amazon Virtual Private Cloud (Amazon VPC) endpoint policy
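For instance, an overly restrictive gateway endpoint policy is a common culprit. A minimal endpoint policy that
would still allow the private instance to upload to the bucket might look like the following sketch (the bucket
name is illustrative only):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadsToPrivateBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-private-bucket/*"
    }
  ]
}

If the endpoint policy omits s3:PutObject for the target bucket, or the bucket policy contains a Deny condition
that the request does not satisfy, the upload fails with HTTP 403 even though the IAM user's own permissions are
correct.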

Hence, the correct answers are:

- The Amazon S3 bucket policy is not properly configured

- The Amazon Virtual Private Cloud (Amazon VPC) endpoint policy is not properly configured

The following options are incorrect since these will not cause an HTTP 403: Access Denied error:

- The Cross-Region replication (CRR) feature, which is used to copy objects across Amazon S3 buckets in
different AWS Regions, is enabled in the bucket.

- The bucket has a lifecycle policy that automatically moves data to the S3 Intelligent-Tiering storage class.

- Versioning is enabled in the Amazon S3 bucket.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-upload-bucket/

https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 20: Incorrect

A company has several legacy systems which use both on-premises servers as well as EC2 instances in AWS.
The cluster nodes in AWS are configured to have a local IP address and a fixed hostname in order for the on-
premises servers to communicate with them. There is a requirement to automate the configuration of the cluster
which consists of 10 nodes to ensure high availability across three Availability Zones. There should also be a
corresponding elastic network interface in a specific subnet for each Availability Zone. Each cluster node's
local IP address and hostname must be static and should not change even if the instance reboots or gets
terminated.

Which of the following solutions below provides the LEAST amount of effort to automate this architecture?

Use a DynamoDB table to store the list of ENIs, hostnames, and subnets which will be used by the cluster.
Set up a single AWS CloudFormation template to manage an Auto Scaling group with a minimum and
maximum size of 10. Maintain the assignment of each local IP address and hostname of the instances by
using Systems Manager.

Develop a custom AWS CLI script to launch the EC2 instances, each with an attached ENI, a unique
name and placed in a specific AZ. Replace the missing EC2 instance by running the script via AWS
CloudShell in the event that one of the instances in the cluster got rebooted or terminated.

Launch an Elastic Beanstalk application with 10 EC2 instances, each has a corresponding ENI, hostname,
and AZ as input parameters. Use the Elastic Beanstalk health agent daemon process to configure the
hostname of the instance and attach a specific ENI based on the current environment.

Set up a CloudFormation child stack template which launches an Auto Scaling group consisting of just
one EC2 instance then provide a list of ENIs, hostnames, and the specific AZs as stack parameters. Set
both the MinSize and MaxSize parameters of the Auto Scaling group to 1. Add a user data script that will
attach an ENI to the instance once launched. Use CloudFormation nested stacks to provision a total of 10
nodes needed for the cluster, and deploy the stack using a master template.

(Correct)

Explanation

A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create,
update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack
are defined by the stack's AWS CloudFormation template. A stack, for instance, can include all the resources
required to run a web application, such as a web server, a database, and networking rules. If you no longer
require that web application, you can simply delete the stack, and all of its related resources are deleted.

AWS CloudFormation ensures all stack resources are created or deleted as appropriate. Because AWS
CloudFormation treats the stack resources as a single unit, they must all be created or deleted successfully for the
stack to be created or deleted. If a resource cannot be created, AWS CloudFormation rolls the stack back and
automatically deletes any resources that were created. If a resource cannot be deleted, any remaining resources
are retained until the stack can be successfully deleted.

Nested stacks are stacks created as part of other stacks. You create a nested stack within another stack by using
the AWS::CloudFormation::Stack resource.

As your infrastructure grows, common patterns can emerge in which you declare the same components in
multiple templates. You can separate out these common components and create dedicated templates for them.
Then use the AWS::CloudFormation::Stack resource in your template to reference other templates, creating nested stacks.

For example, assume
that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting
the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you
just use the resource to reference that template from within other templates.
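A minimal sketch of how a master template could nest the child template for one cluster node is shown below; the
template URL, hostname, ENI ID, and subnet values are placeholders for this example:

# master.yaml - nests one child stack per cluster node (10 in total)
Resources:
  ClusterNode01:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-templates/cluster-node.yaml
      Parameters:
        NodeHostname: node01.cluster.internal
        NetworkInterfaceId: eni-0abc1234def567890
        NodeSubnetId: subnet-0aaa1111bbb22222c
  # ClusterNode02 through ClusterNode10 would follow the same pattern,
  # each pointing to a different AZ-specific subnet, ENI, and hostname.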

By setting up both the MinSize and MaxSize parameters of the Auto Scaling group to 1, you can ensure that your
EC2 instance can recover again in the event of systems failure with exactly the same parameters defined in the
CloudFormation template. This is one of the Auto Scaling strategies which provides high availability with the
least possible cost. In this scenario, there is no mention about the scalability of the solution but only its
availability.
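Continuing the sketch, the child template (cluster-node.yaml in the example above) could look roughly like this,
with a one-instance Auto Scaling group and a user data script that attaches the pre-created ENI and sets the
hostname. The AMI ID, instance type, instance profile name, and parameter names are assumptions made purely for
illustration:

# cluster-node.yaml - child stack for a single cluster node
Parameters:
  NodeHostname:
    Type: String
  NetworkInterfaceId:
    Type: String
  NodeSubnetId:
    Type: String
Resources:
  NodeLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0          # placeholder AMI ID
      InstanceType: t3.medium
      IamInstanceProfile: cluster-node-profile  # must allow ec2:AttachNetworkInterface
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Attach the node's dedicated ENI so its local IP stays fixed,
          # then set the static hostname expected by the on-premises servers.
          aws ec2 attach-network-interface \
            --region ${AWS::Region} \
            --network-interface-id ${NetworkInterfaceId} \
            --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)" \
            --device-index 1
          hostnamectl set-hostname ${NodeHostname}
  NodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # MinSize = MaxSize = 1 means a failed node is automatically replaced
      # with identical settings instead of scaling out.
      MinSize: "1"
      MaxSize: "1"
      LaunchConfigurationName: !Ref NodeLaunchConfig
      VPCZoneIdentifier:
        - !Ref NodeSubnetId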

Hence, the correct answer is to: Set up a CloudFormation child stack template which launches an Auto
Scaling group consisting of just one EC2 instance then provide a list of ENIs, hostnames and the specific
AZs as stack parameters. Set both the MinSize and MaxSize parameters of the Auto Scaling group to 1.
Add a user data script that will attach an ENI to the instance once launched. Use CloudFormation nested
stacks to provision a total of 10 nodes needed for the cluster, and deploy the stack using a master
template.

The option that launches an Elastic Beanstalk application with 10 EC2 instances is incorrect because this
involves a lot of manual configuration, which will make it hard for you to replicate the same stack to another
AWS region. Moreover, the Elastic Beanstalk health agent is primarily used to monitor the health of your
instances and can't be used to configure the hostname of the instance nor attach a specific ENI.

The option that uses a DynamoDB table to store the list of ENIs, hostnames, and subnets is incorrect because
you cannot maintain the assignment of the individual local IP address and hostname for each instance using
Systems Manager.

The option that develops a custom AWS CLI script to launch the EC2 instances then run it via AWS
CloudShell is incorrect because this is not an automated solution. AWS CloudShell may provide an easy and
secure way of interacting with AWS resources via browser-based shell but executing a script on this is still a
manual process. A better way to meet this requirement is to use CloudFormation.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html

https://aws.amazon.com/cloudshell/

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Question 21: Correct

A recent production incident has caused a data breach in one of the company's flagship applications, which is
hosted in an Auto Scaling group of EC2 instances. In order to prevent this from happening again, a DevOps
engineer was tasked to implement a solution that will automatically terminate any instance in production which
was manually logged into via SSH. All of the EC2 instances that are being used by the application already have
an Amazon CloudWatch Logs agent installed.

Which of the following is the MOST automated solution that the DevOps engineer should implement?

Set up a CloudWatch Alarm that will be triggered when there is an SSH login event and configure it to
send a notification to an SQS queue. Launch a group of EC2 worker instances to consume the messages
from the SQS queue and terminate the detected EC2 instances.

Set up a CloudWatch Logs subscription with an AWS Lambda function which is configured to add
a FOR_DELETION tag to the Amazon EC2 instance that produced the SSH login event. Run another Lambda
function every day using the CloudWatch Events rule to terminate all EC2 instances with the custom tag
for deletion.

(Correct)

Set up a CloudWatch Alarm which will be triggered when an SSH login event occurred and configure it to
also send a notification to an SNS topic once the alarm is triggered. Instruct the Support and Operations
team to subscribe to the SNS topic and then manually terminate the detected EC2 instance as soon as
possible.

Set up and integrate a CloudWatch Logs subscription with AWS Step Functions to add a
special FOR_DELETION tag to the specific EC2 instance that had an SSH login event. Create a CloudWatch
Events rule to trigger a second AWS Lambda function every day at 12 PM that will terminate all of the
EC2 instances with this tag.

Explanation

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it
delivered to other services such as an Amazon Kinesis stream, Amazon Kinesis Data Firehose stream, or AWS
Lambda for custom processing, analysis, or loading to other systems. To begin subscribing to log events, create
the receiving source, such as a Kinesis stream, where the events will be delivered. A subscription filter defines
the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information
about where to send matching log events to.

CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions. You
can use a subscription filter with Kinesis, Lambda, or Kinesis Data Firehose.
Hence, the correct answer is: Set up a CloudWatch Logs subscription with an AWS Lambda function which
is configured to add a FOR_DELETION tag to the Amazon EC2 instance that produced the SSH login
event. Run another Lambda function every day using the CloudWatch Events rule to terminate all EC2
instances with the custom tag for deletion.
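To illustrate the first half of this setup, a minimal sketch of the tagging Lambda function is shown below. It
assumes the CloudWatch Logs agent writes each instance's auth log to a log stream named after its instance ID (a
common agent configuration) and that the subscription filter pattern matches the SSH login entries; both are
assumptions made for this example only.

import base64
import gzip
import json

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # CloudWatch Logs delivers matched log events as a gzipped,
    # base64-encoded payload under awslogs.data.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    # Assumption: the log stream name is the instance ID that produced the
    # SSH login event (e.g. the agent is configured with {instance_id}).
    instance_id = payload.get("logStream", "")
    if instance_id.startswith("i-"):
        # Mark the instance so the scheduled cleanup Lambda terminates it later.
        ec2.create_tags(
            Resources=[instance_id],
            Tags=[{"Key": "FOR_DELETION", "Value": "true"}],
        )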

The option that says: Set up and integrate a CloudWatch Logs subscription with AWS Step Functions to
add a special FOR_DELETION tag to the specific EC2 instance that had an SSH login event. Create a
CloudWatch Events rule to trigger a second AWS Lambda function every day at 12 PM that will
terminate all of the EC2 instances with this tag is incorrect because a CloudWatch Logs subscription cannot
be directly integrated with an AWS Step Functions application.

The option that says: Set up a CloudWatch Alarm which will be triggered when an SSH login event
occurred and configure it to also send a notification to an SNS topic once the alarm is triggered. Instruct
the Support and Operations team to subscribe to the SNS topic and then manually terminate the detected
EC2 instance as soon as possible is incorrect. Although you can configure your Amazon CloudWatch Alarms
to send a notification to SNS, this solution still involves a manual process. Remember that the scenario is asking
for an automated system for this scenario.

The option that says: Set up a CloudWatch alarm that will be triggered when there is an SSH login event
and configure it to send a notification to an SQS queue. Launch a group of EC2 worker instances to
consume the messages from the SQS queue and terminate the detected EC2 instances is incorrect because
using SQS as well as worker instances is unnecessary since you can simply use Lambda functions for
processing. In addition, Amazon CloudWatch Alarms can only send notifications to SNS and not SQS.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/
SubscriptionFilters.html#LambdaFunctionExample

https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-
linux-instances/

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-
Rule.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 22: Incorrect

A government agency has a VMware-based automated server build system on their on-premises network which
uses a virtualization software that allows them to create server images of their application. They instructed their
DevOps Engineer to set up a system that will allow them to test their server images using their on-premises
server pipeline to resemble the build and behavior on Amazon EC2. In this way, the agency can verify the
functionality of their application, detect incompatibility issues, and determine any prerequisites on the new
Amazon Linux 2 operating system that will be used in AWS.

Which of the following solutions should the DevOps Engineer implement to accomplish this task?
Download the latest AmazonLinux2.iso of the Amazon Linux 2 operating system and import it to your
on-premises network. Directly launch a new on-premises server based on the imported ISO, without any
virtual platform. Deploy the application, and commence testing.

Launch a new on-premises server with any distribution of Linux operating system such as CentOS,
Ubuntu or Fedora since these are technically the same. Deploy the application to the server for testing.

Launch an Amazon EC2 instance with the latest Amazon Linux OS in AWS. Use the AWS VM
Import/Export service to import the EC2 image, export it to a VMware ISO in an S3 bucket, and then
import the ISO to an on-premises server. Once done, commence the testing activity to verify the
application's functionalities.

(Correct)

Use an AWS OpsWorks deployment agent that can reformat the target server on-premises to use Amazon
Linux and then install the application afterward. Start the testing once the installation is complete.

Explanation

The VM Import/Export enables you to easily import virtual machine images from your existing environment to
Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to
leverage your existing investments in the virtual machines that you have built to meet your IT security,
configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2
as ready-to-use instances. You can also export imported instances back to your on-premises virtualization
infrastructure, allowing you to deploy workloads across your IT infrastructure.

To import your images, use the AWS CLI or other developer tools to import a virtual machine (VM) image from
your VMware environment. If you use the VMware vSphere virtualization platform, you can also use the AWS
Management Portal for vCenter to import your VM. As part of the import process, VM Import will convert your
VM into an Amazon EC2 AMI, which you can use to run Amazon EC2 instances. Once your VM has been
imported, you can take advantage of Amazon’s elasticity, scalability, and monitoring via offerings like Auto
Scaling, Elastic Load Balancing, and CloudWatch to support your imported images.

You can export previously imported EC2 instances using the Amazon EC2 API tools. You simply specify the
target instance, virtual machine file format and a destination S3 bucket, and VM Import/Export will
automatically export the instance to the S3 bucket. You can then download and launch the exported VM within
your on-premises virtualization infrastructure.
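As a rough sketch of that export step using the AWS SDK for Python (the instance ID and bucket name are
placeholders, and the instance must be eligible for export):

import boto3

ec2 = boto3.client("ec2")

# Export the instance to an OVA in S3 so it can be downloaded and run
# on the on-premises VMware environment.
response = ec2.create_instance_export_task(
    InstanceId="i-0123456789abcdef0",            # placeholder instance ID
    TargetEnvironment="vmware",
    ExportToS3Task={
        "DiskImageFormat": "VMDK",
        "ContainerFormat": "ova",
        "S3Bucket": "example-vm-export-bucket",  # placeholder bucket name
        "S3Prefix": "exports/",
    },
)
print(response["ExportTask"]["ExportTaskId"])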

You can import Windows and Linux VMs that use VMware ESX or Workstation, Microsoft Hyper-V, and Citrix
Xen virtualization formats. And you can export previously imported EC2 instances to VMware ESX, Microsoft
Hyper-V or Citrix Xen formats.

Hence, the correct answer is: Launch an Amazon EC2 instance with the latest Amazon Linux OS in AWS.
Use the AWS VM Import/Export service to import the EC2 image, export it to a VMware ISO in an S3
bucket, and then import the ISO to an on-premises server. Once done, commence the testing activity to
verify the application's functionalities.

The option that says: Download the latest ISO of the Amazon Linux 2 operating system and import it to
your on-premises network. Launch a new on-premises server based on the imported ISO, deploy the
application, and commence testing is incorrect because there is no way to directly download
the AmazonLinux2.iso for Amazon Linux 2. You have to use VM Import/Export service instead or, alternatively,
run the Amazon Linux 2 as a virtual machine in your on-premises data center. Again, you won't be able to
directly download the ISO image, but you can get the Amazon Linux 2 image for the specific virtualization
platform of your choice. If you are using VMware, you can download the ESX image *.ova, and for VirtualBox,
you'll get the *.vdi image file. What you should do first is to prepare the seed.iso boot image and then connect it
to the VM of your choice on the first boot.

The option that says: Use an AWS OpsWorks deployment agent that can reformat the target server on-
premises to use Amazon Linux and then install the application afterward. Start the testing once the
installation is complete is incorrect because OpsWorks is primarily used to deploy applications to both your on-
premises servers as well as your EC2 instances, which reside in your VPC. It doesn't have the capability to
reformat the target server to use an Amazon Linux OS or to switch to another type of OS at all.

The option that says: Launch a new on-premises server with any distribution of Linux operating system
such as CentOS, Ubuntu or Fedora since these are technically the same. Deploy the application to the
server for testing is incorrect because these Linux distributions are actually different from one another. There
could be some incompatibility issues between the different Linux operating systems, which is why you need to
test your application on a specific Amazon Linux 2 type only.

References:

https://docs.aws.amazon.com/vm-import/latest/userguide/vmexport_image.html

https://aws.amazon.com/ec2/vm-import/

https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-2-virtual-machine.html

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 23: Correct

A multi-tier enterprise web application is hosted in an Auto Scaling group of On-Demand Amazon EC2
instances across multiple Availability Zones behind an Application Load Balancer. For its database tier, Amazon
Aurora is used in storing thousands of transactions and user data. A DevOps Engineer was instructed to
implement a secure and manageable method in obtaining the database password credentials as well as providing
access to AWS resources during deployment.

Which among the following options is the MOST suitable setup that the Engineer should use?

Store the sensitive database credentials and access keys from AWS Systems Manager Parameter Store
as SecureString parameters. Configure your instances to retrieve the credentials and access keys from the
Parameter Store.
Using the AWS Systems Manager Parameter Store service, store the sensitive database credentials
as SecureString parameters and the access keys as plaintext parameters. Configure your instances to
retrieve the credentials and access keys from the Parameter Store.

Create an IAM role for your EC2 instances that allows access to other AWS services. Configure your
instances to use the IAM Role and store the database credentials in an encrypted configuration file in an
Amazon S3 bucket with SSE.

Store the sensitive database credentials from AWS Secrets Manager. Create an IAM role for your EC2
instances that allows access to other AWS services. Configure your instances to use the IAM Role and
retrieve the credentials from the AWS Secrets Manager.

(Correct)

Explanation

AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be
database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access
to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface
(CLI), or the Secrets Manager API and SDKs.

In the past, when you created a custom application that retrieves information from a database, you typically had
to embed the credentials (the secret) for accessing the database directly in the application. When the time came
to rotate the credentials, you had to do much more than just create new credentials. You had to invest time to
update the application to use the new credentials. Then you had to distribute the updated application. If you had
multiple applications with shared credentials and you missed updating one of them, the application would break.
Because of this risk, many customers have chosen not to regularly rotate their credentials, which effectively
substitutes one risk for another.

Secrets Manager enables you to replace hardcoded credentials in your code (including passwords), with an API
call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can't be
compromised by someone examining your code, because the secret simply isn't there. Also, you can configure
Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to
replace long-term secrets with short-term ones, which significantly reduces the risk of compromise.
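For example, an application running on an EC2 instance with an attached IAM role could fetch the database
credentials at startup roughly like this (the secret name and JSON keys are illustrative only):

import json

import boto3

# The instance profile (IAM role) supplies temporary credentials automatically,
# so no access keys need to be stored on the instance.
secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id="prod/erp/aurora"):  # placeholder secret name
    response = secrets.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

username, password = get_db_credentials()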

An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role
is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity
can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to
be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a
password or access keys associated with it. Instead, when you assume a role, it provides you with temporary
security credentials for your role session.

You can use roles to delegate access to users, applications, or services that don't normally have access to your
AWS resources. For example, you might want to grant users in your AWS account access to resources they don't
usually have, or grant users in one AWS account access to resources in another account. Or you might want to
allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be
difficult to rotate and where users can potentially extract them). Sometimes you want to give AWS access to
users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might
want to grant access to your account to third parties so that they can perform an audit on your resources.
Hence, the correct answer is: Store the sensitive database credentials from AWS Secrets Manager. Create
an IAM role for your EC2 instances that allows access to other AWS services. Configure your instances to
use the IAM Role and retrieve the credentials from the AWS Secrets Manager.

The option that says: Store the sensitive database credentials and access keys from AWS Systems Manager
Parameter Store as SecureString parameters. Configure your instances to retrieve the credentials and
access keys from the Parameter Store is incorrect because you should use an IAM role instead of directly
using access keys. AWS Systems Manager Parameter Store is primarily used to store passwords, database
strings, and license codes as parameter values.

The option that says: Using the AWS Systems Manager Parameter Store service, store the sensitive
database credentials as SecureString parameters and the access keys as plaintext parameters. Configure
your instances to retrieve the credentials and access keys from the Parameter Store is incorrect. Although it
is valid to store the database credentials to AWS Systems Manager Parameter Store, you still need to use an IAM
Role to allow EC2 instances to access your AWS resources and not by using access keys.

The option that says: Create an IAM role for your EC2 instances that allows access to other AWS services.
Configure your instances to use the IAM Role and store the database credentials in an encrypted
configuration file in an Amazon S3 bucket with SSE is incorrect because it is not appropriate to store sensitive
database credentials to S3 even if it is using Server Side Encryption. You should use either AWS Systems
Manager Parameter Store or AWS Secrets Manager to store the credentials.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-roles-with-ec2

https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

Check out this AWS Identity & Access Management (IAM) Cheat Sheet:

https://tutorialsdojo.com/aws-identity-and-access-management-iam/

Check out this AWS Secrets Manager Cheat Sheet:

https://tutorialsdojo.com/aws-secrets-manager/

AWS Security Services Overview - Secrets Manager, ACM, Macie:

https://www.youtube.com/watch?v=ogVamzF2Dzk

Question 24: Correct

A global IT consulting company has a multi-tier enterprise resource planning application which is hosted in
AWS. It runs on an Auto Scaling group of EC2 instances across multiple Availability Zones behind an
Application Load Balancer. For its database tier, all of its data is persisted in an Amazon RDS MySQL database
running in a Multi-AZ deployments configuration. All of the static content of the application is durably stored in
Amazon S3. The company is already using CloudFormation templates for managing and deploying their AWS
resources. A few weeks ago, the company failed an IT audit due to their application’s long recovery time and
excessive data loss in a simulated disaster recovery scenario drill.

How should the DevOps Engineer implement a multi-region disaster recovery plan which has the LOWEST
recovery time and the LEAST data loss?

Launch the application stack in another AWS region using the CloudFormation template. Enable cross-
region replication between the original Amazon S3 bucket and a new S3 bucket. Set up an Application
Load Balancer which will distribute the traffic to the other AWS region in the event of an outage.
Maintain the Multi-AZ deployments configuration of the Amazon RDS database which can ensure the
availability of your data even in the event of a regional AWS outage in the primary site.

Launch the application stack in another AWS region using the CloudFormation template. Create another
Amazon RDS standby DB instance in the other region then enable cross-region replication between the
original Amazon S3 bucket and a new S3 bucket. The Standby DB instance will automatically be the
master DB in the event of an application fail over. Increase the capacity of the Auto Scaling group using
the CloudFormation stack template to improve the scalability of the application.

Launch the application stack in another AWS region using the CloudFormation template. Take a daily
Amazon RDS cross-region snapshot to the other region using a scheduled job running in Lambda and
CloudWatch Events. Enable cross-region replication between the original S3 bucket and Amazon Glacier.
In the event of application outages, launch a new application stack in the other AWS region and restore
the database from the most recent snapshot.

Launch the application stack in another AWS region using the CloudFormation template. Create an
Amazon RDS Read Replica in the other region then enable cross-region replication between the original
Amazon S3 bucket and a new S3 bucket. Promote the RDS Read Replica as the master in the event of
application failover. Increase the capacity of the Auto Scaling group using the CloudFormation stack
template to improve the scalability of the application.

(Correct)

Explanation

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This
feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-
heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-
volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are
available in Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle as well as Amazon Aurora.

Read replicas in Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle provide a complementary
availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source
DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery
strategy, which is not available with Multi-AZ Deployments since this is only applicable in a single AWS
Region. This functionality complements the synchronous replication, automatic failure detection, and failover
provided with Multi-AZ deployments.
When you copy a snapshot to an AWS Region that is different from the source snapshot's AWS Region, the first
copy is a full snapshot copy, even if you copy an incremental snapshot. A full snapshot copy contains all of the
data and metadata required to restore the DB instance. After the first snapshot copy, you can copy incremental
snapshots of the same DB instance to the same destination region within the same AWS account.

Depending on the AWS Regions involved and the amount of data to be copied, a cross-region snapshot copy
can take hours to complete. In some cases, there might be a large number of cross-region snapshot copy requests
from a given source AWS Region. In these cases, Amazon RDS might put new cross-region copy requests from
that source AWS Region into a queue until some in-progress copies complete. No progress information is
displayed about copy requests while they are in the queue. Progress information is displayed when the copy
starts.

This means that a cross-region snapshot cannot match the low RPO of a Read Replica since the snapshot takes
significant time to complete. Although this is better than Multi-AZ deployments since you can replicate your
database across AWS Regions, using a Read Replica is still the best choice for achieving a low RTO and RPO for
disaster recovery.
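To make the mechanics concrete, here is a hedged sketch of the read replica approach using the AWS SDK for
Python. It uses the standard Amazon RDS read replica APIs for illustration (the identifiers, Regions, and
instance class are placeholders); an Amazon Aurora cluster would use the equivalent cluster-level features.

import boto3

# RDS client in the disaster recovery Region.
dr_rds = boto3.client("rds", region_name="us-west-2")

# Create a cross-Region read replica; the source is referenced by its ARN.
dr_rds.create_db_instance_read_replica(
    DBInstanceIdentifier="erp-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:erp-db",
    DBInstanceClass="db.r5.large",
)

# During a regional failover, promote the replica to a standalone,
# writable DB instance and point the application stack at it.
dr_rds.promote_read_replica(DBInstanceIdentifier="erp-db-replica")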

Hence, the correct answer is: Launch the application stack in another AWS region using the
CloudFormation template. Create an Amazon RDS Read Replica in the other region then enable cross-
region replication between the original Amazon S3 bucket and a new S3 bucket. Promote the RDS Read
Replica as the master in the event of application failover. Increase the capacity of the Auto Scaling group
using the CloudFormation stack template to improve the scalability of the application.

The option that says: Launch the application stack in another AWS region using the CloudFormation
template. Take a daily Amazon RDS cross-region snapshot to the other region using a scheduled job
running in Lambda and CloudWatch Events. Enable cross-region replication between the original S3
bucket and Amazon Glacier. In the event of application outages, launch a new application stack in the
other AWS region and restore the database from the most recent snapshot is incorrect. Although this
solution may work, the use of cross-region snapshot doesn't provide the LOWEST recovery time and the LEAST
data loss because a snapshot can take several hours to complete. The best solution to use here is to launch a Read
Replica to another AWS Region, which asynchronously replicates the data from the source database to the other
AWS Region.

The option that says: Launch the application stack in another AWS region using the CloudFormation
template. Create another Amazon RDS Standby DB instance in the other region then enable cross-region
replication between the original Amazon S3 bucket and a new S3 bucket. The Standby DB instance will
automatically be the master DB in the event of application failover. Increase the capacity of the Auto
Scaling group using the CloudFormation stack template to improve the scalability of the application is
incorrect because the scope of the Multi-AZ deployments is bound to a single AWS Region only. You cannot
host your standby DB instance to another AWS Region. You should either use Read Replicas or cross-region
snapshots instead.

The option that says: Launch the application stack in another AWS region using the CloudFormation
template. Enable cross-region replication between the original Amazon S3 bucket and a new S3 bucket.
Set up an Application Load Balancer which will distribute the traffic to the other AWS region in the event
of an outage. Maintain the Multi-AZ deployments configuration of the Amazon RDS database which can
ensure the availability of your data even in the event of a regional AWS outage in the primary site is
incorrect because an ELB can't distribute traffic to different AWS Regions, unlike Route 53. Moreover, a Multi-
AZ deployments configuration can only handle an outage of one or more Availability Zones and not the entire
AWS Region.

References:

https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
USER_CopySnapshot.html#USER_CopySnapshot.AcrossRegions

https://aws.amazon.com/rds/details/read-replicas/

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 25: Incorrect

A company has recently developed a serverless application that is composed of several Lambda functions and a
DynamoDB database. For the CI/CD process, you have built a continuous deployment pipeline using AWS
CodeCommit, AWS CodeBuild, and AWS CodePipeline. You have also configured the source, build, test, and
deployment stages of the pipeline of the application. However, upon review, the Lead DevOps engineer asked
you to improve the current pipeline configuration that you've made to mitigate the risk of failed deployments.
The deployment stage should release the new application version to a small subset of users only for verification
before fully releasing the change to all users.

The pipeline's deployment stage must be modified to meet this requirement. Which of the following is the
MOST suitable setup that you should implement?

Develop a custom script that uses AWS CLI to update the Lambda functions. Integrate the script in
CodeBuild that will automatically publish the new version of the application by switching to the
production alias.

Define and publish the new version on the serverless application using CloudFormation. Deploy the new
version of the AWS Lambda functions with AWS CodeDeploy using
the CodeDeployDefault.LambdaCanary10Percent5Minutes predefined deployment configuration.

(Correct)

Use AWS CloudFormation to define and publish the new version of the serverless application. Deploy the
new version of the AWS Lambda functions with AWS CodeDeploy using
the CodeDeployDefault.LambdaAllAtOnce predefined deployment configuration.

Publish a new version on the serverless application using CloudFormation. Set up a manual approval
action in CodePipeline in order to verify and approve the version that will be deployed. Once the change
has been verified, invoke the Lambda functions to use the production alias using CodePipeline.

Explanation
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-
premises instances, serverless Lambda functions, or Amazon ECS services. When you deploy to an AWS
Lambda compute platform, the deployment configuration specifies the way traffic is shifted to the new Lambda
function versions in your application.

There are three ways traffic can shift during a deployment:

Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the
percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in
minutes, before the remaining traffic is shifted in the second increment.

Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You
can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the
number of minutes between each increment.

All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version all
at once.

AWS CodeDeploy provides a set of predefined deployment configurations for AWS Lambda, such as
CodeDeployDefault.LambdaCanary10Percent5Minutes and CodeDeployDefault.LambdaAllAtOnce.

Hence, the correct answer is: Define and publish the new version on the serverless application using
CloudFormation. Deploy the new version of the AWS Lambda functions with AWS CodeDeploy using
the CodeDeployDefault.LambdaCanary10Percent5Minutes predefined deployment configuration.
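One way to express this in a template is with AWS SAM (an extension of CloudFormation). The sketch below assumes
a SAM-based template; the function name, handler, runtime, and code path are placeholders. SAM's
Canary10Percent5Minutes setting corresponds to the CodeDeployDefault.LambdaCanary10Percent5Minutes configuration.

# template.yaml (AWS SAM) - gradual traffic shifting for a Lambda function
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      # Publish a new version on each deploy and shift the alias gradually.
      AutoPublishAlias: live
      DeploymentPreference:
        # 10% of traffic for 5 minutes, then the remaining 90%.
        Type: Canary10Percent5Minutes

Behind the scenes, SAM provisions the CodeDeploy resources that perform the traffic shift, and the deployment can
be rolled back automatically if a configured alarm or validation hook fails.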

The option that says: Publish a new version on the serverless application using CloudFormation. Set up a
manual approval action in CodePipeline in order to verify and approve the version that will be deployed.
Once the change has been verified, invoke the Lambda functions to use the production alias using
CodePipeline is incorrect. Although this setup allows you to verify the new version of the serverless application
before the production deployment, it still fails to meet the requirement of deploying the change to a small subset
of users only. This deployment configuration will release the new version to all users.

The option that
says: Develop a custom script that uses AWS CLI to update the Lambda functions. Integrate the script in
CodeBuild that will automatically publish the new version of the application by switching to the
production alias is incorrect because developing a custom script to update the Lambda functions is not
necessary. Moreover, AWS CodeBuild is not capable of publishing new versions of your Lambda functions. You
should use AWS CodeDeploy instead.

The option that says: Use AWS CloudFormation to define and publish the new version on the serverless
application. Deploy the new version of the AWS Lambda functions with AWS CodeDeploy using
the CodeDeployDefault.LambdaAllAtOnce predefined deployment configuration is incorrect because this
configuration will deploy the changes to all users immediately. Keep in mind that the scenario requires that the
change should only be deployed to a small subset of users. You have to use a Canary deployment instead, such as
the CodeDeployDefault.LambdaCanary10Percent5Minutes configuration.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps-lambda.html
Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

AWS CodeDeploy - Primary Components:

https://youtu.be/ClWBJT6k20Q

Question 26: Correct

A local e-commerce website is gaining an unprecedented number of users in the company's home country to the
point that people from other countries are requesting access to their site. In order to support the international
growth of their flagship application, the company needs to add new features in their website to support shipping,
value-added tax (VAT) calculations and other specific requirements for each new country. At the same time,
they also have a set of new features that need to be developed for their e-commerce site, which is only specific to
the existing local users. Each feature may take about 3 months to complete all the required planning,
development, and testing stages.

As the DevOps Engineer, how should you properly manage the application
feature deployments in the MOST efficient manner for this scenario?

Create a Git tag in CodeCommit to mark each commit with a label for each corresponding application
feature.

In AWS CodeCommit, instruct the developers to commit the new code for the new features in the master
branch. Delay all other application deployment related to international expansion until all features are
ready for their local users. Implement feature flags in your application to enable or disable specific
features.

Create a new repository for each new application feature in AWS CodeCommit and then commit all of the
code changes to the respective repositories.

In the application code repository in AWS CodeCommit, create a feature branch for each application
feature that will be added. Once the feature is tested, merge the commits to the master or release branch.

(Correct)

Explanation

In Git, branches are simply pointers or references to a commit. In development, they're a convenient way to
organize your work. You can use branches to separate work on a new or different version of files without
impacting work in other branches. You can use branches to develop new features, store a specific version of your
project from a particular commit, and more.

In CodeCommit, you can change the default branch for your repository. This default branch is the one used as
the base or default branch in local repos when users clone the repository. You can also create and delete
branches and view details about a branch. You can quickly compare the differences between a branch and the
default branch (or any two branches). To view the history of branches and merges in your repository, you can
use the Commit Visualizer.

One of Git's main advantages is its branching capability, which provides an isolated
environment for every change to your codebase that doesn't affect your master or release branch. Git branches
are cheap and easy to merge, which is quite the opposite with centralized version control systems. When you
want to start working on a new feature or application functionality, you can easily create a new branch to ensure
that the master branch always contains a production-quality code. This also lets you represent development work
at the same granularity as your agile backlog.
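A typical workflow against the CodeCommit repository might look like the following (the branch names are only
examples):

# Start an isolated branch for the new VAT-calculation feature
git checkout -b feature/vat-calculation

# ...commit the feature work, then push the branch to CodeCommit
git push -u origin feature/vat-calculation

# Once the feature passes testing, merge it into the release branch
git checkout master
git merge --no-ff feature/vat-calculation
git push origin master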

Hence, the correct answer is: In the application code repository in AWS CodeCommit, create a feature
branch for each application feature that will be added. Once the feature is tested, merge the commits to
the master or release branch.

The option that says: In AWS CodeCommit, instruct the developers to commit the new code for the new
features in the master branch. Delay all other application deployment related to international expansion
until all features are ready for their local users is incorrect because delaying the application deployments is
an inefficient process that could even stagnate the company's expansion. Although you can commit the new code
for the features in the master branch and enable/disable them using feature flags, this development process is still
inefficient and actually riskier since all of the changes are in a single branch. If there is a development issue in
one of the features, your production code will be directly affected. A better solution for this is to create a
separate feature branch for each new feature.

The option that says: Create a Git tag in CodeCommit to mark each commit with a label for each
corresponding application feature is incorrect because these are just mere labels for your references. This
process may affect the master branch, which always contains the production-quality code. If there is a bug in one
of the features, you will have to laboriously debug and trace the code changes on a single branch instead of a
specific feature branch.

The option that says: Create a new repository for each new application feature in AWS CodeCommit and
then commit all of the code changes to the respective repositories is incorrect because this is not appropriate
at all since you can simply create new branches for your features, and not new repositories. A single application
should be hosted in one repository, which can be branched off for your feature development, testing, and
application release.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/branches.html

https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html

Check out this AWS CodeCommit Cheat Sheet:

https://tutorialsdojo.com/aws-codecommit/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 27: Correct

A company is using AWS OpsWorks to manage the deployments of its Auto Scaling group of EC2 instances as
well as other resources. The Systems Support Team discovered that some EC2 instances in the production
environment have been restarting without reason. The DevOps team has found that those instances were
restarted by OpsWorks, based upon the information shown on the AWS CloudTrail logs. The Lead DevOps
Engineer now wants to receive an automated email notification to improve system monitoring. The DevOps
team should receive an email whenever AWS OpsWorks restarts an instance that is considered unhealthy or
unable to communicate with the service endpoint.

How can the DevOps team satisfy this requirement?

Set up an Amazon SES topic and create a subscription that will notify the DevOps team via email.
Associate this SES topic with a new Amazon CloudWatch Events rule, which has the following configuration:
{
  "source": [
    "aws.opsworks"
  ],
  "detail": {
    "initiated_by": [
      "auto-scaling"
    ]
  }
}

Set up an Amazon SNS topic and create a subscription that will notify the DevOps team via email.
Associate this SNS topic with a new Amazon CloudWatch Events rule, which has the following configuration:
{
  "source": [
    "aws.opsworks"
  ],
  "detail": {
    "initiated_by": [
      "auto-termination"
    ]
  }
}

Set up an Amazon SNS topic and create a subscription that will notify the DevOps team via email.
Associate this SNS topic with a new Amazon CloudWatch Events rule, which has the following configuration:
{
  "source": [
    "aws.opsworks"
  ],
  "detail": {
    "initiated_by": [
      "auto-healing"
    ]
  }
}

(Correct)

Set up an Amazon SNS topic and create a subscription that will notify the DevOps team via email.
Associate this SNS topic with a new Amazon CloudWatch Events rule, which has the following configuration:
{
  "source": [
    "aws.opsworks"
  ],
  "detail": {
    "initiated_by": [
      "user"
    ]
  }
}

Explanation

You can configure rules in Amazon CloudWatch Events to alert you to changes in AWS OpsWorks Stacks
resources, and direct CloudWatch Events to take actions based on event contents.

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon
Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route
them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as
they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary,
by sending messages to respond to the environment, activating functions, making changes, and capturing state
information.

The initiated_by field is only populated when the instance is in the requested, terminating,
or stopping states. The initiated_by field can contain one of the following values.

- user - A user requested the instance state change by using either the API or AWS Management Console.

- auto-scaling - The AWS OpsWorks Stacks automatic scaling feature initiated the instance state change.

- auto-healing - The AWS OpsWorks Stacks automatic healing feature initiated the instance state change.

Hence, the correct answer is: Set up an Amazon SNS topic and create a subscription that will notify the
DevOps team via email. Associate this SNS topic with a new Amazon CloudWatch Events rule, which has the
following configuration:
{
  "source": [
    "aws.opsworks"
  ],
  "detail": {
    "initiated_by": [
      "auto-healing"
    ]
  }
}
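For reference, a minimal sketch of wiring this event pattern to the SNS topic with the AWS SDK for Python is
shown below; the rule name and topic ARN are placeholders.

import json

import boto3

events = boto3.client("events")
sns_topic_arn = "arn:aws:sns:us-east-1:123456789012:opsworks-auto-healing"  # placeholder

# Match OpsWorks instance state changes initiated by auto-healing.
events.put_rule(
    Name="opsworks-auto-healing-restarts",
    EventPattern=json.dumps({
        "source": ["aws.opsworks"],
        "detail": {"initiated_by": ["auto-healing"]},
    }),
    State="ENABLED",
)

# Deliver matching events to the SNS topic that the DevOps team subscribes to.
events.put_targets(
    Rule="opsworks-auto-healing-restarts",
    Targets=[{"Id": "notify-devops-team", "Arn": sns_topic_arn}],
)

Note that the SNS topic's access policy must also allow events.amazonaws.com to publish to it.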

The option that uses a CloudWatch rule configuration with a value of "user" in its "detail.initiated_by"
field is incorrect because with this configuration, the notification will only be triggered in the event that the user
requested the instance state to change by using either the API or AWS Management Console.

The option that uses a CloudWatch rule configuration with a value of "auto-scaling" in its
"detail.initiated_by" field is incorrect because the notification will only be triggered in the event that AWS
OpsWorks Stacks automatic scaling feature initiated the instance state change. Remember that the scenario says
that the DevOps team should receive an email whenever AWS OpsWorks restarts an instance that is considered
unhealthy or unable to communicate with the service endpoint. Hence, this scenario is more related to auto-
healing rather than auto-scaling. Moreover, you should use SNS instead of SES.

The option that uses a CloudWatch rule configuration with a value of "auto-termination" in its
"detail.initiated_by" field is incorrect because "auto-termination" is not a valid option. You should use set a
value of "auto-healing" instead.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/opsworks-unexpected-start-instance/

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#opsworks_event_types

https://aws.amazon.com/blogs/mt/how-to-set-up-aws-opsworks-stacks-auto-healing-notifications-in-amazon-
cloudwatch-events/

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 28: Correct

Due to the growth of its regional e-commerce website, the company has decided to expand its operations
globally in the coming months ahead. The REST API web services of the app are currently running in an Auto
Scaling group of EC2 instances across multiple Availability Zones behind an Application Load Balancer. For its
database tier, the website is using a single Amazon Aurora MySQL database instance in the AWS Region where
the company is based. The company wants to consolidate and store the data of their offerings into a single data
source for their product catalog across all regions. For data privacy compliance, they need to ensure that the
personal information of their users as well as their purchases and financial data are kept in their respective
region.

Which of the following options can meet the above requirements and entails the LEAST amount of
change to the application?

Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch
additional local Amazon Aurora instances in each AWS Region for storing the personal information and
financial data of their customers.

(Correct)

Set up a new Amazon Redshift database to store the product catalog. Launch a new set of Amazon
DynamoDB tables to store the personal information and financial data of their customers.

Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch a
new DynamoDB global table for storing the personal information and financial data of their customers.

Set up a DynamoDB global table to store the product catalog data of the e-commerce website. Use
regional DynamoDB tables for storing the personal information and financial data of their customers.

Explanation

An Aurora global database consists of one primary AWS Region where your data is mastered and one read-only,
secondary AWS Region. Aurora replicates data to the secondary AWS Region with typical latency of under a
second. You issue write operations directly to the primary DB instance in the primary AWS Region. An Aurora
global database uses dedicated infrastructure to replicate your data, leaving database resources available entirely
to serve application workloads. Applications with a worldwide footprint can use reader instances in the
secondary AWS Region for low-latency reads. In the unlikely event that your database becomes degraded or isolated in an AWS Region, you can promote the secondary AWS Region to take full read-write workloads in under a
minute. The Aurora cluster in the primary AWS Region where your data is mastered performs both read and
write operations. The cluster in the secondary region enables low-latency reads. You can scale up the secondary
cluster independently by adding one or more DB instances (Aurora Replicas) to serve read-only workloads. For
disaster recovery, you can remove and promote the secondary cluster to allow full read and write operations.

Only the primary cluster performs write operations. Clients that perform write operations connect to the DB
cluster endpoint of the primary cluster.
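As a minimal, hedged sketch (the cluster and instance identifiers are assumptions), an additional Aurora Replica can be added to an existing Aurora MySQL cluster with boto3 by creating a DB instance that references the cluster:

import boto3

rds = boto3.client("rds")

# Adding a DB instance to an existing Aurora cluster creates an Aurora Replica
# that can serve read-only traffic, e.g. for the shared product catalog.
rds.create_db_instance(
    DBInstanceIdentifier="product-catalog-replica-1",  # hypothetical name
    DBClusterIdentifier="product-catalog-cluster",     # existing Aurora MySQL cluster
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
)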

Hence, the correct answer is: Set up multiple read replicas in your Amazon Aurora cluster to store the
product catalog data. Launch additional local Amazon Aurora instances in each AWS Region for
storing the personal information and financial data of their customers.

The option that says: Set up a new Amazon Redshift database to store the product catalog. Launch a new
set of Amazon DynamoDB tables to store the personal information and financial data of their customers is
incorrect because this solution entails a significant overhead of refactoring your application to use Redshift
instead of Aurora. Moreover, Redshift is primarily used as a data warehouse solution and is not suitable for
OLTP or e-commerce websites.

The option that says: Set up a DynamoDB global table to store the product catalog data of the e-commerce
website. Use regional DynamoDB tables for storing the personal information and financial data of their
customers is incorrect. Although the use of Global and Regional DynamoDB is acceptable, this solution still
entails a lot of changes to the application. There is no assurance that the application can work with a NoSQL
database, and even so, you have to implement a series of code changes in order for this solution to work.

The option that says: Set up multiple read replicas in your Amazon Aurora cluster to store the product
catalog data. Launch a new DynamoDB global table for storing the personal information and financial
data of their customers is incorrect. Although the use of Read Replicas is appropriate, this solution still
requires you to do a lot of code changes since you will use a different database to store your regional data.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-
database.advantages

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
AuroraMySQL.Replication.CrossRegion.html

Check out this Amazon Aurora Cheat Sheet:

https://tutorialsdojo.com/amazon-aurora/

Question 29: Correct

A software development company is using AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS
CodePipeline for its CI/CD process. To further improve their systems, they need to implement a solution that
automatically detects and reacts to changes in the state of their deployments in AWS CodeDeploy. Any changes
must be rolled back automatically if the deployment process fails, and a notification must be sent to the DevOps
Team's Slack channel for easy monitoring.Which of the following is the MOST suitable configuration that you
should implement to satisfy this requirement?
Monitor the API calls in the CodeDeploy project using AWS CloudTrail. Send a message to the DevOps
Team's Slack Channel when the PutLifecycleEventHookExecutionStatus API call has been detected.
Roll back the changes by using the AWS CLI.

Set up a CloudWatch Events rule to monitor AWS CodeDeploy operations with a Lambda function as a
target. Configure the rule to send out a message to the DevOps Team's Slack Channel in the event that the
deployment fails. Configure AWS CodeDeploy to use the Roll back when a deployment fails setting.

(Correct)

Set up a CloudWatch Alarm that tracks the CloudWatch metrics of the CodeDeploy project. Configure the
CloudWatch Alarm to automatically send out a message to the DevOps Team's Slack Channel when the
deployment fails. Configure AWS CodeDeploy to use the Roll back when alarm thresholds are
met setting.

Configure a CodeDeploy agent to send a notification to the DevOps Team's Slack Channel when the
deployment fails. Configure AWS CodeDeploy to automatically roll back whenever the deployment is not
successful.

Explanation

You can monitor CodeDeploy deployments using the following CloudWatch tools: Amazon CloudWatch
Events, CloudWatch alarms, and Amazon CloudWatch Logs.

Reviewing the logs created by the CodeDeploy agent and deployments can help you troubleshoot the causes of
deployment failures. As an alternative to reviewing CodeDeploy logs on one instance at a time, you can use
CloudWatch Logs to monitor all logs in a central location.

You can use Amazon CloudWatch Events to detect and react to changes in the state of an instance or a
deployment (an "event") in your CodeDeploy operations. Then, based on rules you create, CloudWatch Events
will invoke one or more target actions when a deployment or instance enters the state you specify in a rule.
Depending on the type of state change, you might want to send notifications, capture state information, take
corrective action, initiate events, or take other actions.

You can select the following types of targets when using CloudWatch Events as part of your CodeDeploy
operations:

- AWS Lambda functions

- Kinesis streams

- Amazon SQS queues

- Built-in targets (CloudWatch alarm actions)

- Amazon SNS topics

The following are some use cases:

- Use a Lambda function to pass a notification to a Slack channel whenever deployments fail.
- Push data about deployments or instances to a Kinesis stream to support comprehensive, real-time status
monitoring.

- Use CloudWatch alarm actions to automatically stop, terminate, reboot, or recover Amazon EC2 instances
when a deployment or instance event you specify occurs.
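To illustrate the Slack use case, here is a hedged boto3 sketch; the rule name and Lambda ARN are assumptions, and the target function (which posts the event details to Slack) would also need a resource-based permission that allows events.amazonaws.com to invoke it:

import json
import boto3

events = boto3.client("events")

# Match CodeDeploy deployment state-change events that report a failure.
pattern = {
    "source": ["aws.codedeploy"],
    "detail-type": ["CodeDeploy Deployment State-change Notification"],
    "detail": {"state": ["FAILURE"]},
}
events.put_rule(Name="codedeploy-failure-to-slack", EventPattern=json.dumps(pattern))

events.put_targets(
    Rule="codedeploy-failure-to-slack",
    Targets=[{
        "Id": "notify-slack",
        # Hypothetical Lambda function that forwards the failure to the Slack channel.
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify-slack",
    }],
)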

Hence, the correct answer is: Set up a CloudWatch Events rule to monitor AWS CodeDeploy operations
with a Lambda function as a target. Configure the rule to send out a message to the DevOps Team's Slack
Channel in the event that the deployment fails. Configure AWS CodeDeploy to use the Roll back when a
deployment fails setting.

The option that says: Set up a CloudWatch Alarm that tracks the CloudWatch metrics of the CodeDeploy
project. Configure the CloudWatch Alarm to automatically send out a message to the DevOps Team's
Slack Channel when the deployment fails. Configure AWS CodeDeploy to use the Roll back when alarm
thresholds are met setting is incorrect because CloudWatch Alarm can't directly send a message to a Slack
Channel. You have to use a CloudWatch Events with an associated Lambda function to notify the DevOps Team
via Slack.

The option that says: Configure a CodeDeploy agent to send a notification to the DevOps Team's Slack
Channel when the deployment fails. Configure AWS CodeDeploy to automatically roll back whenever the
deployment is not successful is incorrect because a CodeDeploy agent is primarily used for deployment and not
for sending custom messages to non-AWS resources such as a Slack Channel.

The option that says: Monitor the API calls in the CodeDeploy project using AWS CloudTrail. Send a
message to the DevOps Team's Slack Channel when the PutLifecycleEventHookExecutionStatus API call has
been detected. Roll back the changes by using the AWS CLI is incorrect because this API simply sets the
result of a Lambda validation function. This is not a suitable solution since invoking various API calls is not
necessary at all. You simply have to integrate a CloudWatch Events rule with an associated Lambda function to
your CodeDeploy project in order to meet the specified requirement.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch.html

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

AWS CodeDeploy - Primary Components:

https://youtu.be/ClWBJT6k20Q

Question 30: Correct

A software development company has a hybrid cloud environment wherein its on-premises data centers from multiple sites are connected to its VPC using AWS Site-to-Site VPN. Its development department has a
proprietary source code analysis tool that follows the Open Web Application Security Project (OWASP)
standard. The tool is hosted on a dedicated server in its data center that checks the code quality and security
vulnerabilities of their projects. The company plans to use this tool to run checks against the source code as part
of the pipeline before the code is compiled into a deployable package in CodeDeploy. The code checks take
approximately an hour to complete.

As a DevOps Engineer, which among the options below is the MOST suitable solution that you should
implement?

Create a pipeline in AWS CodePipeline. Set up an action that invokes a custom Lambda function after the
source stage. Configure the function to execute the source code analysis tool, and return the results to
CodePipeline. Ensure that the function waits for the execution to complete and store the output in a
specified S3 bucket.

Create a pipeline in AWS CodePipeline. Set up a custom action type and create an associated job worker
that runs on-premises. Set the pipeline to invoke the custom action after the source stage. Configure the
job worker to poll CodePipeline for job requests for the custom action then execute the source code
analysis tool and return the status result to CodePipeline.

(Correct)

Create a pipeline in AWS CodePipeline. Expose the web services of the on-premises source-code analysis
tool over the Internet. Set up an action that will run the tool using an API call. Set the pipeline to execute
the script after the source stage. After the processing has been done, send the results to a public S3 bucket.
Configure the pipeline to poll the contents of the bucket.

Create a pipeline in AWS CodePipeline. Create a shell script that clones the code repository from
CodeCommit and run the source code analysis tool on-premises. Create an action in the pipeline that
executes the shell script after the source stage and configure it to return the results to CodePipeline.

Explanation

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your
Amazon Virtual Private Cloud (Amazon VPC). A Site-to-Site VPN connection offers two VPN tunnels between
a virtual private gateway or a transit gateway on the AWS side, and a customer gateway (which represents a
VPN device) on the remote (on-premises) side.

AWS CodePipeline includes a number of actions that help you configure build, test, and deploy resources for
your automated release process. If your release process includes activities that are not included in the default
actions, such as an internally developed build process or a test suite, you can create a custom action for that
purpose and include it in your pipeline. You can use the AWS CLI to create custom actions in pipelines
associated with your AWS account.

You can create custom actions for the following AWS CodePipeline action categories:

- A custom build action that builds or transforms the items

- A custom deploy action that deploys items to one or more servers, websites, or repositories

- A custom test action that configures and runs automated tests


- A custom invoke action that runs functions

When you create a custom action, you must also create a job worker that will poll CodePipeline for job requests
for this custom action, execute the job, and return the status result to CodePipeline. This job worker can be
located on any computer or resource as long as it has access to the public endpoint for CodePipeline. To easily
manage access and security, consider hosting your job worker on an Amazon EC2 instance.

The following diagram shows a high-level view of a pipeline that includes a custom build action:

When a pipeline includes a custom action as part of a stage, the pipeline will create a job request. A custom job
worker detects that request and performs that job (in this example, a custom process using third-party build
software). When the action is complete, the job worker returns either a success result or a failure result. If a
success result is received, the pipeline will transition the revision and its artifacts to the next action. If a failure is
returned, the pipeline will not transition the revision to the next action in the pipeline.
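A bare-bones job worker loop might look like the boto3 sketch below; the custom action's category, provider, and version, as well as the run_analysis() helper, are assumptions for illustration only:

import time
import boto3

codepipeline = boto3.client("codepipeline")

# Hypothetical identifier of the custom action registered for this pipeline.
action_type_id = {
    "category": "Test",
    "owner": "Custom",
    "provider": "OWASPSourceAnalyzer",
    "version": "1",
}

def run_analysis(job):
    """Placeholder that would invoke the on-premises source code analysis tool."""
    return True

while True:
    response = codepipeline.poll_for_jobs(actionTypeId=action_type_id, maxBatchSize=1)
    for job in response.get("jobs", []):
        # Claim the job, run the tool, then report the result back to CodePipeline.
        codepipeline.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
        if run_analysis(job):
            codepipeline.put_job_success_result(jobId=job["id"])
        else:
            codepipeline.put_job_failure_result(
                jobId=job["id"],
                failureDetails={"type": "JobFailed", "message": "Code analysis failed"},
            )
    time.sleep(10)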

Hence, the correct answer is: Create a pipeline in AWS CodePipeline. Set up a custom action type and
create an associated job worker that runs on-premises. Set the pipeline to invoke the custom action after
the source stage. Configure the job worker to poll CodePipeline for job requests for the custom action
then execute the source code analysis tool and return the status result to CodePipeline.

The option that says: Create a pipeline in AWS CodePipeline. Set up an action that invokes a custom
Lambda function after the source stage. Configure the function to execute the source code analysis tool,
and return the results to CodePipeline. Ensure that the function waits for the execution to complete and
store the output in a specified S3 bucket is incorrect because using a custom Lambda function to execute the
source code analysis tool is not an appropriate solution. Remember that Lambda functions can run up to 15
minutes only and the code checks could take approximately an hour to complete. It is likely that the Lambda
function will timeout in the middle of the processing.

The option that says: Create a pipeline in AWS CodePipeline. Expose the web services of the on-premises
source-code analysis tool over the Internet. Set up an action that will run the tool using an API call. Set
the pipeline to execute the script after the source stage. After the processing has been done, send the
results to a public S3 bucket. Configure the pipeline to poll the contents of the bucket is incorrect because
this setup has a lot of security vulnerabilities. Exposing the web services of the tool can open up attacks to the
on-premises data center. The use of a public S3 bucket is inappropriate as well. Moreover, a CodePipeline
cannot directly poll the contents of an S3 bucket.

The option that says: Create a pipeline in AWS CodePipeline. Create a shell script that clones the code
repository from CodeCommit and run the source code analysis tool on-premises. Create an action in the
pipeline that executes the shell script after the source stage and configure it to return the results to
CodePipeline is incorrect because writing a custom shell script is not needed. A better solution is to simply
create a custom action type and create an associated job worker that runs on-premises.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/actions.html

https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html
Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Question 31: Correct

You are instructed to set up a configuration management for all of your infrastructure in AWS. To comply with
the company’s strict security policies, the solution should provide a near real-time dashboard of the compliance
posture of your systems with a feature to detect violations.

Which solution would be able to meet the above requirements?

Use AWS Config to record all configuration changes and store the data reports to Amazon S3. Use
Amazon QuickSight to analyze the dataset.

(Correct)

Use Amazon Inspector to monitor the compliance posture of your systems and store the reports to
Amazon CloudWatch Logs. Use a CloudWatch dashboard with a custom metric filter to monitor and view
all of the specific compliance requirements.

Use AWS Service Catalog to create the required resource configurations for your compliance posture.
Monitor the compliance and violations of all of your cloud resources using a custom CloudWatch
dashboard with an integrated Amazon SNS to send the notifications.

Tag all of your resources and use Trusted Advisor to monitor both the compliant and non-compliant
resources. Use the AWS Management Console to monitor the status of your compliance posture.

Explanation

AWS Config provides you with a visual dashboard to help you quickly spot non-compliant resources and take appropriate action. IT Administrators, Security Experts, and Compliance Officers can see a shared view of your AWS resources' compliance posture.

When you run your applications on AWS, you usually use AWS resources,
which you must create and manage collectively. As the demand for your application keeps growing, so does
your need to keep track of your AWS resources.

To exercise better governance over your resource configurations and to detect resource misconfigurations, you
need fine-grained visibility into what resources exist and how these resources are configured at any time. You
can use AWS Config to notify you whenever resources are created, modified, or deleted without having to
monitor these changes by polling the calls made to each resource.

You can use AWS Config rules to evaluate the configuration settings of your AWS resources. When AWS
Config detects that a resource violates the conditions in one of your rules, AWS Config flags the resource as
non-compliant and sends a notification. AWS Config continuously evaluates your resources as they are created,
changed, or deleted.
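For example, a hedged boto3 sketch using one of the AWS managed rule identifiers (and assuming the AWS Config configuration recorder is already enabled) could add a rule that continuously flags non-compliant S3 buckets:

import boto3

config = boto3.client("config")

# Managed rule that marks S3 buckets allowing public read access as NON_COMPLIANT.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)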

Hence, the correct answer is: Use AWS Config to record all configuration changes and store the data
reports to Amazon S3. Use Amazon QuickSight to analyze the dataset.
The option that says: Use AWS Service Catalog to create the required resource configurations for your
compliance posture. Monitor the compliance and violations of all of your cloud resources using a custom
CloudWatch dashboard with an integrated Amazon SNS to send the notifications is incorrect. Although
AWS Service Catalog can be used for resource configuration, it is not fully capable of detecting violations of
your AWS configuration rules.

The option that says: Tag all of your resources and use Trusted Advisor to monitor both the compliant and
non-compliant resources. Use the AWS Management Console to monitor the status of your compliance
posture is incorrect because the Trusted Advisor service is not suitable for configuration management and
automatic violation detection. You should use AWS Config instead.

The option that says: Use Amazon Inspector to monitor the compliance posture of your systems and store
the reports to Amazon CloudWatch Logs. Use a CloudWatch dashboard with a custom metric filter to
monitor and view all of the specific compliance requirements is incorrect because the Amazon Inspector
service is primarily used to help you check for unintended network accessibility of your Amazon EC2 instances
and for vulnerabilities on those EC2 instances.

References:

https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html

https://docs.aws.amazon.com/quicksight/latest/user/QS-compliance.html

https://aws.amazon.com/config/features/

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 32: Incorrect

A software development company is doing an all-in migration of their on-premises resources to AWS. The
company has a hybrid architecture that comprises over a thousand on-premises VMware servers and a few EC2
instances in their VPC. The company is using a VMWare vCenter Server for data center management of their
vSphere environments and virtual servers. A DevOps engineer is tasked to implement a solution that will collect
various information from their on-premises and EC2 instances, such as operating system details, MAC address,
IP address, and many others. The Operations team should also be able to analyze the collected data in a visual
format.Which of the following is the MOST appropriate solution that the engineer should implement with the
LEAST amount of effort

Develop a custom Python script and install it on both the VMware servers and EC2 instances to collect all
of the required information. Push the data to a centralized S3 bucket. Use VMware vSphere to collect the
data from your on-premises resources and push the results into a file gateway in order to store the data in
Amazon S3. Use Amazon Athena on the S3 bucket to analyze the data.

Install the AWS Systems Manager Agent (SSM Agent) on all on-premises virtual machines and the EC2
instances. Utilize the AWS Systems Manager Inventory service to provide visibility into your Amazon
EC2 and on-premises computing environment. Set up an AWS Systems Manager Resource Data Sync to
an S3 bucket in order to analyze the data with Amazon QuickSight.
Register all of the on-premises virtual machines as well as the EC2 instances to AWS Service Catalog
where all the required information such as the operating system details and many others will be
automatically populated. Export the consolidated data from AWS Service Catalog to an Amazon S3
bucket and then use Amazon QuickSight for analytics.

Using the AWS Application Discovery Service, deploy the Agentless Discovery Connector in an OVA
file format to your VMware vCenter and then install the AWS Discovery Agents on the EC2 instances to
collect the required data. Use the AWS Migration Hub Dashboard to analyze your hybrid infrastructure.

(Correct)

Explanation

AWS Application Discovery Service helps you plan your migration to the AWS cloud by collecting usage and
configuration data about your on-premises servers. Application Discovery Service is integrated with AWS
Migration Hub, which simplifies your migration tracking. After performing discovery, you can view the
discovered servers, group them into applications, and then track the migration status of each application from the
Migration Hub console. The discovered data can be exported for analysis in Microsoft Excel or AWS analysis
tools such as Amazon Athena and Amazon QuickSight.

Using Application Discovery Service APIs, you can export the system performance and utilization data for your
discovered servers. You can input this data into your cost model to compute the cost of running those servers in
AWS. Additionally, you can export the network connections and process data to understand the network
connections that exist between servers. This will help you determine the network dependencies between servers
and group them into applications for migration planning.

Application Discovery Service offers two ways of
performing discovery and collecting data about your on-premises servers:

- Agentless discovery can be performed by deploying the AWS Agentless Discovery Connector (OVA file)
through your VMware vCenter. After the Discovery Connector is configured, it identifies virtual machines
(VMs) and hosts associated with vCenter. The Discovery Connector collects the following static configuration
data: Server hostnames, IP addresses, MAC addresses, and disk resource allocations. Additionally, it collects the
utilization data for each VM and computes average and peak utilization for metrics such as CPU, RAM, and
Disk I/O. You can export a summary of the system performance information for all the VMs associated with a
given VM host and perform a cost analysis of running them in AWS.

- Agent-based discovery can be performed by deploying the AWS Application Discovery Agent on each of
your VMs and physical servers. The agent installer is available for both Windows and Linux operating systems.
It collects static configuration data, detailed time-series system-performance information, inbound and outbound
network connections, and processes that are running. You can export this data to perform a detailed cost analysis
and to identify network connections between servers for grouping servers as applications.

The Agentless discovery uses the AWS Discovery Connector, which is a VMware appliance that can collect
information only about VMware virtual machines (VMs). This mode doesn't require you to install a connector on
each host. You install the Discovery Connector as a VM in your VMware vCenter Server environment using an
Open Virtualization Archive (OVA) file. Because the Discovery Connector relies on VMware metadata to gather
server information regardless of the operating system, it minimizes the time required for initial on-premises
infrastructure assessment.
Hence, the correct answer is: Using the AWS Application Discovery Service, deploy the Agentless Discovery
Connector in an OVA file format to your VMware vCenter and then install the AWS Discovery Agents on
the EC2 instances to collect the required data. Use the AWS Migration Hub Dashboard to analyze your
hybrid infrastructure.

The option that says: Develop a custom Python script and install it on both the VMware servers and EC2
instances to collect all of the required information. Push the data to a centralized S3 bucket. Use VMware
vSphere to collect the data from your on-premises resources and push the results into a file gateway in
order to store the data in Amazon S3. Use Amazon Athena on the S3 bucket to analyze the data is
incorrect. Although this solution may work, it takes a lot of effort to develop a custom Python script as well as to manually install it on over a thousand VMware servers in the company's on-premises data center.

The option that says: Register all of the on-premises virtual machines as well as the EC2 instances to AWS
Service Catalog where all the required information such as the operating system details, and many others
will be automatically populated. Export the consolidated data from AWS Service Catalog to an Amazon
S3 bucket and then use Amazon QuickSight for analytics is incorrect because the AWS Service Catalog
service doesn't have the capability to integrate with the on-premises VMWare servers. This service only allows
organizations to create and manage catalogs of IT services that are approved for use on AWS.

The option that says: Install the AWS Systems Manager Agent (SSM Agent) on all on-premises virtual
machines and the EC2 instances. Utilize the AWS Systems Manager Inventory service to provide visibility
into your Amazon EC2 and on-premises computing environment. Set up an AWS Systems Manager
Resource Data Sync to an S3 bucket in order to analyze the data with Amazon QuickSight is incorrect.
Although the solution of using the AWS Systems Manager is valid, this is definitely not the one that can be
implemented with the least amount of effort. Although you can use the SSM Agent to fetch all of the required information about your servers, the task of installing it on each and every on-premises VMware server is a herculean one that entails a lot of execution time. The use of AWS Systems Manager Resource Data Sync for analyzing data is
irrelevant too. Moreover, the scenario mentioned that the company is doing an all-in migration of their on-
premises resources to AWS, which means that installing the SSM agent is not appropriate. A better solution
would be to use the Agentless Discovery Connector of the AWS Application Discovery Service to your on-
premises VMware vCenter, which can easily fetch the required information from hundreds of VMware servers.

References:

https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html

https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html

https://docs.aws.amazon.com/application-discovery/latest/userguide/dashboard.html

AWS Migration Services Overview:


https://youtu.be/yqNBkFMnsL8

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 33: Incorrect


A technology company is planning to develop its custom online forum that covers various AWS-related
technologies. They are planning to use AWS Fargate to host the containerized application and Amazon
DynamoDB as its data store. The DevOps team is instructed to define the schema of the DynamoDB table with
the required indexes, partition key, sort key, projected attributes, and others. To minimize cost, the schema must
support certain search operations using the least provisioned read capacity units. A Thread attribute contains the
user comments in JSON format. The sample data set is shown in the diagram below:

The online forum should support searches within the ForumName attribute for items where the Subject begins with a
particular letter, such as 'a' or 'b'. It should allow fetches of items within the given LastPostDateTime time frame
as well as the capability to return the threads that have been posted within the last quarter.

Which of the following schema configuration meets the above requirements?

Set the ForumName attribute as the primary key and Subject as the sort key. Create a Local Secondary Index
with LastPostDateTime as the sort key and the Thread as a projected attribute.

(Correct)

Set the Subject attribute as the primary key and ForumName as the sort key. Create a Global Secondary
Index with Thread as the sort key and LastPostDateTime as a projected attribute.

Set the Subject attribute as the primary key and ForumName as the sort key. Create a Local Secondary Index
with LastPostDateTime as the sort key and the Thread as a projected attribute.

Set the ForumName attribute as the primary key and Subject as the sort key. Create a Global Secondary
Index with Thread as the sort key and fetch operations for LastPostDateTime.

Explanation

Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many
applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient
access to data with attributes other than the primary key. To address this, you can create one or more secondary
indexes on a table and issue Query or Scan requests against these indexes.

A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key
to support Query operations. You can retrieve data from the index using a Query, in much the same way as you
use Query with a table. A table can have multiple secondary indexes, which gives your applications access to
many different query patterns.

DynamoDB supports two types of secondary indexes:

- Global secondary index — an index with a partition key and a sort key that can be different from those on the
base table. A global secondary index is considered "global" because queries on the index can span all of the data
in the base table, across all partitions.

- Local secondary index — an index that has the same partition key as the base table, but a different sort key. A
local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base
table partition that has the same partition key value.
A local secondary index maintains an alternate sort key for a given partition key value. A local secondary index
also contains a copy of some or all of the attributes from its base table; you specify which attributes are projected
into the local secondary index when you create the table. The data in a local secondary index is organized by the
same partition key as the base table, but with a different sort key. This lets you access data items efficiently
across this different dimension. For greater query or scan flexibility, you can create up to five local secondary
indexes per table.

Suppose that an application needs to find all of the threads that have been posted within the last three months.
Without a local secondary index, the application would have to Scan the entire Thread table and discard any
posts that were not within the specified time frame. With a local secondary index, a Query operation could
use LastPostDateTime as a sort key and find the data quickly.

In the provided scenario, you can create a local secondary index named LastPostIndex to meet the requirements.
Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime.

With LastPostIndex, an application could use ForumName and LastPostDateTime as query criteria. However, to retrieve any additional attributes, DynamoDB must perform additional read operations against the Thread table. These extra reads are known as fetches, and they can increase the total amount of provisioned throughput required for a query.

Suppose that you wanted to populate a webpage with a list of all the threads in "S3" and the number of replies for each thread, sorted by the last reply date/time beginning with the most recent reply. To populate this list, you would need the following attributes:
- Subject

- Replies

- LastPostDateTime

The most efficient way to query this data and to avoid fetch operations would be to project the Replies attribute from the table into the local secondary index.

DynamoDB stores all of the items with the same partition key value contiguously. In this example, given a
particular ForumName, a Query operation could immediately locate all of the threads for that forum. Within a
group of items with the same partition key value, the items are sorted by sort key value. If the sort key (Subject)
is also provided in the query, DynamoDB can narrow down the results that are returned — for example,
returning all of the threads in the "S3" forum that have a Subject beginning with the letter "a".

A projection is the set of attributes that is copied from a table into a secondary index. The partition key and sort
key of the table are always projected into the index; you can project other attributes to support your application's
query requirements. When you query an index, Amazon DynamoDB can access any attribute in the projection as
if those attributes were in a table of their own.
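A minimal boto3 sketch of such a table definition is shown below; it follows the LastPostIndex example above and projects the Replies attribute, while the table name and provisioned throughput values are assumptions:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Thread",
    AttributeDefinitions=[
        {"AttributeName": "ForumName", "AttributeType": "S"},
        {"AttributeName": "Subject", "AttributeType": "S"},
        {"AttributeName": "LastPostDateTime", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ForumName", "KeyType": "HASH"},   # partition key
        {"AttributeName": "Subject", "KeyType": "RANGE"},    # sort key
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "LastPostIndex",
            "KeySchema": [
                {"AttributeName": "ForumName", "KeyType": "HASH"},
                {"AttributeName": "LastPostDateTime", "KeyType": "RANGE"},
            ],
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["Replies"],
            },
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)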

Hence, the correct answer is: Set the ForumName attribute as the primary key and Subject as the sort key.
Create a Local Secondary Index with LastPostDateTime as the sort key and the Thread as a projected
attribute.

The option that says: Set the Subject attribute as the primary key and ForumName as the sort key. Create a
Local Secondary Index with LastPostDateTime as the sort key and the Thread as a projected attribute is
incorrect because the scenario says that the online forum should support searches within ForumName attribute for
items where the Subject begins with a particular letter. DynamoDB stores all of the items with the same partition
key value contiguously. In this example, given a particular ForumName, a Query operation could immediately
locate all of the threads for that forum. Within a group of items with the same partition key value, the items are
sorted by sort key value. If the sort key (Subject) is also provided in the query, DynamoDB can narrow down the
results that are returned—for example, returning all of the threads in the "S3" forum that have a Subject
beginning with the letter "a". Hence, you should set the ForumName attribute as the primary key and Subject as the
sort key instead.

The option that says: Set the ForumName attribute as the primary key and Subject as the sort key. Create a
Global Secondary Index with Thread as the sort key and fetch operations for LastPostDateTime is incorrect
because using fetch operations can increase the total amount of provisioned throughput required for a query.
Remember that the scenario mentioned that the schema must support certain search operations using the least
provisioned read capacity units to minimize cost. In addition, you should create an LSI instead of GSI.

The option that says: Set the Subject attribute as the primary key and ForumName as the sort key. Create a
Global Secondary Index with Thread as the sort key and LastPostDateTime as a projected attribute is
incorrect because you should use a Local Secondary Index instead. You should also set the ForumName attribute as
the primary key and Subject as the sort key instead.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/

Question 34: Correct

The Development team of a leading IT consultancy company would like to add a manual approval action before
their new application versions are deployed to their production environment. The approval action must be strictly
enforced even if the unit and integration tests are all successful. They have set up a pipeline using CodePipeline
to orchestrate the workflow of their continuous integration and continuous delivery processes. The new versions
of the application are built using CodeBuild and are deployed to a fleet of Amazon EC2 instances using
CodeDeploy.

Which of the following provides the SIMPLEST and the MOST cost-effective solution?

After the last deploy action of the pipeline, set up a test action to verify the application's functionality.
Add the required action steps to automatically do the unit and integration tests using AWS Step Functions.
Mark the action as successful if all of the tests have been successfully passed. Create a manual approval
action and inform the team of the stage being triggered using SNS. Add a deploy action to deploy the app
to the next stage at the end of the pipeline.

After the last deploy action of the pipeline, set up a test action to verify the application's functionality.
Add the required action steps to automatically do the unit and integration tests using a third-party CI/CD
Tool such as GitLab or Jenkins hosted in Amazon EC2. Mark the action as successful if all of the tests
have been successfully passed. Create a manual approval action and inform the team of the stage being
triggered using SNS. Add a deploy action to deploy the app to the next stage at the end of the pipeline.
After the last deploy action of the pipeline, set up a manual approval action and inform the team of the
stage being triggered using SNS. In CodeBuild, add the required actions to automatically do the unit and
integration tests. Add a deploy action to deploy the app to the next stage at the end of the pipeline.

(Correct)

After the last deploy action of the pipeline, set up a test action to verify the application's functionality. In
CodeBuild, add the required actions to automatically do the unit and integration tests. Mark the action as
successful if all of the tests have been successfully passed. Create a custom action with a corresponding
custom job worker that performs the approval action. Inform the team of the stage being triggered using
SNS. Add a deploy action to deploy the app to the next stage at the end of the pipeline.

Explanation

You can automate your release process by using AWS CodePipeline to test your code and run your builds with
CodeBuild. You can create reports in CodeBuild that contain details about tests that are run during builds.

You can create tests such as unit tests, configuration tests, and functional tests. The test file format can be JUnit
XML or Cucumber JSON. Create your test cases with any test framework that can create files in one of those
formats (for example, Surefire JUnit plugin, TestNG, and Cucumber). To create a test report, you add a report
group name to the buildspec file of a build project with information about your test cases. When you run the
build project, the test cases are run and a test report is created. You do not need to create a report group before
you run your tests. If you specify a report group name, CodeBuild creates a report group for you when you run
your reports. If you want to use a report group that already exists, you specify its ARN in the buildspec file.

In AWS CodePipeline, you can add an approval action to a stage in a pipeline at the point where you want the
pipeline execution to stop so that someone with the required AWS Identity and Access Management permissions
can approve or reject the action. If the action is approved, the pipeline execution resumes. If the action is rejected
—or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping—
the result is the same as an action failing, and the pipeline execution does not continue.

You might use manual approvals for these reasons:

- You want someone to perform a code review or change management review before a revision is allowed into
the next stage of a pipeline.

- You want someone to perform manual quality assurance testing on the latest version of an application, or to
confirm the integrity of a build artifact, before it is released.

- You want someone to review new or updated text before it is published to a company website.
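As a rough illustration (the stage name, topic ARN, and message are assumptions), the approval action could be declared as the following Python dictionary and included in the stages list passed to codepipeline.create_pipeline() or update_pipeline():

# Stage declaration for a manual approval gate; values are illustrative only.
approval_stage = {
    "name": "ProductionApproval",
    "actions": [
        {
            "name": "ManualApproval",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "configuration": {
                # SNS topic that notifies the team that an approval is pending.
                "NotificationArn": "arn:aws:sns:us-east-1:123456789012:approval-topic",
                "CustomData": "Tests passed; please review before deploying to production.",
            },
            "runOrder": 1,
        }
    ],
}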

Hence, the correct answer is: After the last deploy action of the pipeline, set up a manual approval action and
inform the team of the stage being triggered using SNS. In CodeBuild, add the required actions to
automatically do the unit and integration tests. Add a deploy action to deploy the app to the next stage at the
end of the pipeline.

The option that says: After the last deploy action of the pipeline, set up a test action to verify the application's
functionality. In CodeBuild, add the required actions to automatically do the unit and integration tests. Mark
the action as successful if all of the tests have been successfully passed. Create a custom action with a
corresponding custom job worker that performs the approval action. Inform the team of the stage being
triggered using SNS. Add a deploy action to deploy the app to the next stage at the end of the pipeline is
incorrect because you can just simply set up a manual approval action instead of creating a custom action. That
takes a lot of effort to configure including the development of a custom job worker.

The option that says: After the last deploy action of the pipeline, set up a test action to verify the application's
functionality. Add the required action steps to automatically do the unit and integration tests using AWS Step
Functions. Mark the action as successful if all of the tests have been successfully passed. Create a manual
approval action and inform the team of the stage being triggered using SNS. Add a deploy action to deploy
the app to the next stage at the end of the pipeline is incorrect because it is tedious to automatically perform the
unit and integration tests using AWS Step Functions. You can just use CodeBuild to handle all of the tests.

The option that says: After the last deploy action of the pipeline, set up a test action to verify the application's
functionality. Add the required action steps to automatically do the unit and integration tests using a third-
party CI/CD Tool such as GitLab or Jenkins hosted in Amazon EC2. Mark the action as successful if all of
the tests have been successfully passed. Create a manual approval action and inform the team of the stage
being triggered using SNS. Add a deploy action to deploy the app to the next stage at the end of the pipeline is
incorrect because this solution entails an additional burden to install, configure and launch a third-party CI/CD
tool in Amazon EC2. A more simple solution is to just use CodeBuild for tests.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html

https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html#how-to-create-pipeline-
add-test

Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Question 35: Correct

A government online portal that allows you to lodge your tax return is hosted in AWS. The portal uses a MEAN
application stack with a GraphQL API as its backend and a DynamoDB table as its data store. It is also utilizing
several custom Chef recipes that are stored in a private Git repository. Since the portal has predictable peak
traffic times, the company instructed its DevOps Engineer to configure their system to scale up the application
instances only during the peak times. The deployment should perform rolling updates to the application
environment with the least amount of management overhead for easy maintenance.Which among the options
below provides the MOST cost-effective solution?

Create a new stack using AWS OpsWorks Stacks and push the custom recipes to an S3 bucket. Modify
the configuration of the custom recipes to point to the Amazon S3 bucket. Add a new application layer for
a standard Node.js application server. Configure the custom recipe to deploy the application using the S3
bucket. Set up load-based instances and attach an IAM role that provides permission to access the
DynamoDB table.
Using AWS OpsWorks Stacks, create a new stack with a custom cookbook and upload the custom recipes
to an S3 bucket. Modify the configuration of the custom recipes to point to the S3 bucket. Add a new
application layer for a standard Node.js application server. Configure the custom recipe to deploy the
application using the S3 bucket. Set up time-based instances and attach an IAM role that provides
permission to access the DynamoDB table.

(Correct)

Migrate the application to Elastic Beanstalk. Configure the environment to use Rolling as its deployment
policy. Attach the required IAM role that provides permission to the instances to access the DynamoDB
table.

Migrate the application to Elastic Beanstalk. Configure the environment to use RollingWithAdditionalBatch as its deployment policy. This will launch an extra batch of instances first before starting the deployment in order to maintain full capacity. Attach the required IAM role that provides permission to the instances to access the DynamoDB table.

Explanation

As your incoming traffic varies, your stack may have either too few instances to comfortably handle the load or
more instances than necessary. You can save both time and money by using time-based or load-based instances
to automatically increase or decrease a layer's instances so that you always have enough instances to adequately
handle incoming traffic without paying for unneeded capacity. There's no need to monitor server loads or
manually start or stop instances. In addition, time- and load-based instances automatically distribute, scale, and
balance applications over multiple Availability Zones within a region, giving you geographic redundancy and scalability.

Automatic scaling is based on two instance types, which adjust a layer's online instances based on
different criteria:

Time-based instances - They allow a stack to handle loads that follow a predictable pattern by including
instances that run only at certain times or on certain days. For example, you could start some instances after 6PM
to perform nightly backup tasks or stop some instances on weekends when traffic is lower.

Load-based instances - They allow a stack to handle variable loads by starting additional instances when traffic
is high and stopping instances when traffic is low, based on any of several load metrics. For example, you can
have AWS OpsWorks Stacks start instances when the average CPU utilization exceeds 80% and stop instances
when the average CPU load falls below 60%.

Both time-based and load-based instances are supported for Linux stacks, while only time-based instances are
supported for Windows stacks.

Unlike 24/7 instances, which you must start and stop manually, you do not start or stop time-based or load-based
instances yourself. Instead, you configure the instances and AWS OpsWorks Stacks starts or stops them based
on their configuration. For example, you configure time-based instances to start and stop on a specified schedule.
AWS OpsWorks Stacks then starts and stops the instances according to that configuration.
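As a hedged boto3 sketch (the instance ID and schedule are hypothetical, and the instance is assumed to have been created as a time-based instance), a weekday business-hours schedule could be set like this:

import boto3

opsworks = boto3.client("opsworks")

# Keep the instance online from 08:00 to 17:59 UTC on weekdays only.
business_hours = {str(hour): "on" for hour in range(8, 18)}

opsworks.set_time_based_auto_scaling(
    InstanceId="11111111-2222-3333-4444-555555555555",  # hypothetical instance ID
    AutoScalingSchedule={
        "Monday": business_hours,
        "Tuesday": business_hours,
        "Wednesday": business_hours,
        "Thursday": business_hours,
        "Friday": business_hours,
    },
)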

Your custom cookbooks must be stored in an online repository, either an archive such as a .zip file or a source
control manager such as Git. A stack can have only one custom cookbook repository, but the repository can
contain any number of cookbooks. When you install or update the cookbooks, AWS OpsWorks Stacks installs
the entire repository in a local cache on each of the stack's instances. When an instance needs, for example, to
run one or more recipes, it uses the code from the local cache.

Hence, the correct answer is: Using AWS OpsWorks Stacks, create a new stack with a custom cookbook and
upload the custom recipes to an S3 bucket. Modify the configuration of the custom recipes to point to the S3
bucket. Add a new application layer for a standard Node.js application server. Configure the custom recipe to
deploy the application using the S3 bucket. Set up time-based instances and attach an IAM role that provides
permission to access the DynamoDB table.

The option that says: Create a new stack using AWS OpsWorks Stacks and push the custom recipes to an S3
bucket. Modify the configuration of the custom recipes to point to the Amazon S3 bucket. Add a new
application layer for a standard Node.js application server. Configure the custom recipe to deploy the
application using the S3 bucket. Set up load-based instances and attach an IAM role that provides permission
to access the DynamoDB table is incorrect because you have to use time-based instances instead. Remember
that the scenario mentioned that the portal has predictable peak traffic times. Time-based instances allow a stack
to handle loads that follow a predictable pattern by including instances that run only at certain times or on certain
days.

The option that says: Migrate the application to Elastic Beanstalk. Configure the environment to
use RollingWithAdditionalBatch as its deployment policy. This will launch an extra batch of instances first
before starting the deployment in order to maintain full capacity. Attach the required IAM role that provides
permission to the instances to access the DynamoDB table is incorrect because although it is true that
using RollingWithAdditionalBatch as your deployment policy will launch an extra batch of instances to maintain
full capacity, the use of Elastic Beanstalk is not recommended since the architecture is already using several
custom Chef recipes. A better solution would be to use AWS OpsWorks Stacks with time-based instances.

The option that says: Migrate the application to Elastic Beanstalk. Configure the environment to
use Rolling as its deployment policy. Attach the required IAM role that provides permission to the instances
to access the DynamoDB table is incorrect because, just as explained on the previous option, you should use
AWS OpsWorks Stacks instead. With rolling deployments, Elastic Beanstalk splits the environment's instances
into batches then deploys the new application version to one batch at a time, leaving the rest of the instances in
the environment running the old application version.

References:

https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-
enable.html#workingcookbook-installingcustom-enable-repo

https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook.html

Check out this AWS OpsWorks Cheat Sheet:

https://tutorialsdojo.com/aws-opsworks/

Question 36: Correct

An online data analytics application is launched on 12 On-Demand EC2 instances across three Availability Zones
using a golden AMI in AWS. Each instance has only 10% utilization after business hours but increases to 30%
utilization during peak hours. There are also some third-party applications that use the application from all over
the globe with no specific schedule. In the morning, there is always a sudden CPU utilization increase on the
EC2 instances due to the number of users logging in to use the application. However, its CPU utilization usually
stabilizes after a few hours. A DevOps Engineer has been instructed to reduce costs and improve the overall
reliability of the system.

Which among the following options provides the MOST suitable solution in this scenario?

Set up two Amazon EventBridge rules and two Lambda functions. Configure each Amazon EventBridge
rule to invoke a Lambda function and regularly run before and after the peak hours. The first function
should stop nine instances after the peak hours end while the second function should restart the nine
instances before the business day begins

Set up an Auto Scaling group using the golden AMI with a scaling action based on the CPU Utilization
average. Configure a scheduled action for the group to adjust the minimum number of Amazon EC2
instances to three after business hours end, and reset to six before business hours begin.

(Correct)

Launch a group of Scheduled Reserved Instances that regularly run before and after the peak hours.
Integrate CloudWatch Events and AWS Lambda to regularly stop nine instances after the peak hours
every day and restart the nine instances before the business day begins.

Set up two AWS Config rules and two Lambda functions. Configure each rule to invoke a Lambda
function and regularly run before and after the peak hours. The first function should stop nine instances
after the peak hours end while the second function should restart the nine instances before the business
day begins.

Explanation

Scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. For
example, every week the traffic to your web application starts to increase on Wednesday, remains high on
Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic
patterns of your web application. Scaling actions are performed automatically as a function of time and date.

To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The
scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a
scheduled scaling action, you specify the start time when the scaling action should take effect, and the new
minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling
updates the group with the values for minimum, maximum, and desired size specified by the scaling action.

You can create scheduled actions for scaling one time only or for scaling on a recurring schedule.
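A hedged boto3 sketch of the two scheduled actions in this scenario (the group name and UTC cron expressions are assumptions) could look like this:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group in to a minimum of three instances after business hours end
# (18:00 UTC, weekdays)...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="analytics-asg",
    ScheduledActionName="after-hours-scale-in",
    Recurrence="0 18 * * 1-5",
    MinSize=3,
)

# ...and restore a minimum of six instances before business hours begin
# (06:00 UTC, weekdays).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="analytics-asg",
    ScheduledActionName="pre-business-scale-out",
    Recurrence="0 6 * * 1-5",
    MinSize=6,
)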

Hence, the correct answer is: Set up an Auto Scaling group using the golden AMI with a scaling action
based on the CPU Utilization average. Configure a scheduled action for the group to adjust the minimum
number of Amazon EC2 instances to three after business hours end, and reset to six before business hours
begin.

The option that says: Set up two Amazon EventBridge rules and two Lambda functions. Configure each Amazon EventBridge rule to invoke a Lambda function and regularly run before and after the peak
hours. The first function should stop nine instances after the peak hours end while the second function
should restart the nine instances before the business day begins is incorrect because you can simply
configure a scheduled action for the Auto Scaling group to adjust the minimum number of the available EC2
instances without using CloudWatch Events or a Lambda function.

The option that says: Set up two AWS Config rules and two Lambda functions. Configure each rule to
invoke a Lambda function and regularly run before and after the peak hours. The first function should
stop nine instances after the peak hours end while the second function should restart the nine instances
before the business day begins is incorrect because using AWS Config is not an appropriate service to use in
this scenario. A better solution is to configure a scheduled action in the Auto Scaling group.

The option that says: Launch a group of Scheduled Reserved Instances that regularly run before and after
peak hours. Integrate CloudWatch Events and AWS Lambda to regularly stop nine instances after the
peak hours every day and restart the nine instances before the business day begins is incorrect because
although your operating costs will be decreased by using Scheduled Reserved Instances, this setup is still not
appropriate since the traffic to the application is not entirely predictable. Take note that there are third-party
applications that use the application from all over the globe with no specific schedule. Hence, the use of
Scheduled Reserved Instances is not recommended.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/scaling_plan.html

https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-run-lambda-schedule.html

Check out this AWS Auto Scaling Cheat Sheet:

https://tutorialsdojo.com/aws-auto-scaling/

Question 37: Correct

A company is planning to host their enterprise web application in an Amazon ECS Cluster which uses the
Fargate launch type. The database credentials, API keys, and other sensitive parameters should be provided to
the application image by using environment variables. A DevOps engineer was instructed to ensure that the
sensitive parameters are highly secured when passed to the image and must be kept in dedicated storage with lifecycle management. Some parameters can be up to 12 KB in size and must be rotated automatically. Which of the following is the MOST suitable solution that the DevOps engineer should implement?

Store the API Keys and other credentials in AWS Key Management Service (AWS KMS) and enable
automatic key rotation. Set up an IAM role to the ECS task definition script that allows access to AWS
KMS to retrieve the necessary parameters when calling the register-task-definition action in Amazon
ECS.

Keep the credentials using the AWS Secrets Manager and then encrypt them using AWS KMS. Set up an
IAM Role for your Amazon ECS task execution role and reference it with your task definition which
allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets
with the name of the environment variable to set in the container and the full ARN of the Secrets Manager
secret which contains the sensitive data, to present to the container. Enable the built-in automatic key
rotation for the credentials.

(Correct)

Store the credentials using AWS Storage Gateway in the ECS task definition file of the ECS Cluster in
order to centrally manage these sensitive data and securely transmit these only to those containers that
need access to them. Ensure that the secrets are encrypted and can only be accessed by those services that have been granted explicit access via an IAM role, and only while those service tasks are
running. Launch a custom rotation function in AWS Lambda and automatically rotate the credentials
using Amazon EventBridge.

Keep the credentials using the AWS Systems Manager Parameter Store and then encrypt them using AWS
KMS. Set up an IAM Role for your Amazon ECS task execution role and reference it with your task
definition, which allows access to both KMS and the Parameter Store. Within your container definition,
specify secrets with the name of the environment variable to set in the container and the full ARN of the
Systems Manager Parameter Store parameter containing the sensitive data to present to the container.
Enable the built-in automatic key rotation for the parameters.

Explanation

Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either
AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them
in your container definition. This feature is supported by tasks using both the EC2 and Fargate launch types.

Within your container definition, specify secrets with the name of the environment variable to set in the
container and the full ARN of either the Secrets Manager secret or Systems Manager Parameter Store parameter
containing the sensitive data to present to the container. The parameter that you reference can be from a different
Region than the container using it but must be from within the same account.
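
For illustration only, the sketch below registers a Fargate task definition with boto3 and injects a Secrets Manager secret as an environment variable. The family name, role ARN, image, and secret ARN are placeholders rather than values from the scenario.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="enterprise-web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # The task execution role must allow secretsmanager:GetSecretValue
    # (and kms:Decrypt for the key that encrypts the secret).
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web-app",
            "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "secrets": [
                {
                    "name": "DB_PASSWORD",  # environment variable name inside the container
                    "valueFrom": "arn:aws:secretsmanager:us-east-1:111111111111:secret:prod/db-password",
                }
            ],
        }
    ],
)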

AWS Secrets Manager is a secrets management service that helps you protect access to your applications,
services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials,
API keys, and other secrets throughout their lifecycle. Using Secrets Manager, you can secure and manage
secrets used to access resources in the AWS Cloud, on third-party services, and on-premises.

Secrets Manager can automatically rotate your secret on a schedule. To rotate a secret, Secrets Manager uses a
Lambda function to update the secret information. Rotation reduces the risk from leaving secret information such
as credentials unchanged for long periods of time. Users and applications that retrieve this secret from Secrets
Manager get the most up-to-date version as soon as the rotation completes.

As part of the schedule for rotating your secret, you can define a rotation window to ensure your secrets are
rotated at the best time for your needs. The schedule you define is in UTC+0 time zone. A rotation window must
end before midnight UTC+0 on the same day it began. It can't continue into the next day.
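
A brief, hedged sketch of enabling scheduled rotation with boto3 follows; the secret ID and the rotation Lambda ARN are assumed placeholders.

import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.rotate_secret(
    SecretId="prod/db-password",  # placeholder secret name
    # Lambda function that performs the actual credential rotation (placeholder ARN).
    RotationLambdaARN="arn:aws:lambda:us-east-1:111111111111:function:rotate-db-password",
    RotationRules={"AutomaticallyAfterDays": 30},
)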

If you want a single store for configuration and secrets, you can use Parameter Store. If you want a dedicated
secrets store with lifecycle management, use Secrets Manager. Hence, the correct answer is: Keep the
credentials using the AWS Secrets Manager and then encrypt them using AWS KMS. Set up an IAM Role
for your Amazon ECS task execution role and reference it with your task definition which allows access to
both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of
the environment variable to set in the container and the full ARN of the Secrets Manager secret which
contains the sensitive data, to present to the container. Enable the built-in automatic key rotation for the
credentials.

The option that says: Keep the credentials using the AWS Systems Manager Parameter Store and then
encrypt them using AWS KMS. Set up an IAM Role for your Amazon ECS task execution role and
reference it with your task definition, which allows access to both KMS and the Parameter Store. Within
your container definition, specify secrets with the name of the environment variable to set in the container
and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to
present to the container. Enable the built-in automatic key rotation for the parameters is incorrect.
Although the use of the AWS Systems Manager Parameter Store service in securing sensitive data in ECS is
valid, this service doesn't provide dedicated storage with lifecycle management and rotation, unlike Secrets Manager. Moreover, the AWS Systems Manager Parameter Store doesn't have built-in automatic rotation and can only store up to 8 KB of data per parameter. Take note that some parameters can be up to 12 KB in size.

The option that says: Store the API Keys and other credentials in AWS Key Management Service (AWS
KMS) and enable automatic key rotation. Set up an IAM role to the ECS task definition script that allows
access to AWS KMS to retrieve the necessary parameters when calling the register-task-definition action in
Amazon ECS is incorrect. Although AWS KMS has a built-in feature to automatically rotate keys, this service
is not recommended for storing sensitive API keys, database passwords, or other credentials. The primary function of AWS KMS is to create and manage the cryptographic keys (such as customer managed keys, or CMKs) used to encrypt your data, not to act as a secrets store.

The option that says: Store the credentials using AWS Storage Gateway in the ECS task definition file of
the ECS Cluster in order to centrally manage these sensitive data and securely transmit these only to
those containers that need access to them. Ensure that the secrets are encrypted and can only be accessed by those services that have been granted explicit access via an IAM role, and only while those service
tasks are running. Launch a custom rotation function in AWS Lambda and automatically rotate the
credentials using Amazon EventBridge is incorrect. The AWS Storage Gateway service is not meant to store
and centrally manage your sensitive parameters. You should use AWS Secrets Manager for this particular use
case since it has built-in rotation and can store secret values of up to 64 KB each, easily accommodating the 12 KB parameters. Launching a custom
rotation function is actually possible using AWS Secrets Manager but not for AWS Storage Gateway.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html

https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets_strategies.html

Check out this AWS Secrets Manager vs AWS Systems Manager Parameter Store Cheat Sheet:

https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/

Question 38: Correct

A company has a hybrid cloud architecture where its on-premises data center is connected to its multiple virtual
private clouds in AWS using a Transit Gateway. They have a service-oriented architecture with several
application services distributed across their VPCs as well as in their local data center. Gathering logs from each service takes a significant amount of time, especially if one module experiences a system outage. To rectify this issue, the system logs from the on-premises and AWS servers should be aggregated. There should also be a way to analyze the logs for audit and review purposes. As a DevOps Engineer, which among the following options is the MOST cost-effective solution that entails the LEAST amount of effort to implement?

Install the Unified CloudWatch Logs agent to all on-premises and AWS resources to collect the system
and application logs. Store the collected data in an Amazon S3 bucket in a central account. Set up an
Amazon S3 trigger that invokes a Lambda function to analyze the logs as well as to detect any
irregularities. Analyze the log data using Amazon Athena.

(Correct)

Develop a shell script that collects and uploads on-premises logs to an S3 bucket. Analyze the log data
using Amazon Macie.

Install the Unified CloudWatch Logs agent to all on-premises and AWS resources to collect the system
and application logs. Consolidate all of the collected logs to your on-premises file server. Develop a
custom-built solution that uses an open-source ELK stack running Elasticsearch, Logstash, and Kibana to
analyze the logs

Install the Unified CloudWatch Logs agent to all AWS resources to collect the system and application
logs. Store the collected data in an Amazon S3 bucket in a central account. Analyze the log data using a
custom-made Amazon EMR cluster.

Explanation

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on
AWS in real-time. You can use CloudWatch to collect and track metrics, which are variables you can measure
for your resources and applications. The CloudWatch home page automatically displays metrics about every
AWS service you use. You can additionally create custom dashboards to display metrics about your custom
applications and display custom collections of metrics that you choose.

The unified CloudWatch agent enables you to do the following:

-Collect more system-level metrics from Amazon EC2 instances across operating systems. The metrics can
include in-guest metrics, in addition to the metrics for EC2 instances. The additional metrics that can be
collected are listed in Metrics Collected by the CloudWatch Agent.

-Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as
well as servers not managed by AWS.

-Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is
supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux
servers.

-Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server.

You can store and view the metrics that you collect with the CloudWatch agent in CloudWatch just as you can
with any other CloudWatch metrics. The default namespace for metrics collected by the CloudWatch agent
is CWAgent, although you can specify a different namespace when you configure the agent.

The logs collected by the unified CloudWatch agent are processed and stored in Amazon CloudWatch Logs, just
like logs collected by the older CloudWatch Logs agent.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple
Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you
can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get
results in seconds.

Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run.
Athena scales automatically — executing queries in parallel — so results are fast, even with large datasets and
complex queries.
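
As a hedged sketch of the analysis step, the snippet below starts an Athena query with boto3. It assumes the aggregated logs have already been cataloged as a table; the database, table, column names, and output location are illustrative placeholders.

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT log_time, host, message
        FROM app_logs
        WHERE log_level = 'ERROR'
        ORDER BY log_time DESC
        LIMIT 100
    """,
    QueryExecutionContext={"Database": "central_logs"},          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://central-logs-bucket/athena-results/"},
)
print(response["QueryExecutionId"])  # use get_query_results later to fetch the rows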

Hence, the correct answer is: Install the Unified CloudWatch Logs agent to all on-premises and AWS
resources to collect the system and application logs. Store the collected data in an Amazon S3 bucket in a
central account. Set up an Amazon S3 trigger that invokes a Lambda function to analyze the logs as well
as to detect any irregularities. Analyze the log data using Amazon Athena.

The option that says: Install the Unified CloudWatch Logs agent to all AWS resources to collect the system
and application logs. Store the collected data in an Amazon S3 bucket in a central account. Analyze the
log data using a custom-made Amazon EMR cluster is incorrect. The CloudWatch Logs Agent must also be
installed on the on-premises servers. Moreover, building and maintaining a custom Amazon EMR cluster takes a significant amount of effort. You can just use Amazon Athena instead to simplify the process.

The option that says: Develop a shell script that collects and uploads on-premises logs to an S3 bucket.
Analyze the log data using Amazon Macie is incorrect because Amazon Macie is just a security service that
uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Moreover, you can
simply install the Unified CloudWatch Logs Agent to the on-premises data center instead of developing a shell
script.

The option that says: Install the Unified CloudWatch Logs agent to all on-premises and AWS resources to
collect the system and application logs. Consolidate all of the collected logs to your on-premises file server.
Develop a custom-built solution that uses an open-source ELK stack running Elasticsearch, Logstash, and
Kibana to analyze the logs is incorrect. Although the use of CloudWatch Logs is valid, developing a custom-built ELK stack solution takes a significant amount of time and effort to implement.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html

https://docs.aws.amazon.com/athena/latest/ug/what-is.html

Check out these Amazon CloudWatch and Athena Cheat Sheets:

https://tutorialsdojo.com/amazon-cloudwatch/

https://tutorialsdojo.com/amazon-athena/

Question 39: Incorrect


A financial company has several accounting applications that are hosted in AWS and used by thousands of small
and medium businesses. As part of its Business Continuity Plan, the company is required to set up an automatic
DNS failover for its applications to a disaster recovery (DR) environment. They instructed their DevOps team to
configure Amazon Route 53 to automatically route to an alternate endpoint when their primary application stack
in us-west-1 region experiences an outage or degradation of service.

What steps should the team take to satisfy this requirement? (Select TWO.)

Set up health checks in Route 53 for non-alias records to each service endpoint. Configure the network
access control list and the route table to allow Route 53 to send requests to the endpoints specified in the
health checks.

(Correct)

Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a custom Lambda
function. Execute the ChangeResourceRecordSets API call using the function to initiate the failover to the
secondary DNS record.

Set up a record in Route 53 with a Weighted routing policy configuration. Associate the record with the
primary and secondary record sets to distribute traffic to healthy service endpoints.

Set up a record in Route 53 with a latency routing policy configuration. Associate the record with the
primary and secondary record sets to distribute traffic to healthy service endpoints.

Use a Failover routing policy configuration. Set up alias records in Route 53 that route traffic to AWS
resources. Set the Evaluate Target Health option to Yes, then create all of the required non-alias records.

(Correct)

Explanation

Use an active-passive failover configuration when you want a primary resource or group of resources to be
available the majority of the time and you want a secondary resource or group of resources to be on standby in
case all the primary resources become unavailable. When responding to queries, Route 53 includes only the
healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy
secondary resources in response to DNS queries.

To create an active-passive failover configuration with one primary record and one secondary record, you just
create the records and specify Failover for the routing policy. When the primary resource is healthy, Route 53
responds to DNS queries using the primary record. When the primary resource is unhealthy, Route 53 responds
to DNS queries using the secondary record.

You can configure a health check that monitors an endpoint that you specify either by IP address or by domain
name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your
application, server, or other resources to verify that it's reachable, available, and functional. Optionally, you can
configure the health check to make requests similar to those that your users make, such as requesting a web page
from a specific URL.

When Route 53 checks the health of an endpoint, it sends an HTTP, HTTPS, or TCP request to the IP address
and port that you specified when you created the health check. For a health check to succeed, your router and
firewall rules must allow inbound traffic from the IP addresses that the Route 53 health checkers use.
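
The sketch below (boto3) illustrates the configuration described above: a health check for a non-alias endpoint plus an active-passive pair of failover alias records with Evaluate Target Health enabled. The hosted zone ID, domain, endpoint names, and load balancer hosted zone IDs are placeholders.

import boto3

route53 = boto3.client("route53")

# Health check for a non-alias endpoint (referenced via HealthCheckId on non-alias records).
health_check = route53.create_health_check(
    CallerReference="primary-endpoint-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Active-passive failover alias records for the application's DNS name.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us-west-1",
                    "Failover": "PRIMARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z368ELLRRE2KJ0",  # ALB hosted zone ID (placeholder)
                        "DNSName": "primary-alb-123.us-west-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,      # "Evaluate Target Health: Yes"
                    },
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-dr",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z1H1FL5HABSF5",   # DR ALB hosted zone ID (placeholder)
                        "DNSName": "dr-alb-456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]
    },
)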

Hence, the correct answers are:

- Set up health checks in Route 53 for non-alias records to each service endpoint. Configure the network
access control list and the route table to allow Route 53 to send requests to the endpoints specified in the
health checks.

- Use a Failover routing policy configuration. Set up alias records in Route 53 that route traffic to AWS
resources. Set the Evaluate Target Health option to Yes, then create all of the required non-alias records.

The option that says: Set up a record in Route 53 with a Weighted routing policy configuration. Associate
the record with the primary and secondary record sets to distribute traffic to healthy service endpoints is
incorrect because Weighted routing simply lets you associate multiple resources with a single domain name
(pasigcity.com) or subdomain name (blog.pasigcity.com) and choose how much traffic is routed to each
resource. This can be useful for a variety of purposes, including load balancing and testing new versions of
software.

The option that says: Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and
create a custom Lambda function. Execute the ChangeResourceRecordSets API call using the function to
initiate the failover to the secondary DNS record is incorrect because you have to use a Failover routing
policy. Calling the Route 53 API is not applicable nor useful at all in this scenario.

The option that says: Set up a record in Route 53 with a latency routing policy configuration. Associate the
record with the primary and secondary record sets to distribute traffic to healthy service endpoints is
incorrect because the Latency routing policy simply improves the application performance for your users by
serving their requests from the AWS Region that provides the lowest latency. You have to use a Failover routing
policy instead.

References:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-types.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-router-firewall-rules.html

Check out this Amazon Route 53 Cheat Sheet:

https://tutorialsdojo.com/amazon-route-53/

Question 40: Incorrect

An enterprise resource planning application is hosted in an Auto Scaling group of EC2 instances behind an
Application Load Balancer which uses an Amazon DynamoDB table as its data store. For the application’s
reporting module, there are five Lambda functions which are reading from the DynamoDB Streams of the table
to count the number of products, monitor the inventory, generate reports, move new items to a Kinesis Data
Firehose for analytics, and many more. The operations team discovered that in peak times, the Lambda functions
are getting a stream throttling error and some of the requests fail, which affects the performance of the reporting module. Which of the following is the MOST scalable and cost-effective solution with the LEAST amount of operational overhead?

Create a new local secondary index (LSI) in the DynamoDB table to improve the performance of the
queries. Double the allocated RCU of the table. Refactor the Lambda functions to directly query from the
table and disable the DynamoDB streams. Increase the concurrency limits of each Lambda function to
avoid throttling errors and set the ParallelizationFactor to 10. Re-architect the reporting module to use
Amazon Kinesis Data Analytics.

Use the Amazon CodeGuru service to optimize the codebase of the reporting module. Create a new global
secondary index (GSI) in the DynamoDB table to improve the performance of the queries. Increase the
allocated RCU of the table. Disable the Amazon DynamoDB streams and refactor the Lambda functions
to directly query from the table. Refactor the reporting module to use Amazon Kinesis Data Analytics.

Refactor your architecture to use Amazon Kinesis Adapter for real-time processing of streaming data at a
massive scale instead of directly consuming the stream using Lambda. Re-architect the reporting module
to use Amazon Kinesis Data Analytics.

(Correct)

Delete all of the Lambda functions and move all of the processing on an Amazon ECS Cluster with Auto
Scaling enabled using AWS App Runner. Set up AWS Glue to consume the DynamoDB stream which
will be processed by ECS. Re-factor the reporting module to use Amazon Kinesis Data Analytics.

Explanation

Using the Amazon Kinesis Adapter is the recommended way to consume streams from Amazon DynamoDB.
The DynamoDB Streams API is intentionally similar to that of Kinesis Data Streams, a service for real-time
processing of streaming data at a massive scale. In both services, data streams are composed of shards, which are
containers for stream records. Both services' APIs contain ListStreams, DescribeStream, GetShardIterator, and GetRecords operations. (Although these DynamoDB Streams actions are similar to their counterparts
in Kinesis Data Streams, they are not 100 percent identical.)

You can write applications for Kinesis Data Streams using the Kinesis Client Library (KCL). The KCL
simplifies coding by providing useful abstractions above the low-level Kinesis Data Streams API.

As a DynamoDB Streams user, you can use the design patterns found within the KCL to process DynamoDB
Streams shards and stream records. To do this, you use the DynamoDB Streams Kinesis Adapter. The Kinesis
Adapter implements the Kinesis Data Streams interface so that the KCL can be used for consuming and
processing records from DynamoDB Streams.
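
The KCL and the DynamoDB Streams Kinesis Adapter are Java libraries; the boto3 sketch below only illustrates the underlying shard-and-iterator model that the adapter and the KCL abstract away (shard discovery, checkpointing, and scaling are handled for you by the KCL). The stream ARN is a placeholder.

import boto3

streams = boto3.client("dynamodbstreams")

stream_arn = "arn:aws:dynamodb:us-east-1:111111111111:table/Products/stream/2024-01-01T00:00:00.000"

# A stream is composed of shards, which are containers for stream records.
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",   # start from the oldest available record
    )["ShardIterator"]

    records = streams.get_records(ShardIterator=iterator, Limit=100)["Records"]
    for record in records:
        # Each record describes an item-level change (INSERT, MODIFY, REMOVE).
        print(record["eventName"], record["dynamodb"].get("Keys"))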

Amazon Kinesis Data Analytics is the easiest way to transform and analyze streaming data in real time using
Apache Flink, an open-source framework and engine for processing data streams. Amazon Kinesis Data
Analytics simplifies building and managing Apache Flink workloads and allows you to easily integrate
applications with other AWS services.

Hence, the correct answer is: Refactor your architecture to use Amazon Kinesis Adapter for real-time
processing of streaming data at a massive scale instead of directly consuming the stream using Lambda.
Re-architect the reporting module to use Amazon Kinesis Data Analytics.

The option that says: Delete all of the Lambda functions and move all of the processing on an Amazon ECS
Cluster with Auto Scaling enabled using AWS App Runner. Set up AWS Glue to consume the DynamoDB
stream which will be processed by ECS. Re-factor the reporting module to use Amazon Kinesis Data
Analytics is incorrect because using an ECS Cluster is more expensive than using Lambda functions. The use of
AWS Glue is not warranted and irrelevant here since this is just a fully-managed extract, transform, and load
(ETL) service that makes it easy for customers to prepare and load their data for analytics, and not for
consuming DynamoDB streams.

The option that says: Create a new local secondary index (LSI) in the DynamoDB table to improve the
performance of the queries. Double the allocated RCU of the table. Refactor the Lambda functions to
directly query from the table and disable the DynamoDB streams. Increase the concurrency limits of each
Lambda function to avoid throttling errors and set the ParallelizationFactor to 10. Re-architect the
reporting module to use Amazon Kinesis Data Analytics is incorrect because doubling the allocated RCU of
the table will significantly increase the cost of your architecture and technically, this will not solve the throttling
issue of the Lambda functions. The same goes with the creation of an LSI since the issue here is the consumption
of the streams and not the actual DynamoDB query performance. Although increasing the concurrency limits of
the Lambda function may help, this is not the most scalable solution to consume the high amount of DynamoDB
Streams. The recommended way is to use an Amazon Kinesis Adapter instead in order to consume the streams at
a massive scale.

The option that says: Use the Amazon CodeGuru service to optimize the codebase of the reporting module.
Create a new global secondary index (GSI) in the DynamoDB table to improve the performance of the
queries. Increase the allocated RCU of the table. Disable the Amazon DynamoDB streams and refactor
the Lambda functions to directly query from the table. Refactor the reporting module to use Amazon
Kinesis Data Analytics is incorrect because just as mentioned above, adding a GSI and increasing the allocated
RCU are actually not related to this issue. In fact, querying directly from the table entails a lot of operational
overhead since you have to develop the queries and maintain the indices as well as the capacity units (e.g. RCU,
WCU) of your table.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.KCLAdapter.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/

Question 41: Incorrect

A CTO of a leading insurance company has recently decided to migrate its online customer portal to AWS. Their
customers will be using the online portal to view their paid insurance premiums and manage their accounts. For
improved scalability, the application should be hosted in an Auto Scaling group of On-Demand EC2 instances
with a custom Amazon Machine Image (AMI). The same architecture will also be used for their non-production
environments (DEV, TEST, and STAGING). You are instructed by the CTO to design a deployment strategy
that securely stores the credentials of each environment, expedites the startup time for the EC2 instances, and
allows the same AMI to work in all environments. As a DevOps Engineer, how should you set up the deployment configuration to accomplish this task?

Add a tag to each EC2 instance based on their environment. Use a preconfigured AMI from AWS
Marketplace. Write a bootstrap script in the User Data to analyze the tag and set the environment
configuration accordingly. Use the AWS Secrets Manager to store the credentials.

Add a tag to each EC2 instance based on their environment. Preconfigure the AMI by installing all of the
required applications and software dependencies using the AWS Systems Manager Patch Manager. Set up
a Lambda function that will be invoked by the User Data to analyze the associated tag and set the
environment configuration accordingly. Use the AWS Systems Manager Parameter Store to store the
credentials as Secure String parameters.

Add a tag to each EC2 instance based on their environment. Preconfigure the AMI by installing all of the
required applications and software dependencies using the AWS Systems Manager Session Manager. Set
up a Lambda function that will be invoked by the User Data to analyze the associated tag and set the
environment configuration accordingly. Use the AWS Secrets Manager to store the credentials.

Add a tag to each EC2 instance based on their environment. Use AWS Systems Manager Automation to
preconfigure the AMI by installing all of the required applications and software dependencies. Write a
bootstrap script in the User Data to analyze the tag and set the environment configuration accordingly.
Use the AWS Systems Manager Parameter Store to store the credentials as Secure String parameters.

(Correct)

Explanation

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data
management and secrets management. You can store data such as passwords, database strings, and license codes
as parameter values. You can store values as plain text or encrypted data. You can then reference values by using
the unique name that you specified when you created the parameter. Highly scalable, available, and durable,
Parameter Store is backed by the AWS Cloud.
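
As an illustrative sketch (the parameter names and values are assumptions, not from the scenario), a SecureString parameter can be written once per environment and then fetched at launch time by the instance's bootstrap script, based on its environment tag.

import boto3

ssm = boto3.client("ssm")

# Store one SecureString per environment, e.g. /portal/STAGING/db_password.
ssm.put_parameter(
    Name="/portal/STAGING/db_password",
    Value="s3cr3t-value",          # placeholder value
    Type="SecureString",           # encrypted with a KMS key
    Overwrite=True,
)

# The User Data bootstrap (which could equally be a shell script) would read the
# instance's Environment tag and then fetch the matching parameter:
environment = "STAGING"  # in practice, resolved from the EC2 instance's tag
parameter = ssm.get_parameter(
    Name=f"/portal/{environment}/db_password",
    WithDecryption=True,
)
db_password = parameter["Parameter"]["Value"]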

Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances
and other AWS resources. Automation enables you to do the following.

- Build Automation workflows to configure and manage instances and AWS resources.

- Create custom workflows or use pre-defined workflows maintained by AWS.

- Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.

- Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager
console.

Automation offers one-click automations for simplifying complex tasks such as creating golden Amazon Machine Images (AMIs) and recovering unreachable EC2 instances. Here are some examples:

- Use the AWS-UpdateLinuxAmi and AWS-UpdateWindowsAmi documents to create golden AMIs from a source AMI.
You can run custom scripts before and after updates are applied. You can also include or exclude specific
packages from being installed.

- Use the AWSSupport-ExecuteEC2Rescue document to recover impaired instances. An instance can become
unreachable for a variety of reasons, including network misconfigurations, RDP issues, or firewall settings.
Troubleshooting and regaining access to the instance previously required dozens of manual steps before you
could regain access. The AWSSupport-ExecuteEC2Rescue document lets you regain access by specifying an
instance ID and clicking a button.

A Systems Manager Automation document defines the Automation workflow (the actions that Systems Manager
performs on your managed instances and AWS resources). Automation includes several pre-defined Automation
documents that you can use to perform common tasks like restarting one or more Amazon EC2 instances or
creating an Amazon Machine Image (AMI). Documents use JavaScript Object Notation (JSON) or YAML, and
they include steps and parameters that you specify. Steps run in sequential order.

Hence, the correct answer is: Add a tag to each EC2 instance based on their environment. Use AWS Systems
Manager Automation to preconfigure the AMI by installing all of the required applications and software
dependencies. Write a bootstrap script in the User Data to analyze the tag and set the environment
configuration accordingly. Use the AWS Systems Manager Parameter Store to store the credentials as Secure
String parameters.

The option that says: Add a tag to each EC2 instance based on their environment. Preconfigure the AMI by
installing all of the required applications and software dependencies using the AWS Systems Manager
Session Manager. Set up a Lambda function that will be invoked by the User Data to analyze the associated
tag and set the environment configuration accordingly. Use the AWS Secrets Manager to store the
credentials is incorrect because the Session Manager service is just a fully managed AWS Systems Manager
capability that lets you manage your Amazon EC2 instances through an interactive one-click browser-based shell
or through the AWS CLI. It is not capable to build a custom AMI, unlike Systems Manager Automation.

The option that says: Add a tag to each EC2 instance based on their environment. Use a preconfigured AMI
from AWS Marketplace. Write a bootstrap script in the User Data to analyze the tag and set the environment
configuration accordingly. Use the AWS Secrets Manager to store the credentials is incorrect because the
company is using a custom AMI and not a public AMI from AWS Marketplace. You have to preconfigure the
AMI using the Systems Manager Automation instead.

The option that says: Add a tag to each EC2 instance based on their environment. Preconfigure the AMI by
installing all of the required applications and software dependencies using the AWS Systems Manager Patch Manager. Set up a Lambda function that will be invoked by the User Data to analyze the associated tag and
set the environment configuration accordingly. Use the AWS Systems Manager Parameter Store to store the
credentials as Secure String parameters is incorrect because the AWS Systems Manager Patch Manager simply
automates the process of patching managed instances with both security-related and other types of updates. A
better solution is to preconfigure the AMI using the Systems Manager Automation instead.

References:

https://aws.amazon.com/blogs/big-data/create-custom-amis-and-push-updates-to-a-running-amazon-emr-cluster-using-amazon-ec2-systems-manager/

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-documents.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 42: Incorrect

A leading IT consultancy firm has several Python-based Flask and Django web applications hosted in AWS.
Some of their developers are freelance contractors located overseas. The firm wants to automate remediation
actions for issues relating to the health of its AWS resources by using the AWS Health Dashboard and the AWS
Health API. They need to automatically detect any of their own IAM access keys that are accidentally or deliberately listed on a public GitHub repository. Once detected, the IAM access key must be immediately deleted and a notification should be sent to the DevOps team. As a DevOps Engineer, how can you meet this requirement?

Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create a CloudWatch Events rule with an aws.health event source and
the AWS_RISK_CREDENTIALS_EXPOSED event to monitor any exposed IAM keys from the Internet. Set the Step
Functions as the target of the CloudWatch Events rule.

(Correct)

Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create an AWS Config rule for
the AWS_RISK_CREDENTIALS_EXPOSED event with Multi-Account Multi-Region Data Aggregation to monitor
any exposed IAM keys from the Internet. Set the Step Functions as the target of the CloudWatch Events
rule.

Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create an AWS Personal Health Dashboard rule for
the AWS_RISK_CREDENTIALS_EXPOSED event to monitor any exposed IAM keys from the Internet. Set the Step
Functions as the target of the CloudWatch Events rule.

Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Use a combination of Amazon GuardDuty and Amazon Macie to
monitor any exposed IAM keys from the Internet. Set the Step Functions as the target of the CloudWatch
Events rule.

Explanation

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and
update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such
as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications. Workflows are made
up of a series of steps, with the output of one step acting as input into the next. Application development is
simpler and more intuitive using Step Functions, because it translates your workflow into a state machine
diagram that is easy to understand, easy to explain to others, and easy to change.

AWS proactively monitors popular code repository sites for exposed AWS Identity and Access Management
(IAM) access keys. On detection of an exposed IAM access key, AWS Health generates
an AWS_RISK_CREDENTIALS_EXPOSED CloudWatch Event. In response to this event, you can set up an
automated workflow that deletes the exposed IAM access key, summarizes the recent API activity for the exposed key, and sends the summary message to an Amazon Simple Notification Service (SNS) topic to notify the subscribers, all orchestrated by an AWS Step Functions state machine.

You can use Amazon CloudWatch Events to detect and react to changes in the status of AWS Personal Health
Dashboard (AWS Health) events. Then, based on the rules that you create, CloudWatch Events invokes one or
more target actions when an event matches the values that you specify in a rule. Depending on the type of event,
you can send notifications, capture event information, take corrective action, initiate events, or take other
actions.
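
A hedged sketch of such a rule with boto3 is shown below; the rule name and the state machine and IAM role ARNs are placeholders, and the event pattern is kept to the essential source and event type code.

import json
import boto3

events = boto3.client("events")

# Rule that matches the AWS Health event for an exposed access key.
events.put_rule(
    Name="exposed-iam-key-remediation",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail": {"eventTypeCode": ["AWS_RISK_CREDENTIALS_EXPOSED"]},
    }),
    State="ENABLED",
)

# Target the Step Functions state machine that runs the three remediation Lambdas.
events.put_targets(
    Rule="exposed-iam-key-remediation",
    Targets=[
        {
            "Id": "remediation-state-machine",
            "Arn": "arn:aws:states:us-east-1:111111111111:stateMachine:ExposedKeyRemediation",
            # Role that allows CloudWatch Events to start the state machine execution.
            "RoleArn": "arn:aws:iam::111111111111:role/cwe-start-stepfunctions",
        }
    ],
)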

You can automate actions in response to new scheduled events for your EC2 instances. For example, you can
create CloudWatch Events rules for EC2 scheduled events generated by the AWS Health service. These rules
can then trigger targets, such as AWS Systems Manager Automation documents, to automate actions.

Hence, the correct answer is: Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create a CloudWatch Events rule with an aws.health event source and the AWS_RISK_CREDENTIALS_EXPOSED event to monitor any exposed IAM keys
from the Internet. Set the Step Functions as the target of the CloudWatch Events rule.

The option that says: Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Use a combination of Amazon GuardDuty and Amazon Macie to
monitor any exposed IAM keys from the Internet. Set the Step Functions as the target of the CloudWatch
Events rule is incorrect because you can't monitor any exposed IAM keys from the Internet using Amazon
GuardDuty and Amazon Macie. Amazon GuardDuty is a threat detection service that continuously monitors for
malicious activity and unauthorized behavior to protect your AWS accounts and workloads while Amazon
Macie is simply a security service that uses machine learning to automatically discover, classify, and protect
sensitive data in AWS.

The option that says: Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create an AWS Config rule for
the AWS_RISK_CREDENTIALS_EXPOSED event with Multi-Account Multi-Region Data Aggregation to monitor any
exposed IAM keys from the Internet. Set the Step Functions as the target of the CloudWatch Events rule is
incorrect because the use of Multi-Account Multi-Region Data Aggregation in AWS Config will not satisfy the
requirement. An aggregator is simply an AWS Config resource type that collects AWS Config configuration and
compliance data from multiple accounts across multiple regions.

The option that says: Set up three Lambda functions in AWS Step Functions to delete the exposed IAM access key, summarize the recent API activity for the exposed key using CloudTrail, and send a notification to the IT Security team using Amazon SNS. Create an AWS Personal Health Dashboard rule for the AWS_RISK_CREDENTIALS_EXPOSED event to monitor any exposed IAM keys from the Internet. Set the Step Functions as the target of the CloudWatch Events rule is incorrect because there is no such thing as an AWS Personal Health Dashboard rule. The AWS_RISK_CREDENTIALS_EXPOSED event can only be captured with a CloudWatch Events rule that uses the aws.health event source.

References:

https://aws.amazon.com/blogs/compute/automate-your-it-operations-using-aws-step-functions-and-amazon-cloudwatch-events/

https://github.com/aws/aws-health-tools/tree/master/automated-actions/AWS_RISK_CREDENTIALS_EXPOSED

https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html

Check out these AWS Health and Amazon CloudWatch Cheat Sheets:

https://tutorialsdojo.com/aws-health/

https://tutorialsdojo.com/amazon-cloudwatch/

Question 43: Correct

A company has a fleet of Linux and Windows servers which they use for their enterprise application. They need
to implement an automated daily check of each golden AMI they own to monitor the latest Common
Vulnerabilities and Exposures (CVEs) using Amazon Inspector. Which among the options below is the MOST suitable solution that they should implement?

Use AWS Step Functions to launch an Amazon EC2 instance for each operating system from the golden
AMI and install the Amazon Inspector agent. Configure the Step Functions to trigger the Amazon
Inspector assessment for all instances right after the EC2 instances have booted up. Configure the Step
Functions to run every day using the CloudWatch Event Bus.

Use AWS Step Functions to launch an Amazon EC2 instance for each operating system from the golden
AMI, install the Amazon Inspector agent, and add a custom tag for tracking. Configure the Step Functions
to trigger the Amazon Inspector assessment for all instances with the custom tag you added right after the
EC2 instances have booted up. Trigger the Step Functions every day using an Amazon CloudWatch
Events rule.

(Correct)

Use an AWS Lambda function to launch an Amazon EC2 instance for each operating system from the
golden AMI and add a custom tag for tracking. Create another Lambda function that will trigger the
Amazon Inspector assessment for all instances with the custom tag you added right after the EC2
instances have booted up. Trigger the function every day using an Amazon CloudWatch Events rule.

Use an AWS Lambda function to launch an Amazon EC2 instance for each operating system from the
golden AMI and add a custom tag for tracking. Create another Lambda function that will call the Amazon
Inspector API action StartAssessmentRun after the EC2 instances have booted up, which will run the
assessment against all instances with the custom tag you added. Trigger the function every day using
Amazon CloudWatch Alarms.

Explanation

Amazon Inspector tests the network accessibility of your Amazon EC2 instances and the security state of your
applications that run on those instances. Amazon Inspector assesses applications for exposure, vulnerabilities,
and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of
security findings that is organized by level of severity.

With Amazon Inspector, you can automate security vulnerability assessments throughout your development and
deployment pipelines or for static production systems. This allows you to make security testing a regular part of
development and IT operations.

Amazon Inspector also offers predefined software called an agent that you can optionally install in the operating
system of the EC2 instances that you want to assess. The agent monitors the behavior of the EC2 instances,
including network, file system, and process activity. It also collects a wide set of behavior and configuration data
(telemetry).

If you want to set up a recurring schedule for your assessment, you can configure your assessment template to
run automatically by creating a Lambda function using the AWS Lambda console. Alternatively, you can select
the "Set up recurring assessment runs once every <number_of_days>, starting now" checkbox and specify the
recurrence pattern (number of days) using the up and down arrows.
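
As a minimal sketch of the scheduled trigger's target (assuming an existing Amazon Inspector Classic assessment template whose ARN is a placeholder), a Lambda handler invoked by a daily CloudWatch Events rule could start the run like this.

import datetime
import boto3

inspector = boto3.client("inspector")

def lambda_handler(event, context):
    # Start a new assessment run against the template that targets the tagged instances.
    response = inspector.start_assessment_run(
        assessmentTemplateArn="arn:aws:inspector:us-east-1:111111111111:target/0-abcd1234/template/0-efgh5678",
        assessmentRunName="daily-cve-scan-" + datetime.datetime.utcnow().strftime("%Y-%m-%d"),
    )
    return response["assessmentRunArn"]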

Hence, the correct answer is to use AWS Step Functions to launch an Amazon EC2 instance for each
operating system from the golden AMI, install the Amazon Inspector agent, and add a custom tag for
tracking. Configure the Step Functions to trigger the Amazon Inspector assessment for all instances with
the custom tag you added right after the EC2 instances have booted up. Trigger the Step Functions every
day using an Amazon CloudWatch Events rule.

The option that says: Use an AWS Lambda function to launch an Amazon EC2 instance for each operating
system from the golden AMI and add a custom tag for tracking. Create another Lambda function that
will call the Amazon Inspector API action StartAssessmentRun after the EC2 instances have booted up,
which will run the assessment against all instances with the custom tag you added. Trigger the function
every day using Amazon CloudWatch Alarms is incorrect because you can't trigger a Lambda function on
a regular basis using CloudWatch Alarms. You have to use CloudWatch Events instead. Moreover, you have to
install the Amazon Inspector agent to the EC2 instance in order to run the security assessments.

The option that says: Use an AWS Lambda function to launch an Amazon EC2 instance for each operating
system from the golden AMI and add a custom tag for tracking. Create another Lambda function that
will trigger the Amazon Inspector assessment for all instances with the custom tag you added right after
the EC2 instances have booted up. Trigger the function every day using an Amazon CloudWatch Events
rule is incorrect because, in order for this solution to work, you have to install the Amazon Inspector agent first
to the EC2 instance before you can run the security assessments.

The option that says: Use AWS Step Functions to launch an Amazon EC2 instance for each operating
system from the golden AMI and install the Amazon Inspector agent. Configure the Step Functions to
trigger the Amazon Inspector assessment for all instances right after the EC2 instances have booted up.
Configure the Step Functions to run every day using the CloudWatch Event Bus is incorrect because the
CloudWatch Event bus is primarily used to accept events from AWS services, other AWS accounts, and
PutEvents API calls. You should also add a custom tag to the EC2 instance in order to run the Amazon Inspector
assessments.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/set-up-amazon-inspector/

https://docs.aws.amazon.com/inspector/latest/userguide/inspector_assessments.html#assessment_runs-schedule

https://aws.amazon.com/blogs/security/how-to-set-up-continuous-golden-ami-vulnerability-assessments-with-amazon-inspector/

Check out this Amazon Inspector Cheat Sheet:

https://tutorialsdojo.com/amazon-inspector/

Question 44: Incorrect

You are working as a DevOps engineer for a company that has hundreds of Amazon EC2 instances in their AWS
account which runs their enterprise web applications. The company is using Slack, which is a cloud-based
proprietary instant messaging platform, for updates and system alerts. There is a new requirement to create a
system that should automatically send all AWS-scheduled maintenance notifications to the Slack channel of the
company. This will easily notify their IT Operations team if there are any AWS-initiated changes to their EC2
instances and other resources. Which of the following is the EASIEST method that you should implement in order to meet the above requirement?

Use a combination of AWS Config and AWS Trusted Advisor to track any AWS-initiated changes. Create
rules in AWS Config which can trigger an event that invokes an AWS Lambda function to send
notifications to the company's Slack channel.

Use a combination of AWS Personal Health Dashboard and Amazon CloudWatch Events to track the
AWS-initiated activities to your resources. Create an event using CloudWatch Events which can invoke
an AWS Lambda function to send notifications to the company's Slack channel.

(Correct)

Use a combination of AWS Support APIs and AWS CloudTrail to track and monitor the AWS-initiated
activities to your resources. Create a new trail using AWS CloudTrail which can invoke an AWS Lambda
function to send notifications to the company's Slack channel.

Use a combination of Amazon EC2 Events and Amazon CloudWatch to track and monitor the AWS-
initiated activities to your resources. Create an alarm using CloudWatch Alarms which can invoke an
AWS Lambda function to send notifications to the company's Slack channel.

Explanation

AWS Health provides ongoing visibility into the state of your AWS resources, services, and accounts. The
service gives you awareness and remediation guidance for resource performance or availability issues that affect
your applications running on AWS. AWS Health provides relevant and timely information to help you manage
events in progress. AWS Health also helps to be aware of and to prepare for planned activities. The service
delivers alerts and notifications triggered by changes in the health of AWS resources, so that you get near-instant
event visibility and guidance to help accelerate troubleshooting.

All customers can use the Personal Health Dashboard (PHD), powered by the AWS Health API. The dashboard
requires no setup, and it's ready to use for authenticated AWS users. Additionally, AWS Support customers who
have a Business or Enterprise support plan can use the AWS Health API to integrate with in-house and third-
party systems.

You can use Amazon CloudWatch Events to detect and react to changes in the status of AWS Personal Health
Dashboard (AWS Health) events. Then, based on the rules that you create, CloudWatch Events invokes one or
more target actions when an event matches the values that you specify in a rule. Depending on the type of event,
you can send notifications, capture event information, take corrective action, initiate events, or take other
actions.

Only those AWS Health events that are specific to your AWS account and resources are published to
CloudWatch Events. This includes events such as EBS volume lost, EC2 instance store drive performance
degraded, and all the scheduled change events. In contrast, Service Health Dashboard events provide information
about the regional availability of a service and are not specific to AWS accounts, so they are not published to
CloudWatch Events. These event types have the word "operational" in the title in the Personal Health
Dashboard; for example, "SWF operational issue".
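
A hedged sketch of the Lambda target follows; the Slack webhook URL is a placeholder, and the detail fields read here are ones commonly present in AWS Health events rather than a guaranteed schema.

import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def lambda_handler(event, context):
    # The CloudWatch Events rule passes the AWS Health event as-is to this function.
    detail = event.get("detail", {})
    message = {
        "text": "AWS Health event: {} ({}) - {}".format(
            detail.get("eventTypeCode", "UNKNOWN"),
            detail.get("service", "N/A"),
            detail.get("eventDescription", [{}])[0].get("latestDescription", ""),
        )
    }
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status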

Hence, the correct answer is: Use a combination of AWS Personal Health Dashboard and Amazon
CloudWatch Events to track the AWS-initiated activities to your resources. Create an event using
CloudWatch Events which can invoke an AWS Lambda function to send notifications to the company's
Slack channel.

The option that says: Use a combination of AWS Config and AWS Trusted Advisor to track any AWS-
initiated changes. Create rules in AWS Config which can trigger an event that invokes an AWS Lambda
function to send notifications to the company's Slack channel is incorrect because AWS Config is not
capable of tracking any AWS-initiated changes or maintenance in your AWS resources.

The option that says: Use a combination of Amazon EC2 Events and Amazon CloudWatch to track and
monitor the AWS-initiated activities to your resources. Create an alarm using CloudWatch Alarms which
can invoke an AWS Lambda function to send notifications to the company's Slack channel is incorrect
because although the use of CloudWatch is acceptable, especially CloudWatch Events, it is still invalid to use
Amazon EC2 Events to track AWS-initiated activities. A better combination should be the AWS Personal Health
Dashboard and Amazon CloudWatch Events.

The option that says: Use a combination of AWS Support APIs and AWS CloudTrail to track and monitor
the AWS-initiated activities to your resources. Create a new trail using AWS CloudTrail which can invoke
an AWS Lambda function to send notifications to the company's Slack channel is incorrect because an
integration of AWS Support API and AWS CloudTrail will not be able to properly monitor the AWS-initiated
activities in your resources since CloudTrail simply tracks all of the API events of your account only but not the
AWS-initiated events or maintenance.

References:

https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html#automating-instance-actions

https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html

https://docs.aws.amazon.com/health/latest/ug/what-is-aws-health.html

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 45: Incorrect

An American tech company used an AWS CloudFormation template to deploy its static corporate website hosted
on Amazon S3 in the US East (N. Virginia) region. The template defines an Amazon S3 bucket with a Lambda-
backed custom resource that downloads the content from a file server into the bucket. There is a new task for the
DevOps Engineer to move the website to the US West (Oregon) region to better serve its customers on the West
Coast with lower latency. However, the application stack could not be deleted successfully in CloudFormation.

Which among the following options shows the root cause of this issue, and how can the DevOps Engineer
mitigate this problem for current and future versions of the website?

The CloudFormation stack deletion fails for an S3 bucket that still has contents. To fix the issue, modify
the Lambda function code of the custom resource to recursively empty the bucket if the stack is selected
for deletion.

(Correct)

The CloudFormation stack deletion fails for an S3 bucket because it is not yet empty. To fix the issue, set
the DeletionPolicy to ForceDelete instead.

The CloudFormation stack deletion fails for an S3 bucket because the DeletionPolicy attribute is set
to Snapshot. To fix the issue, set the DeletionPolicy to Delete instead.

The CloudFormation stack deletion fails for an S3 bucket that is used as a static web hosting. To fix the
issue, modify the CloudFormation template to remove the website configuration for the S3 bucket.

Explanation

When you associate a Lambda function with a custom resource, the function is invoked whenever the custom
resource is created, updated, or deleted. AWS CloudFormation calls a Lambda API to invoke the function and to
pass all the request data (such as the request type and resource properties) to the function. The power and
customizability of Lambda functions in combination with AWS CloudFormation enable a wide range of
scenarios, such as dynamically looking up AMI IDs during stack creation, or implementing and using utility
functions, such as string reversal functions.

AWS CloudFormation templates that declare an Amazon Elastic Compute Cloud (Amazon EC2) instance must
also specify an Amazon Machine Image (AMI) ID, which includes an operating system and other software and
configuration information used to launch the instance. The correct AMI ID depends on the instance type and
region in which you're launching your stack. And IDs can change regularly, such as when an AMI is updated
with software updates.

Normally, you might map AMI IDs to specific instance types and regions. To update the IDs, you manually
change them in each of your templates. By using custom resources and AWS Lambda (Lambda), you can create
a function that gets the IDs of the latest AMIs for the region and instance type that you're using so that you don't
have to maintain mappings.

You can also run the custom resource to recursively empty the bucket when the CloudFormation stack is
triggered for deletion. In CloudFormation, you can only delete empty buckets. Any request for deletion will fail
for buckets that still have contents. To control how AWS CloudFormation handles the bucket when the stack is
deleted, you can set a deletion policy for your bucket. You can choose to retain the bucket or to delete the
bucket.
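
For illustration only, the Delete path of such a custom resource effectively performs the equivalent of the following AWS CLI command before CloudFormation attempts to delete the bucket; the bucket name is a placeholder, and in the real custom resource the Lambda function would do this through the AWS SDK and then signal SUCCESS back to CloudFormation.

# Recursively remove every object so the bucket is empty by the time
# CloudFormation issues the DeleteBucket call. Bucket name is hypothetical.
aws s3 rm s3://tutorialsdojo-website-bucket --recursive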

Hence, the correct answer is: The CloudFormation stack deletion fails for an S3 bucket that still has contents.
To fix the issue, modify the Lambda function code of the custom resource to recursively empty the bucket if
the stack is selected for deletion.

The option that says: The CloudFormation stack deletion fails for an S3 bucket that is used as a static web
hosting. To fix the issue, modify the CloudFormation template to remove the website configuration for the S3
bucket is incorrect because the CloudFormation deletion process will not be hindered simply because your S3
bucket is configured for static web hosting. The primary root cause of this issue is that the CloudFormation stack
deletion fails for an S3 bucket that still has contents.

The option that says: The CloudFormation stack deletion fails for an S3 bucket because
the DeletionPolicy attribute is set to Snapshot. To fix the issue, set the DeletionPolicy to Delete instead is
incorrect because you can only set the DeletionPolicy to either Retain or Delete for an Amazon S3 resource. In
addition, the CloudFormation deletion will still fail as long as the S3 bucket is not empty, even if
the DeletionPolicy attribute is already set to Delete.

The option that says: The CloudFormation stack deletion fails for an S3 bucket because it is not yet empty. To fix
the issue, set the DeletionPolicy to ForceDelete instead is incorrect because although the stated root cause is
accurate, the proposed DeletionPolicy value is invalid. ForceDelete is not a valid value for the DeletionPolicy
attribute.

References:

https://aws.amazon.com/blogs/devops/faster-auto-scaling-in-aws-cloudformation-stacks-with-lambda-backed-custom-resources/

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-lookup-amiids.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Question 46: Incorrect

A government-sponsored health service is running their web application containing information about the
clinics, hospitals, medical specialists and other medical services in the country. They also have a set of public
web services which enable third-party companies to search medical data for their respective applications and
clients. Lambda functions are used for the public APIs. For the database tier, an Amazon DynamoDB table stores
all of the data, and an Amazon ES domain supports the search feature and stores the indexes. A DevOps
engineer has been instructed to ensure that in the event of a failed deployment, there is no downtime and
a system should be in place to prevent subsequent deployments. The service must strictly maintain full capacity
during API deployment without any reduced capacity to avoid degradation of service.

How can the engineer meet the above requirements in the MOST efficient way?

Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using AWS CloudFormation.
Deploy changes with an AWS CodeDeploy in-place deployment. Host the web application in AWS
Elastic Beanstalk and set the deployment policy to Rolling.

Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using AWS CloudFormation.
Deploy changes with an AWS CodeDeploy blue/green deployment. Host the web application in Amazon
S3 and enable cross-region replication.

Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using AWS CloudFormation.
Deploy changes with an AWS CodeDeploy blue/green deployment. Host the web application in AWS
Elastic Beanstalk and set the deployment policy to All at Once.

Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using AWS CloudFormation.
Deploy changes with an AWS CodeDeploy blue/green deployment. Host the web application in AWS
Elastic Beanstalk and set the deployment policy to Immutable.

(Correct)

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment
policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure
batch size and health check behavior during deployments. By default, your environment uses all-at-once
deployments. If you created the environment with the EB CLI and it's an automatically scaling environment (you
didn't specify the --single option), it uses rolling deployments.

With rolling deployments, Elastic Beanstalk splits the environment's EC2 instances into batches and deploys the
new version of the application to one batch at a time, leaving the rest of the instances in the environment running
the old version of the application. During a rolling deployment, some instances serve requests with the old
version of the application, while instances in completed batches serve other requests with the new version.

To maintain full capacity during deployments, you can configure your environment to launch a new batch of
instances before taking any instances out of service. This option is known as a rolling deployment with an
additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of
instances.

Immutable deployments perform an immutable update to launch a full set of new instances running the new
version of the application in a separate Auto Scaling group, alongside the instances running the old version.
Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new
instances don't pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
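
For reference, a minimal sketch of switching an existing environment to the Immutable policy with the AWS CLI is shown below; the environment name is a placeholder, and the same option can also be set through an .ebextensions configuration file or the Elastic Beanstalk console.

# Set the deployment policy for application deployments to Immutable.
aws elasticbeanstalk update-environment \
    --environment-name tutorialsdojo-prod-env \
    --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Immutable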

Hence, the correct answer is: Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain
using AWS CloudFormation. Deploy changes with an AWS CodeDeploy blue/green deployment. Host the
web application in AWS Elastic Beanstalk and set the deployment policy to Immutable.

The option that says: Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using
AWS CloudFormation. Deploy changes with an AWS CodeDeploy blue/green deployment. Host the web
application in AWS Elastic Beanstalk and set the deployment policy to All at Once is incorrect because this
policy deploys the new version to all instances simultaneously, which means that the instances in your
environment are out of service for a short time while the deployment occurs.

The option that says: Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using
AWS CloudFormation. Deploy changes with an AWS CodeDeploy in-place deployment. Host the web
application in AWS Elastic Beanstalk and set the deployment policy to Rolling is incorrect because this
policy will deploy the new version in batches where each batch is taken out of service during the deployment
phase, reducing your environment's capacity by the number of instances in a batch.

The option that says: Deploy the DynamoDB tables, Lambda functions, and Amazon ES domain using
AWS CloudFormation. Deploy changes with an AWS CodeDeploy blue/green deployment. Host the web
application in Amazon S3 and enable cross-region replication is incorrect because you can't host a dynamic
web application in Amazon S3.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html#welcome-deployment-overview

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 47: Incorrect

A company has a suite of applications that uses the MERN stack for the presentation tier and NGINX for the
web tier. AWS CodeDeploy will be used to automate its application deployments. The DevOps team created a
deployment group for their TEST environment and performed functional tests within the application. The team
will set up additional deployment groups for STAGING and PROD environments later on. The current log level
is configured within the NGINX settings, but the team wants to change this configuration dynamically when the
deployment occurs. This will enable them to set different log level configurations depending on the deployment
group without having a different application revision for each group.

Which among the options below provides the LEAST management overhead and does not require different
script versions for each deployment group?

Develop a custom shell script that uses the DEPLOYMENT_GROUP_NAME environment variable in CodeDeploy
to identify which deployment group the Amazon EC2 instance is associated with. In
the appspec.yml config file, add a reference to this script as part of the BeforeInstall lifecycle hook.
Configure the log level settings of the instance based on the result of the script.

(Correct)

Use the AWS Resource Groups Tag Editor to add a tag on each EC2 instance based on its deployment
group. Attach a shell script in the application revision that will fetch the instance tag using the aws ec2
describe-tags CLI command to determine which deployment group the Amazon EC2 instance is
associated with. In the appspec.yml config file, add a reference to this script as part of
the ValidateService lifecycle hook. Configure the log level settings of the instance based on the result of
the script.

Develop a custom shell script that uses the DEPLOYMENT_GROUP_ID environment variable in CodeDeploy to
identify which deployment group the Amazon EC2 instance is associated with. In the appspec.yml config
file, add a reference to this script as part of the ApplicationStart lifecycle hook. Configure the log level
settings of the instance based on the result of the script.

Set up a custom environment variable in CodeDeploy for each environment with a value of TEST,
STAGING or PROD. Attach a shell script in the application revision that will read the custom variable
and determine which deployment group the Amazon EC2 instance is associated with. In
the appspec.yml config file, add a reference to this script as part of the ValidateService lifecycle hook.
Configure the log level settings of the instance based on the result of the script.

Explanation

The content in the 'hooks' section of the AppSpec file varies, depending on the compute platform for your
deployment. The 'hooks' section for an EC2/On-Premises deployment contains mappings that link deployment
lifecycle event hooks to one or more scripts. The 'hooks' section for a Lambda or an Amazon ECS deployment
specifies Lambda validation functions to run during a deployment lifecycle event. If an event hook is not
present, no operation is executed for that event.

There is also a set of available environment variables for the hooks. During each deployment lifecycle event,
hook scripts can access the following environment variables:

APPLICATION_NAME - The name of the application in CodeDeploy that is part of the current
deployment (for example, WordPress_App).

DEPLOYMENT_ID - The ID CodeDeploy has assigned to the current deployment (for example, d-
AB1CDEF23).

DEPLOYMENT_GROUP_NAME - The name of the deployment group in CodeDeploy that is part of the
current deployment (for example, WordPress_DepGroup).

DEPLOYMENT_GROUP_ID - The ID of the deployment group in CodeDeploy that is part of the current
deployment (for example, b1a2189b-dd90-4ef5-8f40-4c1c5EXAMPLE).

LIFECYCLE_EVENT - The name of the current deployment lifecycle event (for example, AfterInstall).

These environment variables are local to each deployment lifecycle event.

The following script changes the listening port on an Apache HTTP server to 9090 instead of 80 if the value
of DEPLOYMENT_GROUP_NAME is equal to Staging. This script must be invoked during
the BeforeInstall deployment lifecycle event:
if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ]
then
    sed -i -e 's/Listen 80/Listen 9090/g' /etc/httpd/conf/httpd.conf
fi

The following script example changes the verbosity level of messages recorded in its error log from warning to
debug if the value of the DEPLOYMENT_GROUP_NAME environment variable is equal to Staging. This
script must be invoked during the BeforeInstall deployment lifecycle event:
if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ]
then
    sed -i -e 's/LogLevel warn/LogLevel debug/g' /etc/httpd/conf/httpd.conf
fi
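
Since the scenario in this question uses NGINX rather than Apache, a comparable BeforeInstall hook script might look like the sketch below. The deployment group name, the log path, and the exact error_log directive already present in the configuration file are assumptions.

#!/bin/bash
# Raise the NGINX error log verbosity only for the TEST deployment group.
if [ "$DEPLOYMENT_GROUP_NAME" == "TEST" ]
then
    sed -i -e 's|error_log /var/log/nginx/error.log warn;|error_log /var/log/nginx/error.log debug;|g' /etc/nginx/nginx.conf
fi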

Hence, the correct answer is: Develop a custom shell script that uses the DEPLOYMENT_GROUP_NAME environment
variable in CodeDeploy to identify which deployment group the Amazon EC2 instance is associated with. In
the appspec.yml config file, add a reference to this script as part of the BeforeInstall lifecycle hook.
Configure the log level settings of the instance based on the result of the script.

The option that says: Use the AWS Resource Groups Tag Editor to add a tag on each EC2 instance based on
its deployment group. Attach a shell script in the application revision that will fetch the instance tag using
the aws ec2 describe-tags CLI command to determine which deployment group the Amazon EC2 instance is
associated with. In the appspec.yml config file, add a reference to this script as part of
the ValidateService lifecycle hook. Configure the log level settings of the instance based on the result of the
script is incorrect because it is better to use the DEPLOYMENT_GROUP_NAME environment variable in
CodeDeploy instead of adding tags and querying them with the AWS CLI. Take note that the instances also need
credentials or an IAM instance profile with permission to run the aws ec2 describe-tags CLI command, which
creates unnecessary management overhead. Moreover, you have to add a reference to the script as part of
the BeforeInstall lifecycle hook and not the ValidateService hook.

The option that says: Set up a custom environment variable in CodeDeploy for each environment with a value
of TEST, STAGING or PROD. Attach a shell script in the application revision that will read the custom
variable and determine which deployment group the Amazon EC2 instance is associated with. In
the appspec.yml config file, add a reference to this script as part of the ValidateService lifecycle hook.
Configure the log level settings of the instance based on the result of the script is incorrect because there is no
such thing as a custom environment variable in CodeDeploy. During each deployment lifecycle event, hook scripts
can only access the following predefined environment variables: APPLICATION_NAME, DEPLOYMENT_ID,
DEPLOYMENT_GROUP_NAME, DEPLOYMENT_GROUP_ID, and LIFECYCLE_EVENT. In addition, you
have to add a reference to the script as part of the BeforeInstall lifecycle hook and not the ValidateService hook.

The option that says: Develop a custom shell script that uses the DEPLOYMENT_GROUP_ID environment variable in
CodeDeploy to identify which deployment group the Amazon EC2 instance is associated with. In
the appspec.yml config file, add a reference to this script as part of the ApplicationStart lifecycle hook.
Configure the log level settings of the instance based on the result of the script is incorrect because you have to
use the DEPLOYMENT_GROUP_NAME environment variable in CodeDeploy to identify which deployment
group the Amazon EC2 instance is associated with and not DEPLOYMENT_GROUP_ID. Moreover, you have
to add a reference to the script as part of the BeforeInstall lifecycle hook and not the ApplicationStart hook.

References:

https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-environment-variable-availability

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Question 48: Correct

A software development company has a microservices architecture that consists of several AWS Lambda
functions with a DynamoDB table as its data store. The current workflow of the development team is to
manually deploy the new version of the Lambda function right after the QA team completes its testing. There is
a new requirement to improve the workflow by automating the tests as well as the code deployments. Whenever
there is a new release, the application traffic to the new versions of each microservice should be incrementally
shifted over time after deployment. This will give the team the option to verify the changes on a subset of users in
production and easily roll back the changes if needed.
Which of the following solutions will improve the velocity of the company's development workflow?

Set up an AWS CodeBuild configuration which automatically starts whenever a new code is pushed.
Configure CloudFormation to trigger a pipeline in AWS CodePipeline that deploys the new Lambda
version. Specify the percentage of traffic that will initially be routed to your updated Lambda function as
well as the interval to deploy the code over time in the CloudFormation template.

Set up a new pipeline in AWS CodePipeline and configure a new source code step that will automatically
trigger whenever a new code is pushed in AWS CodeCommit. Use AWS CodeBuild for the build step to
run the tests automatically then set up an AWS CodeDeploy configuration to deploy the updated Lambda
function. Select the predefined CodeDeployDefault.LambdaLinear10PercentEvery3Minutes configuration
option for deployment.

(Correct)

Develop a custom shell script that utilizes a post-commit hook to upload the latest version of the Lambda
function in an S3 bucket. Set up the S3 event trigger which will invoke a Lambda function that deploys
the new version. Specify the percentage of traffic that will initially be routed to your updated Lambda as
well as the interval to deploy the code over time.

Set up a new pipeline in AWS CodePipeline and configure a post-commit hook to start the pipeline after
all the automated tests have passed. Configure AWS CodeDeploy to use an All-at-once deployment
configuration for deployments.

Explanation

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and
produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and
scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your
builds are not left waiting in a queue.

AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps
required to release your software. You can quickly model and configure the different stages of a software release
process. CodePipeline automates the steps required to release your software changes continuously.

When you deploy to an AWS Lambda compute platform, the deployment configuration specifies the way traffic
is shifted to the new Lambda function versions in your application.

There are three ways traffic can shift during a deployment:

Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the
percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in
minutes, before the remaining traffic is shifted in the second increment.

Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You
can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the
number of minutes between each increment.

All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version all
at once.

Hence, the correct answer is Set up a new pipeline in AWS CodePipeline and configure a new source code
step that will automatically trigger whenever a new code is pushed in AWS CodeCommit. Use AWS
CodeBuild for the build step to run the tests automatically then set up an AWS CodeDeploy configuration
to deploy the updated Lambda function. Select the
predefined CodeDeployDefault.LambdaLinear10PercentEvery3Minutes configuration option for deployment.
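
For reference, a sketch of attaching that predefined configuration to a CodeDeploy deployment group with the AWS CLI could look like the following; the application name, deployment group name, and service role ARN are placeholders, and the CodeDeploy application is assumed to use the AWS Lambda compute platform.

# Deployments in this group shift 10% of traffic every 3 minutes.
aws deploy create-deployment-group \
    --application-name smart-home-apis \
    --deployment-group-name prod \
    --deployment-config-name CodeDeployDefault.LambdaLinear10PercentEvery3Minutes \
    --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole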

The option that says: Set up a new pipeline in AWS CodePipeline and configure a post-commit hook to
start the pipeline after all the automated tests have passed. Configure AWS CodeDeploy to use an All-at-
once deployment configuration for deployments is incorrect because the All-at-once configuration will cause
all traffic to be shifted from the original Lambda function to the updated Lambda function version all at once,
just as what its name implies.

The option that says: Set up an AWS CodeBuild configuration which automatically starts whenever a new
code is pushed. Configure CloudFormation to trigger a pipeline in AWS CodePipeline that deploys the
new Lambda version. Specify the percentage of traffic that will initially be routed to your updated
Lambda function as well as the interval to deploy the code over time in the CloudFormation template is
incorrect because you can just use CodeDeploy for deployments to streamline the workflow instead of using a
combination of CodePipeline and CloudFormation.

The option that says: Develop a custom shell script that utilizes a post-commit hook to upload the latest
version of the Lambda function in an S3 bucket. Set up the S3 event trigger which will invoke a Lambda
function that deploys the new version. Specify the percentage of traffic that will initially be routed to your
updated Lambda as well as the interval to deploy the code over time is incorrect because developing a
custom shell script as well as using S3 event triggers take a lot of time and administrative effort to implement.
You should use a combination of CodeCommit, CodeBuild, CodeDeploy, and CodePipeline instead.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps-lambda.html

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 49: Incorrect

A company has an application hosted in an Auto Scaling group of EC2 instances which calls an external API
with a URL of http://api.tutorialsdojo.com as part of its processing. There was a recent deployment that
changed the protocol of the URL from HTTP to HTTPS but after that, the application has stopped working
properly. The DevOps engineer has verified using his POSTMAN tool that the external API works without any
issues and the VPC being utilized is still using the default network ACL.

Which of the following is the MOST appropriate course of action that the engineer should take to determine the
root cause of this problem?

Log in to the AWS Management Console and look for REJECT records in the VPC flow logs which
originated from the Auto Scaling group. Verify that the egress security group rules of the Auto Scaling
Group allow the outgoing traffic to the external API.

(Correct)

Log in to the AWS Management Console and view the application logs in Amazon CloudWatch Logs to
troubleshoot the issue. Verify that the existing egress security group rules of the Auto Scaling Group, as
well as the network ACL, allow the outgoing traffic to the external API.

Log in to the AWS Management Console and then in the VPC flow logs, look for ACCEPT records which
were originated from the Auto Scaling group. Verify that the ingress security group rules of the Auto
Scaling Group allow the incoming traffic from the external API.

Log in to the AWS Management Console and view the application logs in Amazon CloudWatch Logs to
troubleshoot the issue. Verify that the ingress security group rules of the Auto Scaling Group, as well as
the network ACL, allow the incoming traffic from the external API.

Explanation

Amazon Virtual Private Cloud provides features that you can use to increase and monitor the security of your
virtual private cloud (VPC):

Security groups: Security groups act as a firewall for associated Amazon EC2 instances, controlling both
inbound and outbound traffic at the instance level. When you launch an instance, you can associate it with one or
more security groups that you've created. Each instance in your VPC could belong to a different set of security
groups. If you don't specify a security group when you launch an instance, the instance is automatically
associated with the default security group for the VPC.

Network access control lists (ACLs): Network ACLs act as a firewall for associated subnets, controlling both
inbound and outbound traffic at the subnet level.

Flow logs: Flow logs capture information about the IP traffic going to and from network interfaces in your VPC.
You can create a flow log for a VPC, subnet, or individual network interface. Flow log data is published to
CloudWatch Logs or Amazon S3, and can help you diagnose overly restrictive or overly permissive security
group and network ACL rules.

Traffic mirroring: You can copy network traffic from an elastic network interface of an Amazon EC2 instance.
You can then send the traffic to out-of-band security and monitoring appliances.

You can use AWS Identity and Access Management to control who in your organization has permission to
create and manage security groups, network ACLs, and flow logs. For example, you can give only your network
administrators that permission, but not personnel who only need to launch instances.

For HTTP traffic, you must add an inbound rule on port 80 from the source address 0.0.0.0/0. For HTTPS traffic,
add an inbound rule on port 443 from the source address 0.0.0.0/0. These inbound rules allow traffic from IPv4
addresses. To allow IPv6 traffic, add inbound rules on the same ports from the source address ::/0. Because
security groups are stateful, the return traffic from the instance to users is allowed automatically, so you don't
need to modify the security group's outbound rules.

The default network ACL is configured to allow all traffic to flow in and out of the subnets with which it is
associated. Each network ACL also includes a rule whose rule number is an asterisk. This rule ensures that if a
packet doesn't match any of the other numbered rules, it's denied.

In this scenario, the change of the URL from HTTP to HTTPS means that the application is using port 443 and
not port 80 anymore. Since the application is the one that initiates the call to the external API, it makes sense to
check if the egress security group rules allow outgoing HTTPS (443) traffic.

Hence, the correct answer is: Log in to the AWS Management Console and look for REJECT records in the
VPC flow logs which originated from the Auto Scaling group. Verify that the egress security group rules
of the Auto Scaling Group allow the outgoing traffic to the external API.
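
A sketch of how this check and the subsequent fix could be done from the command line is shown below; the flow log group name and security group ID are placeholders, and it assumes the VPC flow logs are delivered to CloudWatch Logs.

# Look for rejected traffic records in the VPC flow logs.
aws logs filter-log-events \
    --log-group-name vpc-flow-logs \
    --filter-pattern "REJECT"

# If the egress rules do not allow HTTPS, open outbound traffic on port 443.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
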
The option that says: Log in to the AWS Management Console and then in the VPC flow logs, look
for ACCEPT records which were originated from the Auto Scaling group. Verify that the ingress security
group rules of the Auto Scaling Group allow the incoming traffic from the external API is incorrect
because you should first check the egress rules (instead of ingress) of your security group first. Remember that it
is the Auto Scaling group of EC2 instances that initiates the call to the external API and not the other way
around. In addition, it is more effective if you look for REJECT records instead of ACCEPT records in VPC Flow
Logs to view the details of the failed connection to your external API.

The option that says: Log in to the AWS Management Console and view the application logs in Amazon
CloudWatch Logs to troubleshoot the issue. Verify that the existing egress security group rules of the
Auto Scaling Group, as well as the network ACL, allow the outgoing traffic to the external API is incorrect
because, in the first place, the scenario didn't mention that the CloudWatch Logs agent is installed in the EC2
instances. Although it is right to check the existing egress security group rules, you don't need to check the
network ACL since the architecture is already using a default one which is configured to allow all traffic.

The option that says: Log in to the AWS Management Console and view the application logs in Amazon
CloudWatch Logs to troubleshoot the issue. Verify that the ingress security group rules of the Auto
Scaling Group, as well as the network ACL, allow the incoming traffic from the external API is incorrect
because you have to check the egress security group rules first instead. The scenario also didn't mention that the
CloudWatch Logs agent is installed in the EC2 instances, which means that you might not be able to view the
application logs in CloudWatch.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Amazon VPC Overview:

https://youtu.be/oIDHKeNxvQQ

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/amazon-vpc/

Question 50: Incorrect

A company that develops smart home devices has a set of serverless APIs built on several independent AWS
Lambda functions that handle web, mobile, Internet of Things (IoT), and third-party API requests. The current
CI/CD pipeline builds, tests, packages, and deploys each Lambda function in sequence using AWS CodePipeline
and AWS CodeBuild. A CloudWatch Events rule is used to monitor and ensure that the pipeline execution starts
promptly after a code change has been made. The SysOps team monitors all of the AWS resources as well as the
performance of each pipeline. They noticed that for the past few days, the pipeline takes too long to finish the
build and deployment process. The team instructed a DevOps Engineer to implement a solution that will
expedite the deployment process of the independent Lambda functions.

Which of the following is the MOST suitable solution that the DevOps Engineer should implement?

Enable local caching in AWS CodeBuild using a Docker layer cache mode.

Set up a custom CodeBuild execution environment with multiprocessing option that runs builds in
parallel.

Execute actions for each Lambda function in parallel by setting up a configuration that specifies the
same runOrder value in CodePipeline.

(Correct)

Upgrade the compute type of the build environment in CodeBuild pipeline with a higher memory, vCPUs,
and disk space.

Explanation

In AWS CodePipeline, an action is part of the sequence in a stage of a pipeline. It is a task performed on the
artifact in that stage. Pipeline actions occur in a specified order, in sequence or in parallel, as determined in the
configuration of the stage.

CodePipeline provides support for six types of actions:

- Source

- Build

- Test

- Deploy

- Approval

- Invoke

By default, any pipeline you successfully create in AWS CodePipeline has a valid structure. However, if you
manually create or edit a JSON file to create a pipeline or update a pipeline from the AWS CLI, you might
inadvertently create a structure that is not valid. The following reference can help you better understand the
requirements for your pipeline structure and how to troubleshoot issues.

The default runOrder value for an action is 1. The value must be a positive integer (natural number). You cannot
use fractions, decimals, negative numbers, or zero. To specify a serial sequence of actions, use the smallest
number for the first action and larger numbers for each of the rest of the actions in sequence. To specify parallel
actions, use the same integer for each action you want to run in parallel.

For example, if you want three actions to run in sequence in a stage, you would give the first action
the runOrder value of 1, the second action the runOrder value of 2, and the third the runOrder value of 3.
However, if you want the second and third actions to run in parallel, you would give the first action
the runOrder value of 1 and both the second and third actions the runOrder value of 2.
The numbering of serial actions does not have to be in strict sequence. For example, if you have three actions in a
sequence and decide to remove the second action, you do not need to renumber the runOrder value of the third
action. Because the runOrder value of that action (3) is higher than the runOrder value of the first action (1), it
runs serially after the first action in the stage.
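
A sketch of applying this with the AWS CLI is shown below; the pipeline name is a placeholder, and the runOrder values are edited by hand in the downloaded JSON file.

# Download only the pipeline structure (update-pipeline does not accept the metadata block).
aws codepipeline get-pipeline \
    --name lambda-microservices-pipeline \
    --query pipeline \
    --output json > pipeline.json

# Edit pipeline.json so the independent Lambda build/deploy actions in a stage
# share the same runOrder value (for example, 1), then push the change back.
aws codepipeline update-pipeline --pipeline file://pipeline.json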

Hence, the correct answer is: Execute actions for each Lambda function in parallel by setting up a
configuration that specifies the same runOrder value in CodePipeline.

The option that says: Upgrade the compute type of the build environment in CodeBuild pipeline with a
higher memory, vCPUs, and disk space is incorrect because although it may help speed up the build time, it
will only improve the performance of CodeBuild and not the entire pipeline. A better solution is to run the tasks
in parallel in CodePipeline.

The option that says: Set up a custom CodeBuild execution environment with multiprocessing option that
runs builds in parallel is incorrect because there is no multiprocessing option in CodeBuild. You can upgrade
the compute type of the build environment or use local cache, but there is no such thing as multiprocessing in
CodeBuild.

The option that says: Enable local caching in AWS CodeBuild using a Docker layer cache mode is incorrect
because although using a local cache can help expedite the build process, the use of Docker layer cache mode is
only applicable for containerized applications and not for Lambda functions.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/actions.html

Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Question 51: Incorrect

A leading food and beverage company is currently migrating its Docker-based application hosted on-premises to
AWS. The application will be hosted in an Amazon ECS cluster with multiple ECS services to run its various
workloads. The cluster is configured to use an Application Load Balancer to distribute traffic evenly across the
tasks in each service. A DevOps Engineer was instructed to configure the cluster to automatically collect logs
from all of the services and upload them to an S3 bucket for near-real-time analysis.

How should the Engineer configure the ECS setup to satisfy these requirements? (Select THREE.)

Capture detailed information about requests sent to your load balancer by enabling access logging on the
Application Load Balancer. Configure it to store the logs to the S3 bucket.

(Correct)

Create a CloudWatch Logs subscription filter integrated with Amazon Kinesis to analyze the
logs. Configure the CloudWatch Logs to export the logs to an S3 bucket.
(Correct)

Capture detailed information about requests sent to your load balancer by using Detailed Monitoring in
CloudWatch. Configure it to store the logs to the S3 bucket.

Set up Amazon Macie to analyze the access logs in the S3 bucket.

Create the required IAM Policy and attach it to the ecsInstanceRole. Install the Amazon CloudWatch
Logs agent on the Amazon ECS instances. Use the awslogs Log Driver in the Amazon ECS task
definition.

(Correct)

Integrate a Lambda function with CloudWatch Events to run a process every minute that invokes
the CreateLogGroup and CreateExportTask CloudWatch Logs API to push the logs to the S3 bucket.

Explanation

You can configure the containers in your tasks to send log information to CloudWatch Logs. If you are using the
Fargate launch type for your tasks, this allows you to view the logs from your containers. If you are using the
EC2 launch type, this enables you to view different logs from your containers in one convenient location, and it
prevents your container logs from taking up disk space on your container instances.

The type of information that is logged by the containers in your task depends mostly on
their ENTRYPOINT command. By default, the logs that are captured show the command output that you would
normally see in an interactive terminal if you ran the container locally, which are the STDOUT and STDERR I/O
streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs.

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it
delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS
Lambda for custom processing, analysis, or loading to other systems. To begin subscribing to log events, create
the receiving source, such as a Kinesis stream, where the events will be delivered. A subscription filter defines
the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information
about where to send matching log events to.

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load
balancer. Each log contains information such as the time the request was received, the client's IP address,
latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and
troubleshoot issues.

Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable
access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon
S3 bucket that you specify as compressed files. You can disable access logging at any time.

Each access log file is automatically encrypted before it is stored in your S3 bucket and decrypted when you
access it. You do not need to take any action; the encryption and decryption is performed transparently. Each log
file is encrypted with a unique key, which is itself encrypted with a master key that is regularly rotated.
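
For illustration, enabling the access logs and wiring a subscription filter from the AWS CLI might look like the sketch below; the load balancer ARN, bucket name, log group name, Kinesis stream ARN, and IAM role ARN are all placeholders.

# Turn on ALB access logging and point it at the S3 bucket.
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/ecs-alb/50dc6c495c0c9188 \
    --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=ecs-app-logs Key=access_logs.s3.prefix,Value=alb

# Stream the container log group to Kinesis for near-real-time analysis.
aws logs put-subscription-filter \
    --log-group-name /ecs/web-service \
    --filter-name ecs-to-kinesis \
    --filter-pattern "" \
    --destination-arn arn:aws:kinesis:us-east-1:123456789012:stream/ecs-log-stream \
    --role-arn arn:aws:iam::123456789012:role/CWLtoKinesisRole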

Hence, the correct answers are:


- Create the required IAM Policy and attach it to the ecsInstanceRole. Install the Amazon CloudWatch
Logs agent on the Amazon ECS instances. Use the awslogs Log Driver in the Amazon ECS task definition.

- Capture detailed information about requests sent to your load balancer by enabling access logging on the
Application Load Balancer. Configure it to store the logs to the S3 bucket.

- Create a CloudWatch Logs subscription filter integrated with Amazon Kinesis to analyze the logs.
Configure the CloudWatch Logs to export the logs to an S3 bucket.

The option that says: Set up Amazon Macie to analyze the access logs in the S3 bucket is incorrect because
Amazon Macie is just a security service that uses machine learning to automatically discover, classify, and
protect sensitive data in AWS. It can't analyze logs in near real time, unlike Kinesis.

The option that says: Integrate a Lambda function with CloudWatch Events to run a process every minute
that invokes the CreateLogGroup and CreateExportTask CloudWatch Logs API to push the logs to the S3
bucket is incorrect because although this step may work, it is still better to use the awslogs Log Driver instead of
developing a custom scheduled job. It is unnecessary since you only need to change the log driver in your task
definition.

The option that says: Capture detailed information about requests sent to your load balancer by using
Detailed Monitoring in CloudWatch. Configure it to store the logs to the S3 bucket is incorrect because the
Detailed Monitoring feature simply sends the metric data for your instance to CloudWatch in 1-minute periods.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html

Check out these Amazon ECS and AWS Elastic Load Balancing Cheat Sheets:

https://tutorialsdojo.com/amazon-elastic-container-service-amazon-ecs/

https://tutorialsdojo.com/aws-elastic-load-balancing-elb/

Question 52: Incorrect

A multinational investment bank is implementing regulatory compliance checks over their AWS accounts. All
API calls made on each of their AWS resources across their accounts must be monitored and tracked for auditing
purposes. AWS CloudTrail will be used to monitor all API activities and detect sensitive security issues in the
company's AWS accounts. The DevOps Team was assigned to come up with a solution that automatically
remediates any AWS account on which CloudTrail has been disabled.

As a DevOps Engineer, what solution should you apply that provides the LEAST amount of downtime for the
CloudTrail log deliveries?

Use the cloudtrail-enabled AWS Config managed rule to evaluate whether the AWS account enabled
AWS CloudTrail with a trigger type of Configuration changes. By default, this managed rule will
automatically remediate the accounts that disabled its CloudTrail.
Use the cloudtrail-enabled AWS Config managed rule with a periodic interval of 1 hour to evaluate
whether your AWS account enabled the AWS CloudTrail. Set up a CloudWatch Events rule for AWS
Config rules compliance change. Launch a Lambda function that uses the AWS SDK and add the Amazon
Resource Name (ARN) of the Lambda function as the target in the CloudWatch Events rule. Once
a StopLogging event is detected, the Lambda function will re-enable the logging for that trail by calling
the StartLogging API on the resource ARN.

(Correct)

Use AWS CDK to evaluate the CloudTrail status. Configure CloudTrail to send information to an
Amazon SNS topic. Subscribe to the Amazon SNS topic to receive notifications.

Integrate CloudWatch Events and AWS Lambda to have an automated process that runs every minute to
query the CloudTrail in the current account. Ensure that the Lambda function uses the AWS SDK. Once
a DeleteTrail event is detected, the Lambda function will re-enable the logging for that trail by calling
the CreateTrail API on the resource ARN.

(Incorrect)

Explanation

AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to
evaluate whether your AWS resources comply with common best practices. For example, you could use a
managed rule to quickly assess whether your Amazon Elastic Block Store (Amazon EBS) volumes are encrypted
or whether specific tags are applied to your resources. You can set up and activate these rules without writing the
code to create an AWS Lambda function, which is required if you want to create custom rules. The AWS Config
console guides you through the process of configuring and activating a managed rule. You can also use the AWS
Command Line Interface or AWS Config API to pass the JSON code that defines your configuration of a
managed rule.

You can customize the behavior of a managed rule to suit your needs. For example, you can define the rule's
scope to constrain which resources trigger an evaluation for the rule, such as EC2 instances or volumes. You can
customize the rule's parameters to define attributes that your resources must have to comply with the rule. For
example, you can customize a parameter to specify that your security group should block incoming traffic to a
specific port number.

After you activate a rule, AWS Config compares your resources to the rule's conditions. After this initial
evaluation, AWS Config continues to run evaluations each time one is triggered. The evaluation triggers are
defined as part of the rule, and they can include the following types:

Configuration changes – AWS Config triggers the evaluation when any resource that matches the rule's scope
changes in configuration. The evaluation runs after AWS Config sends a configuration item change notification.

Periodic – AWS Config runs evaluations for the rule at a frequency that you choose (for example, every 24
hours).

The cloudtrail-enabled managed rule checks whether AWS CloudTrail is enabled in your AWS account. Optionally,
you can specify which S3 bucket, SNS topic, and Amazon CloudWatch Logs ARN to use.

Hence, the correct answer is: Use the cloudtrail-enabled AWS Config managed rule with a periodic interval of
1 hour to evaluate whether your AWS account enabled the AWS CloudTrail. Set up a CloudWatch Events
rule for AWS Config rules compliance change. Launch a Lambda function that uses the AWS SDK and
add the Amazon Resource Name (ARN) of the Lambda function as the target in the CloudWatch Events
rule. Once a StopLogging event is detected, the Lambda function will re-enable the logging for that trail by
calling the StartLogging API on the resource ARN.
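
At its core, the remediation Lambda function simply re-enables the trail. The equivalent AWS CLI calls are shown below as a sketch, with the trail name as a placeholder.

# Re-enable log delivery for the trail that was stopped.
aws cloudtrail start-logging --name compliance-trail

# Confirm that the trail is recording again.
aws cloudtrail get-trail-status --name compliance-trail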

The option that says: Use the cloudtrail-enabled AWS Config managed rule to evaluate whether the AWS
account enabled AWS CloudTrail with a trigger type of Configuration changes. This managed rule will
automatically remediate the accounts that disabled its CloudTrail is incorrect because, by default, AWS
Config will not automatically remediate the accounts that disabled its CloudTrail. You must manually set this up
using a CloudWatch Events rule and a custom Lambda function that calls the StartLogging API to enable
CloudTrail back again. Furthermore, the cloudtrail-enabled AWS Config managed rule is only available for
the periodic trigger type and not Configuration changes.

The option that says: Use AWS CDK to evaluate the CloudTrail status. Configure CloudTrail to send
information to an Amazon SNS topic. Subscribe to the Amazon SNS topic to receive notifications is
incorrect. AWS Cloud Development Kit (AWS CDK) is an open-source software development framework for
building cloud applications and infrastructure using programming languages. It isn't used to check whether the
CloudTrail is enabled in an AWS account.

The option that says: Integrate CloudWatch Events and AWS Lambda to have an automated process that
runs every minute to query the CloudTrail in the current account. Ensure that the Lambda function uses
the AWS SDK. Once a DeleteTrail event is detected, the Lambda function will re-enable the logging for
that trail by calling the CreateTrail API on the resource ARN is incorrect. Instead, you should detect
the StopLogging event and call the StartLogging API to enable CloudTrail again.
The DeleteTrail and CreateTrail events, as their name implies, are simply for deleting and creating the trails
respectively.

References:

https://aws.amazon.com/blogs/mt/monitor-changes-and-auto-enable-logging-in-aws-cloudtrail/

https://docs.aws.amazon.com/config/latest/developerguide/cloudtrail-enabled.html

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html

https://docs.aws.amazon.com/cdk/v2/guide/home.html

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 53: Correct

A company is planning to deploy a new version of their legacy application in AWS which is deployed to an Auto
Scaling group of EC2 instances with an Application Load Balancer in front. To avoid any disruption of their
services, they need to implement canary testing first before all of the traffic is shifted to the new application
version.Which of the following solutions can meet this requirement?
Prepare another stack that consists of an Application Load Balancer and Auto Scaling group which
contains the new application version for blue/green environments. Use an Amazon CloudFront web
distribution to adjust the weight of the incoming traffic to the two Application Load Balancers.

Do a Canary deployment using CodeDeploy with a CodeDeployDefault.LambdaCanary10Percent30Minutes deployment configuration.

Prepare another stack that consists of an Application Load Balancer and Auto Scaling group which
contains the new application version for blue/green environments. Create weighted Alias A records in
Route 53 for the two Application Load Balancers to adjust the traffic.

(Correct)

Set up an Amazon API Gateway private integration with an Application Load Balancer and prepare a
separate stage for the new application version. Configure the API Gateway to do a canary release
deployment.

Explanation

The purpose of a canary deployment is to reduce the risk of deploying a new version that impacts the workload.
The method will incrementally deploy the new version, making it visible to new users in a slow fashion. As you
gain confidence in the deployment, you will deploy it to replace the current version in its entirety.

To properly implement the canary deployment, you should do the following steps:

- Use a router or load balancer that allows you to send a small percentage of users to the new version.

- Use a dimension on your KPIs to indicate which version is reporting the metrics.

- Use the metric to measure the success of the deployment; this indicates whether the deployment should
continue or rollback.

- Increase the load on the new version until either all users are on the new version or you have fully rolled back.

Hence, the correct answer is: Prepare another stack that consists of an Application Load Balancer and Auto
Scaling group which contains the new application version for blue/green environments. Create weighted
Alias A records in Route 53 for the two Application Load Balancers to adjust the traffic.
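
A sketch of the weighted alias records with the AWS CLI is shown below. The hosted zone ID, record name, and load balancer DNS names are placeholders, and the AliasTarget HostedZoneId must be the load balancer's own hosted zone ID for the Region, not the domain's hosted zone ID.

# Route roughly 90% of traffic to the current version and 10% to the canary.
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLEZONE --change-batch '{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.tutorialsdojo.com",
        "Type": "A",
        "SetIdentifier": "current-version",
        "Weight": 90,
        "AliasTarget": {
          "HostedZoneId": "Z2EXAMPLEALBZONE",
          "DNSName": "blue-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.tutorialsdojo.com",
        "Type": "A",
        "SetIdentifier": "canary-version",
        "Weight": 10,
        "AliasTarget": {
          "HostedZoneId": "Z2EXAMPLEALBZONE",
          "DNSName": "green-alb-0987654321.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}'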

The option that says: Do a Canary deployment using CodeDeploy with
a CodeDeployDefault.LambdaCanary10Percent30Minutes deployment configuration is incorrect because this
specific configuration type is only applicable to Lambda functions and not to applications hosted in an Auto
Scaling group of EC2 instances.

The option that says: Prepare another stack that consists of an Application Load Balancer and Auto
Scaling group which contains the new application version for blue/green environments. Use an Amazon
CloudFront web distribution to adjust the weight of the incoming traffic to the two Application Load
Balancers is incorrect because you can't use CloudFront to adjust the weight of the incoming traffic to your
application. You should use Route 53 instead.
The option that says: Set up an Amazon API Gateway private integration with an Application Load
Balancer and prepare a separate stage for the new application version. Configure the API Gateway to do
a canary release deployment is incorrect because you can only integrate a Network Load Balancer to your
Amazon API Gateway. Moreover, this service is only applicable for APIs, not full-fledged web applications.

References:

https://wa.aws.amazon.com/wat.concept.canary-deployment.en.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-weighted.html

https://aws.amazon.com/blogs/devops/blue-green-deployments-with-application-load-balancer/

Question 54: Incorrect

A company has a web application that runs on an Auto Scaling group of EC2 instances across multiple
Availability Zones behind an Application Load Balancer. For its database tier, it is using an Amazon RDS
MySQL and routes the incoming traffic to the load balancer using Amazon Route 53. The Application Load
Balancer has a health check that monitors the status of the web servers and verifies that the servers can properly
access the database. For compliance purposes, the management instructed their Operations team to implement a
geographically isolated disaster recovery site to ensure business continuity. The required RPO is 5 minutes while
the RTO should be 2 hours.

Which of the following options require the LEAST amount of changes to the application stack?

Clone the application stack except for RDS in a different AWS Region. Create Read Replicas in the new
region and configure the new application stack to point to the local Amazon RDS database instance. Set
up a failover routing policy in Route 53 that will automatically route traffic to the new application stack in
the event of an outage.

(Correct)

Clone the entire application stack except for its RDS database in a different Availability Zone. Create
Read Replicas in another Availability Zone and configure the new stack to point to the local RDS
instance. Set up a failover routing policy in Route 53 that will automatically route traffic to the new stack
in the event of an outage.

Configure the Amazon RDS to use Multi-AZ deployments configuration and create Read Replicas.
Increase the number of application servers of the stack. Set up a latency routing policy in Route 53 that
will automatically route traffic to the application servers.

Clone the application stack except for RDS in a different AWS Region. Enable Amazon RDS Multi-AZ
deployments configuration and deploy the standby database instance in the new region. Configure the new
application stack to point to the local RDS database instance. Set up a latency routing policy in Route 53
that will automatically route traffic to the new stack in the event of an outage.

Explanation
When you have more than one resource performing the same function — for example, more than one HTTP
server or mail server — you can configure Amazon Route 53 to check the health of your resources and respond
to DNS queries using only the healthy resources. Suppose you have a website with a domain name of
tutorialsdojo.com, which is hosted on six servers, two each in three data centers around the world. You can
configure Amazon Route 53 to check the health of those servers and to respond to DNS queries for
tutorialsdojo.com using only the servers that are currently healthy.

Route 53 can check the health of your resources in both simple and complex configurations:

- In simple configurations, you create a group of records that all have the same name and type, such as a group of
weighted records with a type of A for tutorialsdojo.com. You then configure Route 53 to check the health of the
corresponding resources. Route 53 responds to DNS queries based on the health of your resources.

- In more complex configurations, you create a tree of records that route traffic based on multiple criteria. For
example, if latency for your users is your most important criterion, then you might use latency alias records to
route traffic to the region that provides the best latency. The latency alias records might have weighted records in
each region as the alias target. The weighted records might route traffic to EC2 instances based on the instance
type. As with a simple configuration, you can configure Route 53 to route traffic based on the health of your
resources.
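
For the disaster recovery requirement in this scenario, the failover routing policy can be declared in a similar way. The following is a rough CloudFormation-style sketch; the record name, hosted zone, and the PrimaryALB/SecondaryALB resources are assumptions for illustration (in practice each load balancer lives in its own Region's stack and is referenced by its DNS name).

# Hypothetical failover alias records pointing at the primary and DR load balancers.
PrimaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: tutorialsdojo.com.
    Name: app.tutorialsdojo.com.
    Type: A
    SetIdentifier: primary-region
    Failover: PRIMARY
    AliasTarget:
      DNSName: !GetAtt PrimaryALB.DNSName
      HostedZoneId: !GetAtt PrimaryALB.CanonicalHostedZoneID
      EvaluateTargetHealth: true   # fail over when the primary ALB health checks report unhealthy
SecondaryRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: tutorialsdojo.com.
    Name: app.tutorialsdojo.com.
    Type: A
    SetIdentifier: dr-region
    Failover: SECONDARY
    AliasTarget:
      DNSName: !GetAtt SecondaryALB.DNSName
      HostedZoneId: !GetAtt SecondaryALB.CanonicalHostedZoneID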

Hence, the correct answer is: Clone the application stack except for RDS in a different AWS Region. Create
Read Replicas in the new region and configure the new application stack to point to the local Amazon
RDS database instance. Set up a failover routing policy in Route 53 that will automatically route traffic to
the new application stack in the event of an outage.

The option that says: Clone the entire application stack except for its RDS database in a different
Availability Zone. Create Read Replicas in another Availability Zone and configure the new stack to point
to the local RDS instance. Set up a failover routing policy in Route 53 that will automatically route traffic
to the new stack in the event of an outage is incorrect because this is only deployed in another Availability
Zone which could also be affected by an AWS Region outage. The new stack should be deployed on a totally
separate AWS Region instead.

The option that says: Clone the application stack except for RDS in a different AWS Region. Enable
Amazon RDS Multi-AZ deployments configuration and deploy the standby database instance in the new
region. Configure the new application stack to point to the local RDS database instance. Set up a latency
routing policy in Route 53 that will automatically route traffic to the new stack in the event of an outage is
incorrect because a Multi-AZ RDS deployment only spans several Availability Zones within a single Region, not
an entirely new Region. You cannot deploy the standby database instance in a different AWS Region.

The option that says: Configure the Amazon RDS to use Multi-AZ deployments configuration and create Read
Replicas. Increase the number of application servers of the stack. Set up a latency routing policy in Route
53 that will automatically route traffic to the application servers is incorrect. Although this architecture can
cope with an individual AZ outage, the systems will still be unavailable in the event of an AWS Region-wide
unavailability.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html

Check out these AWS Elastic Beanstalk and Amazon Route 53 Cheat Sheets:

https://tutorialsdojo.com/aws-elastic-beanstalk/

https://tutorialsdojo.com/amazon-route-53/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 55: Incorrect

A leading telecommunications company is using CloudFormation templates to deploy enterprise applications to
their production, staging, and development environments in AWS. Their current process involves
manual changes to their CloudFormation templates in order to specify the configuration variables and static
attributes for each environment. The DevOps Engineer was tasked to set up automated deployments using AWS
CodePipeline and ensure that the CloudFormation template is reusable across multiple pipelines.

How should the DevOps Engineer satisfy this requirement?

Set up a Lambda-backed custom resource in the CloudFormation templates. Configure the custom
resource to monitor the status of the pipeline in AWS CodePipeline in order to detect which environment
was launched. Use the cfn-init helper script to modify the launch configuration of each application stack
based on its environment.

Launch a new pipeline using AWS CodePipeline for each environment with multiple stages for each
application. Trigger the CloudFormation deployments using a Lambda function to dynamically modify
the UserData of the EC2 instances that were launched in each environment.

Manually configure the CloudFormation templates to use input parameters. Add a configuration that
whenever the CloudFormation stack is updated, it will dynamically modify
the LaunchConfiguration and UserData sections of the EC2 instances.

Launch a new pipeline using AWS CodePipeline that has multiple stages for each environment and
configure it to use input parameters. Switch the associated UserData of the EC2 instances to match the
environment where the application stack is being launched using CloudFormation mappings.
Specify parameter overrides for AWS CloudFormation actions.

(Correct)

Explanation

Continuous delivery is a release practice in which code changes are automatically built, tested, and prepared for
release to production. With AWS CloudFormation and CodePipeline, you can use continuous delivery to
automatically build and test changes to your AWS CloudFormation templates before promoting them to
production stacks. This release process lets you rapidly and reliably make changes to your AWS infrastructure.

For example, you can create a workflow that automatically builds a test stack when you submit an updated
template to a code repository. After AWS CloudFormation builds the test stack, you can test it and then decide
whether to push the changes to a production stack.
You can use CodePipeline to build a continuous delivery workflow by building a pipeline for AWS
CloudFormation stacks. CodePipeline has built-in integration with AWS CloudFormation, so you can specify
AWS CloudFormation-specific actions, such as creating, updating, or deleting a stack within a pipeline.

In a CodePipeline stage, you can specify parameter overrides for AWS CloudFormation actions. Parameter
overrides let you specify template parameter values that override values in a template configuration file. AWS
CloudFormation provides functions to help you specify dynamic values (values that are unknown until the
pipeline runs).

You can set the Fn::GetArtifactAtt function which retrieves the value of an attribute from an input artifact, such
as the S3 bucket name where the artifact is stored. You can use this function to specify attributes of an artifact,
such as its filename or S3 bucket name, that can be used in the pipeline.
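
As a rough sketch of how this looks inside the pipeline definition, a CloudFormation deploy action might be declared as follows. The stack name, role, artifact name, and parameter keys are assumptions for illustration only.

# Fragment of a hypothetical AWS::CodePipeline::Pipeline deploy stage.
- Name: DeployToStaging
  Actions:
    - Name: CreateUpdateStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      InputArtifacts:
        - Name: BuildOutput
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: webapp-staging
        TemplatePath: BuildOutput::template.yml
        RoleArn: !GetAtt CloudFormationDeployRole.Arn
        Capabilities: CAPABILITY_NAMED_IAM
        # Parameter overrides let one template serve every environment.
        ParameterOverrides: |
          {
            "Environment": "staging",
            "ArtifactBucket": { "Fn::GetArtifactAtt": ["BuildOutput", "BucketName"] }
          }

The production stage reuses the same template with a different set of overrides, which is what makes the template portable across multiple pipelines and environments.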

Hence, the correct answer is: Launch a new pipeline using AWS CodePipeline that has multiple stages for
each environment and configure it to use input parameters. Switch the associated UserData of the EC2
instances to match the environment where the application stack is being launched using CloudFormation
mappings. Specify parameter overrides for AWS CloudFormation actions.

The option that says: Set up a Lambda-backed custom resource in the CloudFormation templates.
Configure the custom resource to monitor the status of the pipeline in AWS CodePipeline in order to
detect which environment was launched. Use the cfn-init helper script to modify the launch configuration
of each application stack based on its environment is incorrect because monitoring the pipeline using a
custom resource in CloudFormation entails a lot of administrative overhead. A better solution would be to use
input parameters or parameter overrides for AWS CloudFormation actions.

The option that says: Launch a new pipeline using AWS CodePipeline for each environment with multiple
stages for each application. Trigger the CloudFormation deployments using a Lambda function to
dynamically modify the UserData of the EC2 instances that were launched in each environment is incorrect
because using a Lambda function to modify the UserData of the already running EC2 instances is not a suitable
solution. The parameters should have been dynamically populated and set before the resources were launched by
using parameter overrides.

The option that says: Manually configure the CloudFormation templates to use input parameters. Add a
configuration that whenever the CloudFormation stack is updated, it will dynamically modify
the LaunchConfiguration and UserData sections of the EC2 instances is incorrect. Although using input
parameters is helpful in this scenario, you should still integrate CloudFormation and CodePipeline in order to
properly map the configuration files for each environment.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-
parameter-override-functions.html

Check out this AWS CloudFormation Cheat Sheet:


https://tutorialsdojo.com/aws-cloudformation/

Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy:

https://tutorialsdojo.com/elastic-beanstalk-vs-cloudformation-vs-opsworks-vs-codedeploy/

Question 56: Correct

A company has a suite of applications that are hosted in AWS and each app has its own AMI. Currently, a new
AMI must be manually created and deployed to the server if there is a new application version. A DevOps
engineer was instructed to automate the process of generating the AMIs to streamline the company's CI/CD
workflow. The ID of the newly created AMI must be stored in a centralized location where other build pipelines
can programmatically access it.

Which of the following is the MOST cost-effective way to accomplish this requirement with the LEAST amount
of overhead?

Using AWS Systems Manager, create an Automation document with values that configure how the
machine image should be created. Launch a new pipeline with a custom action in AWS CodePipeline and
integrate it with Amazon EventBridge to execute the automation document. Build the AMI when the
process is triggered. Store the AMI IDs in Systems Manager Parameter Store.

(Correct)

Using AWS CodePipeline, set up a new pipeline that will take an EBS snapshot of the EBS-backed EC2
instance which runs the latest version of each application. Launch a new EC2 instance from the generated
snapshot and update the running instance using a Lambda function. Take a snapshot of the updated EC2
instance and then afterward, convert it to an Amazon Machine Image (AMI). Store all of the AMI IDs in
an S3 bucket.

Set up a brand new pipeline in AWS CodePipeline with several EC2 instances for processing. Configure it
to download and save the latest operating system of the application in an Open Virtualization Format
(OVF) image format and then store the image in an S3 bucket. Customize the image using guestfish
interactive shell or the virt-rescue shell. Convert the OVF to an AMI using the virtual machine (VM)
import command and then store the AMI IDs in AWS Systems Manager Parameter Store.

Use an open-source machine image creation tool such as Packer then configure it with values defining
how the AMI should be created. Set up a Jenkins pipeline hosted in a large EC2 instance to start the
Packer when triggered to build an AMI. Store the AMI IDs to an Amazon DynamoDB table.

Explanation

Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2
instances and other AWS resources. Automation enables you to do the following.

- Build Automation workflows to configure and manage instances and AWS resources.

- Create custom workflows or use pre-defined workflows maintained by AWS.

- Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.
- Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager
console.

Automation offers one-click automation for simplifying complex tasks such as creating golden Amazon
Machines Images (AMIs) and recovering unreachable EC2 instances. Here are some examples:

- Use the AWS-UpdateLinuxAmi and AWS-UpdateWindowsAmi documents to create golden AMIs from a source AMI.
You can run custom scripts before and after updates are applied. You can also include or exclude specific
packages from being installed.

- Use the AWSSupport-ExecuteEC2Rescue document to recover impaired instances. An instance can become
unreachable for various reasons, including network misconfigurations, RDP issues, or firewall settings.
Troubleshooting and regaining access to the instance previously required dozens of manual steps before you
could regain access. The AWSSupport-ExecuteEC2Rescue document lets you regain access by specifying an
instance ID and clicking a button.

A Systems Manager Automation document defines the Automation workflow (the actions that the Systems
Manager performs on your managed instances and AWS resources). Automation includes several pre-defined
Automation documents that you can use to perform common tasks like restarting one or more Amazon EC2
instances or creating an Amazon Machine Image (AMI). Documents use JavaScript Object Notation (JSON) or
YAML, and they include steps and parameters that you specify. Steps run in sequential order.
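
A minimal Automation document for this scenario could look like the sketch below. The parameter name, image name pattern, and Parameter Store key are assumptions for illustration; it also assumes an execution role that allows ec2:CreateImage and ssm:PutParameter.

# Hypothetical Automation document (schema 0.3) that bakes an AMI and publishes its ID.
schemaVersion: '0.3'
parameters:
  SourceInstanceId:
    type: String
mainSteps:
  - name: createGoldenAmi
    action: aws:createImage
    inputs:
      InstanceId: '{{ SourceInstanceId }}'
      ImageName: 'app-golden-ami-{{ global:DATE_TIME }}'
  - name: publishAmiId
    action: aws:executeAwsApi
    inputs:
      Service: ssm
      Api: PutParameter
      Name: /cicd/latest-ami-id     # other build pipelines read this parameter
      Value: '{{ createGoldenAmi.ImageId }}'
      Type: String
      Overwrite: true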

Hence, the correct answer is: Using AWS Systems Manager, create an Automation document with values
that configure how the machine image should be created. Launch a new pipeline with a custom action in
AWS CodePipeline and integrate it with Amazon EventBridge to execute the automation document. Build
the AMI when the process is triggered. Store the AMI IDs in Systems Manager Parameter Store.

The option that says: Set up a brand new pipeline in AWS CodePipeline with several EC2 instances for
processing. Configure it to download and save the latest operating system of the application in an Open
Virtualization Format (OVF) image format and then store the image in an S3 bucket. Customize the
image using guestfish interactive shell or the virt-rescue shell. Convert the OVF to an AMI using the
virtual machine (VM) import command and then store the AMI IDs in AWS Systems Manager Parameter
Store is incorrect because manually customizing the image using an interactive shell and downloading each
application image in an OVF file entails a lot of effort. It is also better to use the AWS Systems Manager
Automation instead of creating a new pipeline in AWS CodePipeline.

The option that says: Using AWS CodePipeline, set up a new pipeline that will take an EBS snapshot of the
EBS-backed EC2 instance which runs the latest version of each application. Launch a new EC2 instance
from the generated snapshot and update the running instance using a Lambda function. Take a snapshot
of the updated EC2 instance and then afterward, convert it to an Amazon Machine Image (AMI). Store all
of the AMI IDs in an S3 bucket is incorrect. Although you can technically generate an AMI using an EBS
volume snapshot, this process is still tedious and entails a lot of configuration. Using the AWS Systems Manager
Automation to generate the AMIs is a more suitable solution.

The option that says: Use an open-source machine image creation tool such as Packer then configure it with
values defining how the AMI should be created. Set up a Jenkins pipeline hosted in a large EC2 instance
to start the Packer when triggered to build an AMI. Store the AMI IDs to an Amazon DynamoDB table is
incorrect. Although this may work, this solution costs more to maintain than other options since it uses an EC2
instance and an Amazon DynamoDB table. There is also an associated overhead in configuring and using Packer
for generating the AMIs and preparing the Jenkins pipeline.

References:

https://aws.amazon.com/blogs/big-data/create-custom-amis-and-push-updates-to-a-running-amazon-emr-cluster-
using-amazon-ec2-systems-manager/

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-documents.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 57: Incorrect

A government agency is planning to launch a distributed system in AWS that processes thousands of
transactions every day. The agency purchased proprietary software with 100 licenses, which can be used by a
maximum of 100 application servers. A DevOps Engineer needs to set up an automated solution that
dynamically allocates the software licenses to the application servers. The Engineer also needs to provide a way
to see the list of available licenses that are not in use.

Which of the following options below is the MOST suitable way to accomplish this task?

Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2 instances with an
associated lifecycle hook for Instance Terminate. Store the 100 license codes to a DynamoDB table. Pull
an available license from the DynamoDB table using the User Data script of the instance upon launch.
Use the lifecycle hook to update the license mapping after the instance is terminated.

(Correct)

Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2 instances. Store
the 100 license codes to AWS Systems Manager Parameter Store. Pull an available license from the
Systems manager using the instance metadata script of the instance upon launch. Configure the metadata
script to update the license mapping after the instance is terminated.

Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2 instances with an
associated lifecycle hook for Instance Terminate. Store the 100 license codes to AWS Certificate
Manager (ACM). Pull an available license from ACM using the User Data script of the instance upon
launch. Use the lifecycle hook to update the license mapping after the instance is terminated.

Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2 instances with an
associated lifecycle hook for Instance Terminate. Store the 100 license codes to a public S3 bucket. Pull
an available license from the bucket using the User Data script of the instance upon launch. Use the
lifecycle hook to update the license mapping after the instance is terminated.
Explanation

You can use AWS CloudFormation to automatically install, configure, and start applications on Amazon EC2
instances. Doing so enables you to easily duplicate deployments and update existing installations without
connecting directly to the instance, which can save you a lot of time and effort.

AWS CloudFormation includes a set of helper scripts (cfn-init, cfn-signal, cfn-get-metadata, and cfn-hup) that
are based on cloud-init. You call these helper scripts from your AWS CloudFormation templates to install,
configure, and update applications on Amazon EC2 instances that are in the same template.

The EC2 instances in an Auto Scaling group have a path, or lifecycle, that differs from that of other EC2
instances. The lifecycle starts when the Auto Scaling group launches an instance and puts it into service. The
lifecycle ends when you terminate the instance, or the Auto Scaling group takes the instance out of service and
terminates it.

Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches
or terminates them. When an instance is paused, it remains in a wait state until either you complete the lifecycle
action using the complete-lifecycle-action command or the CompleteLifecycleAction operation, or the timeout
period ends (one hour by default).

For example, your newly launched instance completes its startup sequence and a lifecycle hook pauses the
instance. While the instance is in a wait state, you can install or configure software on it, making sure that your
instance is fully ready before it starts receiving traffic. For another example of the use of lifecycle hooks, when a
scale-in event occurs, the terminating instance is first deregistered from the load balancer (if the Auto Scaling
group is being used with Elastic Load Balancing). Then, a lifecycle hook pauses the instance before it is
terminated. While the instance is in the wait state, you can, for example, connect to the instance and download
logs or other data before the instance is fully terminated.

Hence, the correct answer is: Prepare a CloudFormation template that uses an Auto Scaling group to launch
the EC2 instances with an associated lifecycle hook for Instance Terminate. Store the 100 license codes to a
DynamoDB table. Pull an available license from the DynamoDB table using the User Data script of the
instance upon launch. Use the lifecycle hook to update the license mapping after the instance is terminated.
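
A rough sketch of the launch side is shown below as a CloudFormation launch configuration fragment. The LicenseCodes table, its attribute names, the instance profile, and the AMI parameter are assumptions for illustration; the Instance Terminate lifecycle hook would clear the AssignedTo attribute in a similar way before termination completes.

# Hypothetical launch configuration whose user data claims a free license at boot.
LicenseLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref AppAmiId
    InstanceType: t3.medium
    IamInstanceProfile: !Ref LicenseInstanceProfile
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # Find an unassigned license, then claim it with a conditional write so
        # that two instances can never grab the same code.
        export AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        LICENSE=$(aws dynamodb scan --table-name LicenseCodes \
          --filter-expression "attribute_not_exists(AssignedTo)" \
          --max-items 1 --query 'Items[0].LicenseCode.S' --output text)
        aws dynamodb update-item --table-name LicenseCodes \
          --key "{\"LicenseCode\": {\"S\": \"$LICENSE\"}}" \
          --update-expression "SET AssignedTo = :i" \
          --condition-expression "attribute_not_exists(AssignedTo)" \
          --expression-attribute-values "{\":i\": {\"S\": \"$INSTANCE_ID\"}}"

Scanning the LicenseCodes table for items where AssignedTo does not exist also gives the team the list of licenses that are not currently in use.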

The option that says: Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2
instances with an associated lifecycle hook for Instance Terminate. Store the 100 license codes to a public S3
bucket. Pull an available license from the bucket using the User Data script of the instance upon launch. Use
the lifecycle hook to update the license mapping after the instance is terminated is incorrect because using a
public S3 bucket to store the license codes is a security risk since it can be seen by anyone. You should use a
DynamoDB table instead.

The option that says: Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2
instances with an associated lifecycle hook for Instance Terminate. Store the 100 license codes to AWS
Certificate Manager (ACM). Pull an available license from ACM using the User Data script of the instance
upon launch. Use the lifecycle hook to update the license mapping after the instance is terminated is incorrect
because you cannot upload license codes to ACM. Take note that the AWS Certificate Manager is simply a
service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport
Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
The option that says: Prepare a CloudFormation template that uses an Auto Scaling group to launch the EC2
instances. Store the 100 license codes to AWS Systems Manager Parameter Store. Pull an available license
from the Systems manager using the instance metadata script of the instance upon launch. Configure the
metadata script to update the license mapping after the instance is terminated is incorrect because you should
use a user data script instead of a metadata script to update the license mapping. In addition, you should set up a
lifecycle hook for Instance Terminate in order to execute the mapping update before the instance is terminated.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Configuring Notifications for Amazon EC2 Auto Scaling Lifecycle Hooks:

https://tutorialsdojo.com/configuring-notifications-for-amazon-ec2-auto-scaling-lifecycle-hooks/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 58: Incorrect

A software development company is using AWS CodeCommit, CodeBuild, and CodePipeline for their CI/CD
process. A newly hired DevOps engineer has recently applied the AWS-
managed AWSCodeCommitPowerUser policy to the IAM Role that is being used by the development team. After
several weeks, the team leader discovered that their developers can now directly push code to the master branch
which was prohibited before. All of the other new capabilities brought by the AWS-managed policy are needed by
the team, except for this particular action. This has caused an incident in which a junior developer pushed untested
code to the master branch and the changes went directly to their production environment.

What should the DevOps engineer do to restrict this specific action?

Maintain the recently added AWSCodeCommitPowerUser AWS-managed policy but create an additional
policy to include a Deny rule for the codecommit:GitPush action. Add a restriction for the specific
repositories in the resource statement with a condition for the master branch.

(Correct)

Replace the AWSCodeCommitPowerUser AWS-managed policy with AWSCodeCommitReadOnly policy instead. In
the IAM Role of the developer team, add an Allow rule for the codecommit:GitPush action for the specific
repositories in the resource statement with a condition for the master branch.

Replace the AWSCodeCommitPowerUser AWS-managed policy with AWSCodeCommitFullAccess policy instead.
Include a deny rule for the codecommit:GitPush action for the specific repositories in the resource
statement with a condition for the master branch.
Modify the AWSCodeCommitPowerUser AWS-managed policy to include a deny rule for the
codecommit:GitPush action for the specific repositories in the resource statement with a condition for the master
branch.

Explanation

You can create a policy that denies users permission to actions you specify on one or more branches.
Alternatively, you can create a policy that allows actions on one or more branches that they might not otherwise
have in other branches of a repository. You can use these policies with the appropriate managed (predefined)
policies.

For example, you can create a Deny policy that denies users the ability to make changes to a branch named
master, including deleting that branch, in a repository named TutorialsDojoManila. You can use this policy with
the AWSCodeCommitPowerUser managed policy. Users with these two policies applied would be able to
create and delete branches, create pull requests, and all other actions as allowed
by AWSCodeCommitPowerUser, but they would not be able to push changes to the branch named master, add
or edit a file in the master branch in the CodeCommit console, or merge branches or a pull request into
the master branch. Because Deny is applied to GitPush, you must include a Null statement in the policy to allow
initial GitPush calls to be analyzed for validity when users make pushes from their local repos.
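
Expressed as an additional customer managed policy attached to the same IAM role, the deny rule could look like the following CloudFormation-style sketch. The role reference, account ID, repository name, and the exact list of merge actions are assumptions for illustration.

# Hypothetical policy that blocks pushes and merges to the master branch only.
DenyPushToMasterPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Roles:
      - !Ref DeveloperTeamRole
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Deny
          Action:
            - codecommit:GitPush
            - codecommit:PutFile
            - codecommit:MergeBranchesByFastForward
            - codecommit:MergeBranchesBySquash
            - codecommit:MergeBranchesByThreeWay
            - codecommit:MergePullRequestByFastForward
          Resource: arn:aws:codecommit:us-east-1:123456789012:TutorialsDojoManila
          Condition:
            StringEqualsIfExists:
              codecommit:References:
                - refs/heads/master
            'Null':
              codecommit:References: 'false'   # still allow the initial GitPush call to be evaluated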

There are various AWS-managed policies that you can use for providing CodeCommit access. They are:

AWSCodeCommitFullAccess – Grants full access to CodeCommit. You should apply this policy only to
administrative-level users to whom you want to grant full control over CodeCommit repositories and related
resources in your AWS account, including the ability to delete repositories.

AWSCodeCommitPowerUser – Allows users access to all of the functionality of CodeCommit and repository-
related resources, except it does not allow them to delete CodeCommit repositories or create or delete repository-
related resources in other AWS services, such as Amazon CloudWatch Events. It is recommended to apply this
policy to most users.

AWSCodeCommitReadOnly – Grants read-only access to CodeCommit and repository-related resources in
other AWS services, as well as the ability to create and manage their own CodeCommit-related resources (such
as Git credentials and SSH keys for their IAM user to use when accessing repositories). You should apply this
policy to users to whom you want to grant the ability to read the contents of a repository but not make any
changes to its contents.

Remember that you can't modify these AWS-managed policies. In order to customize the permissions, you can
add a Deny rule to the IAM Role in order to block certain capabilities included in these policies.

Hence, the correct answer is to maintain the recently added AWSCodeCommitPowerUser AWS-managed
policy but create an additional policy to include a Deny rule for the codecommit:GitPush action. Add a
restriction for the specific repositories in the resource statement with a condition for the master branch.

The option that says: Replace the AWSCodeCommitPowerUser AWS-managed policy
with AWSCodeCommitReadOnly policy instead. In the IAM Role of the developer team, add an Allow rule
for the codecommit:GitPush action for the specific repositories in the resource statement with a condition
for the master branch is incorrect because the AWSCodeCommitReadOnly policy is quite restrictive, and it will
block some of the required actions by the development team such as the ability to create pull requests. Take note
that the scenario says that all other new capabilities brought by the new AWS-managed policy
(AWSCodeCommitPowerUser) are needed by the team except for the action that allows direct code push to the
master branch.

The option that says: Modify the AWSCodeCommitPowerUser AWS-managed policy to include a deny rule
for the codecommit:GitPush action for the specific repositories in the resource statement with a condition
for the master branch is incorrect because you cannot modify an AWS-managed policy. You should just
maintain the recently added policy and then add certain Deny rules to meet the requirement.

The option that says: Replace the AWSCodeCommitPowerUser AWS-managed policy
with AWSCodeCommitFullAccess policy instead. Include a deny rule for the codecommit:GitPush action for
the specific repositories in the resource statement with a condition for the master branch is incorrect
because you should apply this policy only to administrative-level users to whom you want to grant full control
over CodeCommit repositories and related resources in your AWS account. This does not follow the AWS best
practice of granting least privilege.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-permissions-reference.html

https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-iam-identity-based-access-
control.html

https://docs.aws.amazon.com/codecommit/latest/userguide/pull-requests.html

Question 59: Correct

A global cryptocurrency trading company has a suite of web applications hosted in an Auto Scaling group of
Amazon EC2 instances across multiple Availability Zones behind an Application Load Balancer to distribute the
incoming traffic. The Auto Scaling group is configured to use Elastic Load Balancing health checks for scaling
instead of the default EC2 status checks. However, there are several occasions when some instances are
automatically terminated after failing the HTTPS health checks in the ALB that purges all the logs stored in the
instance. To improve system monitoring, a DevOps Engineer must implement a solution that collects all of the
application and server logs effectively. The Operations team should be able to perform a root cause analysis
based on the logs, even if the Auto Scaling group immediately terminated the instance.

How can the DevOps Engineer automate the log collection from the Amazon EC2 instances with the LEAST
amount of effort?

Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook to your Auto
Scaling group to move instances in the Terminating state to the Pending:Wait state. Set up a CloudWatch
Events rule for the EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an associated
Lambda function. Use the AWS Systems Manager Automation to run a script that collects and uploads the
application logs from the instance to a CloudWatch Logs group. Resume the instance termination once all
the logs are sent.

Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook to your Auto
Scaling group to move instances in the Terminating state to the Terminating:Wait state. Set up a
CloudWatch Events rule for the EC2 Instance Terminate Successful Auto Scaling Event with an
associated Lambda function. Use the AWS Systems Manager Run Command to run a script that collects
and uploads the application logs from the instance to a CloudWatch Logs group. Resume the instance
termination once all the logs are sent.

Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook to your Auto
Scaling group to move instances in the Terminating state to the Terminating:Wait state. Use AWS Step
Functions to collect the application logs and send them to a CloudWatch Log group. Resume the instance
termination once all the logs are sent to CloudWatch Logs.

Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook to your Auto
Scaling group to move instances in the Terminating state to the Terminating:Wait state. Set up a
CloudWatch Events rule for the EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an
associated AWS Systems Manager Automation document. Trigger the CloudWatch agent to push the
application logs and then resume the instance termination once all the logs are sent to CloudWatch Logs.

(Correct)

Explanation

The EC2 instances in an Auto Scaling group have a path, or lifecycle, that differs from that of other EC2
instances. The lifecycle starts when the Auto Scaling group launches an instance and puts it into service. The
lifecycle ends when you terminate the instance, or the Auto Scaling group takes the instance out of service and
terminates it.

You can add a lifecycle hook to your Auto Scaling group so that you can perform custom actions when instances
launch or terminate.

When Amazon EC2 Auto Scaling responds to a scale out event, it launches one or more instances. These
instances start in the Pending state. If you added an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook to your
Auto Scaling group, the instances move from the Pending state to the Pending:Wait state. After you complete the
lifecycle action, the instances enter the Pending:Proceed state. When the instances are fully configured, they are
attached to the Auto Scaling group and they enter the InService state.

When Amazon EC2 Auto Scaling responds to a scale in event, it terminates one or more instances. These
instances are detached from the Auto Scaling group and enter the Terminating state. If you added
an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook to your Auto Scaling group, the instances move from
the Terminating state to the Terminating:Wait state. After you complete the lifecycle action, the instances enter
the Terminating:Proceed state. When the instances are fully terminated, they enter the Terminated state.

Using CloudWatch agent is the most suitable tool to use to collect the logs. The unified CloudWatch agent
enables you to do the following:

- Collect more system-level metrics from Amazon EC2 instances across operating systems. The metrics can
include in-guest metrics, in addition to the metrics for EC2 instances. The additional metrics that can be
collected are listed in Metrics Collected by the CloudWatch Agent.

- Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as
well as servers not managed by AWS.
- Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is
supported on both Linux servers and servers running Windows Server. On the other hand, collectd is supported
only on Linux servers.

- Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server.

You can store and view the metrics that you collect with the CloudWatch agent in CloudWatch just as you can
with any other CloudWatch metrics. The default namespace for metrics collected by the CloudWatch agent
is CWAgent, although you can specify a different namespace when you configure the agent.

Hence, the correct answer is: Delay the termination of unhealthy Amazon EC2 instances by adding a
lifecycle hook to your Auto Scaling group to move instances in the Terminating state to
the Terminating:Wait state. Set up a CloudWatch Events rule for the EC2 Instance-terminate Lifecycle
Action Auto Scaling Event with an associated AWS Systems Manager Automation document. Trigger the
CloudWatch agent to push the application logs and then resume the instance termination once all the logs
are sent to CloudWatch Logs.
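
The glue between the lifecycle hook and the Systems Manager Automation document is the event rule. A minimal CloudFormation-style sketch is shown below; the Auto Scaling group name, the CollectLogsBeforeTerminate document, and the invocation role are assumptions for illustration.

# Hypothetical rule that runs an Automation document on the terminate lifecycle action.
TerminateLifecycleHookRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.autoscaling
      detail-type:
        - EC2 Instance-terminate Lifecycle Action
      detail:
        AutoScalingGroupName:
          - web-app-asg
    Targets:
      - Id: collect-logs-before-terminate
        Arn: !Sub arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/CollectLogsBeforeTerminate:$DEFAULT
        RoleArn: !GetAtt EventsInvokeAutomationRole.Arn

The Automation document would then trigger the CloudWatch agent to flush the logs and call CompleteLifecycleAction so the Auto Scaling group can proceed with the termination.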

The option that says: Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook
to your Auto Scaling group to move instances in the Terminating state to the Pending:Wait state. Set up a
CloudWatch Events rule for the EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an
associated Lambda function. Use the AWS Systems Manager Automation to run a script that collects and
uploads the application logs from the instance to a CloudWatch Logs group. Resume the instance
termination once all the logs are sent is incorrect because the Pending:Wait state refers to the scale-out action
in Amazon EC2 Auto Scaling and not for scale-in or for terminating the instances.

The option that says: Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook
to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state. Use
AWS Step Functions to collect the application logs and send them to a CloudWatch Log group. Resume
the instance termination once all the logs are sent to CloudWatch Logs is incorrect because using AWS Step
Functions is inappropriate in collecting the logs from your EC2 instances. You should use a CloudWatch agent
instead.

The option that says: Delay the termination of unhealthy Amazon EC2 instances by adding a lifecycle hook
to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state. Set up
a CloudWatch Events rule for the EC2 Instance Terminate Successful Auto Scaling Event with an associated
Lambda function. Use the AWS Systems Manager Run Command to run a script that collects and
uploads the application logs from the instance to a CloudWatch Logs group. Resume the instance
termination once all the logs are sent is incorrect. The EC2 Instance Terminate Successful event indicates that the
Auto Scaling group has already terminated the instance. The automated solution won't work because the target
instance is already gone by the time the Lambda function is triggered.

References:

https://aws.amazon.com/blogs/infrastructure-and-automation/run-code-before-terminating-an-ec2-auto-scaling-
instance/

https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html#terminate-successful
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-delay-termination/

Check out this AWS Auto Scaling Cheat Sheet:

https://tutorialsdojo.com/aws-auto-scaling/

Question 60: Correct

A dynamic Node.js-based photo sharing application hosted in four Amazon EC2 web servers is using a
DynamoDB table for session management and an S3 bucket for storing media files. The users can upload, view,
organize, and share their photos using the content management system of the application. When a user uploads
an image, a Lambda function will be invoked to process the media file then store it in Amazon S3. Due to the
recent growth of the application’s user base in the country, they decided to manually add another six EC2
instances for the web tier to handle the peak load. However, each of the instances took more than half an hour to
download the required application libraries and become fully configured.

Which of the following is the MOST resilient and highly available solution that will also lessen the deployment
time of the new servers?

Deploy a Spot Fleet of EC2 instances with a target capacity of 20 then place them behind an Application
Load Balancer. Configure Amazon Route 53 to point the application DNS record to the Application Load
Balancer. Increase the RCU and WCU of the DynamoDB table.

Host the entire Node.js application to Amazon S3 as a static website. Create an Amazon CloudFront web
distribution with the S3 bucket as its origin. Enable Auto Scaling in the Amazon DynamoDB table. In
Route 53, point the application DNS record to the CloudFront URL.

Host the entire application in Elastic Beanstalk. Create a custom AMI using AWS Systems Manager
Automation which includes all of the required dependencies and web components. Configure the Elastic
Beanstalk environment to have an Auto Scaling group of EC2 instances across multiple Availability
Zones with a load balancer in front that balances the incoming traffic. Enable Amazon DynamoDB Auto
Scaling and point the application DNS record to the Elastic Beanstalk load balancer using Amazon Route
53.

(Correct)

Set up your application to use AWS OpsWorks for deployment to automatically download the required
libraries of each new EC2 instance once it is launched. Place the EC2 instances to an Auto Scaling group
across multiple Availability Zones with an Application Load Balancer in front that balances the incoming
traffic. Enable Amazon DynamoDB Auto Scaling and configure Amazon Route 53 to point the
application DNS record to the Application Load Balancer.

Explanation

When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to
use instead of the standard Elastic Beanstalk AMI included in your platform version. A custom AMI can
improve provisioning times when instances are launched in your environment if you need to install a lot of
software that isn't included in the standard AMIs.
Using configuration files is great for configuring and customizing your environment quickly and consistently.
Applying configurations, however, can start to take a long time during environment creation and updates. If you
do a lot of server configuration in configuration files, you can reduce this time by making a custom AMI that
already has the software and configuration that you need.

A custom AMI also allows you to make changes to low-level components, such as the Linux kernel, that are
difficult to implement or take a long time to apply in configuration files. To create a custom AMI, launch an
Elastic Beanstalk platform AMI in Amazon EC2, customize the software and configuration to your needs, and
then stop the instance and save an AMI from it.
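
In Elastic Beanstalk, pointing the environment at the custom AMI is a single option setting. The following is a rough .ebextensions-style sketch; the AMI ID, instance type, and group sizes are placeholders.

# Hypothetical option settings referencing the pre-baked custom AMI.
option_settings:
  aws:autoscaling:launchconfiguration:
    ImageId: ami-0123456789abcdef0   # custom AMI with the Node.js app dependencies pre-installed
    InstanceType: t3.medium
  aws:autoscaling:asg:
    MinSize: '4'
    MaxSize: '10'

Because the dependencies are already baked into the image, new instances skip the half-hour library download and go into service almost immediately after boot.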

Hence, the correct solution for this scenario is: Host the entire application in Elastic Beanstalk. Create a
custom AMI using AWS Systems Manager Automation which includes all of the required dependencies
and web components. Configure the Elastic Beanstalk environment to have an Auto Scaling group of EC2
instances across multiple Availability Zones with a load balancer in front that balances the incoming
traffic. Enable Amazon DynamoDB Auto Scaling and point the application DNS record to the Elastic
Beanstalk load balancer using Amazon Route 53.

The option that says: Set up your application to use AWS OpsWorks for deployment to automatically
download the required libraries of each new EC2 instance once it is launched. Place the EC2 instances to
an Auto Scaling group across multiple Availability Zones with an Application Load Balancer in front that
balances the incoming traffic. Enable Amazon DynamoDB Auto Scaling and configure Amazon Route 53
to point the application DNS record to the Application Load Balancer is incorrect because this will still take
time since you have to configure the instances one by one in OpsWorks instead of just using a custom AMI
which already has the required dependencies.

The option that says: Deploy a Spot Fleet of EC2 instances with a target capacity of 20 then place them
behind an Application Load Balancer. Configure Amazon Route 53 to point the application DNS record to
the Application Load Balancer. Increase the RCU and WCU of the DynamoDB table is incorrect because
using Spot Instances is susceptible to interruptions and could lead to outages of your application. Moreover,
setting a fixed target capacity is not recommended since your servers won't scale up or scale down based on the
actual demand.

The option that says: Host the entire Node.js application to Amazon S3 as a static website. Create an
Amazon CloudFront web distribution with the S3 bucket as its origin. Enable Auto Scaling in the Amazon
DynamoDB table. In Route 53, point the application DNS record to the CloudFront URL is incorrect
because the web application is a dynamic site and cannot be migrated to a static S3 website hosting.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/beanstalk-environment-configuration-advanced.html

AWS Elastic Beanstalk Overview:

https://youtu.be/rx7e7Fej1Oo
Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 61: Correct

A company wants to implement a continuous delivery workflow that will facilitate the process for source code
promotion in their development, staging, and production environments in AWS. In the event of system
degradation or failure, they should also have the ability to roll back the recent deployment of their application.

Which of the following CI/CD designs is the MOST suitable one to implement and will incur the LEAST
amount of downtime?

Create several repositories in AWS CodeCommit for each of their developers and then create a centralized
development branch to hold merged changes from each of the developer's repository. Set up AWS
CodeBuild to build and test the code stored in the development branch. Merge to the master branch using
pull requests that will be approved by senior developers. To deploy the latest code to the production
environment, set up a blue/green deployment using AWS CodeDeploy.

Create a repository in AWS CodeCommit for the development environment and another one for the
production environment. Set up AWS CodeBuild to build and merge the two repositories. Do a blue/green
deployment using AWS CodeDeploy to deploy the latest code in production.

Create a single repository in AWS CodeCommit and create a development branch to hold merged
changes. Set up AWS CodeBuild to build and test the code stored in the development branch which is
triggered to run on every new commit. Merge to the master branch using pull requests that will be
approved by senior developers. To deploy the latest code to the production environment, set up a
blue/green deployment using AWS CodeDeploy.

(Correct)

Create an Amazon ECR repository and then create a development branch to hold merged changes made
by the developers. Set up AWS CodeBuild to build and test the code stored in the development branch
which is triggered to run on every new commit. Merge to the master branch using pull requests that will
be approved by senior developers. To deploy the latest code to the production environment, set up a
blue/green deployment using AWS CodeDeploy.

Explanation

A repository is the fundamental version control object in CodeCommit. It's where you securely store code and
files for your project. It also stores your project history, from the first commit through the latest changes. You
can share your repository with other users so you can work together on a project. If you add AWS tags to
repositories, you can set up notifications so that repository users receive emails about events (for example,
another user commenting on code). You can also change the default settings for your repository, browse its
contents, and more. You can create triggers for your repository so that code pushes or other events trigger
actions, such as emails or code functions. You can even configure a repository on your local computer (a local
repo) to push your changes to more than one repository.
In designing your CI/CD process in AWS, you can use a single repository in AWS CodeCommit and create
different branches for development, master, and release. You can use CodeBuild to build your application and
run tests to verify that all of the core features of your application are working. For deployment, you can either
select an in-place or blue/green deployment using CodeDeploy.
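
For the build-and-test step that runs on every new commit to the development branch, a minimal buildspec sketch is shown below. The runtime version and the npm commands are placeholders for the project's actual tooling.

# Hypothetical buildspec.yml used by the CodeBuild project.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npm test          # failing tests fail the build so untested code is not promoted
artifacts:
  files:
    - '**/*'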

Hence, the correct answer is: Create a single repository in AWS CodeCommit and create a development
branch to hold merged changes. Set up AWS CodeBuild to build and test the code stored in the
development branch which is triggered to run on every new commit. Merge to the master branch using
pull requests that will be approved by senior developers. To deploy the latest code to the production
environment, set up a blue/green deployment using AWS CodeDeploy.

The option that says: Create several repositories in AWS CodeCommit for each of their developers and
then create a centralized development branch to hold merged changes from each developer's
repository. Set up AWS CodeBuild to build and test the code stored in the development branch. Merge to
the master branch using pull requests that will be approved by senior developers. To deploy the latest
code to the production environment, set up a blue/green deployment using AWS CodeDeploy is incorrect
because creating a separate repository for each developer is absurd since they can simply clone the code instead.
A single repository will suffice in this scenario which can have several branches for development and production
deployment purposes.

The option that says: Create a repository in AWS CodeCommit for the development environment and
another one for the production environment. Set up AWS CodeBuild to build and merge the two
repositories. Do a blue/green deployment using AWS CodeDeploy to deploy the latest code in
production is incorrect because you don't need to create two CodeCommit repositories for one application.
Instead, you can just create at least two different branches to separate your development and production code.

The option that says: Create an Amazon ECR repository and then create a development branch to hold
merged changes made by the developers. Set up AWS CodeBuild to build and test the code stored in the
development branch which is triggered to run on every new commit. Merge to the master branch using
pull requests that will be approved by senior developers. To deploy the latest code to the production
environment, set up a blue/green deployment using AWS CodeDeploy is incorrect because Amazon ECR is a
fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker
container images. This is not a suitable service to be used to store your application code.

References:

https://aws.amazon.com/devops/continuous-integration/

https://aws.amazon.com/devops/continuous-delivery/

https://docs.aws.amazon.com/codecommit/latest/userguide/repositories.html

Check out this AWS CodeCommit Cheat Sheet:

https://tutorialsdojo.com/aws-codecommit/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/
Question 62: Incorrect

A leading software development company has various web applications hosted in an Auto Scaling group of EC2
instances which are designed for high availability and fault tolerance. They are using AWS CloudFormation to
easily manage their cloud infrastructure as code as well as for deployment. Currently, they have to manually
update their CloudFormation templates for every new available AMI of their application. This procedure is
prone to human errors and entails a high management overhead on their deployment process.

Which of the following is the MOST suitable and cost-effective solution that the DevOps engineer should
implement to automate this process?

Pull the new AMI IDs using an AWS Lambda-backed custom resource in the CloudFormation template.
Reference the AMI ID that the custom resource fetched in the launch configuration resource block.

(Correct)

Configure the AWS CloudFormation template to use conditional statements to check if new AMIs are
available. Fetch the new AMI ID using the cfn-init helper script and reference it in the launch
configuration resource block.

Launch an EC2 instance to run a custom shell script every hour to check for new AMIs. The script should
update the launch configuration resource block of the CloudFormation template with the new AMI ID if
there are new ones available.

Configure the CloudFormation template to use AMI mappings. Integrate AWS Lambda and Amazon
CloudWatch Events to create a function that regularly runs every hour to detect new AMIs as well as
update the mapping in the template. Reference the AMI mappings in the launch configuration resource
block.

Explanation

Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs
anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want
to include resources that aren't available as AWS CloudFormation resource types. You can include those
resources by using custom resources. That way, you can still manage all your related resources in a single stack.

Use the AWS::CloudFormation::CustomResource or, alternatively, the Custom::<User-Defined Resource
Name> resource type to define custom resources in your templates. Custom resources require one property: the
service token, which specifies where AWS CloudFormation sends requests to, such as an Amazon SNS topic.

When you associate a Lambda function with a custom resource, the function is invoked whenever the custom
resource is created, updated, or deleted. AWS CloudFormation calls a Lambda API to invoke the function and to
pass all the request data (such as the request type and resource properties) to the function. The power and
customizability of Lambda functions, in combination with AWS CloudFormation, enable a wide range of
scenarios, such as dynamically looking up AMI IDs during stack creation or implementing and using utility
functions, such as string reversal functions.

AWS CloudFormation templates that declare an Amazon Elastic Compute Cloud (Amazon EC2) instance must
also specify an Amazon Machine Image (AMI) ID, which includes an operating system and other software and
configuration information used to launch the instance. The correct AMI ID depends on the instance type and
region in which you're launching your stack. And IDs can change regularly, such as when an AMI is updated
with software updates.

Normally, you might map AMI IDs to specific instance types and regions. To update the IDs, you manually
change them in each of your templates. By using custom resources and AWS Lambda (Lambda), you can create
a function that gets the IDs of the latest AMIs for the region and instance type that you're using so that you don't
have to maintain mappings.
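
For illustration, below is a minimal Python sketch of a Lambda function that could back such a custom resource. It is only an example: it assumes the latest Amazon Linux 2 AMI is resolved through the public SSM parameter named in the code, and the response handling follows the standard custom resource contract of replying to the pre-signed ResponseURL.

import json
import urllib.request
import boto3

ssm = boto3.client('ssm')

# Public SSM parameter that tracks the latest Amazon Linux 2 AMI
# (assumption for this sketch; any lookup logic could be used instead).
AMI_PARAMETER = '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

def handler(event, context):
    status, data = 'SUCCESS', {}
    try:
        if event['RequestType'] in ('Create', 'Update'):
            data['ImageId'] = ssm.get_parameter(Name=AMI_PARAMETER)['Parameter']['Value']
    except Exception:
        status = 'FAILED'
    # Reply to the pre-signed S3 URL that CloudFormation includes in the event.
    body = json.dumps({
        'Status': status,
        'Reason': 'See CloudWatch Logs for details',
        'PhysicalResourceId': data.get('ImageId', 'ami-lookup'),
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': data
    }).encode('utf-8')
    request = urllib.request.Request(event['ResponseURL'], data=body, method='PUT',
                                     headers={'Content-Type': ''})
    urllib.request.urlopen(request)

The template can then reference the returned attribute (for example, with Fn::GetAtt on the custom resource's ImageId) in the launch configuration's ImageId property instead of a hard-coded AMI ID.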

Hence, the correct answer is: Use an AWS Lambda-backed custom resource in the CloudFormation
template to pull the new AMI IDs. Reference the AMI ID that the custom resource fetched in the launch
configuration resource block.

The option that says: Configure the CloudFormation template to use AMI mappings. Integrate AWS
Lambda and Amazon CloudWatch Events to create a function that regularly runs every hour to detect
new AMIs as well as update the mapping in the template. Reference the AMI mappings in the launch
configuration resource block is incorrect. Although this solution may work, it is not economical to set up a
scheduled job that runs every 1 hour just to detect new AMIs and update your CloudFormation templates. This is
an inefficient solution since the AMIs are not updated that often to begin with, which means that most of the
hourly processing done by the Lambda function will yield no result. A better design would be to use AWS
Lambda-backed custom resources instead in CloudFormation, which will fetch the new AMI IDs upon
deployment.

The option that says: Configure the AWS CloudFormation template to use conditional statements to check
if new AMIs are available. Fetch the new AMI ID using the cfn-init helper script and reference it in the
launch configuration resource block is incorrect because a cfn-init helper script is primarily used to fetch
metadata, install packages and start/stop services to your EC2 instances that are already running. A better
solution to implement here is to use AWS Lambda-backed custom resource in the CloudFormation template to
pull the new AMI IDs.

The option that says: Launch an EC2 instance to run a custom shell script every hour to check for new
AMIs. The script should update the launch configuration resource block of the CloudFormation template
with the new AMI ID if there are new ones available is incorrect. Although this solution may work, it
includes the unnecessary cost of running an EC2 instance which is charged 24/7 but only does the actual
processing every hour. This can simply be replaced by using an AWS Lambda-backed custom resource in
CloudFormation.

References:

https://aws.amazon.com/blogs/devops/faster-auto-scaling-in-aws-cloudformation-stacks-with-lambda-backed-
custom-resources/

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-
lookup-amiids.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html

Check out this AWS CloudFormation Cheat Sheet:


https://tutorialsdojo.com/aws-cloudformation/

Question 63: Incorrect

A multinational investment bank is using AWS Organizations to handle their multiple AWS accounts across
various AWS regions around the world. To comply with the strict financial IT regulations, they have to ensure
that all of their EBS volumes in all of their AWS accounts are encrypted. A DevOps engineer has been requested
to set up an automated solution that will provide a detailed report of all unencrypted EBS volumes of the
company as well as to notify them if there is a newly launched EC2 instance which uses an unencrypted volume.

Which of the following should the DevOps engineer implement to meet this requirement with the LEAST
amount of operational overhead?

Set up an AWS Config rule with a corresponding Lambda function on all the target accounts of the
company. Collect data from multiple accounts and AWS Regions using AWS Config aggregators. Export
the aggregated report to an S3 bucket then deliver the notifications using Amazon SNS.

(Correct)

Use the AWS Systems Manager Configuration Compliance to monitor all EBS volumes across all the
accounts and AWS Regions of the company. Export and store the detailed compliance report to an S3
bucket and then deliver the notifications using Amazon SNS.

Prepare a CloudFormation template which contains an AWS Config managed rule for EBS encryption of
your EBS volumes. Deploy the template across all accounts and regions of the company using the
CloudFormation stack set. Store consolidated results of the AWS config rules evaluation in an Amazon S3
bucket. When non-compliant EBS resources are detected, send a notification to the Operations team using
Amazon SNS.

Configure AWS CloudTrail to deliver all events to an Amazon S3 bucket in a centralized AWS account.
Run an AWS Lambda function to parse AWS CloudTrail logs whenever logs are delivered to the S3
bucket using the S3 event notification. Use the same Lambda function to publish the results to Amazon
SNS.

Explanation

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This
includes how the resources are related to one another and how they were configured in the past so that you can
see how the configurations and relationships change over time.

An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2)
instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud
(VPC).

With AWS Config, you can do the following:

- Evaluate your AWS resource configurations for desired settings.

- Get a snapshot of the current configurations of the supported resources that are associated with your AWS
account.
- Retrieve configurations of one or more resources that exist in your account.

- Retrieve historical configurations of one or more resources.

- Receive a notification whenever a resource is created, modified, or deleted.

- View relationships between resources. For example, you might want to find all resources that use a particular
security group.

An aggregator is an AWS Config resource type that collects AWS Config configuration and compliance data
from the following:

- Multiple accounts and multiple regions.

- Single account and multiple regions.

- An organization in AWS Organizations and all the accounts in that organization.

You can use an aggregator to view the resource configuration and compliance data recorded in AWS Config.
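
As a rough illustration, an organization-wide aggregator could be created with boto3 as shown below. The aggregator name and IAM role ARN are placeholders, not values taken from the scenario.

import boto3

config = boto3.client('config')

# Aggregate Config data from every account in the AWS Organization, across
# all Regions (the name and role ARN below are placeholders).
config.put_configuration_aggregator(
    ConfigurationAggregatorName='ebs-encryption-aggregator',
    OrganizationAggregationSource={
        'RoleArn': 'arn:aws:iam::111122223333:role/ConfigAggregatorRole',
        'AllAwsRegions': True
    }
)

# The aggregated compliance results can then be queried, for example:
summary = config.describe_aggregate_compliance_by_config_rules(
    ConfigurationAggregatorName='ebs-encryption-aggregator'
)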

Hence, the correct answer is: Set up an AWS Config rule with a corresponding Lambda function on all the
target accounts of the company. Collect data from multiple accounts and AWS Regions using AWS Config
aggregators. Export the aggregated report to an S3 bucket then deliver the notifications using Amazon
SNS.

The option that says: Configure AWS CloudTrail to deliver all events to an Amazon S3 bucket in a
centralized AWS account. Run an AWS Lambda function to parse AWS CloudTrail logs whenever logs
are delivered to the S3 bucket using the S3 event notification. Use the same Lambda function to publish
the results to Amazon SNS is incorrect. Although this solution may work, it certainly entails a lot of operational
overhead to execute and implement. Parsing thousands of API actions from all of your accounts in CloudTrail
just to ensure that the EBS encryption was enabled on all volumes could take a significant amount of time
compared with just using AWS Config.

The option that says: Prepare a CloudFormation template which contains an AWS Config managed rule
for EBS encryption of your EBS volumes. Deploy the template across all accounts and regions of the
company using the CloudFormation stack set. Store consolidated results of the AWS config rules
evaluation in an Amazon S3 bucket. When non-compliant EBS resources are detected, send a notification
to the Operations team using Amazon SNS is incorrect. Although it is right to use AWS Config here, this
solution still entails a lot of management overhead to maintain all of the CloudFormation templates. A better
solution is to use AWS Config aggregators instead.

The option that says: Use the AWS Systems Manager Configuration Compliance to monitor all EBS
volumes across all the accounts and AWS Regions of the company. Export and store the detailed
compliance report to an S3 bucket and then deliver the notifications using Amazon SNS is incorrect.
Although you can collect and aggregate data from multiple AWS accounts and Regions using the AWS Systems
Manager Configuration Compliance service, this solution has a lot of prerequisites and configuration needed.
You have to install the SSM agent on all of your EC2 instances, create Resource Data Syncs, set up a custom compliance type to check the EBS encryption, and more. Moreover, the AWS Systems Manager Configuration Compliance service is more suitable for verifying the patch compliance of all your resources.

References:

https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html

https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 64: Incorrect

An insurance firm is using CloudFormation for deploying their applications in AWS. They have a multi-tier web
application which stores financial data in an Amazon RDS MySQL database in a Multi-AZ deployments
configuration. They instructed their DevOps Engineer to upgrade the RDS instance to the latest major version of
MySQL database. It is of utmost importance to ensure minimal downtime when doing the upgrade to avoid any
business disruption.

Which of the following should the engineer implement to properly upgrade the database while minimizing
downtime?

In the AWS::RDS::DBInstance resource type in the CloudFormation template, update the AutoMinorVersionUpgrade property to the latest MySQL database version. Launch a new RDS Read Replica with the same properties as the primary database instance that will be upgraded. Finally, trigger an Update Stack operation in CloudFormation.

In the AWS::RDS::DBInstance resource type in the CloudFormation template, update the AllowMajorVersionUpgrade property to the latest MySQL database version. Afterward, directly trigger an Update Stack operation in CloudFormation.

In the AWS::RDS::DBInstance resource type in the CloudFormation template, update the EngineVersion property to the latest MySQL database version. Create a second application stack and launch a new Read Replica with the same properties as the primary database instance that will be upgraded. Finally, perform an Update Stack operation in CloudFormation.

(Correct)

In the AWS::RDS::DBInstance resource type in the CloudFormation template, update the DBEngineVersion property to the latest MySQL database version. Trigger an Update Stack operation in CloudFormation. Launch a new RDS Read Replica with the same properties as the primary database instance that will be upgraded. Finally, perform a second Update Stack operation.

Explanation

If your MySQL DB instance is currently in use with a production application, you can follow a procedure to
upgrade the database version for your DB instance that can reduce the amount of downtime for your application.

Periodically, Amazon RDS performs maintenance on Amazon RDS resources. Maintenance most often involves
updates to the DB instance's underlying hardware, underlying operating system (OS), or database engine version.
Updates to the operating system most often occur for security issues and should be done as soon as
possible. Some maintenance items require that Amazon RDS take your DB instance offline for a short time.
Maintenance items that require a resource to be offline include the required operating system or database
patching. Required patching is automatically scheduled only for patches that are related to security and instance
reliability. Such patching occurs infrequently (typically once every few months) and seldom requires more than a
fraction of your maintenance window.

When you modify the database engine for your DB instance in a Multi-AZ deployment, Amazon RDS upgrades
both the primary and secondary DB instances at the same time. In this case, the database engine for the entire
Multi-AZ deployment is shut down during the upgrade.
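
As a side note, before changing the engine version in the template, the valid major-version upgrade targets can be checked with boto3. The sketch below is only an example and assumes a MySQL 5.7 source version.

import boto3

rds = boto3.client('rds')

# List the versions that a MySQL 5.7 instance can be upgraded to
# (the source version is an example for this sketch).
response = rds.describe_db_engine_versions(Engine='mysql', EngineVersion='5.7.44')

for engine in response['DBEngineVersions']:
    for target in engine['ValidUpgradeTarget']:
        if target.get('IsMajorVersionUpgrade'):
            print(target['EngineVersion'])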

Hence, the correct answer is: In the AWS::RDS::DBInstance resource type in the CloudFormation template,
update the EngineVersion property to the latest MySQL database version. Create a second application
stack and launch a new Read Replica with the same properties as the primary database instance that will
be upgraded. Finally, perform an Update Stack operation in CloudFormation.

The option that says: In the AWS::RDS::DBInstance resource type in the CloudFormation template, update
the DBEngineVersion property to the latest MySQL database version. Trigger an Update Stack operation in
CloudFormation. Launch a new RDS Read Replica with the same properties as the primary database
instance that will be upgraded. Finally, perform a second Update Stack operation is incorrect because this
solution may possibly experience downtime since you trigger the Update Stack operation first before creating a
Read Replica, which you could have used as a backup instance in the event of update failures. Remember that
when you modify the database engine for your RDS Multi-AZ instance, the database engine for the entire Multi-
AZ deployment is shut down during the upgrade. In addition, there is no such DBEngineVersion property.

The option that says: In the AWS::RDS::DBInstance resource type in the CloudFormation template, update
the AutoMinorVersionUpgrade property to the latest MySQL database version. Launch a new RDS Read
Replica with the same properties as the primary database instance that will be upgraded. Finally, trigger
an Update Stack operation in CloudFormation is incorrect because the AutoMinorVersionUpgrade property
is simply a value that indicates whether minor engine upgrades are applied automatically to the DB instance
during the maintenance window. By default, minor engine upgrades are applied automatically. You have to use
the EngineVersion property instead.

The option that says: In the AWS::RDS::DBInstance resource type in the CloudFormation template, update
the AllowMajorVersionUpgrade property to the latest MySQL database version. Afterward, directly trigger
an Update Stack operation in CloudFormation is incorrect because if the database upgrade fails, your entire
system will be unavailable since you have no Read Replicas that you can use as a failover. Remember that when
you modify the database engine for your DB instance in a Multi-AZ deployment configuration, Amazon RDS
upgrades both the primary and secondary DB instances at the same time, which means that the RDS shuts down
the whole database. The AllowMajorVersionUpgrade property is only a value that indicates whether major version
upgrades are allowed.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.ReducedDowntime
Check out these AWS CloudFormation and Amazon RDS Cheat Sheets:

https://tutorialsdojo.com/aws-cloudformation/

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 65: Incorrect

A startup recently hired you as a replacement for their DevOps engineer who abruptly resigned from his
position. Since there was no time for him to hand over his tasks to anyone, you have to start from scratch and
understand the current configurations he made in the client’s AWS account. You saw that there is an existing
CloudWatch Events rule with a custom event pattern, as shown below. What does this CloudWatch Events rule do?

It will capture all rejected or failed build actions across all the pipelines in AWS CodePipeline and send a
notification.

It will capture all rejected or failed approval actions across all the pipelines in AWS CodePipeline and
send a notification.

(Correct)

It will capture all manual approval actions across all the pipelines in AWS CodePipeline and send a
notification.

It will capture all pipelines with a FAILED state in AWS CodePipeline and send a notification.

Explanation

Monitoring is an important part of maintaining the reliability, availability, and performance of AWS
CodePipeline. You should collect monitoring data from all parts of your AWS solution so that you can more
easily debug a multi-point failure if one occurs.

You can use the following tools to monitor your CodePipeline pipelines and their resources:

Amazon CloudWatch Events — Use Amazon CloudWatch Events to detect and react to pipeline execution
state changes (for example, send an Amazon SNS notification or invoke a Lambda function).

AWS CloudTrail — Use CloudTrail to capture API calls made by or on behalf of CodePipeline in your AWS
account and deliver the log files to an Amazon S3 bucket. You can choose to have CloudWatch publish Amazon
SNS notifications when new log files are delivered so you can take quick action.

Console and CLI — You can use the CodePipeline console and CLI to view details about the status of a
pipeline or a particular pipeline execution.

Amazon CloudWatch Events is a web service that monitors your AWS resources and the applications you run
on AWS. You can use Amazon CloudWatch Events to detect and react to changes in the state of a pipeline,
stage, or action. Then, based on rules you create, CloudWatch Events invokes one or more target actions when a
pipeline, stage, or action enters the state you specify in a rule. Depending on the type of state change, you might
want to send notifications, capture state information, take corrective action, initiate events, or take other actions.
You can configure notifications to be sent when the state changes for:

- Specified pipelines or all your pipelines. You control this by using "detail-type": "CodePipeline Pipeline Execution State Change".

- Specified stages or all your stages, within a specified pipeline or all your pipelines. You control this by
using "detail-type": "CodePipeline Stage Execution State Change".

- Specified actions or all actions, within a specified stage or all stages, within a specified pipeline or all your
pipelines. You control this by using "detail-type": "CodePipeline Action Execution State Change".

Hence, the correct answer is: It will capture all rejected or failed approval actions across all the pipelines in
AWS CodePipeline and send a notification.
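
The exact pattern from the question is not reproduced here, but a rule that matches rejected or failed approval actions would typically resemble the following boto3 sketch. The rule name is a placeholder and the pattern is a reconstruction based on the documented CodePipeline event format, not the pattern shown in the question.

import json
import boto3

events = boto3.client('events')

# Hypothetical reconstruction: match failed or rejected approval actions
# across all pipelines.
event_pattern = {
    'source': ['aws.codepipeline'],
    'detail-type': ['CodePipeline Action Execution State Change'],
    'detail': {
        'type': {'category': ['Approval']},
        'state': ['FAILED']
    }
}

events.put_rule(
    Name='codepipeline-failed-approvals',  # placeholder rule name
    EventPattern=json.dumps(event_pattern)
)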

The option that says: It will capture all manual approval actions across all the pipelines in AWS CodePipeline and send a notification is incorrect because the rule only captures rejected or failed approval actions; successful approvals are excluded, so it does not capture all manual approval actions.

The option that says: It will capture all pipelines with a FAILED state in AWS CodePipeline and send a
notification is incorrect because the custom event pattern is tracking the changes of the specific "Actions" and
not the entire "State" of the pipeline.

The option that says: It will capture all rejected or failed build actions across all the pipelines in AWS
CodePipeline and send a notification is incorrect because the indicated source is aws.codepipeline and
not aws.codebuild which means that this rule tracks the failed or rejected approval actions across all the
pipelines, and not the status of the build actions.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring.html

Question 66: Incorrect

A company is planning to migrate its online customer portal to AWS. It should be hosted in AWS Elastic
Beanstalk and use Amazon RDS MySQL in Multi-AZ configuration for its database. A DevOps Engineer was
instructed to ensure that the application resources must be at full capacity during deployment by using a new
group of instances. The solution should also include a way to roll back the change easily and prevent issues
caused by partially completed rolling deployments. The application performance should not be affected while a
new version of the app is being deployed.

Which is the MOST cost-effective deployment set up that the DevOps Engineer should implement to meet these
requirements?

Host the online customer portal using AWS Elastic Beanstalk coupled with an Amazon RDS MySQL
database. In the Elastic Beanstalk database configuration, set the Availability option to High (Multi-
AZ) to run a warm backup in a second Availability Zone. Use the All at once deployment policy to
release the new application version.
Host the online customer portal using AWS Elastic Beanstalk coupled with Amazon RDS MySQL
database as part of the environment. For high availability, set the Availability option to High (Multi-
AZ) in the Elastic Beanstalk database configuration to run a warm backup in a second Availability Zone.
Use the Rolling with additional batch policy for application deployments.

Host the online customer portal using AWS Elastic Beanstalk and integrate it to an external Amazon RDS
MySQL database in Multi-AZ deployments configuration. Configure the Elastic Beanstalk to use
blue/green deployment for releasing the new application version to a new environment. Swap the
CNAME in the two environments to redirect traffic to the new version using the Swap Environment
URLs feature. Once the deployment has been successfully implemented, keep the old environment running
as a backup.

Host the online customer portal using AWS Elastic Beanstalk and integrate it to an external Amazon RDS
MySQL database in Multi-AZ deployments configuration. Use immutable updates for application
deployments.

(Correct)

Explanation

AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment
policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you
configure the batch size and health check behavior during deployments. By default, your environment uses all-
at-once deployments. If you created the environment with the EB CLI and it's an automatically scaling
environment (you didn't specify the --single option), it uses rolling deployments.

With rolling deployments, Elastic Beanstalk splits the environment's EC2 instances into batches and deploys the
new version of the application to one batch at a time, leaving the rest of the instances in the environment running
the old version of the application. During a rolling deployment, some instances serve requests with the old
version of the application, while instances in completed batches serve other requests with the new version. To
maintain full capacity during deployments, you can configure your environment to launch a new batch of
instances before taking any instances out of service. This option is known as a rolling deployment with an
additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of
instances.

Immutable deployments perform an immutable update to launch a full set of new instances running the new
version of the application in a separate Auto Scaling group alongside the instances running the old version.
Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new
instances don't pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
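
For reference, the deployment policy of an existing environment can be switched to Immutable through the aws:elasticbeanstalk:command namespace. The snippet below is a minimal sketch with a placeholder environment name.

import boto3

eb = boto3.client('elasticbeanstalk')

# Switch the environment to immutable application deployments
# (the environment name is a placeholder).
eb.update_environment(
    EnvironmentName='customer-portal-prod',
    OptionSettings=[
        {
            'Namespace': 'aws:elasticbeanstalk:command',
            'OptionName': 'DeploymentPolicy',
            'Value': 'Immutable'
        }
    ]
)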

If your application doesn't pass all health checks, but still operates correctly at a lower health status, you can
allow instances to pass health checks with a lower status, such as Warning, by modifying the Healthy
threshold option. If your deployments fail because they don't pass health checks and you need to force an update
regardless of health status, specify the Ignore health check option.

When you specify a batch size for rolling updates, Elastic Beanstalk also uses that value for rolling application
restarts. Use rolling restarts when you need to restart the proxy and application servers running on your
environment's instances without downtime.
AWS Elastic Beanstalk provides support for running Amazon Relational Database Service (Amazon RDS)
instances in your Elastic Beanstalk environment. This works great for development and testing environments.
However, it isn't ideal for a production environment because it ties the lifecycle of the database instance to the
lifecycle of your application's environment.

Hence, the correct answer is: Host the online customer portal using AWS Elastic Beanstalk and integrate it
to an external Amazon RDS MySQL database in Multi-AZ deployments configuration. Use immutable
updates for application deployments.

The option that says: Host the online customer portal using AWS Elastic Beanstalk and integrate it to an
external Amazon RDS MySQL database in Multi-AZ deployments configuration. Configure the Elastic
Beanstalk to use blue/green deployment for releasing the new application version to a new environment.
Swap the CNAME in the two environments to redirect traffic to the new version using the Swap
Environment URLs feature. Once the deployment has been successfully implemented, keep the old
environment running as a backup is incorrect. Although using the blue/green deployment configuration is an
ideal option, keeping the old environment running is not recommended since it entails a significant cost. Take
note that the scenario asks for the most cost-effective solution, which is why the old environment should be
deleted.

The option that says: Host the online customer portal using AWS Elastic Beanstalk coupled with an
Amazon RDS MySQL database. In the Elastic Beanstalk database configuration, set the Availability option
to High (Multi-AZ) to run a warm backup in a second Availability Zone. Use the All at once deployment
policy to release the new application version is incorrect because this will deploy the new version to all
existing instances and will not create new EC2 instances. Moreover, you should decouple your RDS database
from Elastic Beanstalk, as this is tied to the lifecycle of the database instance and to the lifecycle of your
application's environment.

The option that says: Host the online customer portal using AWS Elastic Beanstalk coupled with Amazon
RDS MySQL database as part of the environment. For high availability, set the Availability option to High
(Multi-AZ) in the Elastic Beanstalk database configuration to run a warm backup in a second Availability
Zone. Use the Rolling with additional batch policy for application deployments is incorrect because this type of
configuration could potentially cause partially completed rolling deployments. The new batch of instances is
within the same Auto Scaling group and not in a new one. The rollback process is also cumbersome to
implement, unlike the Immutable or Blue/Green deployment.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 67: Incorrect


You are developing a mobile news homepage that curates several news sources into a single page. The app is
mainly composed of several Lambda functions configured as a deployment group on AWS CodeDeploy. For
each new app version, you need to test all APIs before fully deploying it to production. The APIs are using a set
of AWS Lambda validation scripts. You want the ability to check the APIs during deployments and be notified
for any API errors as well as automatic rollback if the validation fails.

Which combination of the options below can help you implement a solution for this scenario? (Select THREE)

Add a step on AWS CodeDeploy to trigger your Lambda validation scripts after deployment and invoke
them after deployment to validate your new app version.

Configure your Lambda validation scripts to run during deployment and configure a CloudWatch Alarm
that will trigger a rollback when the function validation fails.

(Correct)

Define your Lambda validation scripts on the AppSpec lifecycle hook during deployment to run the
validation using test traffic and trigger a rollback if checks fail.

(Correct)

Have Lambda send results to AWS CloudWatch Alarms directly and trigger a rollback when 5xx reply
errors are received during deployment.

Associate an AWS CloudWatch Alarm to your deployment group that can send a notification to an AWS
SNS topic when threshold for 5xx is reached on CloudWatch.

(Correct)

Have CodeDeploy run the AWS Lambda validations after the deployment so you can test with production
traffic. When errors are found, have another trigger to rollback the deployment.

Explanation

You can use CloudWatch Alarms to track metrics on your new deployment and you can set thresholds for those
metrics in your Auto Scaling groups being managed by CodeDeploy. This can invoke an action if the metric you
are tracking crosses the threshold for a defined period of time. You can also monitor metrics such as instance
CPU utilization, Memory utilization or custom metrics you have configured. If the alarm is activated,
CloudWatch initiates actions such as sending a notification to Amazon Simple Notification Service, stopping a
CodeDeploy deployment, or changing the state of an instance. You will also have the option to automatically roll
back a deployment when a deployment fails or when a CloudWatch alarm is activated. CodeDeploy will
redeploy the last known working version of the application when it rolls back.

With Amazon SNS, you can create triggers that send notifications to subscribers of an Amazon SNS topic when
specified events, such as success or failure events, occur in deployments and instances. CloudWatch Alarms can
trigger sending out notifications to your configured SNS topic.

The BeforeAllowTraffic and AfterAllowTraffic lifecycle hooks of the AppSpec.yaml file allow you to use
Lambda functions to validate the new version task set using the test traffic during the deployment. For example,
a Lambda function can serve traffic to the test listener and track metrics from the replacement task set. If
rollbacks are configured, you can configure a CloudWatch alarm that triggers a rollback when the validation test
in your Lambda function fails.
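
As an illustration, a validation function attached to a BeforeAllowTraffic hook could look like the following Python sketch. The run_api_checks helper is hypothetical and stands in for the real test-traffic validation logic.

import boto3

codedeploy = boto3.client('codedeploy')

def run_api_checks():
    # Hypothetical helper; replace with real checks against the test traffic.
    return True

def handler(event, context):
    # CodeDeploy passes these identifiers to every lifecycle hook function.
    deployment_id = event['DeploymentId']
    execution_id = event['LifecycleEventHookExecutionId']

    validation_passed = run_api_checks()

    # Report the result back to CodeDeploy; a 'Failed' status stops the
    # deployment and, with rollbacks enabled, triggers a rollback.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status='Succeeded' if validation_passed else 'Failed'
    )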

Hence, the correct answers are:

-Define your Lambda validation scripts on the AppSpec lifecycle hook during deployment to run the
validation using test traffic and trigger a rollback if checks fail.

-Associate an AWS CloudWatch Alarm to your deployment group that can send a notification to an AWS
SNS topic when threshold for 5xx is reached on CloudWatch.

-Configure your Lambda validation scripts to run during deployment and configure a CloudWatch Alarm
that will trigger a rollback when the function validation fails.

The option that says: Add a step on AWS
CodeDeploy to trigger your Lambda validation scripts after deployment and invoke them after
deployment to validate your new app version is incorrect because you will want the validation script to run
before production traffic is flowing on the new app version. You can use AppSpec hooks to do this, which also
includes an option to rollback when validation fails.

The option that says: Have CodeDeploy run the AWS Lambda validations after the deployment so you can
test with production traffic. When errors are found, have another trigger to rollback the deployment is
incorrect because when the new app version is deployed to production, there's a possibility that clients will
notice these errors. Rollback will take some time as the old version will need to be re-deployed. It is better to run
the validation scripts during the deployment using the test traffic.

The option that says: Have Lambda send results to AWS CloudWatch Alarms directly and trigger a
rollback when 5xx reply errors are received during deployment is incorrect because CloudWatch Alarms
can't receive direct test results from AWS Lambda. If you want to do this, store test logs on CloudWatch logs
and have CloudWatch Events monitor those logs.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-
happens

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-rollback-and-
redeploy.html#deployments-rollback-and-redeploy-automatic-rollbacks

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups-configure-advanced-options.html

Check out these AWS Cheat Sheets:

https://tutorialsdojo.com/amazon-cloudwatch/

https://tutorialsdojo.com/amazon-sns/

Question 68: Incorrect

You are working as a DevOps engineer for a leading telecommunications company which is planning to host a
distributed system in AWS. Their system must be hosted on multiple Linux-based application servers which
must use the same configuration file that tracks any changes in the cluster such as adding or removing a server.
The configuration file is named tdojo-nodes.config and contains the list of private IP addresses of the servers in the cluster and other metadata.

Which of the following is the MOST automated way to meet the above requirements?

Store the tdojo-nodes.config configuration file in CodeCommit and set up a CodeDeploy deployment
group based on the tags of each application server nodes of the cluster. Integrate CodeDeploy with
Amazon CloudWatch Events to automatically update the configuration file of each server if a new node is
added or removed in the cluster then persist the changes in CodeCommit.

Layer the application server nodes of the cluster using AWS OpsWorks Stacks and add a Chef recipe
associated with the Configure Lifecycle Event which populates the tdojo-nodes.config file. Set up a
configuration which runs each layer's Configure recipes that updates the configuration file when a cluster
change is detected.

(Correct)

Store the tdojo-nodes.config configuration file in Amazon S3 and develop a crontab script that will
periodically poll any changes to the file and download it if there is any. Use a Node.js based process
manager such as PM2 to restart the application servers in the cluster in the event that the configuration file
is modified. Use CloudWatch Events to monitor the changes in your cluster and update the configuration
file in S3 for any changes.

Use AWS OpsWorks Stacks which layers the application server nodes of the cluster using a Chef recipe
associated with the Deploy Lifecycle Event. Set up a configuration which populates the tdojo-nodes.config
file and runs each layer's Deploy recipes that updates the configuration file when a cluster change is
detected.

Explanation

In AWS OpsWorks Stacks Lifecycle Events, each layer has a set of five lifecycle events, each of which has an
associated set of recipes that are specific to the layer. When an event occurs on a layer's instance, AWS
OpsWorks Stacks automatically runs the appropriate set of recipes. To provide a custom response to these
events, implement custom recipes and assign them to the appropriate events for each layer. AWS OpsWorks
Stacks runs those recipes after the event's built-in recipes.

When AWS OpsWorks Stacks runs a command on an instance—for example, a deploy command in response to
a Deploy lifecycle event—it adds a set of attributes to the instance's node object that describes the stack's current
configuration. For Deploy events and Execute Recipes stack commands, AWS OpsWorks Stacks installs deploy
attributes, which provide some additional deployment information.

There are five lifecycle events namely: Setup, Configure, Deploy, UnDeploy and Shutdown. The Configure
event occurs on all of the stack's instances when one of the following occurs:

- An instance enters or leaves the online state.

- You associate an Elastic IP address with an instance or disassociate one from an instance.

- You attach an Elastic Load Balancing load balancer to a layer or detach one from a layer.
For example, suppose that your stack has instances A, B, and C, and you start a new instance, D. After D has
finished running its setup recipes, AWS OpsWorks Stacks triggers the Configure event on A, B, C, and D. If you
subsequently stop A, AWS OpsWorks Stacks triggers the Configure event on B, C, and D. AWS OpsWorks
Stacks responds to the Configure event by running each layer's Configure recipes, which update the instances'
configuration to reflect the current set of online instances. The Configure event is therefore a good time to
regenerate configuration files. For example, the HAProxy Configure recipes reconfigure the load balancer to
accommodate any changes in the set of online application server instances.
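
For illustration, a custom recipe can be attached to the layer's Configure lifecycle event when the layer is created, so it runs on every instance whenever cluster membership changes. The stack ID, layer names, and recipe name below are placeholders.

import boto3

opsworks = boto3.client('opsworks')

# Attach a custom recipe to the Configure event (IDs and names are placeholders).
opsworks.create_layer(
    StackId='11111111-2222-3333-4444-555555555555',
    Type='custom',
    Name='app-servers',
    Shortname='appservers',
    CustomRecipes={
        'Configure': ['tdojo_cluster::generate_nodes_config']
    }
)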

Hence, the correct solution for this scenario is: Layer the application server nodes of the cluster using AWS
OpsWorks Stacks and add a Chef recipe associated with the Configure Lifecycle Event which populates the
tdojo-nodes.config file. Set up a configuration which runs each layer's Configure recipes that updates the
configuration file when a cluster change is detected.

The option that says: Use AWS OpsWorks Stacks which layers the application server nodes of the cluster
using a Chef recipe associated with the Deploy Lifecycle Event. Set up a configuration which populates the
tdojo-nodes.config file and runs each layer's Deploy recipes that updates the configuration file when a
cluster change is detected is incorrect. Although this is properly using the AWS OpsWorks Stacks Lifecycle
Events to track the configuration file, the type of Lifecycle event being used is wrong. You should use
the Configure Lifecycle Event instead.

The option that says: Store the tdojo-nodes.config configuration file in CodeCommit and set up a
CodeDeploy deployment group based on the tags of each application server nodes of the cluster. Integrate
CodeDeploy with Amazon CloudWatch Events to automatically update the configuration file of each
server if a new node is added or removed in the cluster then persist the changes in CodeCommit is
incorrect because CodeCommit is not a suitable service to use to store your dynamic configuration files.
Moreover, the integration of CodeDeploy and CloudWatch Events is only applicable when the actual
deployment is being executed and not suitable for monitoring your cluster.

The option that says: Store the tdojo-nodes.config configuration file in Amazon S3 and develop a crontab
script that will periodically poll any changes to the file and download it if there is any. Use a Node.js based
process manager such as PM2 to restart the application servers in the cluster in the event that the
configuration file is modified. Use CloudWatch Events to monitor the changes in your cluster and update
the configuration file in S3 for any changes is incorrect because using Amazon S3 to store the configuration
file entails a lot of management overhead. A better solution is to use AWS OpsWorks Stacks instead of
CloudWatch Events and S3.

References:

https://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html#welcome-classic-lifecycle

https://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

Check out this AWS OpsWorks Cheat Sheet:

https://tutorialsdojo.com/aws-opsworks/

Question 69: Incorrect


A leading financial company provides GraphQL APIs to allow its various customers and partners to consume its
global stock market data. The web service is hosted in an Auto Scaling group of EC2 instances behind an
Application Load Balancer. All new versions of the service are deployed via a CI/CD pipeline. A DevOps
Engineer has been instructed to track the health of the service during deployment to avoid any issues. If the latency of the service increases beyond the defined threshold, the deployment should be halted until the service has fully recovered.

Which solution should the Engineer implement to provide the FASTEST detection time for deployment issues?

Enable Detailed Monitoring in CloudWatch to monitor and detect the latency in the Application Load
Balancer. When latency increases beyond the defined threshold, trigger a CloudWatch alarm and stop the
current deployment.

Define the thresholds to roll back the deployments based on the latency using
the MinimumHealthyHosts deployment configuration in AWS CodeDeploy. Rollback the deployment if the
threshold was breached.

Collect the ELB access logs and use a Lambda function to calculate and detect the average latency of the
service. When latency increases beyond the defined threshold, trigger the alarm and stop the current
deployment.

Calculate the average latency using Amazon CloudWatch metrics that monitor the Application Load
Balancer. Associate a CloudWatch alarm with the CodeDeploy deployment group. When latency
increases beyond the defined threshold, it will automatically trigger an alarm that automatically stops the
on-going deployment.

(Correct)

Explanation

You can create a CloudWatch alarm for an instance or Amazon EC2 Auto Scaling group you are using in your
CodeDeploy operations. An alarm watches a single metric over a time period you specify and performs one or
more actions based on the value of the metric relative to a given threshold over a number of time periods.
CloudWatch alarms invoke actions when their state changes (for example, from OK to ALARM). Using native
CloudWatch alarm functionality, you can specify any of the actions supported by CloudWatch when an instance
you are using in a deployment fails, such as sending an Amazon SNS notification or stopping, terminating,
rebooting, or recovering an instance. For your CodeDeploy operations, you can configure a deployment group to
stop a deployment whenever any CloudWatch alarm you associate with the deployment group is activated.

You can associate up to ten CloudWatch alarms with a CodeDeploy deployment group. If any of the specified
alarms are activated, the deployment stops, and the status is updated to Stopped. You must grant CloudWatch
permissions to your CodeDeploy service role to use this option.
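
As a rough sketch, the latency alarm and the rollback behavior could be wired together with boto3 as follows. The application, deployment group, load balancer, alarm name, and threshold are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')
codedeploy = boto3.client('codedeploy')

# Alarm on the ALB's average target response time (names and threshold are placeholders).
cloudwatch.put_metric_alarm(
    AlarmName='graphql-api-high-latency',
    Namespace='AWS/ApplicationELB',
    MetricName='TargetResponseTime',
    Dimensions=[{'Name': 'LoadBalancer', 'Value': 'app/graphql-alb/0123456789abcdef'}],
    Statistic='Average',
    Period=60,
    EvaluationPeriods=1,
    Threshold=2.0,
    ComparisonOperator='GreaterThanThreshold'
)

# Associate the alarm with the deployment group and roll back when it fires.
codedeploy.update_deployment_group(
    applicationName='graphql-api',
    currentDeploymentGroupName='production',
    alarmConfiguration={
        'enabled': True,
        'alarms': [{'name': 'graphql-api-high-latency'}]
    },
    autoRollbackConfiguration={
        'enabled': True,
        'events': ['DEPLOYMENT_STOP_ON_ALARM']
    }
)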

Hence, the correct answer is: Calculate the average latency using Amazon CloudWatch metrics that monitor the Application Load Balancer. Associate a CloudWatch alarm with the CodeDeploy
deployment group. When latency increases beyond the defined threshold, it will automatically trigger an
alarm that automatically stops the ongoing deployment.

The option that says: Collect the ELB access logs and use a Lambda function to calculate and detect the
average latency of the service. When latency increases beyond the defined threshold, trigger the alarm
and stop the current deployment is incorrect. Although ELB access logs contain latency information, you still
need to parse the data using Lambda. The calculation process entails a significant amount of time. A faster
solution would be to use Amazon CloudWatch metrics.

The option that says: Define the thresholds to roll back the deployments based on the latency using
the MinimumHealthyHosts deployment configuration in AWS CodeDeploy. Rollback the deployment if the
threshold was breached is incorrect because the MinimumHealthyHosts is just a property of
the DeploymentConfig resource that defines how many instances must remain healthy during an AWS
CodeDeploy deployment. It doesn't calculate the latency of the service.

The option that says: Enable Detailed Monitoring in CloudWatch to monitor and detect the latency in the
Application Load Balancer. When latency increases beyond the defined threshold, trigger a CloudWatch
alarm and stop the current deployment is incorrect because you cannot enable detailed monitoring in your
Application Load Balancer. The detailed monitoring feature in CloudWatch is primarily used to collect data
from your EC2 and other resources in 1-minute frequency for an additional cost. This level of frequency is
already available for your load balancers. If there are requests flowing through the load balancer, Elastic Load
Balancing measures and sends its metrics in 60-second intervals.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-create-alarms.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html

https://aws.amazon.com/premiumsupport/knowledge-center/elb-latency-troubleshooting/

Check out this AWS CodeDeploy and Elastic Load Balancing Cheat Sheets:

https://tutorialsdojo.com/aws-codedeploy/

https://tutorialsdojo.com/aws-elastic-load-balancing-elb/

Question 70: Incorrect

A company would like to set up an audit process to ensure that their enterprise application is running exclusively
on Amazon EC2 Dedicated Hosts. They are also concerned about the increasing costs of their application
software licensing from their third-party vendor. To meet the compliance requirement, a DevOps Engineer must
create a workflow to audit the enterprise applications hosted in their VPC.

Which of the following options should the Engineer implement to satisfy the requirement with the LEAST
administrative overhead?

Record configuration changes for Dedicated Hosts using the AWS Systems Manager Configuration
Compliance. Utilize the PutComplianceItems API action to scan and populate a collection of noncompliant
Amazon EC2 instances based on their host placement configuration. Store these instance IDs to Systems
Manager Parameter Store and generate a report by calling the ListComplianceSummaries API action.

Record configuration changes for EC2 Instances and Dedicated Hosts by turning on the Config Recording
option in AWS Config. Set up a custom AWS Config rule that triggers a Lambda function by using
the config-rule-change-triggered blueprint. Customize the predefined evaluation logic to verify host
placement to return a NON_COMPLIANT result whenever the EC2 instance is not running on a
Dedicated Host. Use the AWS Config report to address noncompliant Amazon EC2 instances.

(Correct)

Install the Amazon Inspector agent to all EC2 instances. Record configuration changes for Dedicated
Hosts by using Amazon Inspector. Set up an automatic assessment runs through a Lambda Function by
using the inspector-scheduled-run blueprint.

Record configuration changes for Dedicated Hosts by AWS CloudTrail. Filter all EC2 RunCommand API
actions in the logs to detect any changes to the instances. Analyze the host placement of the instance using
a Lambda function and store the instance IDs of noncompliant resources in an Amazon S3 bucket.
Generate a report by using Amazon Athena to query the S3 data.

Explanation

You can use AWS Config to record configuration changes for Dedicated Hosts, and instances that are launched,
stopped, or terminated on them. You can then use the information captured by AWS Config as a data source for
license reporting.

AWS Config records configuration information for Dedicated Hosts and instances individually and pairs this
information through relationships. There are three reporting conditions:

-AWS Config recording status — When On, AWS Config is recording one or more AWS resource types, which
can include Dedicated Hosts and Dedicated Instances. To capture the information required for license reporting,
verify that hosts and instances are being recorded with the following fields.

-Host recording status — When Enabled, the configuration information for Dedicated Hosts is recorded.

-Instance recording status — When Enabled, the configuration information for Dedicated Instances is recorded.

If any of these three conditions are disabled, the icon in the Edit Config Recording button is red. To derive the
full benefit of this tool, ensure that all three recording methods are enabled. When all three are enabled, the icon
is green. To edit the settings, choose Edit Config Recording. You are directed to the Set up AWS Config page in
the AWS Config console, where you can set up AWS Config and start recording for your hosts, instances, and
other supported resource types. AWS Config records your resources after it discovers them, which might take
several minutes.

After AWS Config starts recording configuration changes to your hosts and instances, you can get the
configuration history of any host that you have allocated or released and any instance that you have launched,
stopped, or terminated. For example, at any point in the configuration history of a Dedicated Host, you can look
up how many instances are launched on that host, along with the number of sockets and cores on the host. For
any of those instances, you can also look up the ID of its Amazon Machine Image (AMI). You can use this
information to report on licensing for your own server-bound software that is licensed per-socket or per-core.

You can view configuration histories in any of the following ways.


-By using the AWS Config console. For each recorded resource, you can view a timeline page, which provides a
history of configuration details. To view this page, choose the gray icon in the Config Timeline column of the
Dedicated Hosts page.

-By running AWS CLI commands. First, you can use the list-discovered-resources command to get a list of all
hosts and instances. Then, you can use the get-resource-config-history command to get the configuration details
of a host or instance for a specific time interval.

-By using the AWS Config API in your applications. First, you can use the ListDiscoveredResources action to
get a list of all hosts and instances. Then, you can use the GetResourceConfigHistory action to get the
configuration details of a host or instance for a specific time interval.

Hence, the correct answer is: Record configuration changes for EC2 Instances and Dedicated Hosts by
turning on the Config Recording option in AWS Config. Set up a custom AWS Config rule that triggers a
Lambda function by using the config-rule-change-triggered blueprint. Customize the predefined evaluation
logic to verify host placement to return a NON_COMPLIANT result whenever the EC2 instance is not
running on a Dedicated Host. Use the AWS Config report to address noncompliant Amazon EC2
instances.
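
For illustration, the evaluation logic that the config-rule-change-triggered blueprint would wrap could look like this trimmed-down Python sketch, which marks an EC2 instance NON_COMPLIANT when its placement tenancy is not 'host'. Error handling and oversized-item handling are omitted.

import json
import boto3

config = boto3.client('config')

def handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    item = invoking_event['configurationItem']

    compliance = 'NOT_APPLICABLE'
    if item['resourceType'] == 'AWS::EC2::Instance':
        tenancy = item['configuration'].get('placement', {}).get('tenancy')
        compliance = 'COMPLIANT' if tenancy == 'host' else 'NON_COMPLIANT'

    # Report the result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': item['resourceType'],
            'ComplianceResourceId': item['resourceId'],
            'ComplianceType': compliance,
            'OrderingTimestamp': item['configurationItemCaptureTime']
        }],
        ResultToken=event['resultToken']
    )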

The option that says: Record configuration changes for Dedicated Hosts using the AWS Systems Manager
Configuration Compliance. Utilize the PutComplianceItems API action to scan and populate a collection of
noncompliant Amazon EC2 instances based on their host placement configuration. Store these instance
IDs to Systems Manager Parameter Store and generate a report by calling
the ListComplianceSummaries API action is incorrect because the AWS Systems Manager Configuration
Compliance service is primarily used to scan your fleet of managed instances for patch compliance and
configuration inconsistencies. A better solution is to use AWS Config to record the status of your Dedicated
Hosts.

The option that says: Install the Amazon Inspector agent to all EC2 instances. Record configuration
changes for Dedicated Hosts by using Amazon Inspector. Set up an automatic assessment runs through a
Lambda Function by using the inspector-scheduled-run blueprint is incorrect because Amazon Inspector is just
an automated security assessment service that helps improve the security and compliance of applications
deployed on AWS. It is not capable of recording the status of your EC2 instances nor detect if they are
configured as a Dedicated Host.

The option that says: Record configuration changes for Dedicated Hosts by AWS CloudTrail. Filter all EC2
RunCommand API actions in the logs to detect any changes to the instances. Analyze the host placement
of the instance using a Lambda function and store the instance IDs of noncompliant resources in an
Amazon S3 bucket. Generate a report by using Amazon Athena to query the S3 data is incorrect. Although
this may be a possible solution, it entails a lot of administrative effort in comparison to just using AWS Config.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-aws-config.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-
compliance-custom

https://aws.amazon.com/blogs/aws/now-available-ec2-dedicated-hosts/
Check out these AWS Systems Manager and AWS Config Cheat Sheets:

https://tutorialsdojo.com/aws-systems-manager/

https://tutorialsdojo.com/aws-config/

Question 71: Incorrect

A company has an Amazon ECS cluster with Service Auto Scaling that consists of multiple Amazon EC2 instances running a Docker-based application. The development team always pushes a new image to a private Docker container registry whenever they publish a new version of their application. They stop and start all of the tasks to ensure that the containers have the latest version of the application. However, the team noticed that the new tasks are occasionally running the old image of the application.

As the DevOps engineer, what should you do to fix this issue?

Migrate your repository from your private Docker container registry to Amazon ECR.

Configure the task definition to use the repository-url/image@digest format then manually update the
SHA-256 digest of the image.

Restart the ECS agent.

(Correct)

Ensure that the latest tag is being used in the Docker image of the task definition.

Explanation

You can update a running service to change the number of tasks that are maintained by a service, which task
definition is used by the tasks, or if your tasks are using the Fargate launch type, you can change the platform
version your service uses. If you have an application that needs more capacity, you can scale up your service. If
you have unused capacity to scale down, you can reduce the number of desired tasks in your service and free up
resources. If you have updated the Docker image of your application, you can create a new task definition with
that image and deploy it to your service.

If your updated Docker image uses the same tag as what is in the existing task definition for your service (for
example, my_image:latest), you do not need to create a new revision of your task definition. You can update the service, keep the current settings for your service, and select Force new deployment.
The new tasks launched by the deployment pull the current image/tag combination from your repository when
they start.

The service scheduler uses the minimum healthy percent and maximum percent parameters (in the deployment
configuration for the service) to determine the deployment strategy. When a new task starts, the Amazon ECS
container agent pulls the latest version of the specified image and tag for the container to use. However,
subsequent updates to a repository image are not propagated to already running tasks.

To have your service use a newly updated Docker image with the same tag as in the existing task definition (for
example, my_image:latest) or keep the current settings for your service, select Force new deployment. The new
tasks launched by the deployment pull the current image/tag combination from your repository when they start.
The Force new deployment option is also used when updating a Fargate task to use a more current platform version when you specify LATEST. For example, you might have specified LATEST while your running tasks are using the 1.0.0 platform version, and you want them to relaunch using a newer platform version.
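
As a quick reference, forcing a new deployment can also be done from the AWS CLI. The following is a minimal sketch; the cluster and service names are placeholders:

# Start a new deployment so that new tasks pull the current image/tag from the registry
aws ecs update-service \
    --cluster my-ecs-cluster \
    --service my-ecs-service \
    --force-new-deployment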

To verify that the container agent is running on your container instance, run the following command:
sudo systemctl status ecs

If the command output doesn't show the service as active, run the following command to restart the service:
sudo systemctl restart ecs

It is mentioned in the scenario that the new tasks are occasionally running the old image of the application. The
ECS cluster is also using Service Auto Scaling that automatically launches new tasks based on demand. We can
conclude that the root cause is not in the task definition since this issue only occurs occasionally, and the other
tasks were properly updated. If the ECS task is still running an old image, then it is possible that the ECS agent
is not running properly.

Hence, the correct answer is: Restart the ECS agent.

The option that says: Ensuring that the latest tag is being used in the Docker image of the task definition is incorrect. Although this relieves you of the burden of constantly updating your task definition, it is still not the solution to the issue. Remember that there are other tasks that were successfully updated, which means that the image tag is not the root cause.

The option that says: Configuring the task definition to use the repository-url/image@digest format then
manually updating the SHA-256 digest of the image is incorrect because this will just explicitly fetch the
exact Docker image from the registry based on the provided SHA-256 digest and not based on its tag (e.g., latest,
1.0.0). In addition, it is stated in the scenario that the issue only occurs occasionally, which means that the other
tasks are updated correctly. Thus, it suggests that the issue has nothing to do with the task definition but more
with the ECS agent.

The option that says: Migrating your repository from your private Docker container registry to Amazon
ECR is incorrect because the issue will be the same even if you used Amazon ECR if the ECS agent is not
running properly in one of the instances.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/ecs-agent-disconnected-linux2-ami/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

Check out this Amazon ECS Cheat Sheet:

https://tutorialsdojo.com/amazon-elastic-container-service-amazon-ecs/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 72: Correct

On a recent audit of AWS resources in your AWS Test environment account, you found several load balancers with minimal usage, some of which do not have back-end EC2 instances at all. Your
developers are using these load balancers to test their applications on the public Internet, but they forgot to delete
them after testing. These load balancers are also showing up on AWS Trusted Advisor as unused/underutilized
which incur unnecessary costs. You want to be notified whenever there are changes in the Trusted Advisor
checks so that you can quickly take action based on the results. Which of the following options will help you
achieve this? (Select THREE.)

Create a Lambda function and integrate it with CloudWatch Events. Configure the function to run on a
regular basis and to check AWS Trusted Advisor via API. Based on the results, publish a message to an
Amazon SNS Topic to notify the subscribers.

(Correct)

Utilize CloudWatch Events to monitor Trusted Advisor recommendation results. Set up a trigger to send
an email using SNS to notify you about the results of the check.

(Correct)

Create a CloudWatch alarm to monitor the utilization of your load balancers using ELB metrics. Use
Amazon SES to send an email notification.

Launch a small EC2 instance that runs a custom script that programmatically checks the utilization of
your load balancers. Send out email notification via Amazon SES.

Use AWS Config in order to automatically detect the load balancers with low utilization. Use Amazon
SNS to deliver the notifications.

Use Amazon CloudWatch to create alarms on Trusted Advisor metrics in order to detect the load
balancers with low utilization. Specify an SNS topic for notification.

(Correct)

Explanation

AWS Trusted Advisor is integrated with the Amazon CloudWatch Events and Amazon CloudWatch services. You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor checks, and you can use Amazon CloudWatch to create alarms on Trusted Advisor metrics for check status changes, resource status changes, and service limit utilization.

Based on the rules that you create, CloudWatch Events invokes one or more target actions when a check status
changes to the value you specify in a rule. Depending on the type of status change, you might want to send
notifications, capture status information, take corrective action, initiate events, or take other actions.

Hence, the correct answers are:


- Create a Lambda function and integrate it with CloudWatch Events. Configure the function to run on a
regular basis and to check AWS Trusted Advisor via API. Based on the results, publish a message to an
Amazon SNS Topic to notify the subscribers.

- Utilize CloudWatch Events to monitor Trusted Advisor recommendation results. Set up a trigger to send
an email using SNS to notify you about the results of the check.

- Use Amazon CloudWatch to create alarms on Trusted Advisor metrics in order to detect the load
balancers with low utilization. Specify an SNS topic for notification.
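
As an illustration of the CloudWatch Events approach, the commands below sketch a rule that matches Trusted Advisor check refresh events and sends them to an existing SNS topic. The rule name and topic ARN are placeholders, and note that Trusted Advisor events are delivered in the us-east-1 Region:

# Match Trusted Advisor check item refresh notifications
aws events put-rule \
    --region us-east-1 \
    --name trusted-advisor-check-changes \
    --event-pattern '{"source":["aws.trustedadvisor"],"detail-type":["Trusted Advisor Check Item Refresh Notification"]}'

# Send matching events to an SNS topic that emails the subscribers
aws events put-targets \
    --region us-east-1 \
    --rule trusted-advisor-check-changes \
    --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:111122223333:ta-alerts"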

The option that says: Create a CloudWatch alarm to monitor the utilization of your load balancers using
ELB metrics. Use Amazon SES to send an email notification is incorrect because you have to use the AWS
Trusted Advisor metrics and not ELB metrics. Moreover, you have to use SNS and not SES for sending an email
notification.

The option that says: Use AWS Config in order to automatically detect the load balancers with low utilization. Use Amazon SNS to deliver the notifications is incorrect because AWS Config cannot detect the utilization of AWS resources. You have to use AWS Trusted Advisor instead.

The option that says: Launch a small EC2 instance that runs a custom script that programmatically checks the utilization of your load balancers. Send out email notification via Amazon SES is incorrect because this entails a lot of manual effort to develop a custom script. You can simply use Trusted Advisor to monitor the idle load balancers. Moreover, you have to use SNS for notifications and not SES.

References:

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-ta.html

https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html

Check out these AWS Trusted Advisor and CloudWatch Cheat Sheets:

https://tutorialsdojo.com/aws-trusted-advisor/

https://tutorialsdojo.com/amazon-cloudwatch/

Question 73: Correct

A company is planning to implement a monitoring system that will track the cost-effectiveness of their EC2
resources across the multiple AWS accounts that they own. All of their existing resources have appropriate tags
that map the corresponding environment, department, and business unit for cost allocation purposes. They
instructed their DevOps engineer to automate infrastructure cost optimization across multiple shared environments and accounts, such as detecting EC2 instances with low utilization.

Which is the MOST suitable solution that the DevOps engineer should implement in this scenario?

Set up a CloudWatch dashboard for EC2 instance tags based on the environment, department, and
business unit in order to track the instance utilization. Create a trigger using a CloudWatch Events rule
and AWS Lambda to terminate the underutilized EC2 instances.

Integrate CloudWatch Events rule and AWS Trusted Advisor to detect the EC2 instances with low
utilization. Create a trigger with an AWS Lambda function that filters out the reported data based on tags
for each environment, department, and business unit. Create a second trigger that will invoke another
Lambda function to terminate the underutilized EC2 instances.

(Correct)

Set up AWS Systems Manager to track the instance utilization of all of your EC2 instances and report
underutilized instances to CloudWatch. Filter the data in CloudWatch based on tags for each environment,
department, and business unit. Create triggers in CloudWatch that will invoke an AWS Lambda function
that will terminate underutilized EC2 instances.

Develop a custom shell script on an EC2 instance which runs periodically to report the instance utilization
of all instances and store the result into a DynamoDB table. Use a QuickSight dashboard with DynamoDB
as the source data to monitor and identify the underutilized EC2 instances. Integrate Amazon QuickSight
and AWS Lambda to trigger an EC2 termination command for the underutilized instances.

Explanation

AWS Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS
customers. Trusted Advisor inspects your AWS environment, and then makes recommendations when
opportunities exist to save money, improve system availability and performance, or help close security gaps. All
AWS customers have access to five Trusted Advisor checks. Customers with a Business or Enterprise support
plan can view all Trusted Advisor checks.

AWS Trusted Advisor is integrated with the Amazon CloudWatch Events and Amazon CloudWatch services. You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor checks, and you can use Amazon CloudWatch to create alarms on Trusted Advisor metrics for check status changes, resource status changes, and service limit utilization.

Based on the rules that you create, CloudWatch Events invokes one or more target actions when a check status changes to the value you specify in a rule. Depending on the type of status change, you might want to send notifications, capture status information, take corrective action, initiate events, or take other actions.
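
To illustrate how a Lambda function could check Trusted Advisor via API, the sketch below uses the AWS Support API (which requires a Business or Enterprise support plan) to look up the check ID and pull its result; the check name filter and the <check-id> placeholder are shown only for illustration:

# Find the ID of the low-utilization EC2 check
aws support describe-trusted-advisor-checks --language en \
    --query "checks[?name=='Low Utilization Amazon EC2 Instances'].id"

# Retrieve the flagged resources for that check
aws support describe-trusted-advisor-check-result --check-id <check-id>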

Hence, the correct answer is to: Integrate CloudWatch Events rule and AWS Trusted Advisor to detect the
EC2 instances with low utilization. Create a trigger with an AWS Lambda function that filters out the
reported data based on tags for each environment, department, and business unit. Create a second trigger
that will invoke another Lambda function to terminate the underutilized EC2 instances.

The option that develops a custom shell script on an EC2 instance is incorrect because it takes time to build a
custom shell script to track the EC2 instance utilization. A more suitable way is to just use AWS Trusted
Advisor instead.

The option that sets up a CloudWatch dashboard for EC2 instance tags based on the environment,
department, and business unit is incorrect because CloudWatch alone can't provide the instance utilization of
all of your EC2 servers. You have to use AWS Trusted Advisor to get this specific data.

The option that sets up AWS Systems Manager to track the instance utilization of all of your EC2
instances is incorrect because the Systems Manager service is primarily used to manage your EC2 instances. It
doesn't provide an easy way to provide the list of under or over-utilized EC2 instances like what Trusted Advisor
can.

References:

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html

https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html

https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/

Check out this AWS Trusted Advisor Cheat Sheet:

https://tutorialsdojo.com/aws-trusted-advisor/

Question 74: Correct

A leading digital payments company is using AWS to host its suite of web applications which uses external APIs
for credit and debit transactions. The current architecture is using CloudTrail with several trails to log all API
actions. Each trail is protected with an IAM policy to restrict access from unauthorized users. In order to
maintain the system's PCI DSS compliance, a solution must be implemented that allows them to verify the integrity of each file and to detect whether the files have been tampered with.

Which of the following is the MOST suitable solution with the LEAST amount of effort to implement?

Use AWS Systems Manager State Manager to directly enable the log file integrity feature in CloudTrail.
This will automatically generate a digest file for every log file that CloudTrail delivers. Verify the
integrity of the delivered CloudTrail files using the generated digest files.

Use AWS Config to directly enable the log file integrity feature in CloudTrail. This will automatically
generate a digest file for every log file that CloudTrail delivers. Verify the integrity of the delivered
CloudTrail files using the generated digest files.

In the Amazon S3 bucket of the trail, enable the log file integrity feature that will automatically generate a
digest file for every log file that CloudTrail delivers. Grant the IT Security team full access to download
the file integrity logs stored in the S3 bucket via an IAM policy.

In AWS CloudTrail, enable the log file integrity feature on the trail that will automatically generate a
digest file for every log file that CloudTrail delivers. Verify the integrity of the delivered CloudTrail files
using the generated digest files.

(Correct)

Explanation

To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use
CloudTrail log file integrity validation. This feature is built using industry-standard algorithms: SHA-256 for
hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete
or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location
where CloudTrail delivered them.

Validated log files are invaluable in security and forensic investigations. For example, a validated log file
enables you to assert positively that the log file itself has not changed, or that particular user credentials
performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log
file has been deleted or changed, or assert positively that no log files were delivered to your account during a
given period of time.

When you enable log file integrity validation, CloudTrail creates a hash for every log file that it delivers. Every
hour, CloudTrail also creates and delivers a file that references the log files for the last hour and contains a hash
of each. This file is called a digest file. CloudTrail signs each digest file using the private key of a public and
private key pair. After delivery, you can use the public key to validate the digest file. CloudTrail uses different
key pairs for each AWS region.

The digest files are delivered to the same Amazon S3 bucket associated with your trail as your CloudTrail log
files. If your log files are delivered from all regions or from multiple accounts into a single Amazon S3 bucket,
CloudTrail will deliver the digest files from those regions and accounts into the same bucket.

The digest files are put into a folder separate from the log files. This separation of digest files and log files
enables you to enforce granular security policies and permits existing log processing solutions to continue to
operate without modification. Each digest file also contains the digital signature of the previous digest file if one
exists. The signature for the current digest file is in the metadata properties of the digest file Amazon S3 object.

Hence, the correct answer is: In AWS CloudTrail, enable the log file integrity feature on the trail that will
automatically generate a digest file for every log file that CloudTrail delivers. Verify the integrity of the
delivered CloudTrail files using the generated digest files.
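
For reference, log file validation can be enabled and verified from the AWS CLI as sketched below; the trail name, account ID, and start time are placeholders:

# Turn on log file integrity validation for the trail
aws cloudtrail update-trail --name my-trail --enable-log-file-validation

# Validate the delivered log files against their digest files
aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:111122223333:trail/my-trail \
    --start-time 2023-01-01T00:00:00Z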

The option that says: Use AWS Systems Manager State Manager to directly enable the log file integrity
feature in CloudTrail. This will automatically generate a digest file for every log file that CloudTrail delivers.
Verify the integrity of the delivered CloudTrail files using the generated digest files is incorrect because there
is no direct way that you can enable the log file integrity feature in CloudTrail using AWS Systems Manager
State Manager. This must be manually enabled using the Console or the AWS CLI.

The option that says: In the Amazon S3 bucket of the trail, enable the log file integrity feature that will
automatically generate a digest file for every log file that CloudTrail delivers. Grant the IT Security team full
access to download the file integrity logs stored in the S3 bucket via an IAM policy is incorrect because the log
file integrity feature must be configured in the trail itself and not on the S3 bucket.

The option that says: Use AWS Config to directly enable the log file integrity feature in CloudTrail. This will
automatically generate a digest file for every log file that CloudTrail delivers. Verify the integrity of the
delivered CloudTrail files using the generated digest files is incorrect because there is no direct way that you
can enable the log file integrity feature in CloudTrail using AWS Config. You have to manually enable it.

References:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html

https://aws.amazon.com/blogs/aws/aws-cloudtrail-update-sse-kms-encryption-log-file-integrity-verification/

Check out this AWS CloudTrail Cheat Sheet:

https://tutorialsdojo.com/aws-cloudtrail/

Question 75: Correct

A development company is currently using AWS CodeBuild for automated building and testing of their application. They recently hired a DevOps engineer to review their current process as well as to provide recommendations for optimization and security. It is of utmost importance that the engineer identifies security issues and ensures that the company complies with AWS security best practices. One of their buildspec.yaml files is shown below:

Which of the following changes should the DevOps engineer recommend? (Select TWO.)

Configure the CodeBuild project to use an IAM Role with the required permissions and remove the AWS
credentials from the buildspec.yaml file. Run scp and ssh commands using the AWS Systems Manager
Run Command.

(Correct)

Store the environment variables to the tutorialsdojo-db S3 bucket and then enable Server Side
Encryption. In the pre_build phase of the buildspec.yaml file, add the configuration that will download
and export the environment variables.

Using the AWS Systems Manager Parameter Store, create a DATABASE_PASSWORD secure string
parameter then remove the DATABASE_PASSWORD from the environment variables.

(Correct)

Hash the environment variables and passwords using a Base64 encoder to prevent other developers from
seeing the credentials in plaintext.

In the post-build phase of the buildspec.yaml file, add a configuration that will remove all temporary files
which contain the environment variables and passwords from the container.

Explanation

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your
managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid
environment that has been configured for Systems Manager. Run Command enables you to automate common
administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the
AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs.
Run Command is offered at no additional cost.

Administrators use Run Command to perform the following types of tasks on their managed instances: install or
bootstrap applications, build a deployment pipeline, capture log files when an instance is terminated from an
Auto Scaling group, and join instances to a Windows domain, to name a few.
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data
management and secrets management. You can store data such as passwords, database strings, and license codes
as parameter values. You can store values as plain text or encrypted data. You can then reference values by using
the unique name that you specified when you created the parameter.

Using an IAM Role is better than storing and using your AWS access keys since this is a security risk. The same
goes for your database passwords and any other credentials.

Hence, the correct answers are:

- Configure the CodeBuild project to use an IAM Role with the required permissions and remove the
AWS credentials from the buildspec.yaml file. Run scp and ssh commands using the AWS Systems
Manager Run Command.

- Using the AWS Systems Manager Parameter Store, create a DATABASE_PASSWORD secure string
parameter then remove the DATABASE_PASSWORD from the environment variables.
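
To illustrate the Parameter Store approach, a secure string parameter could be created as sketched below; the parameter name and value are placeholders. The buildspec can then reference it through the parameter-store mapping in its env section instead of keeping the password as a plaintext environment variable:

# Store the database password as an encrypted SecureString parameter
aws ssm put-parameter \
    --name /tutorialsdojo/prod/DATABASE_PASSWORD \
    --type SecureString \
    --value 'REPLACE_WITH_ACTUAL_PASSWORD'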

The option that says: In the post-build phase of the buildspec.yaml file, add a configuration that will remove
all temporary files which contain the environment variables and passwords from the container is incorrect
because this solution still exposes the sensitive credentials in the buildspec.yaml file. You should use an IAM
Role and store the database password in the Systems Manager Parameter Store instead.

The option that says: Store the environment variables to the tutorialsdojo-db S3 bucket and then enable
Server Side Encryption. In the pre_build phase of the buildspec.yaml file, add the configuration that will
download and export the environment variables is incorrect because storing sensitive passwords in Amazon
S3 is a security risk, especially if the bucket was accidentally set to public.

The option that says: Hash the environment variables and passwords using a Base64 encoder to prevent other developers from seeing the credentials in plaintext is incorrect because Base64 is merely an encoding, not a hash or encryption scheme; Base64-encoded credentials can be trivially decoded back to their original plaintext form. Using the Systems Manager Parameter Store feature is still the best way to store your database credentials.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html

https://aws.amazon.com/systems-manager/

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 1: Correct

A company developed a web portal for gathering Census data within your city. The household information
uploaded on the portal contains personally identifiable information (PII) and is stored in encrypted files on an
Amazon S3 bucket. The object indexes are saved on a DynamoDB table.

Data security is important, so you have enabled S3 access logs as well as CloudTrail to keep track of who and what accesses the S3 objects. For added security, you want to verify that data access meets compliance standards and be alerted if there is any risk of unauthorized access or inadvertent data leaks.

Which of the following AWS services enables you to do this?

Use Amazon GuardDuty to monitor malicious activity on your S3 data.

Use Amazon Lookout for Metrics to monitor the S3 metrics and recognize access patterns on your S3
data.

Use Amazon Inspector to alert you whenever a security violation is detected on your S3 data.

Use Amazon Macie to monitor and detect usage patterns on your S3 data.

(Correct)

Explanation

Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically
discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine
learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property,
assigns a business value, and provides visibility into where this data is stored and how it is being used in your
organization.

Amazon Macie continuously monitors data access activity for anomalies and delivers alerts when it detects the
risk of unauthorized access or inadvertent data leaks. Amazon Macie has the ability to detect global access
permissions inadvertently being set on sensitive data, detect uploading of API keys inside source code, and
verify sensitive customer data is being stored and accessed in a manner that meets their compliance standards.

Hence, the correct answer is: Use Amazon Macie to monitor and detect usage patterns on your S3 data.

The option that says: Use Amazon Lookout for Metrics to monitor the S3 metrics and recognize access
patterns on your S3 data is incorrect because Amazon Lookout for Metrics is simply a service that finds
anomalies in your data, determines their root causes, and enables you to quickly take action.

The option that says: Use Amazon GuardDuty to monitor malicious activity on your S3 data is incorrect.
Although GuardDuty can continuously monitor malicious activity and unauthorized behavior in your Amazon
S3 bucket, it still is not capable of recognizing sensitive data such as personally identifiable information (PII),
which is required in the scenario.

The option that says: Use Amazon Inspector to alert you whenever a security violation is detected on your
S3 data is incorrect because Inspector is basically an automated security assessment service that helps improve
the security and compliance of applications deployed on AWS.

References:

https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html

https://aws.amazon.com/macie/faq/

https://docs.aws.amazon.com/macie/index.html

Check out this Amazon Macie Cheat Sheet:

https://tutorialsdojo.com/amazon-macie/

Question 2: Incorrect

An organization runs its web application on EC2 instances within an Auto Scaling group. The EC2 instances are behind an Application Load Balancer and are deployed across multiple Availability Zones. The developers of the organization have introduced new features to the web application, but these need to be tested before release to prevent any interruptions. The organization requires that the deployment strategy should:

- Deploy a duplicate fleet of instances with an equivalent capacity to the primary fleet.
- Keep the original fleet unaltered while the secondary fleet is being launched.
- Shift traffic to the secondary fleet once it is completely deployed.
- Automatically terminate the original fleet one hour after the transition.

Which of the following is the MOST suitable solution that the DevOps Engineer should implement?

Utilize AWS CodeDeploy and set up a deployment group that has a blue/green deployment configuration.
Set the BlueInstanceTerminationOption action to TERMINATE and terminationWaitTimeInMinutes with
a 1-hour waiting period.

(Correct)

Configure AWS Elastic Beanstalk with an Immutable setting, then create a .ebextension file using the
Resources key to establish a deletion policy of 1 hour for the ALB, then deploy the application.

Deploy an AWS CloudFormation template that includes a retention policy of 1 hour for the ALB. Then,
update the Amazon Route 53 record to reflect the updated ALB.

Create two AWS Elastic Beanstalk environments to execute a blue/green deployment from the original
environment to the new one. Configure an application version lifecycle policy to terminate the primary
environment in 1 hour

Explanation

AWS CodeDeploy is a fully managed deployment service that automates software deployments to various
compute services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS),
AWS Lambda, and your on-premises servers.

A deployment group is the AWS CodeDeploy entity for grouping EC2 instances or AWS Lambda functions in
a CodeDeploy deployment. For EC2 deployments, it is a set of instances associated with an application that you
target for a deployment.

BlueInstanceTerminationOption contains information about whether instances in the original environment are
terminated when a blue/green deployment is successful.

- action: The action to take on instances in the original environment after a successful blue/green deployment.

TERMINATE: Instances are terminated after a specified wait time.

KEEP_ALIVE: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.

- terminationWaitTimeInMinutes: The number of minutes to wait after a successful blue/green deployment before terminating instances from the original environment.
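
As a rough sketch, these settings could be applied to an existing deployment group with the AWS CLI; the application and deployment group names are placeholders:

# Terminate the original (blue) fleet one hour after traffic is shifted
aws deploy update-deployment-group \
    --application-name my-app \
    --current-deployment-group-name my-deployment-group \
    --blue-green-deployment-configuration '{
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60
        }
    }'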

Hence, the correct answer is the option that says: Utilize AWS CodeDeploy and set up a deployment group
that has a blue/green deployment configuration. Set the BlueInstanceTerminationOption action to
TERMINATE and terminationWaitTimeInMinutes with a 1-hour waiting period.

The option that says: Deploy an AWS CloudFormation template that includes a retention policy of 1 hour
for the ALB. Then, update the Amazon Route 53 record to reflect the updated ALB is incorrect because
you cannot set a retention policy in CloudFormation.

The option that says: Create two AWS Elastic Beanstalk environments to execute a blue/green deployment
from the original environment to the new one. Configure an application version lifecycle policy to
terminate the primary environment in 1 hour is incorrect because the application version lifecycle policy does not manage EC2 instances or environments; it only deletes old application versions. In addition, the age limit is set in days, not hours.

The option that says: Configure AWS Elastic Beanstalk with an Immutable setting, then create a .ebextension file using the Resources key to establish a deletion policy of 1 hour for the ALB, then deploy the application is incorrect because a deletion policy is primarily used to preserve, and in some cases, back up a resource when the stack is deleted. In addition, the deletion policy cannot be set to delete a resource after 1 hour.

References:

https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_BlueInstanceTerminationOption.html

https://aws.amazon.com/codedeploy/

https://aws.amazon.com/codedeploy/faqs/

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 3: Incorrect

You have a fleet of 400 Amazon EC2 instances for your High Performance Computing (HPC) application. The
fleet is configured to use target tracking scaling and is behind an ALB. You want to create a simple web page
hosted on an S3 bucket that displays the status of the EC2 instances on the fleet. This web page will be updated
whenever a new instance is launched or terminated. You are also required to keep a searchable log of these
events so they can be reviewed later.

Which of the following options will you implement?

Create a CloudWatch Events rule for the scale-in/scale-out event and create two targets for the event - one
target to the webpage S3 bucket and another to deliver event logs to CloudWatch Logs.

Write your own Lambda function to update the simple webpage on S3 and send event logs to CloudWatch
Logs. Create a CloudWatch Events rule to invoke the Lambda for scale-in/scale-out events.

(Correct)

No need to do anything. CloudTrail records these events and you can view and search the logs on the
console.

Create a CloudWatch Events rule for the scale-in/scale-out event and deliver these logs to CloudWatch
Logs. Manually export the CloudWatch Log group to the S3 bucket to view them.

Explanation

You can create an AWS Lambda function that logs the changes in state for an Amazon EC2 instance. You can
create a rule that runs the function whenever there is a state transition or a transition to one or more states that
are of interest. Be sure to assign proper permissions to your Lambda function to write to S3 and to send logs to
CloudWatch Logs. After creating the Lambda function, create a rule on CloudWatch Events that will watch for
the scale-in/scale-out events. Set a trigger to run your Lambda function which will then update the S3 webpage
and send logs to CloudWatch Logs.
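
A minimal sketch of the rule portion is shown below, assuming the Lambda function already exists; the rule name, account ID, and function name are placeholders. You would also need to grant CloudWatch Events permission to invoke the function (for example, with aws lambda add-permission):

# Match Auto Scaling launch and terminate events
aws events put-rule \
    --name asg-scale-events \
    --event-pattern '{"source":["aws.autoscaling"],"detail-type":["EC2 Instance Launch Successful","EC2 Instance Terminate Successful"]}'

# Invoke the Lambda function that updates the S3 webpage and writes to CloudWatch Logs
aws events put-targets \
    --rule asg-scale-events \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:111122223333:function:update-status-page"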

Hence, the correct answer is: Write your own Lambda function to update the simple webpage on S3 and send event logs to CloudWatch Logs. Create a CloudWatch Events rule to invoke the Lambda for scale-in/scale-out events.

The option that says: Create a CloudWatch Events rule for the scale-in/scale-out event and deliver these logs to CloudWatch Logs. Manually export the CloudWatch Log group to the S3 bucket
to view them is incorrect because you have to export the logs manually on an S3 bucket. This action is not
recommended since you can simply use a Lambda function to log the changes of the EC2 instances to a file in
the S3 bucket.

The option that says: Create a CloudWatch Events rule for the scale-in/scale-out event and create two
targets for the event - one target to the webpage S3 bucket and another to deliver event logs to
CloudWatch Logs is incorrect because you cannot set an S3 bucket or object as a target for a CloudWatch
Events rule.

The option that says: No need to do anything. CloudTrail records these events and you can view and search
the logs on the console is incorrect. Although this is true, it would be hard to search all the relevant logs from
the trail. Moreover, it doesn’t provide a solution for the required S3 web page.

References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/LogEC2InstanceState.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-Tutorials.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 4: Incorrect

An organization has a fleet of Amazon EC2 instances and uses SSH for remote access. Whenever a DevOps Engineer leaves the organization, the SSH keys are rotated as a security measure. The Chief Information Officer (CIO) has required that the use of EC2 key pairs be stopped for operational efficiency and that AWS Systems Manager Session Manager be used instead. To strengthen security, access to Session Manager should be allowed only through a private network.

Which set of actions should the new DevOps Engineer take to meet the CIO's requirements? (Select TWO.)

Provision a VPC endpoint for Systems Manager in the designated region.

(Correct)

Enable outbound traffic to TCP port 22 from the VPC CIDR range in all associated security groups of
EC2 instances.

Enable incoming traffic to TCP port 22 from the VPC CIDR range in all associated security groups of
EC2 instances.

Create an IAM instance profile to be associated with the fleet of EC2 instances and attach an IAM policy
with the required Systems Manager permissions.

(Correct)

Launch a new EC2 instance that will function as a bastion host for the other EC2 instances in the fleet.

Explanation

With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge
devices, on-premises servers, and virtual machines (VMs).

By default, AWS Systems Manager doesn't have permission to perform actions on your instances. You must grant access by using AWS Identity and Access Management (IAM). An instance profile that contains the AWS managed policy AmazonSSMManagedInstanceCore must be attached to the EC2 instance for Session Manager to work.

The security posture of managed instances (including those in a hybrid environment) can be improved by
configuring AWS Systems Manager to use an interface VPC endpoint in Amazon Virtual Private Cloud
(Amazon VPC). An interface VPC endpoint (interface endpoint) can be used to connect to services powered by
AWS PrivateLink, which is a technology that enables private access to Amazon Elastic Compute Cloud
(Amazon EC2) and Systems Manager APIs using private IP addresses.
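
As a minimal sketch, the interface endpoints that Session Manager relies on (ssm, ssmmessages, and ec2messages) could be provisioned as shown below; the VPC, subnet, security group IDs, and Region are placeholders:

# Create the interface VPC endpoints used by Session Manager
for svc in ssm ssmmessages ec2messages; do
    aws ec2 create-vpc-endpoint \
        --vpc-id vpc-0123456789abcdef0 \
        --vpc-endpoint-type Interface \
        --service-name com.amazonaws.us-east-1.$svc \
        --subnet-ids subnet-0123456789abcdef0 \
        --security-group-ids sg-0123456789abcdef0 \
        --private-dns-enabled
done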

Hence, the correct answers are the options that say:

- Create an IAM instance profile to be associated with the fleet of EC2 instances and attach an IAM policy
with the required Systems Manager permissions

- Provision a VPC endpoint for Systems Manager in the designated region

The option that says: Enable incoming traffic to TCP port 22 from the VPC CIDR range in all associated
security groups of EC2 instances is incorrect because Session Manager will work without the need to open
inbound ports.

The option that says: Launch a new EC2 instance that will function as a bastion host for the other EC2 instances in the fleet is incorrect because a bastion host will not meet the requirement as it will still use SSH key pairs for remote access.

The option that says: Enable outbound traffic to TCP port 22 from the VPC CIDR range in all associated
security groups of EC2 instances is incorrect because Session Manager will work without the need to open
outbound ports.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-instance-
profile.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 5: Incorrect

There are several teams sharing your single AWS account that hosts your production infrastructure. Your teams
are primarily storing media and images on AWS S3 buckets; some are used for public internet use, while other
buckets are used for internal applications only. Since the permissions on these public access buckets are listed on
AWS Trusted Advisor, you want to take advantage of it to make sure that all public buckets only
allow List operations for intended users. You want to be notified when any public S3 bucket has the wrong permissions and have them automatically corrected if needed.

Which of the following should you implement to meet the requirements in this scenario? (Select THREE.)

Set up a custom AWS Config rule to execute a default remediation action to update the permissions on the
public S3 bucket.

Set up a custom AWS Config rule that checks public S3 buckets permissions. Then, send a non-
compliance notification to your subscribed SNS topic.

(Correct)

Set up a custom Amazon Inspector rule that checks public S3 buckets permissions. Send an action to
AWS Systems Manager to correct the S3 bucket policy.

Utilize CloudWatch Events to monitor Trusted Advisor security recommendation results and then set a
trigger to send an email using SNS to notify you about the results of the check.

(Correct)

Create a Lambda function that executes every hour to refresh AWS Trusted Advisor scan results via API.
Subscribe to AWS Trusted Advisor notification messages to receive the results.

Set up a custom AWS Config rule to check public S3 buckets permissions and have it send an event on
CloudWatch Events about policy violations. Have CloudWatch Events trigger a Lambda Function to
update the S3 bucket permission.

(Correct)

Explanation

AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. Config
continuously monitors and records your AWS resource configurations and allows you to automate the evaluation
of recorded configurations against desired configurations. You can configure AWS Config to stream
configuration changes and notifications to an Amazon SNS topic. This way, you can be notified when AWS
Config evaluates your custom or managed rules against your resources.

AWS Config can monitor your Amazon Simple Storage Service (S3) bucket ACLs and policies for violations that allow public read or public write access. If AWS Config finds a policy violation, it can trigger an Amazon CloudWatch Events rule that invokes an AWS Lambda function, which either corrects the S3 bucket ACL or notifies you via Amazon Simple Notification Service (Amazon SNS) that the policy is in violation and allows public read or public write access.

You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor checks.
Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when a check
status changes to the value you specify in a rule. Depending on the type of status change, you might want to send
notifications, capture status information, take corrective action, initiate events, or take other actions.
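
For illustration, a rule that flags public-read S3 buckets could be enabled with the AWS CLI as sketched below; this uses the AWS managed rule, and the scenario's custom rule would be registered the same way but pointing to your own Lambda function:

# Enable the managed rule that marks publicly readable buckets as NON_COMPLIANT
aws configservice put-config-rule --config-rule '{
    "ConfigRuleName": "s3-bucket-public-read-prohibited",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
    },
    "Scope": {
        "ComplianceResourceTypes": ["AWS::S3::Bucket"]
    }
}'
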
Hence, the correct answers are:

- Set up a custom AWS Config rule that checks public S3 buckets permissions. Then, send a non-
compliance notification to your subscribed SNS topic.

- Set up a custom AWS Config rule to check public S3 buckets permissions and have it send an event on
CloudWatch Events about policy violations. Have CloudWatch Events trigger a Lambda Function to
update the S3 bucket permission.

- Utilize CloudWatch Events to monitor Trusted Advisor security recommendation results and then set a
trigger to send an email using SNS to notify you about the results of the check.

The option that says: Set up a custom Amazon Inspector rule that checks public S3 buckets permissions.
Send an action to AWS Systems Manager to correct the S3 bucket policy is incorrect because Amazon
Inspector is just an automated security assessment service that is primarily used for EC2 instances. You have to
use AWS Config instead.

The option that says: Set up a custom AWS Config rule to execute a default remediation action to update
the permissions on the public S3 bucket is incorrect because there is no default remediation action for this.
This should be integrated with the AWS Systems Manager Automation service where you can configure the
actions for your remediation.

The option that says: Create a Lambda function that executes every hour to refresh AWS Trusted Advisor
scan results via API. Subscribe to AWS Trusted Advisor notification messages to receive the results is
incorrect because it is better to use AWS Config instead of AWS Trusted Advisor in this scenario. Moreover,
Trusted Advisor only sends the summary notification every week so this won't notify you immediately about
your non-compliant resources.

References:

https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-
buckets-allowing-public-access/

https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html

Check out these AWS Cheat Sheets:

https://tutorialsdojo.com/aws-config/

https://tutorialsdojo.com/amazon-cloudwatch/

https://tutorialsdojo.com/aws-trusted-advisor/

Question 6: Incorrect

A business is developing a serverless Angular application in AWS for internal use. As part of the deployment
process, the application is built and packaged using AWS CodeBuild and then copied to an S3 bucket.
The buildspec.yml file includes the following code:

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      - npm install -g @angular/cli
  build:
    commands:
      - ng build --prod
  post_build:
    commands:
      - aws s3 cp dist s3://nueva-ecija-angular-internal --acl authenticated-read

After deployment, the security team discovered that any individual with an AWS account could access the
objects within the S3 bucket despite the application being designed solely for internal purposes.

Which of the following solutions should the DevOps Engineer implement to resolve the issue in the MOST
secure manner?

Replace the --acl authenticated-read from the post_build command specified in the buildspec.yml
with --acl public-read-write.

Replace the --acl authenticated-readfrom the post_build command specified in the buildspec.yml
with --acl public-read.

Modify the post_build command stated in the buildspec.yml by including --sse AES256 to encrypt the
objects.

Add a bucket policy that allows access exclusively to the AWS accounts affiliated with the business.
Remove the acl authenticated-read from the post_build command specified in the buildspec.yml.

(Correct)

Explanation

Amazon S3 has a set of predefined groups. When granting account access to a group, you specify one of the predefined group URIs instead of a canonical user ID. AWS provides the following predefined groups:
-Authenticated Users group – Represented by /AuthenticatedUsers. This group represents all AWS accounts.
Access permission to this group allows any AWS account to access the resource. However, all requests must be
signed (authenticated).

-All Users group – Represented by /AllUsers. Access permission to this group allows anyone in the world
access to the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests
omit the Authentication header in the request.

-Log Delivery group – Represented by /LogDelivery. WRITE permission on a bucket enables this group to
write server access logs to the bucket.

Access control lists (ACLs) are one of the resource-based options that you can use to manage access to your buckets and objects. You can use ACLs to grant basic read/write permissions to other AWS accounts.

Amazon S3 supports a set of predefined grants known as canned ACLs, such as private, public-read, and authenticated-read. Each canned ACL has a predefined set of grantees and permissions.

However, there are limits to managing permissions using ACLs. For example, you can grant permissions only to
other AWS accounts; you cannot grant permissions to users in your account. You cannot grant conditional
permissions, nor can you explicitly deny permissions. ACLs are suitable for specific scenarios. For example, if a
bucket owner allows other AWS accounts to upload objects, permissions to these objects can only be managed
using object ACL by the AWS account that owns the object.

In this scenario, removing the --acl authenticated-read will prevent the AuthenticatedUsers group (all AWS accounts) from reading the objects in the S3 bucket. This will resolve the issue of anyone with an AWS account being able to access the objects. In addition, attaching a bucket policy that only grants access to the AWS accounts relevant to the business will make the bucket more secure.
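
A minimal sketch of such a bucket policy, applied with the AWS CLI, is shown below; the account ID is a placeholder for the accounts affiliated with the business:

# Allow object reads only from the business-owned AWS account
aws s3api put-bucket-policy --bucket nueva-ecija-angular-internal --policy '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowBusinessAccountsOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::nueva-ecija-angular-internal/*"
    }]
}'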

Hence, the correct answer is: Add a bucket policy that allows access exclusively to the AWS accounts
affiliated with the business. Remove the --acl authenticated-read from the post_build command specified
in the buildspec.yml.

The option that says: Replace the --acl authenticated-read from the post_build command specified in the
buildspec.yml with --acl public-read is incorrect because using the --acl public-read will grant READ
access to AllUsers group (anyone in the world). This will not resolve the issue of anyone with an AWS account
being able to access the objects.

The option that says: Replace the --acl authenticated-read from the post_build command specified in the
buildspec.yml with --acl public-read-write is incorrect because using public-read-write will make the issue much worse, as this will grant the AllUsers group (anyone in the world) READ and WRITE access. Granting this on a bucket is generally not recommended.

The option that says: Modify the post_build command stated in the buildspec.yml by including --sse AES256 to encrypt the objects is incorrect because this will only apply encryption at rest to the objects. It does not resolve the issue of anyone with an AWS account being able to access the objects since --acl authenticated-read still exists in the command.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/acls.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/amazon-s3/

Question 7: Correct

A company has a PROD, DEV, and TEST environment in its software development department, each containing hundreds of EC2 instances and other AWS services. A series of security patches has been released for the official Ubuntu operating system to address a critical flaw that was recently discovered. Although this is
an urgent matter, there is no guarantee that these patches will be bug-free and production-ready. This is why a
DevOps engineer was instructed to immediately patch all of their affected EC2 instances in all the environments,
except for the PROD environment. The EC2 instances in their PROD environment will only be patched after the
initial patches have been verified to work effectively in their non-PROD environments. Each environment also
has different baseline patch requirements that you will need to satisfy.

How should the DevOps engineer perform this task with the LEAST amount of effort?

Tag each instance based on its environment, business unit, and operating system. Set up a patch baseline
in AWS Systems Manager Patch Manager for each environment. Categorize each Amazon EC2 instance
based on its tags using Patch Groups. Apply the required patches specified in the corresponding patch
baseline to each Patch Group.

(Correct)

Set up a new patch baseline in AWS Systems Manager Patch Manager for each environment. Tag each
Amazon EC2 instance based on its operating system. Categorize EC2 instances based on their tags using
Patch Groups. Apply the patches specified in their corresponding patch baseline to each Patch Group. Use
Patch Compliance to ensure that the patches have been installed correctly. Using AWS Config, record all
of the changes to patch and association compliance statuses.

Use the AWS Systems Manager Maintenance Windows to set up a scheduled maintenance period for each
environment, where the period is after business hours so as not to affect daily operations. The Systems
Manager will execute a cron job that will install the required patches for each Amazon EC2 instance in
each environment during the maintenance period. Use the Systems Manager Managed Instances to verify
that your environments are fully patched and compliant.

Develop various shell scripts for each environment that specifies which patch will serve as its baseline.
Tag each instance based on its environment, business unit, and operating system. Add the Amazon EC2
instances into Target Groups using the AWS Systems Manager Run Command and then execute the script
corresponding to each Target Group.

Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-
related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch
fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system
type.

Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release,
as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling
patching to run as a Systems Manager Maintenance Window task. You can also install patches individually or to
large groups of instances by using Amazon EC2 tags. For each auto-approval rule that you create, you can
specify an auto-approval delay. This delay is the number of days to wait after the patch was released before the
patch is automatically approved for patching.

A patch group is an optional means of organizing instances for patching. For example, you can create patch
groups for different operating systems (Linux or Windows), different environments (Development, Test, and
Production), or different server functions (web servers, file servers, databases). Patch groups can help you avoid
deploying patches to the wrong set of instances. They can also help you avoid deploying patches before they
have been adequately tested.

You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a
patch group must be defined with the tag key: Patch Group. After you create a patch group and tag instances, you
can register the patch group with a patch baseline. By registering the patch group with a patch baseline, you
ensure that the correct patches are installed during the patching execution.
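
To illustrate, the DEV environment could be wired up as sketched below; the instance ID, baseline ID, and tag value are placeholders, and TEST and PROD would be registered against their own baselines in the same way:

# Tag the instance so it belongs to the DEV patch group
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key="Patch Group",Value="DEV"

# Associate the DEV patch baseline with that patch group
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id pb-0123456789abcdef0 \
    --patch-group "DEV"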

Hence, the correct answer is: Tag each instance based on its environment, business unit, and operating
system. Set up a patch baseline in AWS Systems Manager Patch Manager for each environment.
Categorize each Amazon EC2 instance based on its tags using Patch Groups. Apply the required patches
specified in the corresponding patch baseline to each Patch Group.

The option that says: Develop various shell scripts for each environment that specifies which patch will
serve as its baseline. Tag each instance based on its environment, business unit, and operating system. Add
the Amazon EC2 instances into Target Groups using the AWS Systems Manager Run Command and then
execute the script corresponding to each Target Group is incorrect as this option takes more effort to perform
because you are using Systems Manager Run Command instead of Patch Manager. The Run Command service
enables you to automate common administrative tasks and perform ad hoc configuration changes at scale,
however, it takes a lot of effort to implement this solution. You can use Patch Manager instead to perform the
task required by the scenario since you need to perform this task with the least amount of effort.

The option that says: Set up a new patch baseline in AWS Systems Manager Patch Manager for each
environment. Tag each Amazon EC2 instance based on its operating system. Categorize EC2 instances
based on their tags using Patch Groups. Apply the patches specified in their corresponding patch baseline
to each Patch Group. Use Patch Compliance to ensure that the patches have been installed correctly.
Using AWS Config, record all of the changes to patch and association compliance statuses is incorrect
because you should be tagging instances based on the environment and its OS type in which they belong and not
just its OS type. This is because the type of patches that will be applied varies between the different
environments. With this option, the Ubuntu EC2 instances in all of your environments, including in production,
will automatically be patched.

The option that says: Use the AWS Systems Manager Maintenance Windows to set up a scheduled
maintenance period for each environment, where the period is after business hours so as not to affect daily
operations. The Systems Manager will execute a cron job that will install the required patches for each
Amazon EC2 instance in each environment during the maintenance period. Use the Systems Manager
Managed Instances to verify that your environments are fully patched and compliant is incorrect because
this is not the simplest way to address the issue using AWS Systems Manager. The AWS Systems Manager
Maintenance Windows feature lets you define a schedule for when to perform potentially disruptive actions on
your instances such as patching an operating system, updating drivers, or installing software or patches. Each
Maintenance Window has a schedule, a maximum duration, a set of registered targets (the instances that are
acted upon), and a set of registered tasks. Although this solution may work, it entails a lot of configuration and
effort to implement.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-
manager/

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 8: Correct

An international IT consulting firm has multiple on-premises data centers across the globe. Their technical team
regularly uploads financial and regulatory files from each of their respective data centers to a centralized web
portal hosted in AWS. It uses an Amazon S3 bucket named financial-tdojo-reports to store the data. Another
team downloads various reports from a CloudFront web distribution that uses the same Amazon S3 bucket as the
origin. A DevOps Engineer noticed that the staff are using both the CloudFront link and the direct Amazon S3
URLs to download the reports. The IT Security team of the company considered this as a security risk, and they
recommended to re-design the architecture. A new system must be implemented that prevents anyone from
bypassing the CloudFront distribution and disables direct access through the Amazon S3 URLs.

What should the Engineer do to meet the above requirement?

In the CloudFront web distribution, set up a field-level encryption configuration and for each user, revoke
the existing permission to access Amazon S3 URLs to download the objects.

Set up a custom SSL in your CloudFront web distribution instead of the default SSL. For each user,
revoke the existing permission to access Amazon S3 URLs to download the objects.

Set up a special CloudFront user called an origin access identity (OAI). Grant the origin access identity
permission to read the objects in your bucket. For each user, revoke the existing permission to access
Amazon S3 URLs to download the objects.

(Correct)

Configure the distribution to use Signed URLs and create a special CloudFront user called an origin
access identity (OAI). Grant permission to the OAI to read the objects in the S3 bucket.

Explanation
To restrict access to content that you serve from Amazon S3 buckets, you create CloudFront signed URLs or
signed cookies to limit access to files in your Amazon S3 bucket, and then you create a special CloudFront user
called an origin access identity (OAI) and associate it with your distribution. Then you configure permissions so
that CloudFront can use the OAI to access and serve files to your users, but users can't use a direct URL to the
S3 bucket to access a file there.

In this scenario, the main objective is to prevent the staff from using the direct Amazon S3 URLs to download
the reports. The best solution that you can choose here is to use an Origin Access Identity (OAI) and remove
anyone else's permission to use the S3 URLs to read the objects.
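
To make the idea concrete, here is a minimal boto3 sketch of the resulting bucket policy; the OAI ID is a
placeholder, while the bucket name comes from the scenario. Once this policy replaces any public-read grants,
only CloudFront can fetch the objects:

import json
import boto3

s3 = boto3.client("s3")

# Placeholder OAI ID; the real value comes from the CloudFront console or the
# create-cloud-front-origin-access-identity API call.
oai_id = "E2EXAMPLEOAIID"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::financial-tdojo-reports/*",
    }],
}

s3.put_bucket_policy(Bucket="financial-tdojo-reports", Policy=json.dumps(policy))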

Hence, the correct answer is: Set up a special CloudFront user called an origin access identity (OAI). Grant
the origin access identity permission to read the objects in your bucket. For each user, revoke the existing
permission to access Amazon S3 URLs to download the objects.

The option that says: Set up a custom SSL in your CloudFront web distribution instead of the default SSL.
For each user, revoke the existing permission to access Amazon S3 URLs to download the objects is incorrect
because SSL is not needed in this particular scenario. What you need to implement is an OAI.

The option that says: In the CloudFront web distribution, set up a field-level encryption configuration and
for each user, revoke the existing permission to access Amazon S3 URLs to download the objects is
incorrect because the field-level encryption configuration is primarily used for safeguarding sensitive fields in
your CloudFront. Therefore, it is not suitable for this scenario.

The option that says: Configure the distribution to use Signed URLs and create a special CloudFront user
called an origin access identity (OAI). Grant permission to the OAI to read the objects in the S3 bucket is
incorrect because although it is recommended to use Signed URLs and OAI in CloudFront, this option is still
missing the crucial step of removing the user permissions to directly use the S3 URLs in order to download the
files.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-overview.html

https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-
in-the-cloud/

Check out this Amazon CloudFront Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudfront/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 9: Correct

A mobile phone manufacturer hosts a suite of enterprise resource planning (ERP) solutions to several Amazon
EC2 instances in their AWS VPC. Its DevOps team is using AWS CloudFormation templates to design, launch,
and deploy resources to their cloud infrastructure. Each template is manually updated to map the latest AMI IDs
of the ERP solution. This process takes a significant amount of time to execute, which is why the team was
tasked to automate this process.

In this scenario, which of the following options is the MOST suitable solution that can satisfy the requirement?

Integrate AWS CloudFormation with AWS Service Catalog to fetch the latest AMI IDs and automatically
use them for succeeding deployments.

Use Systems Manager Parameter Store in conjunction with CloudFormation to retrieve the latest AMI IDs
for your template. Call the update-stack API in CloudFormation in your template whenever you decide to
update the Amazon EC2 instances.

(Correct)

Integrate AWS Service Catalog with AWS Config to automatically fetch the latest AMI and use it for
succeeding deployments.

Set up and configure the Systems Manager State Manager service to store the latest AMI IDs and
integrate it with your AWS CloudFormation template. Call the update-stack API in CloudFormation
whenever you decide to update the EC2 instances in your CloudFormation template.

Explanation

You can use the existing Parameters section of your CloudFormation template to define Systems Manager
parameters, along with other parameters. Systems Manager parameters are a unique type that is different from
existing parameters because they refer to actual values in the Parameter Store. The value for this type of
parameter would be the Systems Manager (SSM) parameter key instead of a string or other value.
CloudFormation will fetch values stored against these keys in Systems Manager in your account and use them
for the current stack operation.

If the parameter being referenced in the template does not exist in Systems Manager, a synchronous validation
error is thrown. Also, if you have defined any parameter value validations (AllowedValues, AllowedPattern,
etc.) for Systems Manager parameters, they will be performed against SSM keys which are given as input values
for template parameters, not actual values stored in Systems Manager.

Parameters stored in Systems Manager are mutable. Any time you use a template containing Systems Manager
parameters to create/update your stacks, CloudFormation uses the values for these Systems Manager parameters
at the time of the create/update operation. So, as parameters are updated in Systems Manager, you can have the
new value of the parameter take effect by just executing a stack update operation. The Parameters section in the
output for Describe API will show an additional ‘ResolvedValue’ field that contains the resolved value of the
Systems Manager parameter that was used for the last stack operation.
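
As an illustration only, a minimal sketch of this pattern is shown below; the erp-stack stack name and the
/erp/ami-id parameter name are hypothetical, and the template is trimmed to just the AMI lookup:

import boto3

# Minimal template: the AMI ID is declared as an SSM parameter type, so
# CloudFormation resolves the value stored at /erp/ami-id (a hypothetical
# parameter name) at stack create/update time.
TEMPLATE = """
Parameters:
  ErpAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /erp/ami-id
Resources:
  ErpInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ErpAmiId
      InstanceType: t3.large
"""

cfn = boto3.client("cloudformation")

# Each update-stack call re-resolves the SSM parameter, so publishing a new AMI ID
# to Parameter Store and updating the stack rolls out the change without editing
# the template.
cfn.update_stack(StackName="erp-stack", TemplateBody=TEMPLATE)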

Hence, the correct answer is: Use Systems Manager Parameter Store in conjunction with CloudFormation to
retrieve the latest AMI IDs for your template. Call the update-stack API in CloudFormation in your template
whenever you decide to update the Amazon EC2 instances.

The option that says: Set up and configure the Systems Manager State Manager service to store the latest AMI
IDs and integrate it with your AWS CloudFormation template. Call the update-stack API in CloudFormation
whenever you decide to update the EC2 instances in your CloudFormation template is incorrect because the
Systems Manager State Manager service simply automates the process of keeping your Amazon EC2 and hybrid
infrastructure in a state that you define. This can't be used as a parameter store that refers to the latest AMI of
your application.

The option that says: Integrate AWS Service Catalog with AWS Config to automatically fetch the latest AMI
and use it for succeeding deployments is incorrect because using AWS Service Catalog is not suitable in this
scenario. This service just allows organizations to create and manage catalogs of IT services that are approved
for use on AWS.

The option that says: Integrate AWS CloudFormation with AWS Service Catalog to fetch the latest AMI IDs
and automatically use them for succeeding deployments is incorrect because AWS Service Catalog just allows
organizations to create and manage catalogs of IT services that are approved for use on AWS. A better solution
is to use Systems Manager Parameter Store in conjunction with CloudFormation to retrieve the latest AMI IDs
for your template.

References:

https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 10: Correct

You are a project owner of a global online marketplace application that is composed of several components. The
application code is hosted in an AWS CodeCommit repository. Due to the project complexity and the large pool
of developers concurrently working on the project, the source code repository had accumulated over a hundred
different branches for its development, testing, and production environments. This setup has caused you to lose
track of all the relevant code changes. To have better visibility, you want to capture all events in the repository,
such as cloning a repo or creating a branch, in a single location for you to review.

Which of the following actions will you take to achieve this?

Create an Amazon SES topic and subscribe to it. On CodeCommit, go to repository settings and select the
Notifications tab. Select all the events you want to capture and choose the SES topic as the target for your
notification.

Create a new Amazon SNS topic. On CodeCommit, go to the repository settings and select the SNS topic
as the destination on the Events Tab. Select the events you want to capture and these will be sent to the
SNS topic and notify you when they occur.

Create a CloudWatch Log group. On CodeCommit, go to repository settings and select the Notifications
tab. Select all the events you want to capture and choose the Log group as the target for your notifications.

Create a Lambda function that sends event logs to AWS CloudWatch Logs and set a trigger by selecting
the CodeCommit repository. On CodeCommit, go to the repository settings and select the Lambda
function on the Triggers list when creating a new trigger.

(Correct)

Explanation

You can set up notification rules for a repository so that repository users receive emails about the repository
event types you specify. Notifications are sent when events match the notification rule settings. You can create
an Amazon SNS topic to use for notifications or use an existing one in your AWS account. You use the
CodeCommit console and the AWS CLI to configure notifications.

Although you can configure a trigger to use Amazon SNS to send emails about some repository events, those
events are limited to operational events, such as creating branches and pushing code to a branch. Triggers do not
use CloudWatch Events rules to evaluate repository events. They are more limited in scope.

You can integrate Amazon SNS topics and Lambda functions with triggers in CodeCommit, but you must first
create and then configure resources with a policy that grants CodeCommit the permissions to interact with those
resources. You must create the resource in the same AWS Region as the CodeCommit repository. For example,
if the repository is in US East (Ohio) (us-east-2), the Amazon SNS topic or Lambda function must be in US East
(Ohio).

Hence, the correct answer is: Create a Lambda function that sends event logs to AWS CloudWatch Logs and
set a trigger by selecting the CodeCommit repository. On CodeCommit, go to the repository settings and select
the Lambda function on the Triggers list when creating a new trigger.
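
For illustration, a minimal sketch of such a Lambda handler is shown below; the field names are an assumption
about the CodeCommit trigger event shape, so inspect a real event before relying on them. Everything the
function prints ends up in its CloudWatch Logs log group, which becomes the single place to review repository
events:

import json

def lambda_handler(event, context):
    # Print the raw trigger payload; anything printed here is written to the
    # function's CloudWatch Logs log group.
    print(json.dumps(event))

    # Assumed event shape: adjust after inspecting an actual trigger event.
    for record in event.get("Records", []):
        repo_arn = record.get("eventSourceARN", "unknown")
        user_arn = record.get("userIdentityARN", "unknown")
        refs = record.get("codecommit", {}).get("references", [])
        print(f"{user_arn} changed {refs} in {repo_arn}")

    return {"status": "logged"}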

The option that says: Create a new Amazon SNS topic. On CodeCommit, go to the repository settings and
select the SNS topic as the destination on the Events tab. Select the events you want to capture and these will
be sent to the SNS topic and notify you when they occur is incorrect because there is no Events tab on the
CodeCommit repository settings. The repository notification setting is the appropriate one that you should
configure.

The option that says: Create an Amazon SES topic and subscribe to it. On CodeCommit, go to repository
settings and select the Notifications tab. Select all the events you want to capture and choose the SES topic as
the target for your notification is incorrect because you have to use Amazon SNS instead of Amazon SES.

The option that says: Create a CloudWatch Log group. On CodeCommit, go to repository settings and select
the Notifications tab. Select all the events you want to capture and choose the CloudWatch Log group as the
target for your notifications is incorrect because you can't send notifications directly to CloudWatch Logs. You
have to use Amazon SNS or a Lambda trigger first.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify-lambda.html

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-repository-email.html

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify.html

Question 11: Correct

A multinational investment bank has several AWS accounts across the globe that host its cloud-based
applications. A DevOps Engineer was tasked to secure all of the company's AWS resources as well as their web
applications from common web vulnerabilities and cyber attacks. He must also safeguard their cloud
infrastructure from Distributed Denial of Service (DDoS) attacks, in which incoming traffic from many different
locations simultaneously hits the company's web application and floods their network.

Which among the options below can the Engineer implement as part of the company's DDoS attack surface
reduction strategy to minimize the blast radius in their cloud infrastructure? (Select TWO.)

Set up AWS WAF rules that identify and block common DDoS request patterns to effectively mitigate a
DDoS attack on the company's cloud infrastructure. Ensure that the Network Access Control Lists (ACLs)
only allow the required ports and network addresses in the VPC.

(Correct)

Set up AWS Systems Manager Session Manager to filter all client-side web sessions of their Amazon EC2
instances. Use extra-large EC2 instances to accommodate a surge of incoming traffic caused by a DDoS
attack. Configure Elastic Load Balancing and Auto Scaling to your EC2 instances across multiple
Availability Zones to improve availability and scalability to your compute capacity.

Use AWS Shield Advanced to enable enhanced DDoS attack detection and monitoring for application-
layer traffic of the company's AWS resources. Ensure that every security group in the VPC only allows
certain ports and traffic from authorized servers or services. Protect your origin servers by putting it
behind a CloudFront distribution.

(Correct)

Enable Multi-Factor Authentication (MFA) to all of the IAM users as well as Amazon S3 buckets.
Improve the security of your AWS resources using Systems Manager State Manager, AWS Config, and
Trusted Advisor

Set up a Multi-Account Multi-Region Data Aggregation using AWS Config to monitor all of the
company's AWS accounts. Enable the Versioning feature on all of the Amazon S3 buckets. Automate the
OS patching of all of the company's Amazon EC2 instances using Systems Manager Patch Manager.

Explanation

A Distributed Denial of Service (DDoS) attack is a malicious attempt to make a targeted system, such as a
website or application, unavailable to end-users. To achieve this, attackers use a variety of techniques that
consume network or other resources, interrupting access for legitimate end-users.

Another important consideration when architecting on AWS is to limit the opportunities that an attacker may
have to target your application. For example, if you do not expect an end-user to directly interact with certain
resources, you will want to make sure that those resources are not accessible from the Internet. Similarly, if you
do not expect end-users or external applications to communicate with your application on certain ports or
protocols, you will want to make sure that traffic is not accepted. This concept is known as attack surface
reduction. Resources that are not exposed to the Internet are more difficult to attack, which limits the options an
attacker might have to target the availability of your application.

AWS Shield is a managed DDoS protection service that is available in two tiers: Standard and Advanced. AWS
Shield Standard applies always-on detection and inline mitigation techniques, such as deterministic packet
filtering and priority-based traffic shaping, to minimize application downtime and latency. AWS Shield Standard
is included automatically and transparently to your Elastic Load Balancing load balancers, Amazon CloudFront
distributions, and Amazon Route 53 resources at no additional cost. When you use these services that include
AWS Shield Standard, you receive comprehensive availability protection against all known infrastructure layer
attacks.

Customers who have the technical expertise to manage their own monitoring and mitigation of application-layer
attacks can use AWS Shield together with AWS WAF rules to create a comprehensive DDoS attack mitigation
strategy.
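
As one hedged example of such a WAF rule (not the only valid mitigation), the boto3 sketch below creates a web
ACL with a rate-based rule that blocks IPs exceeding a request threshold; the ACL name and the 2,000-request
limit are placeholder assumptions:

import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ddos-surface-reduction",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        # Block any source IP that exceeds the request limit within the evaluation window.
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rate-limit-per-ip",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ddos-surface-reduction",
    },
)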

Hence, the correct answers are:

- Use AWS Shield Advanced to enable enhanced DDoS attack detection and monitoring for application-
layer traffic of the company's AWS resources. Ensure that every security group in the VPC only allows
certain ports and traffic from authorized servers or services. Protect your origin servers by putting it
behind a CloudFront distribution.

- Set up AWS WAF rules that identify and block common DDoS request patterns to effectively mitigate a
DDoS attack on the company's cloud infrastructure. Ensure that the Network Access Control Lists
(ACLs) only allow the required ports and network addresses in the VPC.

The option that says: Set up AWS Systems Manager Session Manager to filter all client-side web sessions of
their Amazon EC2 instances. Use extra-large EC2 instances to accommodate a surge of incoming traffic
caused by a DDoS attack. Configure Elastic Load Balancing and Auto Scaling to your EC2 instances
across multiple Availability Zones to improve availability and scalability to your compute capacity is
incorrect. Although it improves the scalability of your network in case of an ongoing DDoS attack, it simply
absorbs the heavy application layer traffic and doesn't minimize the attack surface in your cloud architecture. In
addition, the AWS Systems Manager Session Manager is primarily used to provide secure and auditable instance
management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys, but not to
filter client-side web sessions.

The option that says: Set up a Multi-Account Multi-Region Data Aggregation using AWS Config to
monitor all of the company's AWS accounts. Enable the Versioning feature on all of the Amazon S3
buckets. Automate the OS patching of all of the company's Amazon EC2 instances using Systems
Manager Patch Manager is incorrect. Although it is recommended that all of your instances are properly
patched using the Systems Manager Patch Manager, it is still not enough to protect your cloud infrastructure
against DDoS attacks. The S3 Versioning feature is primarily used to preserve, retrieve, and restore every
version of every object stored in your Amazon S3 bucket, but not as DDoS attack mitigation.

The option that says: Enable Multi-Factor Authentication (MFA) to all of the IAM users as well as Amazon
S3 buckets. Improve the security of your AWS resources using Systems Manager State Manager, AWS
Config, and Trusted Advisor is incorrect because MFA doesn't minimize the DDoS attack surface area. The
AWS Systems Manager State Manager is just a configuration management service that automates the process of
keeping your Amazon EC2 and hybrid infrastructure in a state that you define. This service will not help you
minimize the blast radius of a security attack.

References:

https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf

https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf

Tutorials Dojo's AWS Certified DevOps Engineer Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 12: Correct

You are hosting your application code on AWS CodeCommit in US East (Ohio) region. You want to be notified
when anyone pushes code to the master branch so you have created an AWS SNS topic on the US East (N.
Virginia) region, on which your corporate Amazon SES email domain is also hosted. On CodeCommit, you have
created a trigger that will publish a message to your SNS topic. However, your CodeCommit trigger is not
working as expected.

Which of the following is the MOST likely cause of this issue?

Your CodeCommit repository and AWS SNS topic must be on the same region for the triggers to work.

(Correct)

You need to input the full ARN of the SNS topic on CodeCommit trigger destination since your SNS
topic is on a different region from your CodeCommit repository.

AWS CodeCommit needs to have permission to interact with Amazon SNS. Configure the necessary
policy for CodeCommit to publish to Amazon SNS topic.

AWS CodeCommit needs to have permission to interact with both Amazon SNS and SES. Configure the
necessary policy for CodeCommit to publish to Amazon SNS topic and to use SES to send emails.

Explanation

You can configure a CodeCommit repository so that code pushes or other events trigger actions, such as sending
a notification from Amazon Simple Notification Service (Amazon SNS) or invoking a function in AWS
Lambda. You can create up to 10 triggers for each CodeCommit repository.

Triggers are commonly configured to:

- Send emails to subscribed users every time someone pushes to the repository.

- Notify an external build system to start a build after someone pushes to the main branch of the repository.

Scenarios like notifying an external build system require writing a Lambda function to interact with other
applications. The email scenario simply requires creating an Amazon SNS topic.

You can integrate Amazon SNS topics and Lambda functions with triggers in CodeCommit, but you must first
create and then configure resources with a policy that grants CodeCommit the permissions to interact with those
resources. You must create the resource in the same AWS Region as the CodeCommit repository. For example,
if the repository is in US East (Ohio) (us-east-2), the Amazon SNS topic or Lambda function must be in US East
(Ohio).

Hence, the correct answer is: Your CodeCommit repository and Amazon SNS topic must be on the same region
for the triggers to work.
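
A minimal boto3 sketch of the corrected setup might look like the following; the repository name, topic name,
and branch are placeholders, and both resources are deliberately created in the repository's Region (us-east-2):

import boto3

# Both clients point at the repository's Region, so the SNS topic lives where
# CodeCommit can deliver to it.
region = "us-east-2"
sns = boto3.client("sns", region_name=region)
codecommit = boto3.client("codecommit", region_name=region)

topic_arn = sns.create_topic(Name="master-branch-pushes")["TopicArn"]

codecommit.put_repository_triggers(
    repositoryName="my-app-repo",  # placeholder repository name
    triggers=[{
        "name": "notify-on-master-push",
        "destinationArn": topic_arn,
        "branches": ["master"],
        "events": ["updateReference"],  # fires on pushes to the listed branches
    }],
)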

The option that says: AWS CodeCommit needs to have permission to interact with Amazon SNS. Configure
the necessary policy for CodeCommit to publish to Amazon SNS topic is incorrect because, for Amazon SNS
topics, you do not need to configure additional IAM policies or permissions if the Amazon SNS topic is created
using the same account as the CodeCommit repository. However, this will still not work if the SNS topic and the
CodeCommit repository are in different regions.

The option that says: You need to input the full ARN of the SNS topic on CodeCommit trigger destination
since your SNS topic is on a different region from your CodeCommit repository is incorrect because even if
you do this, the problem will still persist. Your CodeCommit repository and the SNS topic must be created in the
same AWS region in order to resolve this issue.

The option that says: AWS CodeCommit needs to have permission to interact with both Amazon SNS and SES.
Configure the necessary policy for CodeCommit to publish to Amazon SNS topic and to use SES to send
emails is incorrect. CodeCommit does not need permission to use Amazon SES because Amazon SNS handles the
email delivery. You also don't have to configure additional IAM policies for SNS if the topic is in the same
account as your repository.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify.html

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify-sns.html

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-notify-edit.html

Check out this AWS CodeCommit Cheat Sheet:

https://tutorialsdojo.com/aws-codecommit/

AWS CodeCommit Repository:

https://tutorialsdojo.com/aws-codecommit-repository/

Question 13: Incorrect

A cloud-based payments company is heavily using Amazon EC2 instances to host their applications in AWS
Cloud. They would like to improve the security of their cloud resources by ensuring that all of their EC2
instances were launched from pre-approved AMIs only. The list of AMIs is set and managed by their IT Security
team. Their Software Development team has an automated CI/CD process that launches several EC2 instances
with new and untested AMIs for testing. The development process must not be affected by the new solution,
which will be implemented by their Lead DevOps Engineer.

Which of the following can the Engineer implement to satisfy the requirement with the LEAST impact on the
development process? (Select TWO.)

Integrate AWS Lambda and CloudWatch Events to schedule a daily process that will search through the
list of running Amazon EC2 instances within your VPC. Configure the function to determine if any of
these are based on unauthorized AMIs. Publish a new message to an Amazon SNS topic to inform the
Security and Development teams that the issue occurred and then automatically terminate the EC2
instance.

(Correct)

Set up a centralized IT Systems Operations team that has the required policies, roles, and permissions.
The team will manually process the security approval steps to ensure that Amazon EC2 instances are
launched from pre-approved AMIs only.

Set up IAM policies to restrict the ability of users to launch Amazon EC2 instances based on a specific set
of pre-approved AMIs which were tagged by the IT Security team.

Do regular scans using Amazon Inspector via a custom assessment template that determines if the
Amazon EC2 instance is based upon a pre-approved AMI or not. Terminate the instances and inform the
IT Security team by email about the security breach.

Use AWS Config with a Lambda function that periodically evaluates whether there are EC2 instances
launched based on non-approved AMIs. Set up a remediation action using AWS Systems Manager
Automation that will automatically terminate the EC2 instance. Publish a message to an Amazon SNS
topic to inform the IT Security and Development teams about the occurrence.

(Correct)

Explanation

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. Config continuously monitors and records your AWS resource configurations and allows you to
automate the evaluation of recorded configurations against desired configurations. With Config, you can review
changes in configurations and relationships between AWS resources, dive into detailed resource configuration
histories, and determine your overall compliance against the configurations specified in your internal guidelines.
This enables you to simplify compliance auditing, security analysis, change management, and operational
troubleshooting.

When you run your applications on AWS, you usually use AWS resources, which you must create and manage
collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS
resources. AWS Config is designed to help you oversee your application resources.

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This
includes how the resources are related to one another and how they were configured in the past so that you can
see how the configurations and relationships change over time. With AWS Config, you can do the following:

- Evaluate your AWS resource configurations for desired settings.


- Get a snapshot of the current configurations of the supported resources that are associated with your AWS
account.

- Retrieve configurations of one or more resources that exist in your account.

- Retrieve historical configurations of one or more resources.

- Receive a notification whenever a resource is created, modified, or deleted.

- View relationships between resources. For example, you might want to find all resources that use a particular
security group.

AWS Config provides AWS managed rules which are predefined, customizable rules that AWS Config uses to
evaluate whether your AWS resources comply with common best practices. In this scenario, you can use
the approved-amis-by-id AWS managed rule, which checks whether running instances are using specified AMIs.
You can also use a Lambda function which is scheduled to run regularly to scan all of the running EC2 instances
in your VPC and check if there is an instance that was launched using an unauthorized AMI.
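
For example, the managed rule can be enabled with a short boto3 call like the one below; the rule name and AMI
IDs are placeholders, and the Systems Manager Automation remediation and SNS notification wiring are omitted
for brevity:

import json
import boto3

config = boto3.client("config")

# Managed rule that flags running instances whose AMI is not on the
# pre-approved list (AMI IDs below are placeholders).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-approved-amis-only",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "APPROVED_AMIS_BY_ID",
        },
        "InputParameters": json.dumps(
            {"amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"}
        ),
    }
)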

In this scenario, we have to balance two things: security and development operations. The former should always
be our utmost priority, this is why the scenario says that all of the company's EC2 instances must be launched
from pre-approved AMIs only.

In the first place, why should the development team deploy a non-approved AMI? That's a clear violation of the
company policy as stated above. Even a small security risk is still a risk, and it must always be dealt with. In
some organizations, such as banks and financial institutions, the IT Security team
has the power to stop or revert back any recent deployments if there is a security issue.

Hence, the correct answers are:

- Use AWS Config with a Lambda function that periodically evaluates whether there are EC2 instances
launched based on non-approved AMIs. Set up a remediation action using AWS Systems Manager
Automation that will automatically terminate the EC2 instance. Publish a message to an Amazon SNS
topic to inform the IT Security and Development teams about the occurrence.

- Integrate AWS Lambda and CloudWatch Events to schedule a daily process that will search through the
list of running Amazon EC2 instances within your VPC. Configure the function to determine if any of
these are based on unauthorized AMIs. Publish a new message to an Amazon SNS topic to inform the
Security and Development teams that the issue occurred and then automatically terminate the EC2
instance.

Remember that the question asks for the LEAST impact solution. The development teams will still be able to
continue deploying their applications without any disruption. If you are using the Atlassian suite, your Bamboo
build and deployment plans would still execute successfully. Take note that the two solutions will not
immediately terminate the EC2 instances running with a non-approved AMI. The first solution uses AWS
Config with a Lambda function that periodically evaluates the instances while the second has a once-a-day
(daily) process. Therefore, these two answers provide the LEAST impact solution while keeping the architecture
secure.

The option that says: Set up a centralized IT Systems Operations team that has the required policies, roles,
and permissions. The team will manually process the security approval steps to ensure that Amazon EC2
instances are launched from pre-approved AMIs only is incorrect because having manual information
security approval will impact the development process. A better solution is to implement an automated process
using AWS Config or a scheduled job using AWS Lambda and CloudWatch Events.

The option that says: Set up IAM policies to restrict the ability of users to launch Amazon EC2 instances
based on a specific set of pre-approved AMIs which were tagged by the IT Security team is incorrect
because setting up an IAM Policy will totally restrict the development team from launching EC2 instances with
unapproved AMIs which could impact their CI/CD process. The scenario clearly says that the solution should
not have any interruption in the company's development process.

The option that says: Do regular scans using Amazon Inspector via a custom assessment template that
determines if the Amazon EC2 instance is based upon a pre-approved AMI or not. Terminate the
instances and inform the IT Security team by email about the security breach is incorrect because Amazon
Inspector is just an automated security assessment service that helps improve the security and compliance of
applications deployed in AWS. It does not have the capability to detect EC2 instances that are using unapproved
AMIs, unlike AWS Config.

References:

https://docs.aws.amazon.com/config/latest/developerguide/approved-amis-by-id.html

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 14: Correct

A business has its AWS accounts managed by AWS Organizations and has employees in different countries. The
business is reviewing its AWS account security policies and is looking for a way to monitor its AWS accounts
for unusual behavior that is associated with an IAM identity. The business wants to:

- send a notification to any employee for whom the unusual activity is detected.
- send a notification to the user's team leader.
- use an external messaging platform to send the notifications. The platform requires a target user-id for each
recipient.

The business already has an API that can be used to retrieve the team leader's and the employee's user-id from
IAM user names.

Which solution will satisfy the requirements?

Choose an AWS account in the organization to serve as the Amazon GuardDuty administrator. Add the
business' AWS accounts as GuardDuty member accounts that are associated with the GuardDuty
administrator account. Create a Lambda function to perform the user-id lookup and deliver notifications to
the external messaging platform. Create a rule in Amazon EventBridge from the GuardDuty administrator
account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda
function.

(Correct)

Choose an AWS account in the organization to serve as the Amazon GuardDuty administrator. Add the
business' AWS accounts as GuardDuty member accounts that are associated with the GuardDuty
administrator account. Create a Lambda function to perform the user-id lookup and deliver notifications to
the external messaging platform. Create a topic in SNS from the GuardDuty administrator account to
match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.

Choose an AWS account in the organization to serve as the Amazon Detective administrator. Add the
business' AWS accounts as Detective member accounts that are associated with the Detective
administrator account. Create a Lambda function to perform the user-id lookup and deliver notifications to
the external messaging platform. Create a rule in Amazon EventBridge from the Detective administrator
account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda
function.

Choose an AWS account in the organization to serve as the Amazon Detective administrator. Add the
business' AWS accounts as Detective member accounts that are associated with the Detective
administrator account. Create a Lambda function to perform the user-id lookup and deliver notifications to
the external messaging platform. Create a topic in SNS from the Detective administrator account to match
the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.

Explanation

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads
for malicious activity and delivers detailed security findings for visibility and remediation. A GuardDuty
finding represents a potential security issue detected within your network. GuardDuty generates a finding
whenever it detects unexpected and potentially malicious activity in your AWS environment.

In this scenario, findings from Amazon GuardDuty are published to Amazon EventBridge as events that can be
used to trigger a Lambda function which will send notifications to the external messaging platform.
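
A minimal boto3 sketch of that EventBridge wiring, run in the GuardDuty administrator account, could look like
this; the rule name and Lambda ARN are placeholders, and the function still needs a resource-based permission
allowing events.amazonaws.com to invoke it:

import json
import boto3

events = boto3.client("events")

# Match GuardDuty findings of the anomalous IAM behavior type.
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": ["Impact:IAMUser/AnomalousBehavior"]},
}

events.put_rule(
    Name="guardduty-anomalous-iam-behavior",
    EventPattern=json.dumps(pattern),
)

# Placeholder ARN for the Lambda that looks up user-ids and notifies the
# external messaging platform.
events.put_targets(
    Rule="guardduty-anomalous-iam-behavior",
    Targets=[{
        "Id": "notify-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:notify",
    }],
)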

Hence, the correct answer is: Choose an AWS account in the organization to serve as the Amazon
GuardDuty administrator. Add the business' AWS accounts as GuardDuty member accounts that are
associated with the GuardDuty administrator account. Create a Lambda function to perform the user-id
lookup and deliver notifications to the external messaging platform. Create a rule in Amazon EventBridge
from the GuardDuty administrator account to match the Impact:IAMUser/AnomalousBehavior
notification type and
invoke the Lambda function.

The option that says: Choose an AWS account in the organization to serve as the Amazon Detective
administrator. Add the business' AWS accounts as Detective member accounts that are associated with
the Detective administrator account. Create a Lambda function to perform the user-id lookup and deliver
notifications to the external messaging platform. Create a rule in Amazon EventBridge from the Detective
administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke
the Lambda function is incorrect because Amazon Detective will not by itself detect unusual activity. Detective
provides analysis information related to a given finding.

The option that says: Choose an AWS account in the organization to serve as the Amazon GuardDuty
administrator. Add the business' AWS accounts as GuardDuty member accounts that are associated with
the GuardDuty administrator account. Create a Lambda function to perform the user-id lookup and
deliver notifications to the external messaging platform. Create a topic in SNS from the GuardDuty
administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke
the Lambda function is incorrect because Amazon SNS can filter messages by attributes and not by message
contents. An EventBridge rule would be required to publish to the SNS topic.

The option that says: Choose an AWS account in the organization to serve as the Amazon Detective
administrator. Add the business' AWS accounts as Detective member accounts that are associated with
the Detective administrator account. Create a Lambda function to perform the user-id lookup and deliver
notifications to the external messaging platform. Create a topic in SNS from the Detective administrator
account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda
function is incorrect because Amazon Detective will not by itself detect unusual activity. In addition, Amazon
SNS can filter messages by attributes and not by message contents. An EventBridge rule would be required to
publish to the SNS topic.

References:

https://aws.amazon.com/guardduty/

https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html

Check out this Amazon GuardDuty Cheat Sheet:

https://tutorialsdojo.com/amazon-guardduty/

Question 15: Correct

A company is planning to host their enterprise application in an ECS Cluster which uses the Fargate launch type.
The database credentials should be provided to the container image by using environment variables for security purposes. A
DevOps engineer was instructed to ensure that the credentials are secure when passed to the image and that the
sensitive passwords cannot be viewed on the cluster itself. In addition, the credentials must be kept in a
dedicated storage with lifecycle management and key rotation.

Which of the following is the MOST suitable solution that the engineer should implement with the LEAST
amount of effort?

Store the database credentials using Docker Secrets in the ECS task definition file of the ECS Cluster
where you can centrally manage sensitive data and securely transmit it to only those containers that need
access to it. Ensure that the secrets are encrypted during transit and at rest.

Upload and manage the database credentials using AWS Systems Manager Parameter Store then encrypt
them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it
with your task definition, which allows access to both KMS and the Parameter Store. Within your
container definition, specify secrets with the name of the environment variable to set in the container and
the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present
to the container.

Store the database credentials in the ECS task definition file of the ECS Cluster and encrypt them with
KMS. Store the task definition JSON file in a private Amazon S3 bucket. Ensure that HTTPS is enabled
on the bucket to encrypt the data in-flight. Set up an IAM role to the ECS task definition script that allows
access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS
register-task-definition. Reference the task definition JSON file in the S3 bucket which contains the
database credentials.

Store the database credentials using the AWS Secrets Manager. Encrypt the credentials using AWS KMS.
Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition
which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify
secrets with the name of the environment variable to set in the container and the full ARN of the Secrets
Manager secret, which contains the sensitive data, to present to the container.

(Correct)

Explanation

Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either
AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them
in your container definition. This feature is supported by tasks using both the EC2 and Fargate launch types.

Within your container definition, specify secrets with the name of the environment variable to set in the
container and the full ARN of either the Secrets Manager secret or Systems Manager Parameter Store parameter
containing the sensitive data to present to the container. The parameter that you reference can be from a different
Region than the container using it, but must be from within the same account.
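
To illustrate the shape of that container definition, here is a minimal boto3 sketch; the family name, image, role
ARN, and secret ARN are placeholders, and the execution role must be allowed to call
secretsmanager:GetSecretValue (plus kms:Decrypt for the CMK used):

import boto3

ecs = boto3.client("ecs")

# ARNs and image below are placeholders.
ecs.register_task_definition(
    family="erp-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "erp-api",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/erp-api:latest",
        "essential": True,
        "secrets": [{
            "name": "DB_PASSWORD",  # environment variable seen by the container
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:erp/db-credentials",
        }],
    }],
)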

AWS Secrets Manager is a secrets management service that helps you protect access to your applications,
services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials,
API keys, and other secrets throughout their lifecycle. Using Secrets Manager, you can secure and manage
secrets used to access resources in the AWS Cloud, on third-party services, and on-premises.

If you want a single store for configuration and secrets, you can use Parameter Store. If you want a dedicated
secrets store with lifecycle management, use Secrets Manager.

Hence, the correct answer is the option that says: Store the database credentials using the AWS Secrets
Manager. Encrypt the credentials using AWS KMS. Create an IAM Role for your Amazon ECS task
execution role and reference it with your task definition which allows access to both KMS and AWS
Secrets Manager. Within your container definition, specify secrets with the name of the environment
variable to set in the container and the full ARN of the Secrets Manager secret, which contains the
sensitive data, to present to the container.

The option that says: Upload and manage the database credentials using AWS Systems Manager
Parameter Store then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task
execution role and reference it with your task definition, which allows access to both KMS and the
Parameter Store. Within your container definition, specify secrets with the name of the environment
variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter
containing the sensitive data to present to the container is incorrect. Although the use of Systems Manager
Parameter Store in securing sensitive data in ECS is valid, this service doesn't provide dedicated storage with
lifecycle management and key rotation, unlike Secrets Manager.

The option that says: Store the database credentials in the ECS task definition file of the ECS Cluster and
encrypt them with KMS. Store the task definition JSON file in a private Amazon S3 bucket. Ensure that
HTTPS is enabled on the bucket to encrypt the data in-flight. Set up an IAM role to the ECS task
definition script that allows access to the specific S3 bucket and then pass the --cli-input-json parameter
when calling the ECS register-task-definition. Reference the task definition JSON file in the S3 bucket
which contains the database credentials is incorrect. Although the solution may work, it is not recommended
to store sensitive credentials in S3. This entails a lot of overhead and manual configuration steps which can be
simplified by simply using the Secrets Manager or Systems Manager Parameter Store.

The option that says: Store the database credentials using Docker Secrets in the ECS task definition file of
the ECS Cluster where you can centrally manage sensitive data and securely transmit it to only those
containers that need access to it. Ensure that the secrets are encrypted during transit and at rest is
incorrect. Although you can use Docker Secrets to secure the sensitive database credentials, this feature is only
applicable in Docker Swarm. In AWS, the recommended way to secure sensitive data is either through the use of
Secrets Manager or Systems Manager Parameter Store.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store/

Check out this Amazon ECS Cheat Sheet:

https://tutorialsdojo.com/amazon-elastic-container-service-amazon-ecs/

Check out this AWS Secrets Manager Cheat Sheet:

https://tutorialsdojo.com/aws-secrets-manager/

AWS Security Services Overview - Secrets Manager, ACM, Macie:

https://www.youtube.com/watch?v=ogVamzF2Dzk

Question 16: Incorrect

You have an Amazon S3 bucket named team-cebu-devops that is shared by all the DevOps admins on your
team. It is used to store artifact files for several CI/CD pipelines. One of your junior developers accidentally
changed the S3 bucket policy that denied all pipelines from downloading the needed artifact files, causing all
deployments to halt. In the future, you want to be notified of any S3 bucket policy changes so that you can
quickly identify and take action if any problem occurs.

Which of the following options will help you achieve this?

Create a CloudTrail trail that sends logs to a CloudWatch Log group. Create a CloudWatch Metric Filter
for S3 bucket policy events on the log group. Create an Alarm that will send you a notification whenever
this metric threshold is reached.

(Correct)

Create a CloudTrail trail that sends logs to a CloudWatch Log group. Create a CloudWatch Event rule for
S3 bucket policy events on the log group. Create an Alarm based on the event that will send you a
notification whenever an event is matched.

Enable S3 server access logging on your bucket. Create a CloudWatch Metric Filter for bucket policy
events. Create an Alarm for this metric to notify you whenever an event is matched.

Enable S3 server access logging on your bucket. Send the access logs to a CloudWatch log group. Create
a CloudWatch Metric Filter for bucket policy events on the log group. Create an Alarm for this metric to
notify you whenever the threshold is reached.

Explanation

You can configure alarms for several scenarios in CloudTrail events. In this case, you can create an Amazon
CloudWatch alarm that is triggered when an Amazon S3 API call is made to PUT or DELETE bucket policy,
bucket lifecycle, bucket replication, or to PUT a bucket ACL. A CloudTrail trail is required since it will send its
logs to a CloudWatch Log group. To create an alarm, you must first create a metric filter and then configure an
alarm based on the filter.

For this scenario, since all CloudTrail events will be sent to the Log group, you will need to create a Metric Filter
for the specific S3 events that change the bucket policy of your team-cebu-devops bucket. For notification, you
will then create a CloudWatch Alarm for this Metric with a threshold of >=1 and set your email as a notification
recipient. Even a single S3 bucket event on the log group will trigger this alarm and should send you a
notification.
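
A minimal boto3 sketch of the metric filter and alarm is shown below; the log group name and SNS topic ARN
are placeholders, and the filter pattern is limited to the bucket-policy API calls for brevity:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"  # placeholder log group fed by the trail

# Count S3 bucket-policy changes recorded by CloudTrail.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="S3BucketPolicyChanges",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && '
                  '(($.eventName = "PutBucketPolicy") || ($.eventName = "DeleteBucketPolicy")) }',
    metricTransformations=[{
        "metricName": "S3BucketPolicyEventCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm on a single matching event; the SNS topic ARN is a placeholder for
# the subscription that notifies you.
cloudwatch.put_metric_alarm(
    AlarmName="s3-bucket-policy-changed",
    Namespace="CloudTrailMetrics",
    MetricName="S3BucketPolicyEventCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:devops-alerts"],
    TreatMissingData="notBreaching",
)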

Hence, the correct answer is: Create a CloudTrail trail that sends logs to a CloudWatch Log group. Create
a CloudWatch Metric Filter for S3 bucket policy events on the log group. Create an Alarm that will send
you a notification whenever this metric threshold is reached.

The option that says: Enable S3 server access logging on your bucket. Create a CloudWatch Metric Filter
for bucket policy events. Create an Alarm for this metric to notify you whenever an event is matched is
incorrect because S3 Server access logging is primarily used to provide detailed records for the requests that are
made to a bucket. Each access log record provides details about a single access request, such as the requester,
bucket name, request time, request action, response status, and an error code, if relevant. It is more appropriate to
use CloudWatch or CloudTrail to track the S3 bucket policy changes.

The option that says: Enable S3 server access logging on your bucket. Send the access logs to a CloudWatch
log group. Create a CloudWatch Metric Filter for bucket policy events on the log group. Create an Alarm
for this metric to notify you whenever the threshold is reached is incorrect because you can’t directly send
the S3 server access logs to CloudWatch logs. You need to use CloudTrail to send the events to a log group
before you can create a metric and alarm for those events.

The option that says: Create a CloudTrail trail that sends logs to a CloudWatch Log group. Create a
CloudWatch Event rule for S3 bucket policy events on the log group. Create an Alarm based on the event
that will send you a notification whenever an event is matched is incorrect because you can’t use
CloudWatch Events to filter your log groups directly.

References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cloudtrail-S3-source-console.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging.html

Check out this AWS CloudTrail Cheat Sheet:

https://tutorialsdojo.com/aws-cloudtrail/

Question 17: Incorrect

A technology consulting company has an Oracle Real Application Clusters (RAC) database on their on-premises
network which they want to migrate to AWS Cloud. Their Chief Technology Officer instructed the DevOps
team to automate the patch management process of the operating system in which their database will run. They
are also mandated to set up scheduled backups to comply with the company's disaster recovery plan.

What should the DevOps team do to satisfy the requirement for this scenario with the LEAST amount of effort?

Migrate the Oracle RAC database to a large EBS-backed Amazon EC2 instance then install the SSM
agent. Use the AWS Systems Manager Patch Manager to automate the patch management process. Set up
the Amazon Data Lifecycle Manager service to automate the creation of Amazon EBS snapshots from the
EBS volumes of the EC2 instance.

(Correct)

Migrate the database that is hosted on-premises to Amazon RDS which provides a multi-AZ failover
feature for your Oracle RAC cluster. The RPO and RTO will be reduced in the event of system failure
since Amazon RDS offers features such as patch management and maintenance of the underlying host.

Migrate the on-premises database to Amazon Aurora. Enable automated backups for your Aurora RAC
cluster. With Amazon Aurora, patching is automatically handled during the system maintenance window.

Migrate the Oracle RAC database to a large EBS-backed Amazon EC2 instance. Launch an AWS Lambda
function that will call the CreateSnapshot EC2 API to automate the creation of Amazon EBS snapshots of
the database. Integrate CloudWatch Events and Lambda in order to run the function on a regular basis. Set
up the CodeDeploy and CodePipeline services to automate the patch management process of the database.

Explanation

AWS Systems Manager Patch Manager automates the process of patching managed instances with security-
related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch
fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system
type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL),
SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances
to see only a report of missing patches, or you can scan and automatically install all missing patches.
Amazon Data Lifecycle Manager (DLM) for EBS Snapshots provides a simple, automated way to back up
data stored on Amazon EBS volumes. You can define backup and retention schedules for EBS snapshots by
creating lifecycle policies based on tags. With this feature, you no longer have to rely on custom scripts to create
and manage your backups.
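
As a rough sketch of the backup half of the answer, the boto3 call below creates a Data Lifecycle Manager policy
that snapshots every EBS volume carrying a given tag once a day; the role ARN, tag values, schedule, and
retention count are placeholder assumptions:

import boto3

dlm = boto3.client("dlm")

# Role ARN and tag values are placeholders; the role needs the Data Lifecycle
# Manager service permissions (e.g., the AWS-provided default role).
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots of the Oracle RAC EBS volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "oracle-rac"}],
        "Schedules": [{
            "Name": "daily-0300-utc",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 14},
            "CopyTags": True,
        }],
    },
)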

Oracle RAC is supported via the deployment using Amazon EC2 only since Amazon RDS and Aurora do not
support it. Amazon RDS does not support certain features in Oracle such as Multitenant Database, Real
Application Clusters (RAC), Unified Auditing, Database Vault and many more. You can use AWS Systems
Manager Patch Manager to automate the process of patching managed instances with security-related updates.

Hence, the correct answer is: Migrate the Oracle RAC database to a large EBS-backed Amazon EC2 instance
then install the SSM agent. Use the AWS Systems Manager Patch Manager to automate the patch
management process. Set up the Amazon Data Lifecycle Manager service to automate the creation of Amazon
EBS snapshots from the EBS volumes of the EC2 instance.

The option that says: Migrate the database that is hosted on-premises to Amazon RDS which provides a multi-
AZ failover feature for your Oracle RAC cluster. The RPO and RTO will be reduced in the event of system
failure since Amazon RDS offers features such as patch management and maintenance of the underlying
host is incorrect because Amazon RDS doesn't support Oracle RAC.

The option that says: Migrate the on-premises database to Amazon Aurora. Enable automated backups for
your Aurora RAC cluster. With Amazon Aurora, patching is automatically handled during the system
maintenance window is incorrect because, just like Amazon RDS, Amazon Aurora does not support Oracle RAC either.

The option that says: Migrate the Oracle RAC database to a large EBS-backed Amazon EC2 instance. Launch
an AWS Lambda function that will call the CreateSnapshot EC2 API to automate the creation of Amazon
EBS snapshots of the database. Integrate CloudWatch Events and Lambda in order to run the function on a
regular basis. Set up the CodeDeploy and CodePipeline services to automate the patch management process
of the database is incorrect because CodeDeploy and CodePipeline are CI/CD services and are not suitable for
patch management. You should use AWS Systems Manager Patch Manager instead. In addition, the Amazon
Data Lifecycle Manager service is the recommended way to automate the creation of Amazon EBS snapshots
and not a combination of Lambda and CloudWatch Events.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

https://aws.amazon.com/rds/oracle/faqs/

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 18: Incorrect

A company has been receiving a high number of bad reviews from its customers lately because its website takes a
long time to load. Upon investigation, significant latency is experienced whenever a user logs in to the site. The
DevOps Engineer configured CloudFront to serve the static content to its users around the globe, yet
the problem still persists. There are times when their users get HTTP 504 errors in the login process. The
engineers were tasked to fix this problem immediately to prevent users from unsubscribing on their website
which will result in financial loss to the company.

Which combination of options below should the DevOps Engineer use together to set up a solution for this
scenario with MINIMAL costs? (Select TWO.)

Create an origin group with two origins to set up an origin failover in Amazon CloudFront. Specify one as
the primary origin and the other as the second origin. This configuration will cause the CloudFront service
to automatically switch to the second origin in the event that the primary origin returns specific HTTP
status code failure responses.

(Correct)

Deploy your application stack to multiple and geographically disperse VPCs on various AWS regions. Set
up a Transit VPC to easily connect all your VPCs and resources. Using the AWS Serverless Application
Model (SAM) service, deploy several AWS Lambda functions in each region to improve the overall
application performance.

Use Lambda@Edge to customize the content that the CloudFront distribution delivers to your users across
the globe. This will allow the Lambda functions to execute the authentication process in a specific AWS
location that is proximate to the user.

(Correct)

In CloudFront, add a Cache-Control max-age directive to your objects. Improve the cache hit ratio by
setting the longest practical value for max-age of your CloudFront distribution.

Replicate your application stack to multiple AWS regions to serve your users around the world. Set up a
Route 53 record with Latency routing policy that will automatically route traffic to the region with the
best latency to the user.

Explanation

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such
as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network
of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user
is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the
best possible performance.

Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the
functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without
provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses
at the following points:

- After CloudFront receives a request from a viewer (viewer request)

- Before CloudFront forwards the request to the origin (origin request)

- After CloudFront receives the response from the origin (origin response)

- Before CloudFront forwards the response to the viewer (viewer response)

In the given scenario, you can use Lambda@Edge to allow your Lambda functions to customize the content that
CloudFront delivers and to execute the authentication process in AWS locations closer to the users. In addition,
you can set up an origin failover by creating an origin group with two origins with one as the primary origin and
the other as the second origin, which CloudFront automatically switches to when the primary origin fails. This
will alleviate the occasional HTTP 504 errors that users are experiencing.
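
To make the Lambda@Edge idea concrete, below is a minimal sketch of a viewer-request handler written in Python; the cookie name and rejection response are hypothetical placeholders rather than the company's actual login flow.

    # Hypothetical Lambda@Edge viewer-request handler (deployed in us-east-1 and
    # associated with the CloudFront distribution's viewer-request event).
    def handler(event, context):
        request = event['Records'][0]['cf']['request']
        headers = request.get('headers', {})

        # Placeholder check: require a session cookie before forwarding to the origin.
        cookies = headers.get('cookie', [])
        has_session = any('session-id=' in c.get('value', '') for c in cookies)

        if not has_session:
            # Short-circuit at the edge with a generated response.
            return {
                'status': '401',
                'statusDescription': 'Unauthorized',
                'headers': {
                    'content-type': [{'key': 'Content-Type', 'value': 'text/plain'}]
                },
                'body': 'Please log in first.',
            }

        # Authenticated requests are passed through to the origin unchanged.
        return request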

Hence, the correct answers in this scenario are:

- Use Lambda@Edge to customize the content that the CloudFront distribution delivers to your users across
the globe. This will allow the Lambda functions to execute the authentication process in a specific AWS
location that is proximate to the user.

- Create an origin group with two origins to set up an origin failover in Amazon CloudFront. Specify one as
the primary origin and the other as the second origin. This configuration will cause the CloudFront service to
automatically switch to the second origin in the event that the primary origin returns specific HTTP status
code failure responses.

The option that says: Replicate your application stack to multiple AWS regions to serve your users around the
world. Set up a Route 53 record with Latency routing policy that will automatically route traffic to the region
with the best latency to the user is incorrect because although this may resolve the performance issue, this
solution entails a significant implementation cost since you have to deploy your application to multiple AWS
regions. Remember that the scenario asks for a solution that will improve the performance of the application
with minimal cost.

The option that says: Deploy your application stack to multiple and geographically disperse
VPCs on various AWS regions. Set up a Transit VPC to easily connect all your VPCs and resources. Using
the AWS Serverless Application Model (SAM) service, deploy several AWS Lambda functions in each region
to improve the overall application performance is incorrect because although setting up multiple VPCs across
various regions which are connected with a transit VPC is valid, this solution still entails higher setup and
maintenance costs. A more cost-effective option would be to use Lambda@Edge instead.

The option that says: In CloudFront, add a Cache-Control max-age directive to your objects. Improve the
cache hit ratio by setting the longest practical value for max-age of your CloudFront distribution is incorrect
because improving the cache hit ratio for the CloudFront distribution is irrelevant in this scenario. You can
improve your cache performance by increasing the proportion of your viewer requests that are served from
CloudFront edge caches instead of going to your origin servers for content. However, take note that the problem
in the scenario is the slow authentication process of your global users and not just the caching of the static
objects.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html

https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html

Check out these Amazon CloudFront and AWS Lambda Cheat Sheets:

https://tutorialsdojo.com/amazon-cloudfront/
https://tutorialsdojo.com/aws-lambda/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 19: Incorrect

A leading technology company is planning to build a document management portal that will utilize an existing
Amazon DynamoDB table. The portal will be launched in Kubernetes and managed via AWS App Runner for
easier deployment. The table has an attribute of DocumentName that acts as the partition key and another attribute
called Category as its sort key. A DevOps Engineer was instructed to develop a feature that queries
the DocumentName attribute yet uses a different sort key other than the existing one. To fetch the latest data, strong
read consistency must be used in the database tier.

Which of the following solutions below should the engineer implement?

Add a Local Secondary Index that uses the DocumentName attribute and a different sort key.

Add a Global Secondary Index which uses the DocumentName attribute and use an alternative sort key as
projected attributes.

Add a Global Secondary Index that uses the DocumentName attribute and a different sort key.

Set up a new DynamoDB table with a Local Secondary Index that uses the DocumentName attribute with a
different sort key. Migrate the data from the existing table to the new table.

(Correct)

Explanation

A local secondary index maintains an alternate sort key for a given partition key value. A local secondary index
also contains a copy of some or all of the attributes from its base table; you specify which attributes are projected
into the local secondary index when you create the table. The data in a local secondary index is organized by the
same partition key as the base table, but with a different sort key. This lets you access data items efficiently
across this different dimension. For greater query or scan flexibility, you can create up to five local secondary
indexes per table.

Suppose that an application needs to find all of the threads that have been posted within the last three months.
Without a local secondary index, the application would have to Scan the entire Thread table and discard any
posts that were not within the specified time frame. With a local secondary index, a Query operation could
use LastPostDateTime as a sort key and find the data quickly.

To create a Local Secondary Index, make sure that the partition key of the index is the same as the partition key of
the base table, and select an alternative sort key that is different from the sort key of the table, as in the sketch
below.
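
Below is a minimal boto3 sketch of creating such a table, assuming a hypothetical alternative sort key attribute named UploadedAt; the table and index names are placeholders as well.

    import boto3

    dynamodb = boto3.client('dynamodb')

    # The new table keeps DocumentName/Category as its primary key and adds a local
    # secondary index with an alternative sort key (UploadedAt is an assumed attribute).
    dynamodb.create_table(
        TableName='Documents-v2',
        AttributeDefinitions=[
            {'AttributeName': 'DocumentName', 'AttributeType': 'S'},
            {'AttributeName': 'Category', 'AttributeType': 'S'},
            {'AttributeName': 'UploadedAt', 'AttributeType': 'S'},
        ],
        KeySchema=[
            {'AttributeName': 'DocumentName', 'KeyType': 'HASH'},
            {'AttributeName': 'Category', 'KeyType': 'RANGE'},
        ],
        LocalSecondaryIndexes=[{
            'IndexName': 'DocumentName-UploadedAt-index',
            'KeySchema': [
                {'AttributeName': 'DocumentName', 'KeyType': 'HASH'},
                {'AttributeName': 'UploadedAt', 'KeyType': 'RANGE'},
            ],
            'Projection': {'ProjectionType': 'ALL'},
        }],
        BillingMode='PAY_PER_REQUEST',
    )
    dynamodb.get_waiter('table_exists').wait(TableName='Documents-v2')

    # Queries against the LSI can request strongly consistent reads.
    dynamodb.query(
        TableName='Documents-v2',
        IndexName='DocumentName-UploadedAt-index',
        KeyConditionExpression='DocumentName = :d',
        ExpressionAttributeValues={':d': {'S': 'patent-application-001'}},
        ConsistentRead=True,
    )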

When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data,
reflecting the updates from all prior write operations that were successful. A strongly consistent read might not
be available if there is a network delay or outage. Strongly consistent reads are not supported on global
secondary indexes.

The primary key of a local secondary index must be composite (partition key and sort key). A local secondary
index lets you query over a single partition, as specified by the partition key value in the query.

Local secondary indexes are created at the same time that you create a table. You cannot add a local secondary
index to an existing table, nor can you delete any local secondary indexes that currently exist.

Hence, the correct answer is: Set up a new DynamoDB table with a Local Secondary Index that uses
the DocumentName attribute with a different sort key. Migrate the data from the existing table to the new
table.

The option that says: Add a Global Secondary Index that uses the DocumentName attribute and a different
sort key is incorrect. Although this would make it possible to query the data without a full table scan, a global
secondary index does not support strongly consistent reads, which are required in the scenario.

The option that says: Add a Global Secondary Index which uses the DocumentName attribute and use an
alternative sort key as projected attributes is incorrect because using a local secondary index is a more
appropriate solution to be used in this scenario just as explained above. Moreover, projected attributes are just
attributes stored in the index that can be returned by queries and scans performed on the index, hence, these are
not useful in satisfying the provided requirement.

The option that says: Add a Local Secondary Index that uses the DocumentName attribute and a different sort
key is incorrect. Although it is using the correct type of index, you cannot add a local secondary index to an
already existing table.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 20: Correct

You are hosting several repositories of your application code on AWS CodeCommit. You are the only developer
at the moment and have full access to your AWS Account. However, a new team will be joining your group to
help accelerate the development of a project hosted on one of your repositories. You want each member to have
read/write access on that repository so that they can freely commit and push code using their IAM CodeCommit
Git credentials. For security purposes, the members should not be allowed to delete CodeCommit repositories.

Which of the following steps will give them the proper access needed?
Create a new IAM group and add the new team members in this group. Attach
the AWSCodeCommitFullAccess policy to this group to allow access to all of the functionality of
CodeCommit and repository-related resources.

Create a new IAM group and add the new team members to this group. Attach
the AWSCodeCommitPowerUser policy to this group to allow user access to all of the functionality of
CodeCommit and repository-related resources.

(Correct)

Create a new IAM group and add the new team members in this group. Attach
the AmazonCodeGuruReviewerPowerUser policy to this group to allow access to all of the functionality of
CodeCommit and repository-related resources. This will also enable the Amazon CodeGuru Reviewer to
automatically analyze the pull requests that developers make.

Create a new IAM group and add the new team members in this group. Attach
the AWSCodeCommitReadOnly policy to this group to allow access to all of the functionality of CodeCommit
and repository-related resources.

Explanation

You can create a policy that denies users permissions to actions you specify on one or more branches.
Alternatively, you can create a policy that allows actions on one or more branches that they might not otherwise
have in other branches of a repository. You can use these policies with the appropriate managed (predefined)
policies.

For example, you can create a Deny policy that denies users the ability to make changes to a branch named
master, including deleting that branch, in a repository named TutorialsDojoManila. You can use this policy with
the AWSCodeCommitPowerUser managed policy. Users with these two policies applied would be able to
create and delete branches, create pull requests, and all other actions as allowed
by AWSCodeCommitPowerUser, but they would not be able to push changes to the branch named master, add
or edit a file in the master branch in the CodeCommit console, or merge branches or a pull request into
the master branch. Because Deny is applied to GitPush, you must include a Null statement in the policy, to allow
initial GitPush calls to be analyzed for validity when users make pushes from their local repos.

There are various AWS-managed policies that you can use for providing CodeCommit access. They are:

AWSCodeCommitFullAccess – Grants full access to CodeCommit. You should apply this policy only to
administrative-level users to whom you want to grant full control over CodeCommit repositories and related
resources in your AWS account, including the ability to delete repositories.

AWSCodeCommitPowerUser – Allows users access to all of the functionality of CodeCommit and repository-
related resources, except it does not allow them to delete CodeCommit repositories or create or delete repository-
related resources in other AWS services, such as Amazon CloudWatch Events. It is recommended to apply this
policy to most users.

AWSCodeCommitReadOnly – Grants read-only access to CodeCommit and repository-related resources in
other AWS services, as well as the ability to create and manage their own CodeCommit-related resources (such
as Git credentials and SSH keys for their IAM user to use when accessing repositories). You should apply this
policy to users to whom you want to grant the ability to read the contents of a repository, but not make any
changes to its contents.

Remember that you can't modify these AWS-managed policies. In order to customize the permissions, you can
add a Deny rule to the IAM Role in order to block certain capabilities included in these policies.
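
A minimal boto3 sketch of the recommended setup is shown below; the group and user names are placeholders for illustration.

    import boto3

    iam = boto3.client('iam')

    # Create the group for the incoming developers and attach the managed policy.
    iam.create_group(GroupName='project-developers')
    iam.attach_group_policy(
        GroupName='project-developers',
        PolicyArn='arn:aws:iam::aws:policy/AWSCodeCommitPowerUser',
    )

    # Add each new team member (user names are placeholders).
    for user in ['dev-member-1', 'dev-member-2']:
        iam.add_user_to_group(GroupName='project-developers', UserName=user)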

Hence, the correct answer is: Create a new IAM group and add the new team members in this group. Attach
the AWSCodeCommitPowerUser policy to this group to allow access to all of the functionality of CodeCommit and
repository-related resources.

The option that says: Create a new IAM group and add the new team members in this group. Attach
the AWSCodeCommitReadOnly policy to this group to allow access to all of the functionality of CodeCommit and
repository-related resources is incorrect because you have to use the AWSCodeCommitPowerUser managed policy
instead. The AWSCodeCommitReadOnly policy is not enough since this is primarily used to provide read access only.

The option that says: Create a new IAM group and add the new team members in this group. Attach
the AWSCodeCommitFullAccess policy to this group to allow access to all of the functionality of CodeCommit
and repository-related resources is incorrect because the members will be able to delete CodeCommit
repositories using this policy.

The option that says: Create a new IAM group and add the new team members in this group. Attach
the AmazonCodeGuruReviewerPowerUser managed policy to this group to allow access to all of the functionality
of CodeCommit and repository-related resources. This will also enable the Amazon CodeGuru Reviewer to
automatically analyze the pull requests that developers make is incorrect because there is no such thing
as AmazonCodeGuruReviewerPowerUser managed policy. You have to use the AWSCodeCommitPowerUser policy
instead.

References:

https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-permissions-reference.html

https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-iam-identity-based-access-
control.html

https://docs.aws.amazon.com/codecommit/latest/userguide/pull-requests.html

Question 21: Correct

A global cloud-based payment processing system hosted in AWS accepts credit card payments as well as
cryptocurrencies such as Bitcoin. It uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since the
company accepts credit card information from its users, it is required to be compliant with the Payment Card
Industry Data Security Standard (PCI DSS). A recent third-party audit found that the credit card numbers are not
properly encrypted and hence, the system failed the PCI DSS compliance test. You were hired by the company to
solve this issue so they can release the product in the market
as soon as possible. In addition, you also have to improve performance by increasing the proportion of your
viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.

Which of the following is the BEST option to protect and encrypt the sensitive credit card information of the
users and improve the cache hit ratio of your CloudFront distribution?
Use a custom SSL in the CloudFront distribution. Configure your origin to add User-
Agent and Host headers to your objects to increase your cache hit ratio.

Modify the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control
max-age directive to your objects, and specify the longest practical value for max-age to increase your
cache hit ratio.

Set up an origin access identity (OAI) and add it to the CloudFront distribution. Configure your origin to
add User-Agent and Host headers to your objects to increase your cache hit ratio.

Secure end-to-end connections to the origin servers in your CloudFront distribution by using HTTPS and
field-level encryption. Set up your origin to add a Cache-Control max-age directive to your objects. Apply
the longest practical value for max-age to increase your cache hit ratio.

(Correct)

Explanation

You can already configure CloudFront to help enforce secure end-to-end connections to origin servers by using
HTTPS. Field-level encryption adds an additional layer of security along with HTTPS that lets you protect
specific data throughout system processing so that only certain applications can see it. Field-level encryption
allows you to securely upload user-submitted sensitive information to your web servers. The sensitive
information provided by your clients is encrypted at the edge closer to the user and remains encrypted
throughout your entire application stack, ensuring that only applications that need the data—and have the
credentials to decrypt it—are able to do so.

To use field-level encryption, you configure your CloudFront distribution to specify the set of fields in POST
requests that you want to be encrypted, and the public key to use to encrypt them. You can encrypt up to 10 data
fields in a request.

You can improve performance by increasing the proportion of your viewer requests that are served from
CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit
ratio for your distribution. To increase your cache hit ratio, you can configure your origin to add a Cache-Control
max-age directive to your objects, and specify the longest practical value for max-age. The shorter the cache
duration, the more frequently CloudFront forwards another request to your origin to determine whether the
object has changed and, if so, to get the latest version.
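
For the caching portion, one hedged example: if the origin content lives in Amazon S3, the Cache-Control max-age directive can be set on the objects themselves at upload time. The bucket, key, and max-age value below are placeholder assumptions.

    import boto3

    s3 = boto3.client('s3')

    # Upload (or re-upload) a static asset with a long Cache-Control max-age so
    # CloudFront edge caches can serve it without revalidating against the origin.
    s3.put_object(
        Bucket='example-static-assets-bucket',    # placeholder bucket name
        Key='static/css/site.css',
        Body=open('site.css', 'rb'),
        ContentType='text/css',
        CacheControl='public, max-age=31536000',  # one year; use the longest practical value
    )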

Hence, the correct answer is: Secure end-to-end connections to the origin servers in your CloudFront
distribution by using HTTPS and field-level encryption. Set up your origin to add a Cache-Control max-
age directive to your objects. Apply the longest practical value for max-age to increase your cache hit ratio.

The option that says: Use a custom SSL in the CloudFront distribution. Configure your origin to add User-
Agent and Host headers to your objects to increase your cache hit ratio is incorrect because although it
provides secure end-to-end connections to origin servers, it is better to add field-level encryption to protect the
credit card information.

The option that says: Modify the CloudFront distribution to use Signed URLs. Configure your origin to add
a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to
increase your cache hit ratio is incorrect because a Signed URL provides a way to distribute private content but
it doesn't encrypt the sensitive credit card information.

The option that says: Set up an origin access identity (OAI) and add it to the CloudFront distribution.
Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is
incorrect because an OAI is mainly used to restrict access to objects in an S3 bucket; it does not provide
encryption for specific fields.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-
encryption-setting-up

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cache-hit-ratio.html

Check out this Amazon CloudFront Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudfront/

Question 22: Correct

A company has a CI/CD pipeline built in AWS CodePipeline for deployment updates. Part of the deployment
process is to perform database schema updates and is performed via AWS CodeBuild. A recent security audit
has discovered that the AWS CodeBuild is downloading the database scripts via Amazon S3 in an
unauthenticated manner. The security team requires a solution that will enhance the security of the company's
CI/CD pipeline.

What action should the DevOps Engineer take to address the issue in the MOST secure way?

Secure the S3 bucket by removing unauthenticated access through a bucket policy. Update the service role
of the CodeBuild project to grant access to Amazon S3. Download the database scripts using the AWS
CLI.

(Correct)

Secure the S3 bucket by removing unauthenticated access through a bucket policy. Add an IAM access
key and a secret access key in the CodeBuild as an environment variable to grant access to Amazon S3.
Download the database scripts using the AWS CLI.

Deny unauthenticated access from the S3 bucket by using Amazon GuardDuty. Update the service role of
the CodeBuild project to grant access to Amazon S3. Utilize the AWS CLI to fetch the database scripts.

Deny unauthenticated access from the S3 bucket through an IAM Policy. Add an IAM access key and a
secret access key in the CodeBuild as an environment variable to grant access to Amazon S3. Utilize the
AWS CLI to fetch the database scripts.

Explanation

Service role: A service role is an AWS Identity and Access Management (IAM) that grants permissions to an
AWS service so that the service can access AWS resources. You need an AWS CodeBuild service role so that
CodeBuild can interact with dependent AWS services on your behalf. You can create a CodeBuild service role
by using the CodeBuild or AWS CodePipeline consoles.

In this scenario, the S3 bucket will be safeguarded from unauthorized access by utilizing a bucket policy.
Moreover, CodeBuild leverages the service role for executing S3 actions on your behalf.
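
A minimal boto3 sketch of granting the CodeBuild service role read access to the scripts bucket is shown below; the role name and bucket name are placeholder assumptions.

    import json
    import boto3

    iam = boto3.client('iam')

    # Inline policy that lets the CodeBuild service role download the database scripts.
    policy_document = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:GetObject'],
            'Resource': 'arn:aws:s3:::example-db-scripts-bucket/*',  # placeholder bucket
        }],
    }

    iam.put_role_policy(
        RoleName='codebuild-db-migration-service-role',  # placeholder role name
        PolicyName='AllowDatabaseScriptDownload',
        PolicyDocument=json.dumps(policy_document),
    )

The buildspec can then download the scripts with the AWS CLI (for example, aws s3 cp), which picks up the service role's temporary credentials automatically instead of relying on long-lived access keys.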

Hence, the correct answer is: Secure the S3 bucket by removing unauthenticated access through a bucket
policy. Update the service role of the CodeBuild project to grant access to Amazon S3. Download the
database scripts using the AWS CLI.

The option that says: Secure the S3 bucket by removing unauthenticated access through a bucket policy.
Add an IAM access key and a secret access key in the CodeBuild as an environment variable to grant
access to Amazon S3. Download the database scripts using the AWS CLI is incorrect. While the use of IAM
access key and secret access key can provide S3 access to CodeBuild, it is not the most secure approach to
address the issue.

The option that says: Deny unauthenticated access from the S3 bucket through an IAM Policy. Add an
IAM access key and a secret access key in the CodeBuild as an environment variable to grant access to
Amazon S3. Utilize the AWS CLI to fetch the database scripts is incorrect because an IAM policy alone
cannot secure an S3 bucket from unauthorized access. A bucket policy must be used instead. Furthermore, this
option uses IAM access key and secret access key, which is not the most secure way.

The option that says: Deny unauthenticated access from the S3 bucket by using Amazon GuardDuty.
Update the service role of the CodeBuild project to grant access to Amazon S3. Utilize the AWS CLI to
fetch the database scripts is incorrect because Amazon GuardDuty is a threat detection service that
continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security
findings for visibility and remediation. It is not used for removing unauthenticated access to the S3 bucket.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/service-role.html

https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html#setting-up-service-role

https://aws.amazon.com/codebuild/

Question 23: Correct

A multinational insurance firm has recently consolidated its multiple AWS accounts using AWS Organizations
with several Organizational units (OUs) that group each department of the firm. Their IT division consists of two
teams: the DevOps team and the Release & Deployment team. The first one is responsible for protecting its
cloud infrastructure and ensuring that all of its AWS resources are compliant, while the latter is responsible for
deploying new applications to AWS Cloud. The DevOps team has been instructed to set up a system that
regularly checks if all of the running EC2 instances are using an approved AMI. However, the solution should
not stop the Release & Deployment team from deploying an EC2 instance running on a non-approved AMI for
their release process. The DevOps team must build a notification system that sends the compliance state of the
AWS resources to improve and maintain the security of their cloud resources.

Which of the following is the MOST suitable solution that the DevOps team should implement?
Verify whether the running EC2 instances in your VPCs are using approved AMIs using Trusted Advisor
checks. Set up a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will check all of
the AMIs being used by your Amazon EC2 instances and that will send a notification to both teams if
there is a running instance which uses an unapproved AMI.

Set up an AWS Config Managed Rule and specify a list of approved AMI IDs. Modify the rule to check
whether the running Amazon EC2 instances are using specified AMIs. Configure AWS Config to stream
configuration changes and notifications to an Amazon SNS topic which will send a notification to both
teams for non-compliant instances.

(Correct)

Assign a Service Control Policy and an IAM policy that restricts the AWS accounts and the development
team from launching an EC2 instance using an unapproved AMI. Set up a CloudWatch alarm that will
automatically notify the DevOps team if there are non-compliant EC2 instances running in their VPCs.

Automatically check all of the AMIs that are being used by your EC2 instances using the Amazon
Inspector service. Use an SNS topic that will send a notification to both the DevOps and Release &
Deployment teams if there is a non-compliant EC2 instance running in their VPCs.

Explanation

When you run your applications on AWS, you usually use AWS resources, which you must create and manage
collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS
resources. AWS Config is designed to help you oversee your application resources.

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This
includes how the resources are related to one another and how they were configured in the past so that you can
see how the configurations and relationships change over time. With AWS Config, you can do the following:

- Evaluate your AWS resource configurations for desired settings.

- Get a snapshot of the current configurations of the supported resources that are associated with your AWS
account.

- Retrieve configurations of one or more resources that exist in your account.

- Retrieve historical configurations of one or more resources.

- Receive a notification whenever a resource is created, modified, or deleted.

- View relationships between resources. For example, you might want to find all resources that use a particular
security group.

AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to
evaluate whether your AWS resources comply with common best practices. In this scenario, you can use
the approved-amis-by-id AWS managed rule, which checks whether running instances are using specified AMIs.
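
A boto3 sketch of enabling this managed rule follows, assuming AWS Config is already recording resources in the account; the rule name and AMI IDs are placeholders.

    import json
    import boto3

    config = boto3.client('config')

    # Enable the approved-amis-by-id managed rule against EC2 instances.
    config.put_config_rule(
        ConfigRule={
            'ConfigRuleName': 'ec2-approved-amis-check',
            'Source': {
                'Owner': 'AWS',
                'SourceIdentifier': 'APPROVED_AMIS_BY_ID',
            },
            # The rule takes a comma-separated list of approved AMI IDs (placeholder values).
            'InputParameters': json.dumps({'amiIds': 'ami-0abc1234def567890,ami-0123456789abcdef0'}),
            'Scope': {'ComplianceResourceTypes': ['AWS::EC2::Instance']},
        }
    )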

Hence, the correct answer is: Set up an AWS Config Managed Rule and specify a list of approved AMI IDs.
Modify the rule to check whether the running Amazon EC2 instances are using specified AMIs. Configure
AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a
notification to both teams for non-compliant instances.

The option that says: Assign a Service Control Policy and an IAM policy that restricts the AWS accounts
and the development team from launching an EC2 instance using an unapproved AMI. Set up a
CloudWatch alarm that will automatically notify the DevOps team if there are non-compliant EC2
instances running in their VPCs is incorrect because setting up an SCP and IAM Policy will totally restrict the
Release & Deployment team from launching EC2 instances with unapproved AMIs. The scenario clearly says
that the solution should not have this kind of restriction.

The option that says: Automatically check all of the AMIs that are being used by your EC2 instances using
the Amazon Inspector service. Use an SNS topic that will send a notification to both the DevOps and
Release & Deployment teams if there is a non-compliant EC2 instance running in their VPCs is incorrect
because the Amazon Inspector service is just an automated security assessment service that helps improve the
security and compliance of applications deployed on AWS. It does not have the capability to detect EC2
instances that are using unapproved AMIs, unlike AWS Config.

The option that says: Verify whether the running EC2 instances in your VPCs are using approved AMIs using
Trusted Advisor checks. Set up a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will
check all of the AMIs being used by your Amazon EC2 instances and that will send a notification to both teams
if there is a running instance which uses an unapproved AMI is incorrect because AWS Trusted Advisor is
primarily used to check if your cloud infrastructure is in compliance with the best practices and
recommendations across five categories: cost optimization, security, fault tolerance, performance, and service
limits. Its security checks for EC2 do not cover the checking of individual AMIs that are being used by your
EC2 instances.

References:

https://docs.aws.amazon.com/config/latest/developerguide/approved-amis-by-id.html

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 24: Correct

A multinational company is using multiple AWS accounts for its global cloud architecture. The AWS resources
in their production account are shared among various business units of the company. A single business unit may
have one or more AWS accounts that have resources in the production account. Recently, there were a lot of
incidents in which the developers from a specific business unit accidentally terminated the Amazon EC2
instances owned by another business unit. A DevOps Engineer was tasked to come up with a solution to only
allow the specific business unit that owns the EC2 instances and other AWS resources to terminate its own
resources.

How should the Engineer implement a multi-account strategy to satisfy this requirement?
Centrally manage all of your accounts using AWS Organizations. Group your accounts, which belong to a
specific business unit, to individual Organization Units (OU). Set up an IAM Role in the production
account which has a policy that allows access to the EC2 instances including resource-level permission to
terminate the instances owned by a particular business unit. Associate the cross-account access and the
IAM policy to every member accounts of the OU.

(Correct)

Centrally manage all of your accounts using a multi-account aggregator in AWS Config and AWS
Control Tower. Configure AWS Config to allow access to certain Amazon EC2 instances in production
per business unit. Launch the Customizations for AWS Control Tower (CfCT) in the different account
where your AWS Control Tower landing zone is deployed. Configure the CfCT to only allow a specific
business unit that owns the EC2 instances and other AWS resources to terminate their own resources.

Centrally manage all of your accounts using AWS Organizations. Group your accounts, which belong to a
specific business unit, to the individual Organization Unit (OU). Set up an IAM Role in the production
account for each business unit which has a policy that allows access to the Amazon EC2 instances
including resource-level permission to terminate the instances that it owns. Use an
AWSServiceRoleForOrganizations service-linked role to the individual member accounts of the OU to
enable trusted access.

Centrally manage all of your accounts using AWS Service Catalog. Group your accounts, which belong to
a specific business unit, to the individual Organization Unit (OU). Set up a Service Control Policy in the
production account which has a policy that allows access to the EC2 instances including resource-level
permission to terminate the instances owned by a particular business unit. Associate the cross-account
access and the SCP to the OUs, which will then be automatically inherited by its member accounts.

Explanation

AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts
into an organization that you create and centrally manage. AWS Organizations includes account management
and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs
of your business. As an administrator of an organization, you can create accounts in your organization and invite
existing accounts to join the organization.

You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly
simplifies the management of your accounts. For example, you can attach a policy-based control to an OU, and
all accounts within the OU automatically inherit the policy. You can create multiple OUs within a single
organization, and you can create OUs within other OUs. Each OU can contain multiple accounts, and you can
move accounts from one OU to another. However, OU names must be unique within a parent OU or root.

Resource-level permissions refer to the ability to specify which resources users are allowed to perform actions
on. Amazon EC2 has partial support for resource-level permissions. This means that for certain Amazon EC2
actions, you can control when users are allowed to use those actions based on conditions that have to be fulfilled
or specific resources that users are allowed to use. For example, you can grant users permissions to launch
instances, but only of a specific type and only using a specific AMI.
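
As a sketch of what such a resource-level permission could look like, the policy below allows terminating only instances that carry a hypothetical BusinessUnit tag matching the unit's own value; the role name and tag are assumptions for illustration, not the company's actual naming scheme.

    import json
    import boto3

    iam = boto3.client('iam')

    # Allow terminating only the instances tagged for this business unit.
    policy_document = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': 'ec2:TerminateInstances',
            'Resource': 'arn:aws:ec2:*:*:instance/*',
            'Condition': {
                'StringEquals': {'ec2:ResourceTag/BusinessUnit': 'payments'}  # placeholder value
            },
        }],
    }

    iam.put_role_policy(
        RoleName='payments-bu-ec2-access',  # cross-account role assumed by the OU's member accounts
        PolicyName='TerminateOwnInstancesOnly',
        PolicyDocument=json.dumps(policy_document),
    )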

Hence, the correct answer is: Centrally manage all of your accounts using AWS Organizations. Group your
accounts, which belong to a specific business unit, to individual Organization Units (OU). Set up an IAM
Role in the production account which has a policy that allows access to the EC2 instances including
resource-level permission to terminate the instances owned by a particular business unit. Associate the
cross-account access and the IAM policy to every member accounts of the OU.

The option that says: Centrally manage all of your accounts using AWS Organizations. Group your
accounts, which belong to a specific business unit, to the individual Organization Unit (OU). Set up an
IAM Role in the production account for each business unit which has a policy that allows access to the
Amazon EC2 instances including resource-level permission to terminate the instances that it owns. Use an
AWSServiceRoleForOrganizations service-linked role to the individual member accounts of the OU to
enable trusted access is incorrect. The AWSServiceRoleForOrganizations service-linked role is primarily used
to only allow AWS Organizations to create service-linked roles for other AWS services. This service-linked role
is present in all organizations and not just in a specific OU.

The option that says: Centrally manage all of your accounts using a multi-account aggregator in AWS
Config and AWS Control Tower. Configure AWS Config to allow access to certain Amazon EC2 instances
in production per business unit. Launch the Customizations for AWS Control Tower (CfCT) in the
different account where your AWS Control Tower landing zone is deployed. Configure the CfCT to only
allow a specific business unit that owns the EC2 instances and other AWS resources to terminate their
own resources is incorrect. Although the use of the AWS Control Tower is right, the aggregator feature is
simply an AWS Config resource type that collects AWS Config configuration and compliance data from
multiple AWS accounts. In addition, you have to launch the Customizations for AWS Control Tower
(CfCT) on the same AWS region where your AWS Control Tower landing zone is deployed, and not on a
different account, to put it in effect.

The option that says: Centrally manage all of your accounts using AWS Service Catalog. Group your
accounts, which belong to a specific business unit, to the individual Organization Unit (OU). Set up a
Service Control Policy in the production account which has a policy that allows access to the EC2
instances including resource-level permission to terminate the instances owned by a particular business
unit. Associate the cross-account access and the SCP to the OUs, which will then be automatically
inherited by its member accounts is incorrect. AWS Service Catalog simply allows organizations to create and
manage catalogs of IT services that are approved for use on AWS. A more suitable service to use here is AWS
Organizations.

References:

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-iam-actions-resources.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html

Check out this AWS Organizations Cheat Sheet:

https://tutorialsdojo.com/aws-organizations/

Service Control Policies (SCP) vs IAM Policies:

https://tutorialsdojo.com/service-control-policies-scp-vs-iam-policies/
Comparison of AWS Services Cheat Sheets:

https://tutorialsdojo.com/comparison-of-aws-services/

Question 25: Incorrect

There are confidential files containing patent information that are stored on your Amazon S3 bucket in the US
East (Ohio) region. You want to be able to monitor the actions made on your objects by the team members
working on the project, such as PUT, GET, and DELETE operations. You want to easily search and review these
actions for auditing purposes.

Which of the following options will you implement to achieve this in the MOST cost-effective manner?

Enable logging on your Amazon S3 bucket to save the access logs on a separate S3 bucket. Import logs to
AWS Elasticsearch service to easily search and query the logs.

Create an AWS CloudWatch Log group and configure S3 logging to send object-level events to this log
group. View and search the event logs on the created CloudWatch log group.

Create an AWS CloudTrail trail to track and store your S3 API call logs to an Amazon S3 bucket. Create
a Lambda function that logs data events of your S3 bucket. Trigger this Lambda function using
CloudWatch Events rule for every action taken on your S3 objects. View the logs on the CloudWatch
Logs group.

(Correct)

Enable logging on your S3 bucket. Create a CloudWatch Events rule that watches the object-level events
of your S3 bucket. Create a target on the rule to send the event logs to a CloudWatch Log group. View
and search the logs on CloudWatch Logs.

Explanation

When an event occurs in your account, CloudTrail evaluates whether the event matches the settings for your
trails. Only events that match your trail settings are delivered to your Amazon S3 bucket and Amazon
CloudWatch Logs log group.

You can configure your trails to log the following:

Data events: These events provide insight into the resource operations performed on or within a resource. These
are also known as data plane operations.

Management events: Management events provide insight into management operations that are performed on
resources in your AWS account. These are also known as control plane operations. Management events can also
include non-API events that occur in your account. For example, when a user logs in to your account, CloudTrail
logs the ConsoleLogin event.

You can configure multiple trails differently so that the trails process and log only the events that you specify.
For example, one trail can log read-only data and management events, so that all read-only events are delivered
to one S3 bucket. Another trail can log only write-only data and management events, so that all write-only events
are delivered to a separate S3 bucket.
You can also configure your trails to have one trail log and deliver all management events to one S3 bucket, and
configure another trail to log and deliver all data events to another S3 bucket.

Data events provide insight into the resource operations performed on or within a resource. These are also
known as data plane operations. Data events are often high-volume activities.

Example data events include:

Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations)

AWS Lambda function execution activity (the Invoke API)

Data events are disabled by default when you create a trail. To record CloudTrail data events, you must
explicitly add the supported resources or resource types for which you want to collect activity to a trail.

You can log the object-level API operations on your S3 buckets. Before Amazon CloudWatch Events can match
these events, you must use AWS CloudTrail to set up a trail configured to receive these events. To log data
events for an S3 bucket to AWS CloudTrail and CloudWatch Events, create a trail. A trail captures API calls and
related events in your account and delivers the log files to an S3 bucket that you specify. After you create a trail
and configure it to capture the log files you want, you need to be able to find the log files and interpret the
information they contain.

Typically, log files appear in your bucket within 15 minutes of the recorded AWS API call or other AWS event.
Then you need to create a Lambda function to log data events for your S3 bucket. Finally, you need to create a
trigger to run your Lambda function in response to an Amazon S3 data event. You can create this rule in
CloudWatch Events and set the Lambda function as the target. Your logs will show up in the CloudWatch log
group, which you can view and search as needed.
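
A boto3 sketch of the data event selector on the trail is shown below; the trail and bucket names are placeholders.

    import boto3

    cloudtrail = boto3.client('cloudtrail')

    # Record S3 object-level (data plane) API activity for the bucket holding the patent files.
    cloudtrail.put_event_selectors(
        TrailName='patent-files-audit-trail',  # placeholder trail name
        EventSelectors=[{
            'ReadWriteType': 'All',
            'IncludeManagementEvents': True,
            'DataResources': [{
                'Type': 'AWS::S3::Object',
                # The trailing slash scopes the selector to all objects in this bucket.
                'Values': ['arn:aws:s3:::example-patent-documents-bucket/'],
            }],
        }],
    )

A CloudWatch Events rule can then match these S3 data events and invoke the logging Lambda function as its target.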

Hence, the correct answer is: Create an AWS CloudTrail trail to track and store your S3 API call logs to an
Amazon S3 bucket. Create a Lambda function that logs data events of your S3 bucket. Trigger this Lambda
function using CloudWatch Events rule for every action taken on your S3 objects. View the logs on the
CloudWatch Logs group.

The option that says: Enable logging on your Amazon S3 bucket to save the access logs on a separate S3
bucket. Import logs to AWS Elasticsearch service to easily search and query the logs is incorrect because this
would incur more cost if you provision an Elasticsearch cluster. Take note that the scenario asks for the most
cost-effective solution.

The option that says: Enable logging on your S3 bucket. Create a CloudWatch Events rule that watches the
object-level events of your S3 bucket. Create a target on the rule to send the event logs to a CloudWatch Log
group. View and search the logs on CloudWatch Logs is incorrect because before Amazon CloudWatch Events
can match API call events, you must first use AWS CloudTrail to set up a trail configured to receive these
events.

The option that says: Create an AWS CloudWatch Log group and configure S3 logging to send object-level
events to this log group. View and search the event logs on the created CloudWatch log group is incorrect
because you can't configure an S3 bucket to directly send access logs to a CloudWatch log group.

References:
https://aws.amazon.com/blogs/aws/cloudtrail-update-capture-and-process-amazon-s3-object-level-api-activity/

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-
cloudtrail.html#example-logging-all-S3-objects

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html

Check out this AWS CloudTrail Cheat Sheet:

https://tutorialsdojo.com/aws-cloudtrail/

Question 26: Correct

A company runs a popular e-commerce website that provides discounts and weekly sales promotions on various
products. It was recently moved from its previous cloud hosting provider to AWS. For its architecture, it uses an
Auto Scaling group of Reserved EC2 instances with an Application Load Balancer in front. Their DevOps team
needs to set up a CloudFront web distribution that uses a custom domain name where the origin is set to point to
the ALB.

How can the DevOps Engineers properly implement an end-to-end HTTPS connection from the origin
to the CloudFront viewers?

Utilize an SSL certificate that is signed by a trusted 3rd party certificate authority for the ALB, which is
then imported into AWS Certificate Manager. Set the Viewer Protocol Policy to HTTPS Only in
CloudFront. Use an SSL/TLS certificate from 3rd party certificate authority which was imported to an
Amazon S3 bucket.

Use a certificate that is signed by a trusted 3rd party certificate authority for the Application Load
Balancer, which is then imported into AWS Certificate Manager. Choose the Match Viewer setting for the
Viewer Protocol Policy to support both HTTP or HTTPS in CloudFront. Use an SSL/TLS certificate from
a 3rd party certificate authority that was imported to either AWS Certificate Manager or the IAM
certificate store.

Generate an SSL certificate that is signed by a trusted third-party certificate authority. Import the
certificate into AWS Certificate Manager and use it for the Application Load balancer. Set the Viewer
Protocol Policy to HTTPS Only in CloudFront and then use an SSL/TLS certificate from a 3rd party
certificate authority which was imported to either AWS Certificate Manager or the IAM certificate store.

(Correct)

Generate an SSL self-signed certificate for the Application Load Balancer. Configure the Viewer Protocol
Policy setting to HTTPS Only in CloudFront. Use an SSL/TLS certificate from a 3rd party certificate
authority that was imported to either AWS Certificate Manager or the IAM certificate store.

Explanation

For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects,
so that connections are encrypted when CloudFront communicates with viewers. You also can configure
CloudFront to use HTTPS to get objects from your origin, so that connections are encrypted when CloudFront
communicates with your origin.
Remember that there are rules on which type of SSL Certificate to use if you are using an EC2 or an ELB as
your origin. This question is about setting up an end-to-end HTTPS connection between the Viewers,
CloudFront, and your custom origin, which is an ALB instance.

The certificate issuer you must use depends on whether you want to require HTTPS between viewers and
CloudFront or between CloudFront and your origin:

HTTPS between viewers and CloudFront

- You can use a certificate that was issued by a trusted certificate authority (CA) such as Comodo, DigiCert,
Symantec, or other third-party providers.

- You can use a certificate provided by AWS Certificate Manager (ACM)

HTTPS between CloudFront and a custom origin

- If the origin is not an ELB load balancer, such as Amazon EC2, the certificate must be issued by a trusted CA
such as Comodo, DigiCert, Symantec or other third-party providers.

- If your origin is an ELB load balancer, you can also use a certificate provided by ACM.

If you're using your own domain name, such as tutorialsdojo.com, you need to change several CloudFront
settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a
certificate from a third-party certificate authority into ACM or the IAM certificate store. Lastly, you should set
the Viewer Protocol Policy to HTTPS Only in CloudFront.
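
As an illustration of the certificate side, a third-party-signed certificate can be imported into ACM in the US East (N. Virginia) Region, which is the Region CloudFront uses for ACM certificates; the file names below are placeholders.

    import boto3

    # CloudFront only uses ACM certificates stored in us-east-1.
    acm = boto3.client('acm', region_name='us-east-1')

    with open('certificate.pem', 'rb') as cert, \
         open('private-key.pem', 'rb') as key, \
         open('chain.pem', 'rb') as chain:
        response = acm.import_certificate(
            Certificate=cert.read(),
            PrivateKey=key.read(),
            CertificateChain=chain.read(),
        )

    # The returned ARN can be attached to the CloudFront distribution, and a certificate
    # in ACM can likewise be associated with the ALB's HTTPS listener.
    print(response['CertificateArn'])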

Hence, the correct answer is: Generate an SSL certificate that is signed by a trusted third-party certificate
authority. Import the certificate into AWS Certificate Manager and use it for the Application Load
balancer. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and then use an SSL/TLS certificate
from a 3rd party certificate authority which was imported to either AWS Certificate Manager or the IAM
certificate store.

The option that says: Generate an SSL self-signed certificate for the Application Load Balancer. Configure the
Viewer Protocol Policy setting to HTTPS Only in CloudFront. Use an SSL/TLS certificate from a 3rd party
certificate authority that was imported to either AWS Certificate Manager or the IAM certificate store is
incorrect because you cannot directly upload a self-signed certificate in your ALB.

The option that says: Use a certificate that is signed by a trusted 3rd party certificate authority for the
Application Load Balancer, which is then imported into AWS Certificate Manager. Choose the Match
Viewer setting for the Viewer Protocol Policy to support both HTTP or HTTPS in CloudFront. Use an
SSL/TLS certificate from a 3rd party certificate authority which was imported to either AWS Certificate
Manager or the IAM certificate store is incorrect because you have to set the Viewer Protocol Policy to HTTPS
Only. With Match Viewer, CloudFront communicates with your custom origin using HTTP or HTTPS,
depending on the protocol of the viewer request. For example, if you choose Match Viewer for Origin Protocol
Policy and the viewer uses HTTP to request an object from CloudFront, CloudFront also uses the same protocol
(HTTP) to forward the request to your origin.

The option that says: Utilize an SSL certificate that is signed by a trusted 3rd party certificate authority for
the ALB, which is then imported into AWS Certificate Manager. Set the Viewer Protocol Policy to HTTPS
Only in CloudFront. Use an SSL/TLS certificate from 3rd party certificate authority which was imported
to an Amazon S3 bucket is incorrect because you cannot use an SSL/TLS certificate from a third-party
certificate authority which was imported to S3.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html

Check out this Amazon CloudFront Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudfront/

Question 27: Incorrect

There are times when the DevOps engineers are unable to connect to certain Amazon EC2 Windows instances and
experience boot issues. Regaining access currently involves a lot of manual steps. The team was assigned to design
a solution to automatically recover impaired Amazon EC2 instances in the company's AWS VPC. To meet the
compliance requirements, the solution should automatically fix an EC2 instance that has become unreachable due
to network misconfigurations, RDP issues, firewall settings, and many others.

Which is the MOST suitable solution that the DevOps engineers should implement to satisfy this requirement?

Use a combination of AWS Config and the AWS Systems Manager Session Manager to self diagnose and
troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the
recovery process by setting up a monitoring system using CloudWatch, AWS Lambda, and the AWS
Systems Manager Run Command that will automatically monitor and recover impaired EC2 instances

Diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances using
the EC2Rescue tool. Automatically run the tool by using the Systems Manager Automation and
the AWSSupport-ExecuteEC2Rescue document.

(Correct)

Integrate AWS Config and the AWS Systems Manager State Manager to self diagnose and troubleshoot
problems on your Amazon EC2 Linux and Windows Server instances. Use AWS Systems Manager
Maintenance Windows to automate the recovery process.

Diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances using
the EC2Rescue tool. Run the tool automatically by using AWS OpsWorks Chef Automate and
the AWSSupport-ExecuteEC2Rescue document.

Explanation

EC2Rescue can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server
instances. You can run the tool manually or you can run the tool automatically by using Systems Manager
Automation and the AWSSupport-ExecuteEC2Rescue document. The AWSSupport-
ExecuteEC2Rescue document is designed to perform a combination of Systems Manager actions, AWS
CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue.
Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances
and other AWS resources. Automation enables you to do the following:

- Build Automation workflows to configure and manage instances and AWS resources.

- Create custom workflows or use pre-defined workflows maintained by AWS.

- Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.

- Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager
console.
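
For reference, the runbook can also be started programmatically. The following is a minimal Boto3 sketch; the Region, instance ID, and the exact runbook input parameter name are assumptions that should be verified against the AWSSupport-ExecuteEC2Rescue documentation.

import boto3

# Minimal sketch: start the AWSSupport-ExecuteEC2Rescue runbook against an
# unreachable instance. The instance ID below is a placeholder, and the
# parameter name should be checked against the runbook's documented inputs.
ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    Parameters={
        "UnreachableInstanceId": ["i-0123456789abcdef0"],
    },
)
print("Automation execution ID:", response["AutomationExecutionId"])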

Hence, the correct answer is: Diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances using the EC2Rescue tool. Automatically run the tool by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.

The option that says: Integrate AWS Config and
the AWS Systems Manager State Manager to self diagnose and troubleshoot problems on your Amazon EC2
Linux and Windows Server instances. Use AWS Systems Manager Maintenance Windows to automate the
recovery process is incorrect because AWS Config is a service that is primarily used to assess, audit, and
evaluate the configurations of your AWS resources but not to diagnose and troubleshoot problems in your EC2
instances. In addition, AWS Systems Manager State Manager is primarily used as a secure and scalable
configuration management service that automates the process of keeping your Amazon EC2 and hybrid
infrastructure in a state that you define but does not help you in troubleshooting your EC2 instances.

The option that says: Use a combination of AWS Config and the AWS Systems Manager Session Manager to
self diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances.
Automate the recovery process by setting up a monitoring system using CloudWatch, AWS Lambda and the
AWS Systems Manager Run Command that will automatically monitor and recover impaired EC2
instances is incorrect because just like the previous option, AWS Config does not help you in troubleshooting
the problems in your EC2 instances. Moreover, the AWS Systems Manager Sessions Manager simply provides
secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or
manage SSH keys for your EC2 instances but it does not provide the capability of helping you diagnose and
troubleshoot problems in your instance like what the EC2Rescue tool can do. In addition, setting up a
CloudWatch, AWS Lambda and the AWS Systems Manager Run Command that will automatically monitor and
recover impaired EC2 instances is an operational overhead that can be easily done by using AWS Systems
Manager Automation.

The option that says: Diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server
instances using the EC2Rescue tool. Run the tool automatically by using AWS OpsWorks Chef Automate and
the AWSSupport-ExecuteEC2Rescue document is incorrect because Chef Automate is a suite of automation tools
from Chef which is mainly used for configuration management, compliance and security, and continuous
deployment.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-ec2rescue.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

Check out this AWS Systems Manager Cheat Sheet:


https://tutorialsdojo.com/aws-systems-manager/

Question 28: Correct

You are developing a store module that lets users choose which plugins they want to activate on your mobile app
that is deployed on ECS Fargate. When the cluster was created, the service was launched with the LATEST
platform version which is 1.2.0. However, there is a new update on the platform version (1.3.0) that supports a
Splunk log driver which you are planning to use.

What is the recommended way of updating the platform version of the service on the AWS Console?

Update the service on ECS and select “Redeploy” on the deployment strategy so that the cluster will be
re-deployed with the new platform version.

Update the task definition with the new platform version ARN so that ECS re-deploys the cluster with the
new platform version.

Update the service on ECS and select “Force new deployment” so that the cluster will be re-deployed with
the new platform version.

(Correct)

Enable the automatic platform version upgrade feature in ECS.

Explanation

AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task
infrastructure. It is a combination of the kernel and container runtime versions. New platform versions are
released as the runtime environment evolves, for example, if there are kernel or operating system updates, new
features, bug fixes, or security updates. Security updates and patches are deployed automatically for your Fargate
tasks. If a security issue is found that affects a platform version, AWS patches the platform version. In some
cases, you may be notified that your Fargate tasks have been scheduled for retirement.

You can update a running service to change the number of tasks that are maintained by a service, which task
definition is used by the tasks, or if your tasks are using the Fargate launch type, you can change the platform
version your service uses. If you have an application that needs more capacity, you can scale up your service. If
you have the unused capacity to scale down, you can reduce the number of desired tasks in your service and free
up resources.

If your updated Docker image uses the same tag as what is in the existing task definition for your service (for
example, my_image:latest), you do not need to create a new revision of your task definition. You can update
your service with your custom configuration, keep the current settings for your service, and select Force new
deployment. The new tasks launched by the deployment pull the current image/tag combination from your
repository when they start. The Force new deployment option is also used when updating a Fargate task to use a
more current platform version when you specify LATEST. For example, if you specified LATEST and your running
tasks are using the 1.0.0 platform version and you want them to relaunch using a newer platform version.

By default, deployments are not forced but you can use the forceNewDeployment request parameter (or the --
force-new-deployment parameter if you are using the AWS CLI) to force a new deployment of the service. You
can use this option to trigger a new deployment with no service definition changes. For example, you can update
a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll
Fargate tasks onto a newer platform version.
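
To illustrate, here is a minimal Boto3 sketch of this update, equivalent to running aws ecs update-service with --force-new-deployment; the cluster and service names are placeholders.

import boto3

# Minimal sketch: force a new deployment of a Fargate service so its tasks are
# relaunched on a newer platform version. Cluster/service names are placeholders.
ecs = boto3.client("ecs")

ecs.update_service(
    cluster="store-module-cluster",
    service="plugin-store-service",
    platformVersion="1.3.0",   # or "LATEST"
    forceNewDeployment=True,
)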

Hence, the correct answer is: Update the service on ECS and select “Force new deployment” so that the
cluster will be re-deployed with the new platform version.

The option that says: Update the service on ECS and select “Redeploy” on the deployment strategy so that the
cluster will be re-deployed with the new platform version is incorrect because there is no "Redeploy"
deployment strategy option in ECS.

The option that says: Update the task definition with the new platform version ARN so that ECS re-deploys the
cluster with the new platform version is incorrect because modifying the task definition will cause a new
version to be deployed but since the ARN or the image revision did not change, no further deployments will be
made by ECS.

The option that says: Enable the automatic platform version upgrade feature in ECS is incorrect because there
is no such feature in Amazon ECS.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html

Check out this Amazon ECS Cheat Sheet:

https://tutorialsdojo.com/amazon-elastic-container-service-amazon-ecs/

Question 29: Correct

An insurance firm has recently undergone digital transformation and AWS cloud adoption. Its app development
team has four environments, namely DEV, TEST, PRE-PROD, and PROD, for its flagship application that is
configured with AWS CodePipeline. After several weeks, they noticed that there were several outages caused by
misconfigured files or faulty code blocks that were deployed into the PROD environment. A DevOps Engineer
has been assigned to add the required steps to identify issues in the application before it is released.

Which of the following is the MOST appropriate combination of steps that the Engineer should implement to
identify functional issues during the deployment process? (Select TWO.)

Migrate the pipeline from CodePipeline to AWS Data Pipeline to enable more CI/CD features. Add a test
action that uses Amazon Macie to the pipeline. Run an assessment using the Runtime Behavior Analysis
rules package to verify that the deployed code complies with the strict security standards of the company
before deploying it to the PROD environment.

In the pipeline, add an AWS CodeDeploy action to deploy the latest version of the application to the PRE-
PROD environment. Set up a manual approval action in the pipeline so that the QA team can perform the
required tests. Add another CodeDeploy action that deploys the verified code to the PROD environment
after the manual approval action.
(Correct)

Add a test action that uses Amazon GuardDuty to the pipeline. Run an assessment using the Runtime
Behavior Analysis rules package to verify that the deployed code complies with the strict security
standards of the company before deploying it to the PROD environment.

Add a test action to the pipeline to run both the unit and functional tests using AWS CodeBuild. Verify
that the test results passed before deploying the new application revision to the PROD environment.

(Correct)

Add a test action that uses Amazon Inspector to the pipeline. Run an assessment using the Runtime
Behavior Analysis rules package to verify that the deployed code complies with the strict security
standards of the company before deploying it to the PROD environment.

Explanation

Continuous delivery is a release practice in which code changes are automatically built, tested, and prepared for
release to production. With AWS CloudFormation and CodePipeline, you can use continuous delivery to
automatically build and test changes to your AWS CloudFormation templates before promoting them to
production stacks. This release process lets you rapidly and reliably make changes to your AWS infrastructure.

In AWS CodePipeline, you can add an approval action to a stage in a pipeline at the point where you want the
pipeline execution to stop so that someone with the required AWS Identity and Access Management permissions
can approve or reject the action.

If the action is approved, the pipeline execution resumes. If the action is rejected—or if no one approves or
rejects the action within seven days of the pipeline reaching the action and stopping — the result is the same as
an action failing, and the pipeline execution does not continue.
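
As an example, a manual approval action could look like the following fragment of a pipeline definition (a rough sketch only; the stage, action, and SNS topic names are placeholders).

# Minimal sketch of a manual approval stage as it might appear in a pipeline
# definition passed to CodePipeline (for example, via boto3 update_pipeline).
approval_stage = {
    "name": "ApprovePreProd",
    "actions": [
        {
            "name": "QA-Sign-Off",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
            "configuration": {
                # Optional: notify the QA team that an approval is pending
                "NotificationArn": "arn:aws:sns:us-east-1:123456789012:qa-approvals",
                "CustomData": "Verify the PRE-PROD deployment before releasing to PROD.",
            },
        }
    ],
}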

Hence, the correct answers are:

- Add a test action to the pipeline to run both the unit and functional tests using AWS CodeBuild. Verify
that the test results passed before deploying the new application revision to the PROD environment.

- In the pipeline, add an AWS CodeDeploy action to deploy the latest version of the application to the
PRE-PROD environment. Set up a manual approval action in the pipeline so that the QA team can
perform the required tests. Add another CodeDeploy action that deploys the verified code to the PROD
environment after the manual approval action.

The option that says: Add a test action that uses Amazon Inspector to the pipeline. Run an assessment using
the Runtime Behavior Analysis rules package to verify that the deployed code complies with the strict
security standards of the company before deploying it to the PROD environment is incorrect because
Amazon inspector just checks the security vulnerabilities of the EC2 instances and not the application
functionality itself.

The option that says: Add a test action that uses Amazon GuardDuty to the pipeline. Run an assessment
using the Runtime Behavior Analysis rules package to verify that the deployed code complies with the
strict security standards of the company before deploying it to the PROD environment is incorrect because
there is no Runtime Behavior Analysis rules package in Amazon GuardDuty.
The option that says: Migrate the pipeline from CodePipeline to AWS Data Pipeline to enable more CI/CD
features. Add a test action that uses Amazon Macie to the pipeline. Run an assessment using the Runtime
Behavior Analysis rules package to verify that the deployed code complies with the strict security
standards of the company before deploying it to the PROD environment is incorrect because AWS Data
Pipeline is a web service that simply helps you to reliably process and move data between different AWS
compute and storage services, as well as on-premises data sources, at specified intervals. This service can't be
used in your CI/CD process. Moreover, there is no Runtime Behavior Analysis rules package in Amazon Macie.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html

Check out these AWS CloudFormation and CodePipeline Cheat Sheets:

https://tutorialsdojo.com/aws-cloudformation/

https://tutorialsdojo.com/aws-codepipeline/

Question 30: Correct

You have created a new Elastic Beanstalk environment to be used as a pre-production stage for load testing new
code versions. Since code changes are committed on a regular basis, you sometimes need to deploy new versions
2 to 3 times each day. You need to deploy a new version as quickly as possible in a cost-effective way to give
ample time for the QA team to test it.

Which of the following implementations is suited for this scenario?

Use Immutable as the deployment policy to deploy code on new instances.

Use All at once as the deployment policy to deploy new versions.

(Correct)

Use Rolling as the deployment policy to deploy new versions.

Implement a blue/green deployment strategy to have the new version ready for quick switching.

Explanation

In ElasticBeanstalk, you can choose from a variety of deployment methods:

All at once – Deploy the new version to all instances simultaneously. All instances in your environment are out
of service for a short time while the deployment occurs. This is the method that provides the least amount of
time for deployment.
Rolling – Deploy the new version in batches. Each batch is taken out of service during the deployment phase,
reducing your environment's capacity by the number of instances in a batch.

Rolling with additional batch – Deploy the new version in batches, but first launch a new batch of instances to
ensure full capacity during the deployment process.

Immutable – Deploy the new version to a fresh group of instances by performing an immutable update.

Blue/Green - Deploy the new version to a separate environment, and then swap CNAMEs of the two
environments to redirect traffic to the new version instantly.
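
For illustration, the deployment policy of an environment can be changed through the aws:elasticbeanstalk:command namespace. The following is a minimal Boto3 sketch; the environment name is a placeholder.

import boto3

# Minimal sketch: set the environment's deployment policy to "All at once".
eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="preprod-load-test",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "AllAtOnce",
        }
    ],
)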

Each deployment method differs in its impact on running instances, its effect on capacity during the deployment, and the amount of time the deployment takes: All at once is the fastest, while Immutable and Blue/Green take the longest because new instances or a separate environment must be provisioned first.

Hence, the correct answer is: Use All at once as the deployment policy to deploy new versions.

The option that says: Use Rolling as the deployment policy to deploy new versions is incorrect because this
deployment type is not as fast as “All at once” deployment since the code is deployed in batches.

The option that says: Use Immutable as the deployment policy to deploy code on new instances is incorrect
because this deployment type takes time as new instances are being provisioned during the deployment.

The option that says: Implement a blue/green deployment strategy to have the new version ready for quick
switching is incorrect because just like the Immutable deployment, this type also takes time since the new
instances are provisioned first before the actual deployment starts. Plus, it incurs additional cost as 2 sets of
instances are running at the same time.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 31: Correct

You’re running a cluster of EC2 instances on AWS that serves authentication processes and session handling for
your new application. After three weeks of operation, your monitoring team flagged your instances with
constantly high CPU usage, and the login module response is very slow. Upon checking, it was discovered that
your Amazon EC2 instances were hacked and exploited for mining Bitcoins. You have immediately taken down
the cluster and created a new one with your custom AMI. You want to have a solution to detect compromised
EC2 instances as well as detect malicious activity within your AWS environment.

Which of the following will help you with this?

Use Amazon Macie

Enable VPC Flow Logs


Use AWS Inspector

Enable Amazon GuardDuty

(Correct)

Explanation

Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS
accounts and workloads. GuardDuty analyzes continuous streams of meta-data generated from your account and
network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses
integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to
identify threats more accurately. This can include issues like escalations of privileges, uses of exposed
credentials, or communication with malicious IPs, URLs, or domains.

For example, GuardDuty can detect compromised EC2 instances serving malware or mining bitcoin. It also
monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure
deployments, like instances deployed in a region that has never been used, or unusual API calls, like a password
policy change to reduce password strength.
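
As an example, enabling GuardDuty in a Region is a single API call. The following is a minimal Boto3 sketch.

import boto3

# Minimal sketch: enable GuardDuty in the current region by creating a detector.
guardduty = boto3.client("guardduty")

response = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",  # optional; defaults to six hours
)
print("Detector ID:", response["DetectorId"])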

Hence, the correct answer is: Enable Amazon GuardDuty.

The option that says: Use Amazon Macie is incorrect because this service is available to protect data stored in
Amazon S3 only. It uses machine learning to automatically discover, classify, and protect sensitive data in AWS.

The option that says: Use AWS Inspector is incorrect because this service is just an automated security
assessment service that helps improve the security and compliance of applications deployed on AWS. It will not
detect possible malicious activity on your instances.

The option that says: Enable VPC Flow Logs is incorrect because this feature only collects traffic logs flowing
to your VPC. It does not provide analysis of the traffic logs.

References:

https://aws.amazon.com/guardduty/faqs/

https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html

https://aws.amazon.com/guardduty/

Check out this Amazon GuardDuty Cheat Sheet:

https://tutorialsdojo.com/amazon-guardduty/

Question 32: Incorrect

A company has its on-premises data network connected to their AWS VPC via a Direct Connect connection.
Their DevOps team is maintaining their Media Asset Management (MAM) system which uses a repository of
over 50-TB digital videos and media files that are stored on their on-premises tape library. Due to the sheer size
of their data, they want to implement an automated catalog system that will enable them to search their files
using facial recognition. A catalog will store the faces of the people who are present in these videos including a
still image of each person. Eventually, the media company would like to migrate these media files to AWS
including the MAM video contents.

Which of the following provides a solution which uses the LEAST amount of ongoing management overhead
and will cause MINIMAL disruption to the existing system?

Move all of the media files from the on-premises library into an EBS volume mounted on a large Amazon
EC2 instance. Set up an open-source facial recognition tool in the instance. Process the media files to
retrieve the metadata and store this information into the MAM solution. Copy the media files to an
Amazon S3 bucket.

Using Amazon Kinesis Video Streams, create a video ingestion stream and build a collection of faces with
Amazon Rekognition. Stream the media files from the MAM solution into Kinesis Video Streams.
Configure Amazon Rekognition to process the streamed files. Set up a stream consumer to retrieve the
required metadata, and store them into the MAM solution. Configure the stream to store the files in an
Amazon S3 bucket.

Connect the on-premises file system to AWS Storage Gateway by setting up a file gateway appliance on-
premises. Use the MAM solution to extract the media files from the current data store and send them into
the file gateway. Populate a collection using Amazon Rekognition by building a catalog of faces from the
processed media files. Launch a Lambda function to invoke Amazon Rekognition Javascript SDK to have
it fetch the media files from the S3 bucket which is backing the file gateway. Retrieve the needed
metadata using the Lambda function and store the information into the MAM solution.

(Correct)

Launch a tape gateway appliance in your on-premises data center and connect it to your AWS Storage
Gateway service. Set up the MAM solution to fetch the media files from the current archive and store
them into the tape gateway in the AWS Cloud. Build a collection from the catalog of faces using Amazon
Rekognition. Set up an AWS Lambda function which invokes the Rekognition Javascript SDK to have
Amazon Rekognition process the video directly from the tape gateway in real-time. Retrieve the required
metadata and store them into the MAM solution.

(Incorrect)

Explanation

Amazon Rekognition can store information about detected faces in server-side containers known as collections.
You can use the facial information that's stored in a collection to search for known faces in images, stored
videos, and streaming videos. Amazon Rekognition supports the IndexFaces operation. You can use this
operation to detect faces in an image and persist information about facial features that are detected into a
collection. This is an example of a storage-based API operation because the service persists information on the
server.
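
To illustrate, here is a minimal Boto3 sketch of building a collection and indexing a still image stored in S3; the collection, bucket, and object names are placeholders.

import boto3

# Minimal sketch: create a face collection and index a still image from S3.
rekognition = boto3.client("rekognition")

rekognition.create_collection(CollectionId="mam-faces")

response = rekognition.index_faces(
    CollectionId="mam-faces",
    Image={"S3Object": {"Bucket": "mam-media-bucket", "Name": "stills/person-001.jpg"}},
    ExternalImageId="person-001",   # your own identifier for the person
    DetectionAttributes=["DEFAULT"],
)
for record in response["FaceRecords"]:
    print("Indexed face:", record["Face"]["FaceId"])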

AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions. With a tape gateway,
you can cost-effectively and durably archive backup data in GLACIER or DEEP_ARCHIVE. A tape gateway
provides a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the
operational burden of provisioning, scaling, and maintaining a physical tape infrastructure.
You can run AWS Storage Gateway either on-premises as a VM appliance, as a hardware appliance, or in AWS
as an Amazon Elastic Compute Cloud (Amazon EC2) instance. You deploy your gateway on an EC2 instance to
provision iSCSI storage volumes in AWS. You can use gateways hosted on EC2 instances for disaster recovery,
data mirroring, and providing storage for applications hosted on Amazon EC2.

Hence, the correct answer is: Connect the on-premises file system to AWS Storage Gateway by setting up a file
gateway appliance on-premises. Use the MAM solution to extract the media files from the current data store
and send them into the file gateway. Populate a collection using Amazon Rekognition by building a catalog of
faces from the processed media files. Launch a Lambda function to invoke Amazon Rekognition Javascript
SDK to have it fetch the media files from the S3 bucket which is backing the file gateway. Retrieve the needed
metadata using the Lambda function and store the information into the MAM solution.

The option that says: Move all of the media files from the on-premises library into an EBS volume mounted
on a large Amazon EC2 instance. Set up an open-source facial recognition tool in the instance. Process the
media files to retrieve the metadata and store this information into the MAM solution. Copy the media files to
an Amazon S3 bucket is incorrect because it entails a lot of ongoing management overhead instead of just using
Amazon Rekognition. Moreover, it is more suitable to use the AWS Storage Gateway service rather than an EBS
Volume.

The option that says: Launch a tape gateway appliance in your on-premises data center and connect it to your
AWS Storage Gateway service. Set up the MAM solution to fetch the media files from the current archive and
store them into the tape gateway in the AWS Cloud. Build a collection from the catalog of faces using
Amazon Rekognition. Set up an AWS Lambda function which invokes the Rekognition Javascript SDK to
have Amazon Rekognition process the video directly from the tape gateway in real-time. Retrieve the required
metadata and store them into the MAM solution is incorrect because although this is using the right
combination of AWS Storage Gateway and Amazon Rekognition, take note that you can't directly fetch the
media files from your tape gateway in real-time since this is backed up using Glacier. Although the on-premises
data center is using a tape gateway, you can still set up a solution to use a file gateway in order to properly
process the videos using Amazon Rekognition. Keep in mind that the tape gateway in AWS Storage Gateway
service is primarily used as an archive solution.

The option that says: Using Amazon Kinesis Video Streams, create a video ingestion stream and build a
collection of faces with Amazon Rekognition. Stream the media files from the MAM solution into Kinesis
Video Streams. Configure Amazon Rekognition to process the streamed files. Set up a stream consumer to
retrieve the required metadata, and store them into the MAM solution. Configure the stream to store the files
in an Amazon S3 bucket is incorrect because you won't be able to connect your tape gateway directly to your
Kinesis Video Streams service. You need to use the AWS Storage Gateway first.

References:

https://docs.aws.amazon.com/rekognition/latest/dg/collections.html

https://aws.amazon.com/storagegateway/file/

Check out this Amazon Rekognition Cheat Sheet:

https://tutorialsdojo.com/amazon-rekognition/

Question 33: Correct


A leading technology company with a hybrid cloud architecture has a suite of web applications that is composed
of 50 modules. Each of the module is a multi-tiered application hosted in an Auto Scaling group of On-Demand
EC2 instances behind an ALB with an external Amazon RDS. The Application Security team is mandated to
block access from external IP addresses and only allow access to the 50 applications from the corporate data
center. A group of 10 proxy servers with an associated IP address each are used for the corporate network to
connect to the Internet. The 10 proxy IP addresses are being refreshed twice a month. The Network team uploads
a CSV file that contains the latest proxy IP addresses into a private S3 bucket. The DevOps Engineer must build
a solution to ensure that the applications are accessible from the corporate network in the most cost-effective
way and with minimal operational effort.

As a DevOps Engineer, how can you meet the above requirement?

Host all of the applications and modules in the same Virtual Private Cloud (VPC). Set up a Direct
Connect connection with an active/standby configuration. Update the ELB security groups to allow only
inbound HTTPS connections from the corporate network IP addresses.

Configure the ELB security groups to allow HTTPS inbound access from the Internet. Set up Amazon
Cognito to integrate the company's Active Directory as the identity provider. Integrate all of the 50
modules with Amazon Cognito to ensure that only the company employees can log into the application.
Store the user access logs to Amazon CloudWatch Logs to record user access activities. Use AWS Config
for configuration management that runs twice a month to update the settings accordingly.

Develop a custom Python-based Boto3 script using the AWS SDK for Python. Configure the script to
download the CSV file that contains the proxy IP addresses and update the ELB security groups to allow
only HTTPS inbound from the given IP addresses. Host the script in a Lambda function and run it every
minute using CloudWatch Events.

Launch a Lambda function to read the list of proxy IP addresses from the S3 bucket. Configure the
function to update the ELB security groups to allow HTTPS requests only from the given IP addresses.
Use the Amazon S3 Event Notification to automatically invoke the Lambda function when the CSV file is
updated.

(Correct)

Explanation

AWS Lambda is a serverless compute service that runs your code in response to events and automatically
manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services
with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.
AWS Lambda can automatically run code in response to multiple events, such as HTTP requests via Amazon
API Gateway, modifications to objects in Amazon S3 buckets, table updates in Amazon DynamoDB, and state
transitions in AWS Step Functions.

Lambda runs your code on high-availability compute infrastructure and performs all the administration of the
compute resources, including server and operating system maintenance, capacity provisioning and automatic
scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the
code.

Lambda does not enforce any restrictions on your function logic – if you can code for it, you can run it within a
Lambda function. As part of your function, you may need to call other APIs, or access other AWS services like
databases.
By default, your service or API must be accessible over the public internet for AWS Lambda to access it.
However, you may have APIs or services that are not exposed this way. Typically, you create these resources
inside Amazon Virtual Private Cloud (Amazon VPC) so that they cannot be accessed over the public Internet.
These resources could be AWS service resources, such as Amazon Redshift data warehouses, Amazon
ElastiCache clusters, or Amazon RDS instances. They could also be your own services running on your own
EC2 instances. By default, resources within a VPC are not accessible from within a Lambda function.

AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda
function to access resources inside your private VPC, you must provide additional VPC-specific configuration
information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up
elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your
private VPC.

Hence, the correct answer is: Launch a Lambda function to read the list of proxy IP addresses from the S3
bucket. Configure the function to update the ELB security groups to allow HTTPS requests only from the
given IP addresses. Use the Amazon S3 Event Notification to automatically invoke the Lambda function
when the CSV file is updated.
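
For illustration, a minimal sketch of such a Lambda handler is shown below. It assumes the CSV file contains one proxy IP address per line and that the ELB security group ID is supplied through an environment variable; revoking the previous month's stale entries is omitted for brevity, and re-adding an already existing rule would raise a duplicate-rule error that a full implementation would handle.

import csv
import io
import os

import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Security group attached to the ELB; supplied via an environment variable (placeholder name).
ELB_SECURITY_GROUP_ID = os.environ["ELB_SECURITY_GROUP_ID"]


def lambda_handler(event, context):
    # The S3 event notification tells us which CSV file was updated.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    proxy_ips = [row[0].strip() for row in csv.reader(io.StringIO(body)) if row]

    # Allow inbound HTTPS only from the current list of corporate proxy IPs.
    ec2.authorize_security_group_ingress(
        GroupId=ELB_SECURITY_GROUP_ID,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [
                    {"CidrIp": f"{ip}/32", "Description": "Corporate proxy"}
                    for ip in proxy_ips
                ],
            }
        ],
    )
    return {"updated_ips": proxy_ips}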

The option that says: Host all of the applications and modules in the same Virtual Private Cloud (VPC). Set
up a Direct Connect connection with an active/standby configuration. Update the ELB security groups to
allow only inbound HTTPS connections from the corporate network IP addresses is incorrect because setting
up a Direct Connect connection costs a significant amount of money. Remember that the scenario says that you
have to ensure that the applications are accessible from the corporate network in the most cost-effective way and
with minimal operational effort.

The option that says: Develop a custom Python-based Boto3 script using the AWS SDK for Python. Configure
the script to download the CSV file that contains the proxy IP addresses and update the ELB security groups
to allow only HTTPS inbound from the given IP addresses. Host the script in a Lambda function and run it
every minute using CloudWatch Events is incorrect because running the Lambda function every minute will
increase your compute costs. A better solution is to use Amazon S3 Event Notification to automatically invoke
the Lambda function when the CSV file is updated.

The option that says: Configure the ELB security groups to allow HTTPS inbound access from the Internet.
Set up Amazon Cognito to integrate the company's Active Directory as the identity provider. Integrate all of
the 50 modules with Amazon Cognito to ensure that only the company employees can log into the application.
Store the user access logs to Amazon CloudWatch Logs to record user access activities. Use AWS Config for
configuration management that runs twice a month to update the settings accordingly is incorrect because
using Amazon Cognito in this scenario is not warranted and unnecessary as well as the use of AWS Config.
Using AWS Lambda can fulfill the requirements in this scenario.

References:

https://aws.amazon.com/blogs/security/how-to-set-up-an-outbound-vpc-proxy-with-domain-whitelisting-and-
content-filtering/

https://aws.amazon.com/blogs/security/how-to-add-dns-filtering-to-your-nat-instance-with-squid/

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

Question 34: Correct

A startup is developing an AI-powered traffic monitoring portal that will be hosted in AWS Cloud. The design
of the cloud architecture should be highly available and fault-tolerant to avoid unnecessary outages that can
affect the users. A DevOps Engineer was instructed to implement the architecture and also set up a system that
automatically assesses applications for exposure, vulnerabilities, and deviations from the AWS best practices.

Among the options below, which is the MOST appropriate architecture that you should implement?

Use Amazon GuardDuty for automated security assessment to help improve the security and compliance
of your applications. Set up an Amazon ElastiCache cluster for the database caching of the portal. Launch
an Auto Scaling group of Amazon EC2 instances on four Availability Zones then associate it to an
Application Load Balancer. Set up a MySQL RDS database instance with Multi-AZ deployments
configuration and Read Replicas. Using Amazon Route 53, create a CNAME record for the root domain
to point to the load balancer.

Use Amazon Inspector for automated security assessment to help improve the security and compliance of
your applications. Launch an Auto Scaling group of Amazon EC2 instances on three Availability Zones.
Set up an Application Load Balancer to distribute the incoming traffic. Set up an Amazon Aurora Multi-
Master as the database tier. Using Amazon Route 53, create an alias record for the root domain to point to
the load balancer.

(Correct)

Use AWS Shield for automated security assessment to help improve the security and compliance of your
applications. Launch an Auto Scaling group of Amazon EC2 instances on two Availability Zones with an
Application Load Balancer in front. Set up a MySQL RDS database instance with Multi-AZ deployments
configuration. Using Amazon Route 53, create a non-alias A record for the root domain to point to the
load balancer.

Use Amazon Macie for automated security assessment to help improve the security and compliance of
your applications. Set up Amazon DynamoDB as the database of the portal. Launch an Auto Scaling
group of EC2 instances on four Availability Zones with an Application Load Balancer in front to
distribute the incoming traffic. Using Amazon Route 53, create a non-alias A record for the root domain
to point to the load balancer.

Explanation

Amazon Inspector is an automated security assessment service that helps improve the security and compliance
of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure,
vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces
a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as
part of detailed assessment reports which are available via the Amazon Inspector console or API.

Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon
EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you
as pre-defined rules packages mapped to common security best practices and vulnerability definitions. Examples
of built-in rules include checking for access to your EC2 instances from the internet, remote root login being
enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security
researchers.

If you host a website on multiple Amazon EC2 instances, you can distribute traffic to your website across the
instances by using an Elastic Load Balancing (ELB) load balancer. The ELB service automatically scales the
load balancer as traffic to your website changes over time. The load balancer also can monitor the health of its
registered instances and route domain traffic only to healthy instances. To route domain traffic to an ELB load
balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a
Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root
domain, such as tutorialsdojo.com, and for subdomains, such as www.tutorialsdojo.com. (You can create
CNAME records only for subdomains.)
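
As a rough sketch, an alias record at the zone apex that points to an Application Load Balancer can be created as follows with Boto3; all IDs and names below are placeholders.

import boto3

# Minimal sketch: create (or update) an alias A record at the zone apex for an ALB.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",          # your hosted zone for tutorialsdojo.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "tutorialsdojo.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Canonical hosted zone ID of the load balancer itself,
                        # returned by elbv2 describe-load-balancers (not your zone's ID).
                        "HostedZoneId": "Z35SXDOTRQ7X7K",
                        "DNSName": "dualstack.traffic-portal-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)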

You can use Amazon EC2 Auto Scaling to maintain a minimum number of running instances for your
application at all times. Amazon EC2 Auto Scaling can detect when your instance or application is unhealthy
and replace it automatically to maintain the availability of your application. You can also use Amazon EC2 Auto
Scaling to scale your Amazon EC2 capacity up or down automatically based on demand, using criteria that you
specify.

In this scenario, all of the options are highly available architectures. The main difference here is how they use
Amazon Route 53. Keep in mind that you have to create an alias record in Amazon Route 53 in order to point to
your load balancer.

Hence, the correct answer is: Use Amazon Inspector for automated security assessment to help improve the
security and compliance of your applications. Launch an Auto Scaling group of Amazon EC2 instances on
three Availability Zones. Set up an Application Load Balancer to distribute the incoming traffic. Set up an
Amazon Aurora Multi-Master as the database tier. Using Amazon Route 53, create an alias record for the
root domain to point to the load balancer.

The option that says: Use AWS Shield for automated security assessment to help improve the security
and compliance of your applications. Launch an Auto Scaling group of Amazon EC2 instances on two
Availability Zones with an Application Load Balancer in front. Set up a MySQL RDS database instance
with Multi-AZ deployments configuration. Using Amazon Route 53, create a non-alias A record for the
root domain to point to the load balancer is incorrect because AWS Shield is a managed Distributed Denial of
Service (DDoS) protection service and not an automated security assessment. Moreover, you need to create an
Alias record with the root DNS name and not an A record.

The option that says: Use Amazon GuardDuty for automated security assessment to help improve the
security and compliance of your applications. Set up an Amazon ElastiCache cluster for the database
caching of the portal. Launch an Auto Scaling group of Amazon EC2 instances on four Availability Zones
then associate it to an Application Load Balancer. Set up a MySQL RDS database instance with Multi-AZ
deployments configuration and Read Replicas. Using Amazon Route 53, create a CNAME record for the
root domain to point to the load balancer is incorrect because Amazon GuardDuty is a threat detection service
that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts. The
correct service that you should use is Amazon Inspector since this is an automated security assessment service
that helps improve the security and compliance of applications deployed on AWS. In addition, you can create
CNAME records only for subdomains and not for the zone apex or root domain.

The option that says: Use Amazon Macie for automated security assessment to help improve the security
and compliance of your applications. Set up Amazon DynamoDB as the database of the portal. Launch an
Auto Scaling group of EC2 instances on four Availability Zones with an Application Load Balancer in
front to distribute the incoming traffic. Using Amazon Route 53, create a non-alias A record for the root
domain to point to the load balancer is incorrect because you should use Amazon Inspector instead of Amazon
Macie, since this is just a security service that uses machine learning to automatically discover, classify, and
protect sensitive data in AWS. In addition, you should create an alias record with the root DNS name and not a
non-alias A record.

References:

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html

Check out this Amazon Route 53 Cheat Sheet:

https://tutorialsdojo.com/amazon-route-53/

Question 35: Incorrect

A national university has launched its serverless online learning portal using Lambda and API Gateway in AWS
that enables its students to enroll, see their grades online as well as manage their class schedule. The portal
abruptly stopped working after a few weeks and lost all of its data. The university hired a DevOps consultant and
based on the investigation, the outage was due to an SQL injection vulnerability on the portal's login page in
which the attacker simply injected the malicious SQL code. The consultant also emphasized the system's
inability to track historical changes to the rules and metrics associated with their firewall.

Which of the following is the MOST suitable and cost-effective solution to avoid another SQL Injection attack
against their infrastructure in AWS?

Add a web access control list in front of the API Gateway using AWS WAF to block requests that contain
malicious SQL code. Track the changes to your web access control lists (web ACLs) such as the creation
and deletion of rules including the updates to the WAF rule configurations using AWS Config.

(Correct)

Configure the Network Access Control List of your VPC to block the IP address of the attacker and then
create a CloudFront web distribution. Configure AWS WAF to add a web access control list (web ACL)
in front of the CloudFront distribution to block requests that contain malicious SQL code. Track the
changes to your web access control lists (web ACLs) such as the creation and deletion of rules including
the updates to the WAF rule configurations using AWS Config.

Add a web access control list in front of the Lambda functions using AWS WAF to block requests that
contain malicious SQL code. Track the changes to your web access control lists (web ACLs) such as the
creation and deletion of rules including the updates to the WAF rule configurations using AWS Firewall
Manager.

Launch a new Application Load Balancer and set up AWS WAF to block requests that contain malicious
SQL code. Place the API Gateway behind the ALB and use the AWS Firewall Manager to track changes
to your web access control lists (web ACLs) such as the creation and deletion of rules including the
updates to the WAF rule configurations.

Explanation

AWS WAF is a web application firewall that helps protect your web applications from common web exploits
that could affect application availability, compromise security, or consume excessive resources. With AWS
Config, you can track changes to WAF web access control lists (web ACLs). For example, you can record the
creation and deletion of rules and rule actions, as well as updates to WAF rule configurations.

AWS WAF gives you control over which traffic to allow or block to your web applications by defining
customizable web security rules. You can use AWS WAF to create custom rules that block common attack
patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.
New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS
WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of
web security rules.
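
To illustrate, a regional web ACL that already contains the desired rules (for example, a managed SQL injection rule set) can be attached to an API Gateway stage with a single call. The following is a minimal Boto3 sketch; the ARNs are placeholders.

import boto3

# Minimal sketch: associate an existing regional web ACL with an API Gateway stage.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn=(
        "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/portal-acl/"
        "11111111-2222-3333-4444-555555555555"
    ),
    # REST API stage ARN format: arn:aws:apigateway:<region>::/restapis/<api-id>/stages/<stage>
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
)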

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. Config continuously monitors and records your AWS resource configurations and allows you to
automate the evaluation of recorded configurations against desired configurations. With Config, you can review
changes in configurations and relationships between AWS resources, dive into detailed resource configuration
histories, and determine your overall compliance against the configurations specified in your internal guidelines.
This enables you to simplify compliance auditing, security analysis, change management, and operational
troubleshooting.

In this scenario, integrating WAF in front of the API Gateway and using AWS Config are the ones that should
be implemented in order to improve the security of the online learning portal. Hence, the correct answer is: Add
a web access control list in front of the API Gateway using AWS WAF to block requests that contain
malicious SQL code. Track the changes to your web access control lists (web ACLs) such as the creation and
deletion of rules including the updates to the WAF rule configurations using AWS Config.

The option that says: Add a web access control list in front of the Lambda functions using AWS WAF to block
requests that contain malicious SQL code. Track the changes to your web access control lists (web ACLs)
such as the creation and deletion of rules including the updates to the WAF rule configurations using AWS
Firewall Manager is incorrect because you have to use AWS WAF in front of the API Gateway and not directly
to the Lambda functions. AWS Firewall Manager is primarily used to manage your Firewall across multiple
AWS accounts under your AWS Organizations and hence, it is not suitable for tracking changes to WAF web
access control lists. You should use AWS Config instead.

The option that says: Configure the Network Access Control List of your VPC to block the IP address of the
attacker and then create a CloudFront web distribution. Configure AWS WAF to add a web access control list
(web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Track
the changes to your web access control lists (web ACLs) such as the creation and deletion of rules including
the updates to the WAF rule configurations using AWS Config is incorrect because even though it is valid to
use AWS WAF with CloudFront, it entails an additional and unnecessary cost to launch a CloudFront
distribution for this scenario. There is no requirement that the serverless online portal should be scalable and be
accessible around the globe hence, a CloudFront distribution is not relevant.
The option that says: Launch a new Application Load Balancer and set up AWS WAF to block requests that
contain malicious SQL code. Place the API Gateway behind the ALB and use the AWS Firewall Manager to
track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including
the updates to the WAF rule configurations is incorrect because launching a new Application Load Balancer
entails additional cost and is not cost-effective. In addition, AWS Firewall Manager is primarily used to manage
your Firewall across multiple AWS accounts under your AWS Organizations. Using AWS Config is much more
suitable for tracking changes to WAF web access control lists.

References:

https://aws.amazon.com/waf/

https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

https://aws.amazon.com/config/

Check out this AWS WAF Cheat Sheet:

https://tutorialsdojo.com/aws-waf/

Question 36: Incorrect

A company is running a batch job hosted on AWS Fargate to process large ZIP files. The job is triggered
whenever files are uploaded on an Amazon S3 bucket. To save costs, the company wants to set the minimum
number of ECS Tasks to 1, and only increase the task count when objects are uploaded to the S3 bucket again.
Once processing is complete, the S3 bucket should be emptied out and all ECS tasks should be stopped. The
object-level logging has already been enabled in the bucket.

Which of the following options is the EASIEST way to implement this?

Create a CloudWatch Event rule to detect S3 object PUT operations and set the target to a Lambda
function that will run Amazon ECS API command to increase the number of tasks on ECS. Create another
rule to detect S3 DELETE operations and run the Lambda function to reduce the number of ECS tasks.

Use CloudWatch Alarms on CloudTrail since the S3 object-level operations are recorded on CloudTrail.
Create two Lambda functions for increasing/decreasing the ECS task count. Set these as respective targets
for the CloudWatch Alarm depending on the S3 event.

Use CloudWatch Alarms on CloudTrail since these S3 object-level operations are recorded on CloudTrail.
Set two alarm actions to update ECS task count to scale-out/scale-in depending on the S3 event.

Create a CloudWatch Event rule to detect S3 object PUT operations and set the target to the ECS cluster
to run a new ECS task. Create another rule that detects S3 DELETE operations. Set the target to a Lambda
function that will stop all ECS tasks.

(Correct)

Explanation

You can use CloudWatch Events to run Amazon ECS tasks when certain AWS events occur. You can set up a
CloudWatch Events rule that runs an Amazon ECS task whenever a file is uploaded to a certain Amazon S3
bucket using the Amazon S3 PUT operation.
First, you must create a CloudWatch Events rule for the S3 service that will watch for object-level operations –
PUT and DELETE objects. For object-level operations, it is required to create a CloudTrail trail first. You need
two rules – one for running a task whenever a file is uploaded to S3 and another for stopping all the ECS tasks.
For the first rule, select “ECS task” as the target and input the needed values such as the cluster name, task
definition, and the task count. For the second rule, select a Lambda function as the target. To stop a running task,
you need to call the StopTask API which could be done in a Lambda function.
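
As an example, here is a minimal sketch of the Lambda function that the second rule could target, stopping every running task in the cluster; the cluster name is a placeholder.

import boto3

ecs = boto3.client("ecs")

# Minimal sketch: Lambda target for the S3 DELETE rule. It lists every running
# task in the cluster and stops it.
CLUSTER = "zip-processing-cluster"


def lambda_handler(event, context):
    paginator = ecs.get_paginator("list_tasks")
    stopped = []
    for page in paginator.paginate(cluster=CLUSTER, desiredStatus="RUNNING"):
        for task_arn in page["taskArns"]:
            ecs.stop_task(
                cluster=CLUSTER,
                task=task_arn,
                reason="S3 bucket emptied; batch processing complete",
            )
            stopped.append(task_arn)
    return {"stopped_tasks": stopped}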

Hence, the correct answer is: Create a CloudWatch Event rule to detect S3 object PUT operations and set
the target to the ECS cluster to run a new ECS task. Create another rule that detects S3 DELETE
operations. Set the target to a Lambda function that will stop all ECS tasks.

The option that says: Create a CloudWatch Event rule to detect S3 object PUT operations and set the
target to a Lambda function that will run Amazon ECS API command to increase the number of tasks on
ECS. Create another rule to detect S3 DELETE operations and run the Lambda function to reduce the
number of ECS tasks is incorrect. Although this solution meets the requirement, creating your own Lambda
function for this scenario is not really necessary. It is much simpler to control ECS tasks directly as target for the
CloudWatch Event rule. Take note that the scenario requires a solution that is the easiest to implement.

The option that says: Use CloudWatch Alarms on CloudTrail since the S3 object-level operations are
recorded on CloudTrail. Create two Lambda functions for increasing/decreasing the ECS task count. Set
these as respective targets for the CloudWatch Alarm depending on the S3 event is incorrect because using
CloudTrail, CloudWatch Alarm, and two Lambda functions creates an unnecessary complexity to what you want
to achieve. CloudWatch Events can directly target an ECS task on the Targets section when you create a new
rule.

The option that says: Use CloudWatch Alarms on CloudTrail since these S3 object-level operations are
recorded on CloudTrail. Set two alarm actions to update ECS task count to scale-out/scale-in depending
on the S3 event is incorrect because you can’t directly set CloudWatch Alarms to update the ECS task count.
You have to use CloudWatch Events instead.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 37: Correct

A financial company has a total of over a hundred Amazon EC2 instances running across their development,
testing, and production environments in AWS. Based on a recent IT review, the company initiated a new
compliance rule that mandates a monthly audit of every Linux and Windows EC2 instance to check for system performance issues. Each instance must have a logging function that collects various system details and retrieves
custom metrics from installed applications or services. The DevOps team will periodically review these logs and
analyze their contents using AWS Analytics tools, and the result will be stored in an S3 bucket.
Which is the MOST recommended way to collect and analyze logs from the instances with MINIMAL effort?

Configure and install AWS Inspector Agent in each Amazon EC2 instance that will collect and push data
to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all
EC2 instances.

Configure and install the AWS Systems Manager Agent (SSM Agent) in each EC2 instance that will
automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs
Insights.

Configure and install the unified CloudWatch Logs agent in each Amazon EC2 instance that will
automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs
Insights.

(Correct)

Configure and install AWS SDK in each Amazon EC2 instance. Create a custom daemon script that
would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring
and analyze the log data of all instances using CloudWatch Logs Insights.

Explanation

To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers
both a new unified CloudWatch agent, and an older CloudWatch Logs agent. It is recommended to use the
unified CloudWatch agent which has the following advantages:

- You can collect both logs and advanced metrics with the installation and configuration of just one agent.

- The unified agent enables the collection of logs from servers running Windows Server.

- If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collection of
additional system metrics, for in-guest visibility.

- The unified agent provides better performance.

CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch
Logs. You can perform queries to help you quickly and effectively respond to operational issues. If an issue
occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.

CloudWatch Logs Insights includes a purpose-built query language with a few simple but powerful commands.
CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field
discovery to help you get started quickly. Sample queries are included for several types of AWS service logs.
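To illustrate how the team could run such a query programmatically rather than through the console, here is a minimal sketch using the CloudWatch Logs Insights API via boto3. The log group name, region, and query string are assumptions for this example only.

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # placeholder region

# Hypothetical log group populated by the unified CloudWatch agent
LOG_GROUP = "/ec2/system-audit"

# Simple Logs Insights query: the 20 most recent error lines
QUERY = """
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
"""

start = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString=QUERY,
)

# Poll until the query finishes, then print the results
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})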

Hence, the correct answer is: Configure and install the unified CloudWatch Logs agent in each Amazon EC2
instance that will automatically collect and push data to CloudWatch Logs. Analyze the log data with
CloudWatch Logs Insights.

The option that says: Configure and install AWS SDK in each Amazon EC2 instance. Create a custom
daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch
detailed monitoring and analyze the log data of all instances using CloudWatch Logs Insights is incorrect
because although this is a valid solution, this entails a lot of effort to implement as you have to allocate time to
install the AWS SDK to each instance and develop a custom monitoring solution. Remember that the question is
specifically looking for a solution that can be implemented with minimal effort. In addition, it is unnecessary and
not cost-efficient to enable detailed monitoring in CloudWatch in order to meet the requirements of this scenario
since this can be done using CloudWatch Logs.

The option that says: Configure and install the AWS Systems Manager Agent (SSM Agent) in each EC2
instance that will automatically collect and push data to CloudWatch Logs. Analyze the log data with
CloudWatch Logs Insights is incorrect because although this is also a valid solution, it is more efficient to use
the CloudWatch agent than the SSM Agent. Manually connecting to an instance to view log files and troubleshoot an
issue with the SSM Agent is time-consuming; hence, for more efficient instance monitoring, you can use the
CloudWatch agent instead to send the log data to Amazon CloudWatch Logs.

The option that says: Configure and install AWS Inspector Agent in each Amazon EC2 instance that will
collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze
the log data of all EC2 instances is incorrect because AWS Inspector is simply a security assessment service
that only helps you in checking for unintended network accessibility of your EC2 instances and for
vulnerabilities on those EC2 instances. Furthermore, setting up an Amazon CloudWatch dashboard is not
suitable since it's primarily used for scenarios where you have to monitor your resources in a single view, even
those resources that are spread across different AWS Regions. It is better to use CloudWatch Logs Insights
instead since it enables you to interactively search and analyze your log data.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/monitoring-ssm-agent.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

CloudWatch Agent vs SSM Agent vs Custom Daemon Scripts:

https://tutorialsdojo.com/cloudwatch-agent-vs-ssm-agent-vs-custom-daemon-scripts/

Question 38: Correct

A leading IT consulting firm is building a GraphQL API service and a mobile application that lets people post
photos and videos of the traffic situations and other issues in the city's public roads. Users can include a text
report and constructive feedback to the authorities. The department of public works shall rectify the problems
based on the data gathered by the system. In order for the mobile app to run on various mobile and tablet
devices, the firm decided to develop it using the React Native mobile framework, which will consume and send
data to the GraphQL API. The backend service will be responsible for storing the photos and videos in an
Amazon S3 bucket. The API will also need access to the Amazon DynamoDB database to store the text reports.
The firm has recently deployed the mobile app prototype; however, during testing, the GraphQL API showed a
lot of issues. The team decided to remove the API in order to proceed with the project and refactor the mobile
application instead so that it directly connects to both DynamoDB and S3 and also handles user authentication.
Which of the following options provides a cost-effective and scalable architecture for this project? (Select
TWO.)

Using the STS AssumeRoleWithWebIdentity API, set up a web identity federation and register with
social identity providers like Facebook, Google or any other OpenID Connect (OIDC)-compatible IdP.
Create a new IAM Role and grant permissions to allow access to Amazon S3 and DynamoDB. Configure
the mobile application to use the AWS temporary security credentials to store the photos and videos to an
S3 bucket and persist the text-based reports to the DynamoDB table.

(Correct)

Create an identity pool in Amazon Cognito that will be used to store the end-user identities organized for
your mobile app. Amazon Cognito will automatically create the required IAM roles for authenticated
identities as well as for unauthenticated "guest" identities that define permissions for Amazon Cognito
users. Download and integrate the AWS SDK for React Native with your app, and import the files
required to use Amazon Cognito. Configure your app to pass the credentials provider instance to the client
object, which passes the temporary security credentials to the client.

(Correct)

Create an identity pool in AWS Identity and Access Management (IAM) that will be used to store the end-
user identities organized for your mobile app. IAM will automatically create the required IAM roles for
authenticated identities as well as for unauthenticated "guest" identities that define permissions for the
users. Download and integrate the AWS SDK for React Native with your app, and import the files
required to use IAM. Configure your app to pass the credentials provider instance to the client object,
which passes the temporary security credentials to the client.

Using the STS AssumeRoleWithSAML API, set up a web identity federation and register with various
social identity providers like Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP.
Set up an IAM role for that provider and grant permissions for the IAM role to allow access to Amazon
S3 and DynamoDB. Configure the mobile app to use the AWS temporary security credentials to store the
photos and videos to an S3 bucket and persist the text-based reports to the DynamoDB table.

Using the STS AssumeRoleWithWebIdentity API, set up a web identity federation and register with
various social identity providers like Facebook, Google, or any other OpenID Connect (OIDC)-
compatible IdP. Set up an IAM role for that provider and grant permissions for the IAM role to allow
access to Amazon S3 and DynamoDB. Configure the mobile application to use the AWS access and
secret keys to store the photos and videos to an S3 bucket and persist the text-based report to a
DynamoDB table.

Explanation

With web identity federation, you don't need to create custom sign-in code or manage your own user identities.
Instead, users of your app can sign in using a well-known external identity provider (IdP), such as Login with
Amazon, Facebook, Google, or any of your OpenID Connect (OIDC)-compatible IdP. They can receive an
authentication token, and then exchange that token for temporary security credentials in AWS that map to an
IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS
account secure because you don't have to embed and distribute long-term security credentials with your
application.
The preferred way to use web identity federation is to use Amazon Cognito. For example, suppose you are a
developer building a game for a mobile device, where user data such as scores and profiles is stored in Amazon S3
and Amazon DynamoDB. You could also store this data locally on the device and use Amazon Cognito to keep
it synchronized across devices. For security and maintenance reasons, long-term AWS security credentials should
not be distributed with the game, and the game might have a large number of users. For all of these reasons, you
don't want to create new user identities in IAM for each player.
Instead, you build the game so that users can sign in using an identity that they've already established with a
well-known external identity provider (IdP), such as Login with Amazon, Facebook, Google, or any OpenID
Connect (OIDC)-compatible IdP. Your game can take advantage of the authentication mechanism from one of
these providers to validate the user's identity.

To enable the mobile app to access your AWS resources, you should first register for a developer ID with your
chosen IdPs. You can also configure the application with each of these providers. In your AWS account that
contains the Amazon S3 bucket and DynamoDB table for the game, you should use Amazon Cognito to create
IAM roles that precisely define permissions that the game needs. If you are using an OIDC IdP, you can also
create an IAM OIDC identity provider entity to establish trust between your AWS account and the IdP.

In the app's code, you can call the sign-in interface for the IdP that you configured previously. The IdP handles
all the details of letting the user sign in, and the app gets an OAuth access token or OIDC ID token from the
provider. Your mobile app can trade this authentication information for a set of temporary security credentials
that consist of an AWS access key ID, a secret access key, and a session token. The app can then use these
credentials to access web services offered by AWS. The app is limited to the permissions that are defined in the
role that it assumes.

In this scenario, you have a mobile app that needs to have access to the DynamoDB and S3 bucket. You can
achieve this by using Web Identity Federation with AssumeRoleWithWebIdentity API which provides
temporary security credentials and an IAM role. You can also use Amazon Cognito to simplify the process.
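As a rough sketch of that exchange, the flow of trading an IdP token for temporary credentials and then calling S3 and DynamoDB could look like the following. The role ARN, region, bucket, table, and the token value are placeholders for illustration, not values from the scenario.

import boto3

# STS calls for web identity federation are unsigned, so no AWS credentials are needed here
sts = boto3.client("sts", region_name="us-east-1")  # placeholder region

# Placeholders: the IAM role created for the identity provider and the OIDC token
# returned by the IdP after the user signs in.
ROLE_ARN = "arn:aws:iam::123456789012:role/MobileAppFederatedRole"
oidc_token = "<token returned by Login with Amazon / Facebook / Google>"

resp = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="mobile-app-user",
    WebIdentityToken=oidc_token,
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Temporary security credentials: access key, secret key, and session token
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# The app can now use these scoped-down credentials to reach S3 and DynamoDB
session.client("s3").put_object(Bucket="traffic-reports-media", Key="photo.jpg", Body=b"...")
session.resource("dynamodb").Table("TrafficReports").put_item(
    Item={"reportId": "r-1", "text": "Pothole on Main St"}
)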

Hence, the correct answers are:

- Using the STS AssumeRoleWithWebIdentity API, set up a web identity federation and register with
social identity providers like Facebook, Google or any other OpenID Connect (OIDC)-compatible IdP.
Create a new IAM Role and grant permissions to allow access to Amazon S3 and DynamoDB. Configure
the mobile application to use the AWS temporary security credentials to store the photos and videos to an
S3 bucket and persist the text-based reports to the DynamoDB table.

- Create an identity pool in Amazon Cognito that will be used to store the end-user identities organized for
your mobile app. Amazon Cognito will automatically create the required IAM roles for authenticated
identities as well as for unauthenticated "guest" identities that define permissions for Amazon Cognito
users. Download and integrate the AWS SDK for React Native with your app, and import the files
required to use Amazon Cognito. Configure your app to pass the credentials provider instance to the
client object, which passes the temporary security credentials to the client.

The option that says: Create an identity pool in AWS Identity and Access Management (IAM) that will be
used to store the end-user identities organized for your mobile app. IAM will automatically create the
required IAM roles for authenticated identities as well as for unauthenticated "guest" identities that
define permissions for the users. Download and integrate the AWS SDK for React Native with your app,
and import the files required to use IAM. Configure your app to pass the credentials provider instance to
the client object, which passes the temporary security credentials to the client is incorrect because you
cannot create identity pools with guest identities using the AWS Identity and Access Management (IAM)
service. You can only implement this using Amazon Cognito.

The option that says: Using the STS AssumeRoleWithSAML API, set up a web identity federation and
register with various social identity providers like Facebook, Google, or any other OpenID Connect
(OIDC)-compatible IdP. Set up an IAM role for that provider and grant permissions for the IAM role to
allow access to Amazon S3 and DynamoDB. Configure the mobile app to use the AWS temporary security
credentials to store the photos and videos to an S3 bucket and persist the text-based reports to the
DynamoDB table is incorrect because you should have used the AssumeRoleWithWebIdentity API instead
of AssumeRoleWithSAML, as the latter is used with a SAML authentication response and not for web identity
authentication.

The option that says: Using the STS AssumeRoleWithWebIdentity API, set up a web identity federation
and register with various social identity providers like Facebook, Google, or any other OpenID Connect
(OIDC)-compatible IdP. Set up an IAM role for that provider and grant permissions for the IAM role to
allow access to Amazon S3 and DynamoDB. Configure the mobile application to use the AWS access and
secret keys to store the photos and videos to an S3 bucket and persist the text-based report to a
DynamoDB table is incorrect because even though the use of Amazon Cognito is valid, it is wrong to store and
use the AWS access and secret keys from the mobile app itself. This is a security risk and you should use the
temporary security credentials instead.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_cognito.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

AWS Identity Services Overview:

https://youtu.be/AIdUw0i8rr0

Check out this AWS IAM Cheat Sheet:

https://tutorialsdojo.com/aws-identity-and-access-management-iam/

Question 39: Incorrect

A digital payment gateway system is running in AWS which serves thousands of businesses worldwide. It is
hosted in an Auto Scaling Group of EC2 instances behind an Application Load Balancer with an Amazon RDS
database in a Multi-AZ deployment configuration. The company is using several CloudFormation templates in
deploying the new version of the system. The AutoScalingRollingUpdate policy is used to control how
CloudFormation handles rolling updates for their Auto Scaling group which replaces the old instances based on
the parameters they have set. Lately, there were a lot of failed deployments which has caused system
unavailability issues and business disruptions. They want to find out what's preventing their Auto Scaling group
from updating correctly during a stack update. In this scenario, how should the DevOps engineer troubleshoot
this issue? (Select THREE.)

Switch from AutoScalingRollingUpdate to AutoScalingReplacingUpdate policy by modifying
the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the CloudFormation template.
Set the WillReplace property to true.

In your AutoScalingRollingUpdate policy, set the WaitOnResourceSignals property to false.

(Correct)

Set the WaitOnResourceSignals property to true in your AutoScalingRollingUpdate policy.

In your AutoScalingRollingUpdate policy, set the value of the MinSuccessfulInstancesPercent property to
prevent AWS CloudFormation from rolling back the entire stack if only a single instance fails to launch.

(Correct)

During a rolling update, suspend the following Auto Scaling
processes: HealthCheck, ReplaceUnhealthy, AZRebalance, AlarmNotification, and ScheduledActions.

(Correct)

Suspend the following Auto Scaling processes that are related with your ELB: Launch, Terminate,
and AddToLoadBalancer.

Explanation

The AWS::AutoScaling::AutoScalingGroup resource uses the UpdatePolicy attribute to define how an Auto
Scaling group resource is updated when the AWS CloudFormation stack is updated. If you don't have the right
settings configured for the UpdatePolicy attribute, your rolling update can produce unexpected results.

You can use the AutoScalingRollingUpdate policy to control how AWS CloudFormation handles rolling updates
for an Auto Scaling group. This common approach keeps the same Auto Scaling group, and then replaces the old
instances based on the parameters that you set.

The AutoScalingRollingUpdate policy supports the following configuration options:


1. "UpdatePolicy": {

2. "AutoScalingRollingUpdate": {

3. "MaxBatchSize": Integer,

4. "MinInstancesInService": Integer,

5. "MinSuccessfulInstancesPercent": Integer,

6. "PauseTime": String,

7. "SuspendProcesses": [ List of processes ],

8. "WaitOnResourceSignals": Boolean

9. }
10. }

Using a rolling update has a risk of system outages and performance degradation due to the decreased
availability of your running EC2 instances. If you want to ensure high availability of your application, you can
also use the AutoScalingReplacingUpdate policy to perform an immediate rollback of the stack without any
possibility of failure.

To find out what's preventing your Auto Scaling group from updating correctly during a stack update, work
through the following troubleshooting scenarios as needed:

- Configure WaitOnResourceSignals and PauseTime to avoid problems with success signals

In your AutoScalingRollingUpdate policy, set the WaitOnResourceSignals property to false. Take note that
if WaitOnResourceSignals is set to true, PauseTime changes to a timeout value. AWS CloudFormation waits to
receive a success signal until the maximum time specified by the PauseTime value. If a signal is not received,
AWS CloudFormation cancels the update. Then, AWS CloudFormation rolls back the stack with the same
settings, including the same PauseTime value.

- Configure MinSuccessfulInstancesPercent to avoid stack rollback

If you're replacing a large number of instances during a rolling update and waiting for a success signal for each
instance, complete the following: In your AutoScalingRollingUpdate policy, set the value of
the MinSuccessfulInstancesPercent property. Take note that setting the MinSuccessfulInstancesPercent property
prevents AWS CloudFormation from rolling back the entire stack if only a single instance fails to launch.

- Configure SuspendProcesses to avoid unexpected changes to the Auto Scaling group

During a rolling update, suspend the following Auto Scaling
processes: HealthCheck, ReplaceUnhealthy, AZRebalance, AlarmNotification, and ScheduledActions. It is quite
important to know that if you're using your Auto Scaling group with Elastic Load Balancing (ELB), you should
not suspend the following processes: Launch, Terminate, and AddToLoadBalancer. These processes are required
to make rolling updates. Take note that if an unexpected scaling action changes the state of the Auto Scaling
group during a rolling update, the update can fail. The failure can result from an inconsistent view of the group
by AWS CloudFormation.
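To tie these recommendations together, the UpdatePolicy values could end up looking roughly like the sketch below, expressed here as a Python dictionary purely for illustration. In a real template this is the UpdatePolicy attribute of the AWS::AutoScaling::AutoScalingGroup resource, and the batch sizes, percentage, and pause time shown are example values only.

# Sketch of an UpdatePolicy reflecting the troubleshooting guidance above.
update_policy = {
    "AutoScalingRollingUpdate": {
        "MaxBatchSize": 2,
        "MinInstancesInService": 2,
        # Don't roll back the whole stack just because one instance failed to launch
        "MinSuccessfulInstancesPercent": 80,
        "PauseTime": "PT5M",
        # Avoid unexpected scaling actions changing the group mid-update;
        # Launch, Terminate, and AddToLoadBalancer are deliberately NOT suspended
        "SuspendProcesses": [
            "HealthCheck",
            "ReplaceUnhealthy",
            "AZRebalance",
            "AlarmNotification",
            "ScheduledActions",
        ],
        # Avoid waiting on success signals (and the PauseTime timeout behavior)
        "WaitOnResourceSignals": False,
    }
}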

Based on the above information, the correct answers are:

- In your AutoScalingRollingUpdate policy, set the WaitOnResourceSignals property to false.

- In your AutoScalingRollingUpdate policy, set the value of the MinSuccessfulInstancesPercent property to
prevent AWS CloudFormation from rolling back the entire stack if only a single instance fails to launch

- During a rolling update, suspend the following Auto Scaling
processes: HealthCheck, ReplaceUnhealthy, AZRebalance, AlarmNotification, and ScheduledActions

The option that says: Switch from AutoScalingRollingUpdate to AutoScalingReplacingUpdate policy by
modifying the UpdatePolicy of the AWS::AutoScaling::AutoscalingGroup resource in the CloudFormation
template. Set the WillReplace property to true is incorrect because although
the AutoScalingReplacingUpdate policy provides an immediate rollback of the stack without any possibility of
failure, this solution is not warranted since the scenario asks for the options that will help troubleshoot the issue.

The option that says: Suspend the following Auto Scaling processes that are related with your
ELB: Launch, Terminate, and AddToLoadBalancer is incorrect because these processes are required by the ELB to
make rolling updates.

The option that says: Set the WaitOnResourceSignals property to true in
your AutoScalingRollingUpdate policy is incorrect. The WaitOnResourceSignals property should be set to false
instead of true to determine what prevents the Auto Scaling group from being updated correctly during a stack
update.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-replacingupdate

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

Question 40: Correct

A startup aims to rearchitect its internal web application hosted on Amazon EC2 into serverless architecture. At
present, the startup deploys changes to the application by provisioning a new Auto Scaling group of EC2
instances across multiple Availability Zones and is fronted with a new Application Load Balancer. It then shifts
the traffic with the use of Amazon Route 53 weighted routing policy. The DevOps Engineer of the startup will
need to design a deployment strategy for serverless architecture similar to the current process that retains the
ability to test new features with a limited set of users before making the features accessible to the entire user
base. The startup plans to use AWS Lambda and Amazon API Gateway for the serverless architecture.

Which of the following is the MOST suitable solution to meet the requirements?

Utilize AWS OpsWorks to deploy the Lambda functions in the custom layer and the API Gateway in the
service layer. When there are code changes, use OpsWorks blue/green deployment strategy, then
gradually redirect traffic

Utilize AWS Elastic Beanstalk to deploy Lambda functions and API Gateway. When there are code
changes, a new version of both Lambda functions and API should be deployed. Use Elastic Beanstalk's
blue/green deployment strategy to shift traffic gradually.

Deploy Lambda functions and API Gateway via AWS CDK. When there are code changes, update the
CloudFormation Stack and deploy the new version of the Lambda functions and APIs. Enable canary
release strategy by utilizing Amazon Route 53 failover routing policy.

Deploy Lambda functions with versions and API Gateway using AWS CloudFormation. When there are
code changes, update the CloudFormation stack with the new Lambda code then a canary release strategy
should be used to update the API versions. Once testing is done, promote the new version.
(Correct)

Explanation

With the introduction of alias traffic shifting, implementing canary deployments of Lambda functions has become
effortless. The weight of an additional version can be adjusted on an alias to route a specified percentage of
invocation traffic to the new function version.

In API Gateway, a canary release deployment uses the deployment stage for the production release of the base
version of an API, and attaches to the stage a canary release for the new versions, relative to the base version, of
the API. The stage is associated with the initial deployment and the canary with subsequent deployments.
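For reference, the API Gateway half of such a canary release could be initiated with a call along the lines of the sketch below; the REST API ID, stage name, and traffic percentage are placeholder assumptions for illustration only.

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")  # placeholder region

# Deploy the new API version as a canary that receives 10% of the stage traffic.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",   # placeholder REST API ID
    stageName="prod",         # placeholder stage
    description="Canary deployment of the new serverless version",
    canarySettings={
        "percentTraffic": 10.0,
        "useStageCache": False,
    },
)

# Once testing succeeds, the canary can be promoted by updating the stage to serve
# the canary's deployment and then removing the canary settings.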

Hence, the correct answer is the option that says: Deploy Lambda functions with versions and API Gateway
using AWS CloudFormation. When there are code changes, update the CloudFormation stack with the
new Lambda code then a canary release strategy should be used to update the API versions. Once testing
is done, promote the new version.

The option that says: Deploy Lambda functions and API Gateway via AWS CDK. When there are code
changes, update the CloudFormation Stack and deploy the new version of the Lambda functions and
APIs. Enable canary release strategy by utilizing Amazon Route 53 failover routing policy is incorrect
because failover routing policy is primarily used for active-passive failover that lets you route traffic to a
resource when the resource is healthy or to a different resource when the first resource is unhealthy. In addition,
Route 53 cannot set Lambda versions as a target.

The option that says: Utilize AWS Elastic Beanstalk to deploy
Lambda functions and API Gateway. When there are code changes, a new version of both Lambda
functions and API should be deployed. Use Elastic Beanstalk's blue/green deployment strategy to shift
traffic gradually is incorrect because Elastic Beanstalk cannot deploy Lambda functions and API Gateway.

The option that says: Utilize AWS OpsWorks to deploy the Lambda functions in the custom layer and the
API Gateway in the service layer. When there are code changes, use OpsWorks blue/green deployment
strategy, then gradually redirect traffic is incorrect because AWS OpsWorks is a configuration management
service that provides managed instances of Chef and Puppet. It is not used for deploying Lambda functions and
API Gateway.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html#api-gateway-canary-release-deployment-overview

https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/

https://aws.amazon.com/blogs/compute/performing-canary-deployments-for-service-integrations-with-amazon-api-gateway/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 41: Incorrect


You have migrated your application API server from a cluster of EC2 instances to a combination of API gateway
and AWS Lambda. You are used to canary deployments on your EC2 cluster where you carefully check any
errors on the application before doing the full deployment. However, you can’t do this on your current AWS
Lambda setup since the deployment switches quickly from one version to another.

How can you implement the same functionality on AWS Lambda?

Deploy your app using Traffic shifting with AWS Lambda aliases.

(Correct)

Use CodeDeploy to perform rolling update of the latest Lambda function.

Use Route 53 weighted routing policy with API Gateway.

(Incorrect)

Deploy your app using Traffic shifting with Amazon Route 53.

Explanation

By default, an alias points to a single Lambda function version. When the alias is updated to point to a different
function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias
to any potential instabilities introduced by the new version. To minimize this impact, you can implement the
routing-config parameter of the Lambda alias that allows you to point to two different versions of the Lambda
function and dictate what percentage of incoming traffic is sent to each version.

With the introduction of alias traffic shifting, it is now possible to trivially implement canary deployments of
Lambda functions. By updating additional version weights on an alias, invocation traffic is routed to the new
function versions based on the weight specified. Detailed CloudWatch metrics for the alias and version can be
analyzed during the deployment, or other health checks performed, to ensure that the new version is healthy
before proceeding.
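Concretely, shifting a small slice of traffic to a new version through an alias is a single UpdateAlias call; the function name, alias name, version numbers, and weight below are placeholders for illustration.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")  # placeholder region

# Point the "live" alias at version 1, but send 10% of invocations to version 2.
lambda_client.update_alias(
    FunctionName="theme-api",   # placeholder function name
    Name="live",                # placeholder alias name
    FunctionVersion="1",        # stable version still receives 90% of traffic
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)

# After validating CloudWatch metrics for version 2, complete the shift:
lambda_client.update_alias(
    FunctionName="theme-api",
    Name="live",
    FunctionVersion="2",
    RoutingConfig={"AdditionalVersionWeights": {}},
)

Because the shift is controlled entirely by the alias, rolling back is just another UpdateAlias call that removes the additional version weight.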

Hence, the correct answer is: Deploy your app using Traffic shifting with AWS Lambda aliases.

The option that says: Use CodeDeploy to perform rolling update of the latest Lambda function is incorrect
because AWS Lambda does not support native rolling update. Deployments of a Lambda function could only be
performed in a single flip by updating the function code for version $LATEST, or by updating an alias to target a
different function version.

The option that says: Deploy your app using Traffic shifting with Amazon Route 53 is incorrect because
Route 53 does not support traffic shifting for Lambda deployments.

The option that says: Use Route 53 weighted routing policy with API Gateway is incorrect. Although
technically valid, this is not an efficient solution to perform canary deployments of Lambda functions. Alias
traffic shifting splits traffic between two versions of the same function in the backend, allowing you to maintain a
single instance of API Gateway + Lambda. With weighted routing, you would need to deploy the new version on a
separate instance of API Gateway + Lambda.

References:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/

https://docs.aws.amazon.com/lambda/latest/dg/lambda-traffic-shifting-using-aliases.html

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html

Check out this AWS Lambda Cheat Sheet:

https://tutorialsdojo.com/aws-lambda/

AWS Lambda Overview - Serverless Computing in AWS:

https://youtu.be/bPVX1zHwAnY

Question 42: Correct

A multinational corporation has multiple AWS accounts that are consolidated using AWS Organizations. For
security purposes, a new system should be configured that automatically detects suspicious activities in any of its
accounts, such as SSH brute force attacks or compromised EC2 instances that serve malware. All of the gathered
information must be centrally stored in its dedicated security account for audit purposes, and the events should
be stored in an S3 bucket.

As a DevOps Engineer, which solution should you implement in order to meet this
requirement?

Automatically detect SSH brute force or malware attacks by enabling Amazon Macie in every account.
Set up the security account as the Macie Administrator for every member account of the organization.
Create an Amazon CloudWatch Events rule in the security account. Configure the rule to send all findings
to Amazon Kinesis Data Firehose, which should push the findings to an Amazon S3 bucket.

Automatically detect SSH brute force or malware attacks by enabling Amazon Macie in the security
account only. Configure the security account as the Macie Administrator for every member account. Set
up a new CloudWatch Events rule in the security account. Configure the rule to send all findings to
Amazon Kinesis Data Streams. Launch a custom shell script in Lambda function to read data from the
Kinesis Data Streams and push them to the S3 bucket.

Automatically detect SSH brute force or malware attacks by enabling Amazon GuardDuty in the security
account only. Set up the security account as the GuardDuty Administrator for every member account.
Create a new CloudWatch rule in the security account. Configure the rule to send all findings to Amazon
Kinesis Data Streams. Launch a custom shell script in Lambda function to read data from the Kinesis Data
Streams and push them to the S3 bucket.

Automatically detect SSH brute force or malware attacks by enabling Amazon GuardDuty in every
account. Configure the security account as the GuardDuty Administrator for every member of the
organization. Set up a new CloudWatch rule in the security account. Configure the rule to send all
findings to Amazon Kinesis Data Firehose, which will push the findings to the S3 bucket.

(Correct)

Explanation

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect your AWS accounts and workloads. With the cloud, the collection and
aggregation of account and network activities is simplified, but it can be time consuming for security teams to
continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-
effective option for continuous threat detection in the AWS Cloud. The service uses machine learning, anomaly
detection, and integrated threat intelligence to identify and prioritize potential threats. GuardDuty analyzes tens
of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC Flow Logs, and
DNS logs. With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or
hardware to deploy or maintain. By integrating with AWS CloudWatch Events, GuardDuty alerts are actionable,
easy to aggregate across multiple accounts, and straightforward to push into existing event management and
workflow systems.

GuardDuty makes enablement and management across multiple accounts easy. Through the multi-account
feature, all member accounts findings can be aggregated with a GuardDuty administrator account. This enables
security team to manage all GuardDuty findings from across the organization in one single account. The
aggregated findings are also available through CloudWatch Events, making it easy to integrate with an existing
enterprise event management system.
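In the security account, the rule and target could be created roughly as follows. The region, delivery stream ARN, and IAM role ARN are placeholders, and the Kinesis Data Firehose delivery stream (already configured to deliver to the audit S3 bucket) is assumed to exist.

import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # placeholder region

# Match every GuardDuty finding aggregated into the security (administrator) account
events.put_rule(
    Name="guardduty-findings-to-s3",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Send matched findings to a Kinesis Data Firehose delivery stream that writes to S3.
# The ARNs below are placeholders; the IAM role must allow firehose:PutRecord.
events.put_targets(
    Rule="guardduty-findings-to-s3",
    Targets=[{
        "Id": "firehose-to-audit-bucket",
        "Arn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/guardduty-findings",
        "RoleArn": "arn:aws:iam::111122223333:role/EventsToFirehoseRole",
    }],
)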

Hence, the correct answer is: Automatically detect SSH brute force or malware attacks by enabling Amazon
GuardDuty in every account. Configure the security account as the GuardDuty Administrator for every
member of the organization. Set up a new CloudWatch rule in the security account. Configure the rule to
send all findings to Amazon Kinesis Data Firehose, which will push the findings to the S3 bucket.

The option that says: Automatically detect SSH brute force or malware attacks by enabling Amazon Macie in
every account. Set up the security account as the Macie Administrator for every member account of the
organization. Create an Amazon CloudWatch Events rule in the security account. Configure the rule to send
all findings to Amazon Kinesis Data Firehose, which should push the findings to an Amazon S3 bucket is
incorrect because you have to use Amazon GuardDuty instead of Amazon Macie. Take note that Amazon Macie
cannot detect SSH brute force or malware attacks.

The option that says: Automatically detect SSH brute force or malware attacks by enabling Amazon Macie in
the security account only. Configure the security account as the Macie Administrator for every member
account. Set up a new CloudWatch Events rule in the security account. Configure the rule to send all
findings to Amazon Kinesis Data Streams. Launch a custom shell script in Lambda function to read data
from the Kinesis Data Streams and push them to the S3 bucket is incorrect because you don't need to create a
custom shell script in Lambda or use Kinesis Data Streams. You can simply configure the CloudWatch Event
rule to send all findings to Amazon Kinesis Data Firehose, which will push the findings to the S3 bucket.

The option that says: Automatically detect SSH brute force or malware attacks by enabling Amazon
GuardDuty in the security account only. Set up the security account as the GuardDuty Administrator for
every member account. Create a new CloudWatch rule in the security account. Configure the rule to send all
findings to Amazon Kinesis Data Streams. Launch a custom shell script in Lambda function to read data
from the Kinesis Data Streams and push them to the S3 bucket is incorrect because although it is valid to use
Amazon GuardDuty in this scenario, the implementation for storing the findings is incorrect. You can simply
configure the CloudWatch Event rule to send all findings to Amazon Kinesis Data Firehose, which will push the
findings to the S3 bucket.

References:
https://aws.amazon.com/guardduty/

https://aws.amazon.com/blogs/security/how-to-manage-amazon-guardduty-security-findings-across-multiple-accounts/

Check out this Amazon GuardDuty Cheat Sheet:

https://tutorialsdojo.com/amazon-guardduty/

Question 43: Correct

A large hospital has an online medical record system that is hosted in a fleet of Windows EC2 instances with
several EBS volumes attached to it. The IT Security team mandated that the latest security patches should be
installed to all of their Amazon EC2 instances using an automated patching system. They also have to
implement functionality that checks whether all of their EC2 instances are using an approved Amazon Machine
Image (AMI) in their AWS Cloud environment. The patching system should not impede developers from
launching instances using an unapproved AMI, but nonetheless, they still have to be notified if there are
non-compliant EC2 instances in their VPC.

As a DevOps Engineer, which of the following should you implement to protect and monitor all of your instances
as required above?

Use Amazon GuardDuty to continuously monitor your Amazon EC2 instances if the latest security
patches are installed and also to check if there are any unapproved AMIs being used. Use CloudWatch
Alarms to notify you if there are any non-compliant instances running in your VPC.

Set up an IAM policy that will restrict the developers from launching EC2 instances with an unapproved
AMI. Use CloudWatch Alarms to notify you if there are any non-compliant instances running in your
VPC.

Create a patch baseline in AWS Systems Manager Patch Manager that defines which patches are
approved for installation on your instances. Set up the AWS Config Managed Rule which automatically
checks whether your running EC2 instances are using approved AMIs. Create CloudWatch Alarms to
notify you if there are any non-compliant instances running in your VPC.

(Correct)

Automatically patch all of your Amazon EC2 instances and detect uncompliant EC2 instances which do
not use approved AMIs using AWS Shield Advanced. Create CloudWatch Alarms to notify you if there
are any non-compliant instances running in your VPC.

Explanation

AWS Systems Manager Patch Manager automates the process of patching managed instances with security-
related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch
fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system
type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL),
SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances
to see only a report of missing patches, or you can scan and automatically install all missing patches.

Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their
release, as well as a list of approved and rejected patches. You can install patches on a regular basis by
scheduling patching to run as a Systems Manager Maintenance Window task. You can also install patches
individually or to large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines
themselves when you create or update them.

AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to
evaluate whether your AWS resources comply with common best practices. For example, you could use a
managed rule to quickly start assessing whether your Amazon Elastic Block Store (Amazon EBS) volumes are
encrypted or whether specific tags are applied to your resources. You can set up and activate these rules without
writing the code to create an AWS Lambda function, which is required if you want to create custom rules. The
AWS Config console guides you through the process of configuring and activating a managed rule. You can also
use the AWS Command Line Interface or AWS Config API to pass the JSON code that defines your
configuration of a managed rule.
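For instance, the AMI compliance check can be enabled with the approved-amis-by-id managed rule; the rule name, region, and AMI IDs below are placeholder assumptions, and AWS Config is assumed to already be recording EC2 resources.

import json
import boto3

config = boto3.client("config", region_name="us-east-1")  # placeholder region

# Enable the AWS managed rule that flags EC2 instances not launched from approved AMIs.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "APPROVED_AMIS_BY_ID",
        },
        # Comma-separated list of approved AMI IDs (placeholders here)
        "InputParameters": json.dumps({
            "amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"
        }),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)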

In this scenario, you can use a combination of AWS Config Managed Rules and AWS Systems Manager Patch
Manager to meet the requirements. Hence, the correct option is: Create a patch baseline in AWS Systems
Manager Patch Manager that defines which patches are approved for installation on your instances. Set
up the AWS Config Managed Rule which automatically checks whether your running EC2 instances are
using approved AMIs. Create CloudWatch Alarms to notify you if there are any non-compliant instances
running in your VPC.

The option that says: Set up an IAM policy that will restrict the developers from launching EC2 instances
with an unapproved AMI. Use CloudWatch Alarms to notify you if there are any non-compliant instances
running in your VPC is incorrect. Although you can use an IAM policy to prohibit your developers from
launching unapproved AMIs, this will impede their work which violates what the scenario requires. Remember,
it is stated in the scenario that the system that you will implement should not impede developers from launching
instances using an unapproved AMI.

The option that says: Use Amazon GuardDuty to continuously monitor your Amazon EC2 instances if the
latest security patches are installed and also to check if there are any unapproved AMIs being used. Use
CloudWatch Alarms to notify you if there are any non-compliant instances running in your VPC is
incorrect because Amazon GuardDuty is primarily used as a threat detection service that continuously monitors
for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for
activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account
compromise, however, it does not check if your EC2 instances are using an approved AMI or not.

The option that says: Automatically patch all of your Amazon EC2 instances and detect uncompliant EC2
instances which do not use approved AMIs using AWS Shield Advanced. Create CloudWatch Alarms to
notify you if there are any non-compliant instances running in your VPC is incorrect because the AWS
Shield Advanced service is more suitable in preventing DDoS attacks in your AWS resources. It cannot check
the specific AMIs that your EC2 instances are using.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html

https://docs.aws.amazon.com/config/latest/developerguide/approved-amis-by-id.html

Check out these cheat sheets on AWS Config and AWS Systems Manager:

https://tutorialsdojo.com/aws-config/

https://tutorialsdojo.com/aws-systems-manager/

Question 44: Incorrect

You are working on a web application that allows designers to create mobile themes for their android phones.
The app is hosted on a cluster of Auto Scaling ECS instances and its deployments are handled by AWS
CodeDeploy. ALB health checks are not sufficient to tell whether new version deployments are successful;
instead, you have custom validation scripts that verify all APIs of the application. You want to make sure that
there are no 5XX error replies on the new version before continuing the production deployment and that you are
notified via email if the validation results fail. You also want to configure an automatic rollback to the older
version when the validation
fails. Which combination of options should you implement to meet this requirement? (Select THREE.)

Have AWS CloudWatch Alarms trigger an AWS SNS notification when the threshold for 5xx is reached
on CloudWatch.

(Correct)

Associate CloudWatch Alarms to your deployment group to have it trigger a rollback when the 5xx error
alarm is active.

(Correct)

Have AWS Lambda trigger an AWS SNS notification after performing the validation of the new app
revision.

(Incorrect)

Create your validation scripts on AWS Lambda and define the functions on the AppSpec lifecycle hook to
validate the app using test traffic.

(Correct)

Create your validation scripts on AWS Lambda and invoke them after deployment to validate your new
app version.

Have Lambda send results to AWS CloudWatch Alarms directly and trigger a rollback when 5xx reply
errors are received during deployment.

Explanation

You can use CloudWatch Alarms to track metrics on your new deployment and you can set thresholds for those
metrics in your Auto Scaling groups being managed by CodeDeploy. This can invoke an action if the metric you
are tracking crosses the threshold for a defined period of time. You can also monitor metrics such as instance
CPU utilization, Memory utilization or custom metrics you have configured. If the alarm is activated,
CloudWatch initiates actions such as sending a notification to Amazon Simple Notification Service, stopping a
CodeDeploy deployment, or changing the state of an instance. You will also have the option to automatically roll
back a deployment when a deployment fails or when a CloudWatch alarm is activated. CodeDeploy will
redeploy the last known working version of the application when it rolls back.

With Amazon SNS, you can create triggers that send notifications to subscribers of an Amazon SNS topic when
specified events, such as success or failure events, occur in deployments and instances. CloudWatch Alarms can
trigger sending out notification to your configured SNS topic.

The AfterAllowTestTraffic lifecycle hook of the AppSpec.yaml file allows you to use Lambda functions to
validate the new version task set using the test traffic during the deployment. For example, a Lambda function
can serve traffic to the test listener and track metrics from the replacement task set. If rollbacks are configured,
you can configure a CloudWatch alarm that triggers a rollback when the validation test in your Lambda function
fails.
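A validation Lambda wired into the AfterAllowTestTraffic hook might follow the shape below. The test endpoint and the validation check itself are placeholders; the important part is that the function reports its result back to CodeDeploy so a failed validation can feed the rollback and alarm path described above.

import urllib.request
import boto3

codedeploy = boto3.client("codedeploy")

# Placeholder: the test listener endpoint serving the replacement task set
TEST_ENDPOINT = "http://internal-test-listener.example.com/api/health"

def handler(event, context):
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded"
    try:
        # Minimal check: any 5xx reply from the new version fails the validation
        with urllib.request.urlopen(TEST_ENDPOINT, timeout=10) as resp:
            if resp.status >= 500:
                status = "Failed"
    except Exception:
        status = "Failed"

    # Tell CodeDeploy whether the AfterAllowTestTraffic hook passed
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )
    return status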

Hence, the correct answers are:

- Create your validation scripts on AWS Lambda and define the functions on the AppSpec lifecycle hook to
validate the app using test traffic.

- Have AWS CloudWatch Alarms trigger an AWS SNS notification when the threshold for 5xx is reached on
CloudWatch.

- Associate CloudWatch Alarms to your deployment group to have it trigger a rollback when the 5xx error
alarm is active.

The option that says: Create your validation scripts on AWS Lambda and invoke them after deployment to
validate your new app version is incorrect since you will have to trigger the Lambda validation functions before
going to production traffic. This also gives you the opportunity to rollback the deployment in case it is not
working as expected.

The option that says: Have AWS Lambda trigger an AWS SNS notification after performing the validation of
the new app revision is incorrect because you will need to monitor the results of each API test call from Lambda
if you are going to implement this. It is better to implement results monitoring as a threshold on CloudWatch.

The option that says: Have Lambda send results to AWS CloudWatch Alarms directly and trigger a rollback
when 5xx reply errors are received during deployment is incorrect because CloudWatch Alarms can’t receive
direct test results from AWS Lambda.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-rollback-and-redeploy.html#deployments-rollback-and-redeploy-automatic-rollbacks

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups-configure-advanced-options.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Question 45: Correct


A telecommunications company has a web portal that requires a cross-region failover. The portal stores its data
in an Amazon Aurora database in the primary region (us-west-1) and the Parallel Query feature is also enabled in
the database to optimize some of the I/O and computation involved in processing data-intensive queries. The
portal uses Route 53 to direct customer traffic to the active region.

Which of the options below should be taken to MINIMIZE downtime of the portal in the event that the primary
database fails?

Configure the Route 53 record to balance traffic between both regions equally using the Weighted routing
policy. Enable the Aurora multi-master option and set up a Route 53 health check to analyze the health of
the databases. Set the Route 53 record to automatically direct all traffic to the secondary region when a
primary database fails.

Launch a read replica of the primary database to the second region. Set up Amazon RDS Event
Notification to publish status updates to an SNS topic. Create a Lambda function subscribed to the topic to
monitor database health. Configure the Lambda function to promote the read replica as the primary in the
event of a failure. Update the Route 53 record to redirect traffic from the primary region to the secondary
region.

(Correct)

Set up CloudWatch to monitor the status of the Amazon Aurora database. Create a CloudWatch Events
rule to send a Slack message to the SysOps Team using Amazon SNS in the event of a database outage.
Instruct the SysOps team to redirect traffic to an S3 static website that displays a downtime message.
Manually promote the read replica as the primary instance and verify the portal's status. Redirect traffic
from the S3 website to the secondary region.

Create a CloudWatch Events rule to periodically invoke a Lambda function that checks the health of the
primary Amazon Aurora database every hour. Configure the Lambda function to promote the read replica
as the primary if a failure was detected. Update the Route 53 record to redirect traffic from the primary to
the secondary region.

Explanation

Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an
Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for
an AWS Region, such as an email, a text message, or a call to an HTTP endpoint.

Amazon RDS groups these events into categories that you can subscribe to so that you can be notified when an
event in that category occurs. You can subscribe to an event category for a DB instance, DB cluster, DB cluster
snapshot, DB parameter group, or DB security group. For example, if you subscribe to the Backup category for a
given DB instance, you are notified whenever a backup-related event occurs that affects the DB instance. If you
subscribe to a configuration change category for a DB security group, you are notified when the DB security
group is changed. You also receive notification when an event notification subscription changes.

For Amazon Aurora, events occur at both the DB cluster and the DB instance level, so you can receive events if
you subscribe to an Aurora DB cluster or an Aurora DB instance.
Event notifications are sent to the addresses that you provide when you create the subscription. You might want
to create several different subscriptions, such as one subscription receiving all event notifications and another
subscription that includes only critical events for your production DB instances. You can easily turn off
notification without deleting a subscription by choosing No for Enabled in the Amazon RDS console or by
setting the Enabled parameter to false using the AWS CLI or Amazon RDS API.

When you copy a snapshot to an AWS Region that is different from the source snapshot's AWS Region, the first
copy is a full snapshot copy, even if you copy an incremental snapshot. A full snapshot copy contains all of the
data and metadata required to restore the DB instance. After the first snapshot copy, you can copy incremental
snapshots of the same DB instance to the same destination region within the same AWS account.

Depending on the AWS Regions involved and the amount of data to be copied, a cross-region snapshot copy
can take hours to complete. In some cases, there might be a large number of cross-region snapshot copy requests
from a given source AWS Region. In these cases, Amazon RDS might put new cross-region copy requests from
that source AWS Region into a queue until some in-progress copies complete. No progress information is
displayed about copy requests while they are in the queue. Progress information is displayed when the copy
starts.

This means that a cross-region snapshot doesn't provide as low an RPO as a read replica, since the snapshot copy
takes significant time to complete. Although this is better than a Multi-AZ deployment because you can
replicate your database across AWS Regions, using a read replica is still the best choice for achieving a low
RTO and RPO for disaster recovery.
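At a high level, the Lambda function subscribed to the SNS topic could promote the secondary cluster and repoint DNS along the lines of the sketch below. The secondary region, cluster identifier, hosted zone ID, record name, and endpoint are placeholders, and production code would first inspect the RDS event payload to confirm an actual failure before acting.

import boto3

# Clients in the secondary region for the promotion (region is a placeholder);
# Route 53 is a global service.
rds = boto3.client("rds", region_name="us-east-1")
route53 = boto3.client("route53")

def handler(event, context):
    # In practice, parse the SNS-wrapped RDS event here and only act on failure events.

    # Promote the cross-region Aurora replica cluster to a standalone primary
    rds.promote_read_replica_db_cluster(
        DBClusterIdentifier="portal-aurora-secondary"  # placeholder identifier
    )

    # Repoint the portal's DNS record at the secondary region's endpoint
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "portal-aurora-secondary.cluster-xyz.us-east-1.rds.amazonaws.com"}
                    ],
                },
            }]
        },
    )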

Hence, the correct answer is: Launch a read replica of the primary database to the second region. Set up
Amazon RDS Event Notification to publish status updates to an SNS topic. Create a Lambda function
subscribed to the topic to monitor database health. Configure the Lambda function to promote the read
replica as the primary in the event of a failure. Update the Route 53 record to redirect traffic from the
primary region to the secondary region.

The option that says: Set up CloudWatch to monitor the status of the Amazon Aurora database. Create a
CloudWatch Events rule to send a Slack message to the SysOps Team using Amazon SNS in the event of a
database outage. Instruct the SysOps team to redirect traffic to an S3 static website that displays a
downtime message. Manually promote the read replica as the primary instance and verify the portal's
status. Redirect traffic from the S3 website to the secondary region is incorrect because this solution entails a
lot of manual steps. It is possible that the SysOps Team might not respond to the Slack message immediately.

The option that says: Create a CloudWatch Events rule to periodically invoke a Lambda function that
checks the health of the primary Amazon Aurora database every hour. Configure the Lambda function to
promote the read replica as the primary if a failure was detected. Update the Route 53 record to redirect
traffic from the primary to the secondary region is incorrect. Although this is a valid option, there will still be
a delay on the database failover since the CloudWatch rule only runs every hour. A better solution is to use
Amazon RDS Event Notification instead.

The option that says: Configure the Route 53 record to balance traffic between both regions equally using
the Weighted routing policy. Enable the Aurora multi-master option and set up a Route 53 health check to
analyze the health of the databases. Set the Route 53 record to automatically direct all traffic to the
secondary region when a primary database fails is incorrect. Although an Aurora multi-master improves the
availability of the database, the application will still experience downtime in the event of an AWS Region
outage. Much like Amazon RDS Multi-AZ, the Aurora multi-master has its data replicated across multiple
Availability Zones only but not to another AWS Region. You should use Amazon Aurora Global Database
instead.

References:

https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
USER_CopySnapshot.html#USER_CopySnapshot.AcrossRegions

https://aws.amazon.com/rds/details/read-replicas/

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/

Question 46: Correct

You are deploying a critical web application with Elastic Beanstalk using the “Rolling” deployment policy. Your
Elastic Beanstalk environment configuration has an RDS DB instance attached to it and used by your application
servers. The deployment failed when you deployed a major version, and it took even more time to roll back the
changes because you had to manually redeploy the old version. Which of the following options will you
implement to prevent this from happening in future deployments?

Configure Immutable as the deployment policy in your Elastic Beanstalk environment for future
deployments of your web application.

(Correct)

Configure All at once as the deployment policy in your Elastic Beanstalk environment for future
deployments of your web application.

Implement a Blue/green deployment strategy in your Elastic Beanstalk environment for future
deployments of your web application. Ensure that the RDS DB instance is still tightly coupled with the
environment.

Configure Rolling with additional batch as the deployment policy in your Elastic Beanstalk
environment for future deployments of your web application.

Explanation

Immutable environment updates are an alternative to rolling updates. Immutable environment updates ensure
that configuration changes that require replacing instances are applied efficiently and safely. If an immutable
environment update fails, the rollback process requires only terminating an Auto Scaling group. A failed rolling
update, on the other hand, requires performing an additional rolling update to roll back the changes.

To perform an immutable environment update, Elastic Beanstalk creates a second, temporary Auto Scaling
group behind your environment's load balancer to contain the new instances. First, Elastic Beanstalk launches a
single instance with the new configuration in the new group. This instance serves traffic alongside all of the
instances in the original Auto Scaling group that are running the previous configuration.
When the first instance passes health checks, Elastic Beanstalk launches additional instances with the new
configuration, matching the number of instances running in the original Auto Scaling group. When all of the new
instances pass health checks, Elastic Beanstalk transfers them to the original Auto Scaling group, and terminates
the temporary Auto Scaling group and old instances.
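
For illustration only, the deployment policy of an existing environment could be switched to Immutable with a call such as the one below (the environment name is hypothetical); the same option setting can also be committed to the application source as an .ebextensions configuration file.

import boto3

eb = boto3.client("elasticbeanstalk")

# Switch future application deployments of this environment to the
# Immutable deployment policy.
eb.update_environment(
    EnvironmentName="my-critical-web-app-prod",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",
        },
    ],
)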

Refer to the table below for the characteristics of each deployment method as well as the amount of time it takes
to do the deployment, as seen in the Deploy Time column:

Hence, the correct answer is: Configure Immutable as the deployment policy in your Elastic Beanstalk
environment for future deployments of your web application.

The option that says: Configure Rolling with additional batch as the deployment policy in your Elastic
Beanstalk environment for future deployments of your web application is incorrect because this deployment
type behaves like a rolling deployment: a failed deployment still requires an additional rolling update to roll back
the changes, so it does not address the root cause of the issue in this scenario.

The option that says: Configure All at once as the deployment policy in your Elastic Beanstalk environment
for future deployments of your web application is incorrect because this will cause a brief downtime during
deployment and hence, this is not ideal for deploying your critical production applications.

The option that says: Implement a Blue/green deployment strategy in your Elastic Beanstalk environment for
future deployments of your web application. Ensure that the RDS DB instance is still tightly coupled with the
environment is incorrect because a Blue/green deployment requires that your environment runs independently of
your production database. This means that you have to decouple your RDS database from your environment. If
your Elastic Beanstalk environment has an attached Amazon RDS DB instance, the data will be lost if you
terminate the original (blue) environment.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 47: Correct

A European enterprise has developed a serverless web application hosted on AWS. The application comprises
Amazon API Gateway, various AWS Lambda functions, Amazon S3, and an Amazon RDS for MySQL
database. The source code consists of AWS Serverless Application Model (AWS SAM) templates and Python
code and is stored in AWS CodeCommit.

The enterprise's security team recently performed a security audit and discovered that user names and passwords
for authentication to the database are hardcoded within CodeCommit repositories. The DevOps Engineer must
implement a solution with the following requirements:
 Automatically detect and prevent hardcoded secrets.
 Automatic secrets rotation should also be implemented.

What is the MOST secure solution that meets these requirements?

Link the CodeCommit repositories to Amazon CodeGuru Reviewer. Perform a manual evaluation of the
code review for any recommendations. Store the secret as a string in Parameter Store. Modify the SAM
templates and Python code to retrieve the secret from Parameter Store.

Integrate Amazon CodeGuru Profiler. Apply the CodeGuru Profiler function
decorator @with_lambda_profiler() to your handler function and review the recommendation report
manually. Select the option to protect the secret. Modify the SAM templates and Python code to fetch the
secret from AWS Secrets Manager.

Link the CodeCommit repositories to Amazon CodeGuru Reviewer. Perform a manual evaluation of the
code review for any recommendations. Select the option to protect the secret. Revise the SAM templates
and Python code to fetch the secret from AWS Secrets Manager.

(Correct)

Integrate Amazon CodeGuru Profiler on the AWS Lambda function by enabling the code profiling
feature. Apply the CodeGuru Profiler function decorator @with_lambda_profiler() to your handler
function and review the recommendation report manually. Store the secret as a secure string in Parameter
Store. Modify the SAM templates and Python code to retrieve the secret from Parameter Store.

Explanation

Amazon CodeGuru helps improve code quality and automate code reviews by scanning and profiling your Java
and Python applications. CodeGuru Reviewer can detect potential defects and bugs in your code. For instance,
it recommends improvements regarding security vulnerabilities, resource leaks, concurrency issues, incorrect
input validation, and deviation from AWS best practices.

Amazon CodeGuru Reviewer Secrets Detector is an automated tool that helps developers detect secrets in
source code or configuration files, such as passwords, API keys, SSH keys, and access tokens. The detectors use
machine learning (ML) to identify hardcoded secrets as part of the code review process, ensuring all new code
doesn’t contain hardcoded secrets before being merged and deployed. In addition to Java and Python code,
secrets detectors scan configuration and documentation files. CodeGuru Reviewer suggests remediation steps to
secure secrets with AWS Secrets Manager, a managed service that lets you securely and automatically store,
rotate, manage, and retrieve credentials, API keys, and all sorts of secrets.

Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API
call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be
compromised by someone examining your code, because the secret no longer exists in the code. Also, you can
configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This
enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
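
To illustrate the last point, here is a minimal sketch of how the Python code could fetch the database credentials at runtime instead of hardcoding them. The secret name and the key layout are assumptions; secrets that Secrets Manager creates for an RDS database typically include username and password keys.

import json
import boto3

SECRET_ID = "prod/webapp/mysql"  # hypothetical secret name

def get_db_credentials():
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=SECRET_ID)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]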

Hence, the correct answer is: Link the CodeCommit repositories to Amazon CodeGuru Reviewer. Perform
a manual evaluation of the code review for any recommendations. Select the option to protect the secret.
Revise the SAM templates and Python code to fetch the secret from AWS Secrets Manager.
The option that says: Link the CodeCommit repositories to Amazon CodeGuru Reviewer. Perform a
manual evaluation of the code review for any recommendations. Store the secret as a string in Parameter
Store. Modify the SAM templates and Python code to retrieve the secret from Parameter Store is incorrect.
While it utilizes CodeGuru Reviewer, storing a secret as a plain String in Parameter Store is not secure; secrets
should be stored as a SecureString at the very least. Furthermore, Parameter Store does not provide automatic
secrets rotation, so it is recommended to use AWS Secrets Manager instead.

The option that says: Integrate Amazon CodeGuru Profiler. Apply the CodeGuru Profiler function
decorator @with_lambda_profiler() to your handler function and review the
recommendation report manually. Select the option to protect the secret. Modify the SAM templates and
Python code to fetch the secret from AWS Secrets Manager is incorrect because CodeGuru Profiler is
designed to enhance the performance of production applications and identify the most resource-intensive lines of
code. It is not intended to automatically identify hardcoded secrets.

The option that says: Integrate Amazon CodeGuru Profiler on the AWS Lambda function by enabling the
code profiling feature. Apply the CodeGuru Profiler function decorator
@with_lambda_profiler() to your handler function and review the recommendation report
manually. Store the secret as a secure string in Parameter Store. Modify the SAM templates and Python
code to retrieve the secret from Parameter Store is incorrect. Although it stores secrets in Parameter Store as
a secure string, it does not fulfill the requirement for automatic secrets rotation. In addition, this option utilizes
CodeGuru Profiler instead of CodeGuru Reviewer.

References:

https://aws.amazon.com/blogs/aws/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets/

https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-amazon-codeguru-reviewer.html

https://docs.aws.amazon.com/codeguru/latest/profiler-ug/python-lambda-command-line.html

https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html

Question 48: Correct

An educational startup has developed an e-learning platform hosted in AWS, where they sell their online
courses. The CTO is planning to release an online forum that will allow students to interact with each other and
post their questions. This new feature requires a new DynamoDB table named Thread in which the partition key
is ForumName and the sort key is Subject. Below is a diagram that shows how the items in the table must be
organized:

The forum must also find all of the threads that were posted in a particular forum within the last three months as
part of the system's quarterly reporting.

Which of the following is the MOST suitable solution that you should implement?

Configure the e-learning platform to use a Query operation for the entire Thread table. Discard any posts
that were not within the specified time frame.

Create a new DynamoDB table with a local secondary index. Refactor the e-learning platform to use
the Query operation for search and utilize the LastPostDateTime attribute as the sort key.

(Correct)

Create a new DynamoDB table with a global secondary index. Refactor the e-learning platform to use
a Query operation for search and to utilize the LastPostDateTime attribute as the sort key.

Refactor the e-learning platform to do a Scan operation in the entire Thread table. Discard any posts that
were not within the specified time frame.

Explanation

DynamoDB supports two types of secondary indexes:

- Global secondary index — an index with a partition key and a sort key that can be different from those on the
base table. A global secondary index is considered "global" because queries on the index can span all of the data
in the base table, across all partitions.

- Local secondary index — an index that has the same partition key as the base table, but a different sort key. A
local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base
table partition that has the same partition key value.

A local secondary index maintains an alternate sort key for a given partition key value. A local secondary index
also contains a copy of some or all of the attributes from its base table; you specify which attributes are projected
into the local secondary index when you create the table. The data in a local secondary index is organized by the
same partition key as the base table, but with a different sort key. This lets you access data items efficiently
across this different dimension. For greater query or scan flexibility, you can create up to five local secondary
indexes per table.

Suppose that an application needs to find all of the threads that have been posted within the last three months.
Without a local secondary index, the application would have to Scan the entire Thread table and discard any
posts that were not within the specified time frame. With a local secondary index, a Query operation could
use LastPostDateTime as a sort key and find the data quickly.
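
As a rough illustration, a query against such an index (named LastPostIndex below, with the forum name and date format as assumptions) might look like this in Python:

import boto3
from boto3.dynamodb.conditions import Key
from datetime import datetime, timedelta, timezone

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Thread")

# Threads posted in a particular forum within the last three months
three_months_ago = (datetime.now(timezone.utc) - timedelta(days=90)).isoformat()

response = table.query(
    IndexName="LastPostIndex",  # local secondary index sorted by LastPostDateTime
    KeyConditionExpression=(
        Key("ForumName").eq("AWS DevOps")
        & Key("LastPostDateTime").gte(three_months_ago)
    ),
)
recent_threads = response["Items"]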

In the provided scenario, you can create a local secondary index named LastPostIndex to meet the requirements.
Note that the partition key is the same as that of the Thread table, but the sort key is LastPostDateTime as shown
in the diagram below:

Hence, the most effective solution in this scenario is to: Create a new DynamoDB table with a local secondary
index. Refactor the e-learning platform to use the Query operation for search and utilize
the LastPostDateTime attribute as the sort key.

The option that says: Refactor the e-learning platform to do a Scan operation in the entire Thread table.
Discard any posts that were not within the specified time frame is incorrect because although this option is
valid, this solution would consume a large amount of provisioned read throughput and take a long time to
complete. This is not a scalable solution and the time it takes to fetch the data will continue to increase as the
table grows.
The option that says: Configure the e-learning platform to use a Query operation for the entire Thread table.
Discard any posts that were not within the specified time frame is incorrect because using the Query operation
is not sufficient to meet this requirement. You have to create a local secondary index when you create the table
to narrow down the results and to improve the performance of your application.

The option that says: Create a new DynamoDB table with a global secondary index. Refactor the e-learning
platform to use a Query operation for search and to utilize the LastPostDateTime attribute as the sort key is
incorrect because using a local secondary index is a more appropriate solution to be used in this scenario. Take
note that in this scenario, it is still using the same partition key (ForumName) but with an alternate sort key
(LastPostDateTime) which warrants the use of a local secondary index.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 49: Correct

A company has a mission-critical website hosted on-premises that is written in .NET and Angular. Its DevOps
team is planning to host the site to AWS Elastic Beanstalk. The application should maintain its availability at all
times to avoid any loss of revenue for the company. The existing EC2 instances should remain in service during
the deployment of the succeeding application versions. A new fleet of EC2 instances should be provisioned to
host the new application version. The new instances should be placed in service and the old ones should be
removed after the new application version is successfully deployed in the new fleet of instances. No DNS change
should also be made on the underlying resources of the environment especially the Elastic Beanstalk DNS
CNAME. In the event of deployment failure, the new fleet of instances should be terminated and the current
instances should continue serving traffic as normal. As a DevOps Engineer, what deployment strategy should you
implement to satisfy the requirements?

Configure the deployment setting of the Elastic Beanstalk environment to use blue/green deployment.
Swap the CNAME records of the old and new environments to redirect the traffic from the old version to
the new version.

Configure the deployment setting of the Elastic Beanstalk environment to use immutable environment
updates.

(Correct)

Configure the deployment setting of the Elastic Beanstalk environment to use rolling deployments.

Configure the deployment setting of the Elastic Beanstalk environment to use rolling deployments with
additional batch.

Explanation

In Elastic Beanstalk, you can choose from a variety of deployment methods:

All at once – Deploy the new version to all instances simultaneously. All instances in your environment are out
of service for a short time while the deployment occurs. This is the method that provides the least amount of
time for deployment.

Rolling – Deploy the new version in batches. Each batch is taken out of service during the deployment phase,
reducing your environment's capacity by the number of instances in a batch.

Rolling with additional batch – Deploy the new version in batches, but first launch a new batch of instances to
ensure full capacity during the deployment process.

Immutable – Deploy the new version to a fresh group of instances by performing an immutable update.

Blue/Green - Deploy the new version to a separate environment, and then swap CNAMEs of the two
environments to redirect traffic to the new version instantly.

Refer to the table below for the characteristics of each deployment method as well as the amount of time it takes
to do the deployment, as seen in the Deploy Time column:

Immutable environment updates are an alternative to rolling updates. Immutable environment updates ensure
that configuration changes that require replacing instances are applied efficiently and safely. If an immutable
environment update fails, the rollback process requires only terminating an Auto Scaling group. A failed rolling
update, on the other hand, requires performing an additional rolling update to roll back the changes.

To perform an immutable environment update, Elastic Beanstalk creates a second, temporary Auto Scaling
group behind your environment's load balancer to contain the new instances. First, Elastic Beanstalk launches a
single instance with the new configuration in the new group. This instance serves traffic alongside all of the
instances in the original Auto Scaling group that are running the previous configuration.

When the first instance passes health checks, Elastic Beanstalk launches additional instances with the new
configuration, matching the number of instances running in the original Auto Scaling group. When all of the new
instances pass health checks, Elastic Beanstalk transfers them to the original Auto Scaling group, and terminates
the temporary Auto Scaling group and old instances.

Hence, the correct answer is: Configure the deployment setting of the Elastic Beanstalk environment to use
immutable environment updates.

The option that says: Configure the deployment setting of the Elastic Beanstalk environment to use rolling
deployments is incorrect because it doesn't launch a fresh group of EC2 instances to host the new application
version.

The option that says: Configure the deployment setting of the Elastic Beanstalk environment to use rolling
deployments with additional batch is incorrect because the rollback process of this configuration does not meet
the specified requirement. In the event of deployment failure, the existing instances are affected and could be
unavailable.

The option that says: Configure the deployment setting of the Elastic Beanstalk environment to use blue/green
deployment. Swap the CNAME records of the old and new environments to redirect the traffic from the old
version to the new version is incorrect because the scenario explicitly mentioned that there should be no DNS
change made on the underlying resources of the environment especially the Elastic Beanstalk DNS CNAME.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 50: Correct

You have developed an app that lets users quickly upload and share image snippets across various platforms
such as mobile app, desktop web browser, and third party instant messaging apps. Your app relies on a Lambda
function that detects each request’s User-Agent and automatically sends a corresponding image resolution based
on the device. You want to have a consistent and fast experience for all your users around the world. Which of
the following options will you implement?

Create your function on Lambda and deploy it with Lambda@Edge.

(Correct)

Create your function on AWS Lambda and set it as a CloudFront origin.

Deploy your function on Amazon API Gateway and set up an Edge-optimized API endpoint.

Upload your function on S3 and set it as CloudFront origin to run it.

Explanation

Lambda@Edge runs your code in response to events generated by the Amazon CloudFront content delivery
network (CDN). Lambda@Edge lets you run Node.js and Python Lambda functions to customize content that CloudFront
delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to
CloudFront events, without provisioning or managing servers.

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application,
which improves performance and reduces latency. Lambda@Edge lets you use CloudFront triggers to invoke a
Lambda function. When you associate a CloudFront distribution with a Lambda function, CloudFront intercepts
requests and responses at CloudFront edge locations and runs the function. Lambda functions can improve
security or customize information close to your viewers, to improve performance.
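
As an illustration, a viewer-request function for this scenario might look like the sketch below (the URI prefixes and the simple substring check are assumptions, not a production-grade device detection approach):

# Minimal sketch of a Lambda@Edge viewer-request handler
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    user_agent = ""
    if "user-agent" in headers:
        user_agent = headers["user-agent"][0]["value"]

    # Serve a lower-resolution variant to mobile clients
    if "Mobile" in user_agent:
        request["uri"] = "/low-res" + request["uri"]
    else:
        request["uri"] = "/high-res" + request["uri"]

    return request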

Hence, the correct answer is: Create your function on Lambda and deploy it with Lambda@Edge.

The option that says: Create your function on AWS Lambda and set it as a CloudFront origin is incorrect
because in AWS, you don't usually set a Lambda function as the origin for your CloudFront distribution. You
have to use Lambda@Edge instead.

The option that says: Upload your function on S3 and set it as CloudFront origin to run it is incorrect because
although you can set S3 as the origin of your CloudFront distribution, you won't be able to trigger the function
since S3 has no compute capability.

The option that says: Deploy your function on Amazon API Gateway and set up an Edge-optimized API
endpoint is incorrect because you can't run your function using API Gateway alone. You have to integrate your
Lambda functions with API Gateway first. Moreover, the Edge-optimized API is just a default hostname in API
Gateway that is deployed to the specified region while using a CloudFront distribution to facilitate client access
typically from across AWS regions.

References:

https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html

https://aws.amazon.com/lambda/edge/

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-
tutorial.html

Check out this AWS Lambda Cheat Sheet:

https://tutorialsdojo.com/aws-lambda/

Question 51: Correct

A Data Analytics team uses a large Hadoop cluster with 100 EC2 instance nodes for gathering trends about user
behavior on the company’s mobile app. Currently, the DevOps team forwards any email notification they receive
from the AWS Health service to the Data Analytics team whenever there is a particular EC2 instance that needs
maintenance or any old EC2 instance types that will be retired by Amazon soon. The Data Analytics team needs
to determine how the maintenance will affect their cluster and ensure that their Hadoop Distributed File System
(HDFS) component can recover from any failure. Which of the following is the FASTEST way to automate this
notification process?

Add your Data Analytics team email as an alternate contact for your AWS account. They will receive
these EC2 maintenance emails directly so you won’t have to forward it to them.

Use AWS Systems Manager (SSM) Resource Data Sync to track the AWS Health Events of your related
EC2 instances. Configure the SSM Resource Data Sync to automatically poll the AWS Health API for
Amazon EC2 updates. Send a notification to the Data Analytics team whenever an EC2 event is matched.

Create a CloudWatch Metric Filter for AWS Health Events of your related EC2 instances. Create a
CloudWatch Alarm for this metric to notify you whenever the threshold is reached.

Create an AWS EventBridge (Amazon CloudWatch Events) rule for AWS Health. Select EC2 Service
and select the Events you want to get notified. Set the target to an Amazon SNS topic that the Data
Analytics team is subscribed to.

(Correct)

Explanation

You can use AWS EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of
AWS Personal Health Dashboard (AWS Health) events. Then, based on the rules that you create, CloudWatch
Events invokes one or more target actions when an event matches the values that you specify in a rule.

Depending on the type of event, you can send notifications, capture event information, take corrective action,
initiate events, or take other actions. In this scenario, you first need to create an Events rule for AWS Health and
choose the EC2 service as well as the specific categories of events or scheduled changes that are planned for
your account.

You can then set the target of this rule to an SNS topic to which the Data Analytics team is subscribed. For
every event that matches this event rule, a notification will be sent to the subscribers of your SNS topic.
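
A minimal sketch of this setup in Python is shown below; the rule name, the SNS topic ARN, and the exact event pattern fields you match on are assumptions for illustration.

import json
import boto3

events = boto3.client("events")

RULE_NAME = "ec2-health-events-to-analytics"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-analytics-alerts"

# Match AWS Health events for the EC2 service (e.g. scheduled maintenance)
events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {"service": ["EC2"]},
    }),
    State="ENABLED",
)

# The SNS topic's access policy must allow events.amazonaws.com to publish to it
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "notify-analytics-team", "Arn": SNS_TOPIC_ARN}],
)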

Hence, the correct answer is: Create an AWS EventBridge (Amazon CloudWatch Events) rule for AWS
Health. Select EC2 Service and select the Events you want to get notified. Set the target to an Amazon
SNS topic that the Data Analytics team is subscribed to.

The option that says: Create a CloudWatch Metric Filter for AWS Health Events of your related EC2
instances. Create a CloudWatch Alarm for this metric to notify you whenever the threshold is reached is
incorrect because you can only create a Metric filter for CloudWatch log groups. Take note that the AWS Health
events are not automatically sent to CloudWatch Log groups; thus, you have to manually set up a data stream for
AWS Health.

The option that says: Use AWS Systems Manager (SSM) Resource Data Sync to track the AWS Health
Events of your related EC2 instances. Configure the SSM Resource Data Sync to automatically poll the
AWS Health API for Amazon EC2 updates. Send a notification to the Data Analytics team whenever an
EC2 event is matched is incorrect. Keep in mind that the AWS Systems Manager resource data sync feature is
primarily used in AWS Systems Manager Inventory to send inventory data collected from all of your managed
nodes to a single Amazon Simple Storage Service (Amazon S3) bucket. The SSM resource data sync will
automatically update the centralized data in Amazon S3 when new inventory data is collected. This resource data
sync feature is not applicable in this scenario.

The option that says: Add your Data Analytics team email as an alternate contact for your AWS account.
They will receive these EC2 maintenance emails directly, so you won’t have to forward it to them is
incorrect because the Data Analytics team will also receive other financial or account/billing-related emails
using this setup. This is a security risk since the Data Analytics team email could potentially be compromised
and be used to access your AWS account.

References:
https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html

https://aws.amazon.com/blogs/aws/new-cloudwatch-events-track-and-respond-to-changes-to-your-aws-
resources/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide//automating_with_cloudwatch_events.html

Question 52: Incorrect

A leading telecommunications company is migrating a multi-tier enterprise application to AWS which must be
hosted on a single Amazon EC2 Dedicated Instance. The app cannot use Auto Scaling due to server licensing
constraints. For its database tier, Amazon Aurora will be used to store the data and transactions of the
application. Auto-healing must be configured to ensure high availability even in the event of Amazon EC2 or
Aurora database outages. Which of the following options provides the MOST cost-effective solution for this
migration task?

Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your Amazon EC2
Dedicated Instance. Set up an Amazon EC2 instance and enable the built-in instance recovery feature.
Create an Aurora database with a Read Replica on the other Availability Zone. Promote the replica as the
primary in the event that the primary database instance fails.

Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your Amazon EC2
Dedicated Instance. Set up an Auto Scaling group with a minimum and maximum instance count of 1.
Launch a single-instance Amazon Aurora database.

Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your Amazon EC2
Dedicated Instance. Launch an Elastic IP address and attach it to the dedicated instance. Set up a second
EC2 instance in the other Availability Zone. Set up an Amazon CloudWatch Events rule to trigger an
AWS Lambda function to move the EIP to the second instance when the first instance fails. Set up a
single-instance Amazon Aurora database.

Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your Amazon EC2
Dedicated Instance. Set up a CloudWatch Events rule to trigger an AWS Lambda function to start a new
EC2 instance in an available Availability Zone when the instance status reaches a failure state. Configure
an Aurora database with a Read Replica in the other Availability Zone. In the event that the primary
database instance fails, promote the read replica to a primary database instance.

(Correct)

Explanation

You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically
recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires
AWS involvement to repair. Terminated instances cannot be recovered. A recovered instance is identical to the
original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
If the impaired instance is in a placement group, the recovered instance runs in the placement group.

When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you will be notified
by the Amazon SNS topic that you selected when you created the alarm and associated the recover action.
During instance recovery, the instance is migrated during an instance reboot, and any data that is in-memory is
lost. When the process is complete, information is published to the SNS topic you've configured for the alarm.
Anyone who is subscribed to this SNS topic will receive an email notification that includes the status of the
recovery attempt and any further instructions. You will notice an instance reboot on the recovered instance.
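
A minimal sketch of such an alarm is shown below; the instance ID, topic ARN, and threshold values are hypothetical.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

# Alarm on the system status check and attach the EC2 recover action
cloudwatch.put_metric_alarm(
    AlarmName="recover-app-host",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[
        "arn:aws:automate:us-east-1:ec2:recover",  # built-in recover action
        SNS_TOPIC_ARN,                             # notify subscribers as well
    ],
)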

Examples of problems that cause system status checks to fail include:

- Loss of network connectivity

- Loss of system power

- Software issues on the physical host

- Hardware issues on the physical host that impact network reachability

If your instance has a public IPv4 address, it retains the public IPv4 address after recovery.

You can configure a CloudWatch alarm to automatically recover impaired EC2 instances and notify you through
Amazon SNS. However, the SNS notification by itself doesn't include the results of the automatic recovery
action.

You must also configure an Amazon CloudWatch Events rule to monitor AWS Personal Health Dashboard
(AWS Health) events for your instance. Then, you are notified of the results of automatic recovery actions for an
instance.

Hence, the correct answer is: Set up the AWS Personal Health Dashboard (AWS Health) events to monitor
your Amazon EC2 Dedicated Instance. Set up a CloudWatch Events rule to trigger an AWS Lambda function
to start a new EC2 instance in an available Availability Zone when the instance status reaches a failure state.
Configure an Aurora database with a Read Replica in the other Availability Zone. In the event that the
primary database instance fails, promote the read replica to a primary database instance.

The option that says: Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your
Amazon EC2 Dedicated Instance. Set up an Auto Scaling group with a minimum and maximum instance
count of 1. Launch a single-instance Amazon Aurora database is incorrect because launching a single-instance
Aurora database is not a highly available architecture. You have to set up at least a Read Replica that you can
configure to be the new primary instance in the event of outages.

The option that says: Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your
Amazon EC2 Dedicated Instance. Set up an Amazon EC2 instance and enable the built-in instance recovery
feature. Create an Aurora database with a Read Replica on the other Availability Zone. Promote the replica
as the primary in the event that the primary database instance fails is incorrect because there is no built-in
instance recovery feature that you can simply enable on an Amazon EC2 instance. You have to use a combination
of CloudWatch and a Lambda function to automatically recover the EC2 instance from failure.

The option that says: Set up the AWS Personal Health Dashboard (AWS Health) events to monitor your
Amazon EC2 Dedicated Instance. Launch an Elastic IP address and attach it to the dedicated instance. Set
up a second EC2 instance in the other Availability Zone. Set up an Amazon CloudWatch Events rule to
trigger an AWS Lambda function to move the EIP to the second instance when the first instance fails. Set up
a single-instance Amazon Aurora database is incorrect because setting up a second EC2 instance in other
Availability Zone entails an additional cost. Launching a single-instance Aurora database is not a highly
available architecture.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html

https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-sns-ec2-automatic-recovery/

https://aws.amazon.com/premiumsupport/knowledge-center/automatic-recovery-ec2-cloudwatch/

Question 53: Correct

You have developed a custom web dashboard for your company that shows all instances running on AWS
including the instance details. The web app relies on a DynamoDB table that will be updated whenever a new
instance is created or terminated. You have several auto scaling groups of EC2 instances and you want to have
an effective way of updating the DynamoDB table whenever a new instance is created or terminated.

Which of the following steps will you implement?

Configure a CloudWatch Events target to your auto scaling group that will trigger a Lambda function
when a lifecycle action occurs. Configure the function to update the DynamoDB table with the necessary
instance details.

(Correct)

Define a Lambda function as a notification target for the lifecycle hook for the auto scaling group. In the
event of a scale-out or scale-in, the lifecycle hook will trigger the Lambda function to update the
DynamoDB table with the necessary details.

Configure a CloudWatch alarm that will monitor the number of instances on your auto scaling group and
trigger a Lambda function to update the DynamoDB table with the necessary details.

Create a custom script that runs on the instances that will run on scale-in and scale-out events to update
the DynamoDB table with the necessary details.

Explanation

Adding lifecycle hooks to your Auto Scaling group gives you greater control over how instances launch and
terminate. Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group
launches or terminates them.

For example, your newly launched instance completes its startup sequence and a lifecycle hook pauses the
instance. While the instance is in a wait state, you can install or configure software on it, making sure that your
instance is fully ready before it starts receiving traffic. For another example of the use of lifecycle hooks, when a
scale-in event occurs, the terminating instance is first deregistered from the load balancer (if the Auto Scaling
group is being used with Elastic Load Balancing). Then, a lifecycle hook pauses the instance before it is
terminated. While the instance is in the wait state, you can, for example, connect to the instance and download
logs or other data before the instance is fully terminated.
After you add lifecycle hooks to your Auto Scaling group, the workflow is as follows: The Auto Scaling group
responds to scale-out events by launching instances and to scale-in events by terminating instances.

The lifecycle hook puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance is paused
until you continue or the timeout period ends.

You can perform a custom action using one or more of the following options:

Define a CloudWatch Events target to invoke a Lambda function when a lifecycle action occurs. The Lambda
function is invoked when Amazon EC2 Auto Scaling submits an event for a lifecycle action to CloudWatch
Events. The event contains information about the instance that is launching or terminating, and a token that you
can use to control the lifecycle action.

Define a notification target for the lifecycle hook. Amazon EC2 Auto Scaling sends a message to the notification
target. The message contains information about the instance that is launching or terminating, and a token that
you can use to control the lifecycle action.

Create a script that runs on the instance as the instance starts. The script can control the lifecycle action using the
ID of the instance on which it runs.

By default, the instance remains in a wait state for one hour, and then the Auto Scaling group continues the
launch or terminate process (Pending:Proceed or Terminating:Proceed). If you need more time, you can restart
the timeout period by recording a heartbeat. If you finish before the timeout period ends, you can complete the
lifecycle action, which continues the launch or termination process.
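
For this scenario, the Lambda function targeted by the CloudWatch Events rule could look roughly like the sketch below; the DynamoDB table name and item layout are assumptions.

import boto3

dynamodb = boto3.resource("dynamodb")
autoscaling = boto3.client("autoscaling")
table = dynamodb.Table("InstanceDashboard")  # hypothetical table name

def lambda_handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    if event["detail-type"] == "EC2 Instance-launch Lifecycle Action":
        table.put_item(Item={"InstanceId": instance_id, "State": "InService"})
    else:  # "EC2 Instance-terminate Lifecycle Action"
        table.delete_item(Key={"InstanceId": instance_id})

    # Tell the Auto Scaling group to continue the launch/terminate process
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )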

Hence, the correct answer is: Configure a CloudWatch Events target to your auto scaling Group that will
trigger a Lambda function when a lifecycle action occurs. Configure the function to update the
DynamoDB table with the necessary instance details.

The option that says: Create a custom script that runs on the instances that will run on scale-in and scale-
out events to update the DynamoDB table with the necessary details is incorrect. Although this is possible, it
will be harder to execute since a custom script needs to be developed first and will be run only at instance
creation and termination.

The option that says: Define a Lambda function as a notification target for the lifecycle hook for the auto
scaling group. In the event of a scale-out or scale-in, the lifecycle hook will trigger the Lambda function to
update the DynamoDB table with the necessary details is incorrect because you cannot use AWS Lambda as
a notification target for the lifecycle hook of an Auto Scaling group. You can only configure Amazon
CloudWatch Events, Amazon SNS, or Amazon SQS as notification targets.

The option that says: Configure a CloudWatch alarm that will monitor the number of instances on your
auto scaling group and trigger a Lambda function to update the DynamoDB table with the necessary
details is incorrect because this option doesn't track the lifecycle action of the Auto Scaling group. In this
scenario, it's better to create a lifecycle hook integrated with CloudWatch Events and Lambda instead.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html#adding-lifecycle-hooks
https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html#lifecycle-hooks-overview

https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html

Check out this AWS Auto Scaling Cheat Sheet:

https://tutorialsdojo.com/aws-auto-scaling/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 54: Incorrect

A web application is hosted on an Auto Scaling group (ASG) of On-Demand EC2 instances. You have a
separate Linux EC2 instance that is used primarily for batch processing. This particular instance needs to update
its configuration file based on the list of the active IP addresses of the instances within the ASG. This is needed
in order to run a batch job properly. Which of the following options will let you effectively update the
configuration file whenever the ASG scales in or scales out?

Create a CloudWatch Logs to monitor the Launch/Terminate events of the ASG. Set the target to a
Lambda function that will update the configuration file inside the EC2 instance. Add a proper IAM
permission to Lambda to access the EC2 instance.

Develop a custom script that runs on the background to query active EC2 instances and IP addresses of
the ASG. Then the script will update the configuration whenever it detects a change.

Create a CloudWatch Events rule scheduled to run every minute. Set the target to a Lambda function to
query the ASG instances and update the configuration file on an S3 bucket. Automatically establish an
SSH connection to the Linux EC2 instances by using another Lambda function and then download the
master configuration file from S3 to update the local configuration file stored in the instance.

Create a CloudWatch Events rule for the Launch/Terminate events of the ASG. Set the target to an SSM
Run Command that will update the configuration file on the target EC2 instance.

(Correct)

Explanation

You can use Amazon CloudWatch Events to invoke AWS Systems Manager Run Command and perform
actions on Amazon EC2 instances when certain events happen.

You can use SSM Run Command to configure instances without having to log in to each instance. This setup
requires the SSM Agent to be installed on your EC2 instance and the proper IAM permissions to be granted.
You need to create a CloudWatch Events rule that watches for the launch/terminate actions of the Auto Scaling
group, then set the Systems Manager Run Command as the target of that rule.

In the parameters of the SSM Run Command, define the commands needed to update the configuration file. It is
important that your target EC2 instance is properly tagged and that the same tag is specified in the Run Command
target, as this is how Systems Manager identifies the instance to run the command on.
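
The Run Command invocation that the CloudWatch Events target performs is equivalent to an API call such as the following sketch (the tag key/value and the script path are hypothetical):

import boto3

ssm = boto3.client("ssm")

ssm.send_command(
    Targets=[{"Key": "tag:Role", "Values": ["batch-processor"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "/opt/batch/update_asg_ips.sh"  # script that rewrites the config file
        ]
    },
    Comment="Refresh the ASG IP list in the batch configuration file",
)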
Hence, the correct answer is: Create a CloudWatch Events rule for the Launch/Terminate events of the
ASG. Set the target to an SSM Run Command that will update the configuration file on the target EC2
instance.

The option that says: Develop a custom script that runs on the background to query active EC2 instances
and IP addresses of the ASG. Then the script will update the configuration whenever it detects a change is
incorrect. Although this may be a possible solution, you will still have to write and maintain your own script and
that creates unnecessary operational overhead. Also, polling at regular intervals is not a good approach when
you only need to react to specific events.

The option that says: Create a CloudWatch Logs to monitor the Launch/Terminate events of the ASG. Set
the target to a Lambda function that will update the configuration file inside the EC2 instance. Add a
proper IAM permission to Lambda to access the EC2 instance is incorrect because you can't monitor the
Auto Scaling Group events using CloudWatch Logs. You have to use CloudWatch Events instead.

The option that says: Create a CloudWatch Events rule scheduled to run every minute. Set the target to a
Lambda function to query the ASG instances and update the configuration file on an S3 bucket.
Automatically establish an SSH connection to the Linux EC2 instances by using another Lambda function
and then download the master configuration file from S3 to update the local configuration file stored in
the instance is incorrect. It is better to use a CloudWatch Events rule that tracks the Launch/Terminate events of
the Auto Scaling group (ASG) instead of running it every minute. AWS Lambda can’t directly establish an SSH
connection and run a command inside an EC2 instance to update the configuration file, even with proper IAM
permissions. A better solution would be to use the Systems Manager Run Command to let you remotely and
securely manage the configuration of your managed instances.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EC2_Run_Command.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-prereqs.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/rc-console.html

Check out these Amazon CloudWatch and AWS Systems Manager Cheat Sheets:

https://tutorialsdojo.com/amazon-cloudwatch/

https://tutorialsdojo.com/aws-systems-manager/

Question 55: Correct

A DevOps Engineer has been assigned to develop an automated workflow to ensure that the required patches of
all of their Windows EC2 instances are properly applied. It is of utmost importance that the EC2 instance reboots
do not occur at the same time on all of their Windows instances in order to maintain their system uptime
requirements. Any unavailability issues of their systems would likely cause a loss of revenue in the company
since the customer transactions will not be processed in a timely manner. How can the engineer meet the above
requirements?
Set up two Patch Groups with unique tags that you will assign to all of your Amazon EC2 Windows
Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Integrate
AWS Systems Manager Run Command and CloudWatch Events to set up a cron expression that will
automate the patch execution for the two Patch Groups. Use an AWS Systems Manager State Manager
document to define custom commands which will be executed during patch execution.

Set up a Patch Group with unique tags that you will assign to all of your Amazon EC2 Windows
Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up a
CloudWatch Events rule configured to use a cron expression to automate the execution of patching in a
given schedule using the AWS Systems Manager Run command. Use an AWS Systems Manager State
Manager document to define custom commands which will be executed during patch execution.

Set up two Patch Groups with unique tags that you will assign to all of your Amazon EC2 Windows
Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two
non-overlapping maintenance windows and associate each with a different patch group. Register targets
with specific maintenance windows using Patch Group tags. Assign the AWS-RunPatchBaseline document
as a task within each maintenance window which has a different processing start time.

(Correct)

Set up a Patch Group with unique tags that you will assign to all of your Amazon EC2 Windows Instances
and then associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Create a new
maintenance window and associate it with your patch group. Assign the AWS-RunPatchBaseline document
as a task within your maintenance window.

Explanation

AWS Systems Manager Patch Manager automates the process of patching managed instances with both
security-related and other types of updates. You can use Patch Manager to apply patches for both operating
systems and applications.

You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by
operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise
Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can
scan instances to see only a report of missing patches, or you can scan and automatically install all missing
patches.

Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release,
as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling
patching to run as a Systems Manager maintenance window task. You can also install patches individually or to
large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines themselves when
you create or update them.

You can use a patch group to associate instances with a specific patch baseline. Patch groups help ensure that
you are deploying the appropriate patches, based on the associated patch baseline rules, to the correct set of
instances. Patch groups can also help you avoid deploying patches before they have been adequately tested. For
example, you can create patch groups for different environments (such as Development, Test, and Production)
and register each patch group to an appropriate patch baseline.
When you run AWS-RunPatchBaseline, you can target managed instances using their instance ID or tags. SSM
Agent and Patch Manager will then evaluate which patch baseline to use based on the patch group value that you
added to the instance.

You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a
patch group must be defined with the tag key: Patch Group. Note that the key is case-sensitive. You can specify
any value, for example, "web servers," but the key must be Patch Group.
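
As an illustration of how one of the two patch groups could be wired to its own maintenance window (the window ID, tag value, and concurrency settings are hypothetical; the second group would be registered against a separate, non-overlapping window in the same way):

import boto3

ssm = boto3.client("ssm")

WINDOW_ID = "mw-0123456789abcdef0"  # first of the two maintenance windows

# Register the instances tagged with the first patch group as targets
target = ssm.register_target_with_maintenance_window(
    WindowId=WINDOW_ID,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["windows-group-a"]}],
)

# Run AWS-RunPatchBaseline against those targets during the window
ssm.register_task_with_maintenance_window(
    WindowId=WINDOW_ID,
    TaskType="RUN_COMMAND",
    TaskArn="AWS-RunPatchBaseline",
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    MaxConcurrency="25%",
    MaxErrors="1",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)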

The AWS-DefaultPatchBaseline baseline is primarily used to approve all Windows Server operating system
patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of
"Critical" or "Important". Patches are auto-approved seven days after release.

Hence, the option that says: Set up two Patch Groups with unique tags that you will assign to all of your
Amazon EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch
groups. Set up two non-overlapping maintenance windows and associate each with a different patch group.
Register targets with specific maintenance windows using Patch Group tags. Assign the AWS-
RunPatchBaseline document as a task within each maintenance window which has a different processing start
time is the correct answer as it properly uses two Patch Groups, non-overlapping maintenance windows and
the AWS-DefaultPatchBaseline baseline to ensure that the EC2 instance reboots do not occur at the same time.

The option that says: Set up a Patch Group with unique tags that you will assign to all of your Amazon EC2
Windows Instances and then associate the predefined AWS-DefaultPatchBaseline baseline on your patch
group. Create a new maintenance window and associate it with your patch group. Assign the AWS-
RunPatchBaseline document as a task within your maintenance window is incorrect because although it is
correct to use a Patch Group, you must create another Patch Group to avoid any unavailability issues. Having
two non-overlapping maintenance windows will ensure that there will be another set of running Windows EC2
instances while the other set is being patched.

The option that says: Set up a Patch Group with unique tags that you will assign to all of your Amazon EC2
Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set
up a CloudWatch Events rule configured to use a cron expression to automate the execution of patching in a
given schedule using the AWS Systems Manager Run command. Use an AWS Systems Manager State
Manager document to define custom commands which will be executed during patch execution is incorrect
because the AWS Systems Manager Run Command is primarily used to remotely manage the configuration of
your managed instances while AWS Systems Manager State Manager is just a configuration management
service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you
define. These two services, including CloudWatch Events, are not suitable for this scenario. The better solution
would be to use AWS Systems Manager Maintenance Windows which lets you define a schedule for when to
perform potentially disruptive actions on your instances such as patching an operating system, updating drivers,
or installing software or patches.

The option that says: Set up two Patch Groups with unique tags that you will assign to all of your Amazon EC2
Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups.
Integrate AWS Systems Manager Run Command and CloudWatch Events to set up a cron expression that will
automate the patch execution for the two Patch Groups. Use an AWS Systems Manager State Manager document
to define custom commands which will be executed during patch execution is incorrect because, just as
mentioned in the previous option, you have to use Maintenance Windows (not CloudWatch Events with Run
Command and State Manager) to schedule the patching in non-overlapping windows.
References:

https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-
manager/

https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-ssm-documents.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-scheduletasks.html

Check out this AWS Systems Manager Cheat Sheet:

https://tutorialsdojo.com/aws-systems-manager/

Question 56: Incorrect

You have a separate AWS account on which developers can freely spawn their own AWS resources and test
their new builds. Given the lax restrictions in this environment, you checked AWS Trusted Advisor and it shows
that several instances use the default security group rule that opens inbound port 22 to all IP addresses. Even for
a test environment, you still want to restrict port 22 access to the public IP of your on-premises data center
only. With this, you want to be notified of any security check recommendations from Trusted Advisor and
automatically remediate the non-compliance based on the results.

What are the steps that you should take to set up the required solution? (Select THREE.)

Set up custom AWS Config rule to execute a remediation action using AWS Systems Manager
Automation to update the publicly open port 22 on the instances and restrict to only your office’s public
IP.

(Correct)

Create a Lambda function and integrate CloudWatch Events and AWS Lambda to execute the function on
a regular schedule to check AWS Trusted Advisor via API. Based on the results, publish a message to an
SNS Topic to notify the subscribers.

(Correct)

Create a Lambda function that executes every hour to refresh AWS Trusted Advisor scan results via API.
The automated notification on AWS Trusted Advisor will notify you of any changes.

Create an AWS Config Cron job to schedule your checks on all AWS security groups and send results to
SNS for the non-compliance notification.

Set up custom AWS Config rule that checks security groups to make sure that port 22 is not open to
public. Send a notification to an SNS topic for non-compliance.

(Correct)

Set up custom AWS Config rule to execute a remediation action that triggers a Lambda Function to
update the publicly open port 22 in the security group and restrict to only your office’s public IP.
(Incorrect)

Explanation

AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. Config
continuously monitors and records your AWS resource configurations and allows you to automate the evaluation
of recorded configurations against desired configurations. You can configure AWS Config to stream
configuration changes and notifications to an Amazon SNS topic. This way, you can be notified when AWS
Config evaluates your custom or managed rules against your resources.

AWS Config now includes remediation capability with AWS Config rules. This feature gives you the ability to
associate and execute remediation actions with AWS Config rules to address noncompliant resources. You can
choose from a list of available remediation actions. For example, you can create an AWS Config rule to check
that your Amazon S3 buckets do not allow public read access. You can then associate a remediation action to
disable public access for noncompliant S3 buckets.

It's easy to set up remediation actions through the AWS Config console or API. Just choose the remediation
action you want to associate from a pre-populated list, or create your own custom remediation actions using
AWS Systems Manager Automation documents.
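
For illustration only, associating such a remediation action with an existing Config rule could look like the boto3 sketch below; the rule name is hypothetical, and AWS-DisablePublicAccessForSecurityGroup is used here as an example AWS-managed Automation document:

import boto3

config = boto3.client("config")

# Attach an SSM Automation document as the automatic remediation action of an
# existing Config rule (the rule name is a placeholder).
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "restricted-ssh-office-only",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-DisablePublicAccessForSecurityGroup",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                # RESOURCE_ID passes the noncompliant security group to the document.
                "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "IpAddressToBlock": {"StaticValue": {"Values": ["0.0.0.0/0"]}},
            },
        }
    ]
)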

You can also write a scheduled Lambda function to check Trusted Advisor regularly. You can specify a fixed
rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression
using CloudWatch Events. You can retrieve and refresh Trusted Advisor results programmatically. The AWS
Support service enables you to write applications that interact with AWS Trusted Advisor.
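
A minimal sketch of such a scheduled Lambda handler is shown below, assuming a Business or Enterprise support plan (required for the AWS Support API); the Trusted Advisor check ID and the SNS topic ARN are placeholders:

import boto3

# The AWS Support API is only served from us-east-1.
support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns")

CHECK_ID = "HCP4007jGY"  # placeholder Trusted Advisor security check ID
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:trusted-advisor-alerts"  # placeholder

def lambda_handler(event, context):
    # Refresh the check, then read its latest result.
    support.refresh_trusted_advisor_check(checkId=CHECK_ID)
    result = support.describe_trusted_advisor_check_result(
        checkId=CHECK_ID, language="en"
    )["result"]

    flagged = result.get("flaggedResources", [])
    if flagged:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Trusted Advisor security findings",
            Message=f"{len(flagged)} resource(s) flagged by the security check.",
        )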

Hence, the correct answers are:

- Create a Lambda function and integrate CloudWatch Events and AWS Lambda to execute the function
on a regular schedule to check AWS Trusted Advisor via API. Based on the results, publish a message to
an SNS Topic to notify the subscribers.

- Set up custom AWS Config rule that checks security groups to make sure that port 22 is not open to
public. Send a notification to an SNS topic for non-compliance.

- Set up custom AWS Config rule to execute a remediation action using AWS Systems Manager
Automation to update the publicly open port 22 on the instances and restrict to only your office’s public
IP.

The option that says: Create an AWS Config Cron job to schedule your checks on all AWS security
groups and send results to SNS for the non-compliance notification is incorrect because you do not schedule
AWS Config with Cron jobs since this service can be configured to run periodically.

The option that says: Set up custom AWS Config rule to execute a remediation action that triggers a
Lambda Function to update the publicly open port 22 in the security group and restrict to only your
office’s public IP is incorrect because the custom remediation actions in AWS Config should be used in
conjunction with AWS Systems Manager Automation documents.

The option that says: Create a Lambda function that executes every hour to refresh AWS Trusted Advisor
scan results via API. The automated notification on AWS Trusted Advisor will notify you of any changes is
incorrect because the notification is only sent on a weekly basis, which can be quite long if you are concerned
about security issues.
References:

https://aws.amazon.com/about-aws/whats-new/2019/03/use-aws-config-to-remediate-noncompliant-resources/

https://docs.aws.amazon.com/config/latest/developerguide/notifications-for-AWS-Config.html

https://aws.amazon.com/blogs/aws/aws-config-rules-dynamic-compliance-checking-for-cloud-resources/

Check out this AWS Config Cheat Sheet:

https://tutorialsdojo.com/aws-config/

Question 57: Correct

A rental payment startup has developed a web portal that enables users to pay for their rent using both their
credit and debit cards online. They are using a third-party payment service to handle and process credit card
payments on their platform since the portal is not fully compliant with the Payment Card Industry Data Security
Standard (PCI DSS). The application is hosted in an Auto Scaling group of Amazon EC2 instances, which are
launched in private subnets behind an internal-facing Application Load Balancer. The system must connect to an
external payment service over the Internet to complete the transaction for every user payment.

As a DevOps Engineer, what would be the MOST suitable option to implement to satisfy the above requirement?

In the Security Group, whitelist the Public IP of the Internet Gateway. Route the user payment requests
through the Internet Gateway. Update the route table associated with one or more of your private subnets
to point Internet-bound traffic to the Internet Gateway.

Use the Application Load Balancer to route payment requests from the application servers through the
Customer Gateway. Update the route table associated with one or more of your private subnets to point
Internet-bound traffic to the Customer Gateway.

Develop a shell script to automatically assign Elastic IP addresses to the Amazon EC2 instances. Add the
script in the User Data of the EC2 instances, which automatically adds the Elastic IP address to the
Network Access List upon launch. Update the route table associated with one or more of your private
subnets to point Internet-bound traffic to a VPC Endpoint.

Using a NAT Gateway, route credit card payment requests from the EC2 instances to the external
payment service. Associate an Elastic IP address to the NAT Gateway. Update the route table associated
with one or more of your private subnets to point Internet-bound traffic to the NAT gateway.

(Correct)

Explanation

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect
to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You
must also specify an Elastic IP address to associate with the NAT gateway when you create it. The Elastic IP
address cannot be changed once you associate it with the NAT Gateway. After you've created a NAT gateway,
you must update the route table associated with one or more of your private subnets to point Internet-bound
traffic to the NAT gateway. This enables instances in your private subnets to communicate with the
internet.

Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that
zone. You have a limit on the number of NAT gateways you can create in an Availability Zone.
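
As an illustrative boto3 sketch (the subnet and route table IDs are placeholders), those steps can be scripted as follows:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbb22222c",  # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Point Internet-bound traffic from the private subnets to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eee44444f",  # private route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)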

Remember the difference between NAT Instance and NAT Gateways. A NAT Instance needs to use a script to
manage failover between instances while this is done automatically in NAT gateways.

Hence, the correct answer is: Using a NAT Gateway, route credit card payment requests from the EC2
instances to the external payment service. Associate an Elastic IP address to the NAT Gateway. Update
the route table associated with one or more of your private subnets to point Internet-bound traffic to the
NAT gateway.

The option that says: In the Security Group, whitelist the Public IP of the Internet Gateway. Route the user
payment requests through the Internet Gateway. Update the route table associated with one or more of
your private subnets to point Internet-bound traffic to the Internet Gateway is incorrect because you cannot
whitelist an IP address using a Security Group. You should use a NAT Gateway instead to enable instances in a
private subnet to connect to the Internet.

The option that says: Use the Application Load Balancer to route payment requests from the application
servers through the Customer Gateway. Update the route table associated with one or more of your
private subnets to point Internet-bound traffic to the Customer Gateway is incorrect because what you need
here is a NAT Gateway and not a Customer Gateway. The use of an ELB is also not suitable for this scenario.

The option that says: Develop a shell script to automatically assign Elastic IP addresses to the Amazon EC2
instances. Add the script in the User Data of the EC2 instances, which automatically adds the Elastic IP
address to the Network Access List upon launch. Update the route table associated with one or more of
your private subnets to point Internet-bound traffic to a VPC Endpoint is incorrect because neither the
required NAT Gateway nor the NAT instance is mentioned in this option. Moreover, a VPC endpoint simply
enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by
PrivateLink. This doesn't allow you to connect to the public Internet.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-comparison.html

Amazon VPC Overview:

https://www.youtube.com/watch?v=oIDHKeNxvQQ

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/amazon-vpc/

Question 58: Incorrect

A leading software company is testing an application that is hosted on an Auto Scaling group of EC2 instances
across multiple Availability Zones behind an Application Load Balancer. For its deployment process, the
company uses the blue/green deployment process with immutable instances to avoid any service degradation.
Recently, the DevOps team noticed that the users are being automatically logged out of the application
intermittently. Whenever a new version of the application is deployed, all users are automatically logged out
which affects the overall user experience. The DevOps team needs a solution to ensure users remain logged in
across scaling events and application deployments.

Which among the following options is the MOST efficient way to solve this issue?

Launch a Redis session store on a large EC2 instance for each supported Availability Zone that are
independent from each other. Configure the application to store user-session information in its
corresponding Redis node.

Launch a new Amazon ElastiCache for Redis cluster with nodes across multiple Availability Zones.
Configure the application to store user-session information in ElastiCache.

(Correct)

Enable sticky sessions in the Application Load Balancer. Configure custom application-based cookies for
each target group.

Launch a Memcached session store on a large EC2 instance on a single Availability Zone. Configure the
application to store user-session information in its corresponding Memcached node.

Explanation

There are various ways to manage user sessions including storing those sessions locally to the node responding
to the HTTP request or designating a layer in your architecture which can store those sessions in a scalable and
robust manner. Common approaches include using sticky sessions or a distributed cache for session management.

In order to address scalability and to provide a shared data storage for sessions that can be accessible from any
individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution
for this is to leverage an In-Memory Key/Value store such as Redis and Memcached.
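
To make that concrete at the application level, here is a small sketch using the redis-py client; the ElastiCache endpoint and the session TTL are assumptions:

import json
import redis

# Connect to the ElastiCache for Redis primary endpoint (placeholder host).
r = redis.Redis(host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 1800  # assumed 30-minute session lifetime

def save_session(session_id, data):
    # Any web server behind the load balancer can write the session.
    r.setex("session:" + session_id, SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id):
    # Any web server can read it back, so users stay logged in across
    # scaling events and blue/green deployments.
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else None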

While Key/Value data stores are known to be extremely fast and provide sub-millisecond latency, the added
network latency and added cost are the drawbacks. An added benefit of leveraging Key/Value stores is that they
can also be utilized to cache any data, not just HTTP sessions, which can help boost the overall performance of
your applications.

A consideration when choosing a distributed cache for session management is determining how many nodes may
be needed in order to manage the user sessions. Generally speaking, this decision can be determined by how
much traffic is expected and/or how much risk is acceptable. In a distributed session cache, the sessions are
divided by the number of nodes in the cache cluster. In the event of a failure, only the sessions that are stored on
the failed node are affected. If reducing risk is more important than cost, adding additional nodes to further
reduce the percent of stored sessions on each node may be ideal even when fewer nodes are sufficient.

Another consideration may be whether or not the sessions need to be replicated or not. Some key/value stores
offer replication via read replicas. In the event of a node failure, the sessions would not be entirely lost. Whether
replica nodes are important in your individual architecture may inform as to which key/value store should be
used. ElastiCache offerings for In-Memory key/value stores include ElastiCache for Redis, which can support
replication, and ElastiCache for Memcached which does not support replication.

Hence, the correct answer is: Launch a new Amazon ElastiCache for Redis cluster with nodes across
multiple Availability Zones. Configure the application to store user-session information in ElastiCache.

The option that says: Launch a Memcached session store on a large EC2 instance on a single Availability
Zone. Configure the application to store user-session information in its corresponding Memcached node is
incorrect as this will not solve the problem. The session store is located in a single instance and on a single
Availability Zone. You should set up a distributed cache for session management using an ElastiCache cluster
that spans across multiple AZs.

The option that says: Launch a Redis session store on a large EC2 instance for each supported Availability
Zone that are independent from each other. Configure the application to store user-session information in
its corresponding Redis node is incorrect because the session store nodes should form a single replicated cluster
that spans Availability Zones, not independent, stand-alone nodes running on separate EC2 instances.

The option that says: Enable sticky sessions in the Application Load Balancer. Configure custom
application-based cookies for each target group is incorrect. Sticky sessions just bind user sessions to a
specific target (instance), so session data is still stored locally on that instance. This means that if
an instance fails, you will most likely lose the session data that are stored on the failed instance. Furthermore, if
the number of your web servers increases, as in a scale-up situation, traffic may be unequally distributed
throughout the web servers because active sessions may reside on certain servers.

References:

https://aws.amazon.com/elasticache/

https://aws.amazon.com/caching/session-management/

Check out this Amazon Elasticache Cheat Sheet:

https://tutorialsdojo.com/amazon-elasticache/

Question 59: Correct

A company is planning to launch its Node.js application to AWS to better serve its clients around the globe. A
hybrid deployment is required to be implemented wherein the application will run on both on-premises
application servers and On-Demand Amazon EC2 instances. The application instances require secure access to
database credentials, which must be encrypted both at rest and in transit.

As a DevOps Engineer, how can you
automate the deployment process of the application in the MOST secure manner?

Store database credentials in the appspec.yml configuration file. Create an IAM policy for allowing access
to only the database credentials. Attach the IAM policy to the role associated with the instance profile for
CodeDeploy-managed instances and the IAM role used for on-premises instances registration on
CodeDeploy. Deploy the application packages to the EC2 instances and on-premises servers using AWS
CodeDeploy.

Using AWS Systems Manager Parameter Store, upload and manage the database credentials with a Secure
String data type. Create an IAM Policy that allows access and decryption of the database credentials.
Attach the IAM policy to the instance profile for CodeDeploy-managed instances as well as to the on-
premises instances using the register-on-premises-instance command. Deploy the application packages
to the EC2 instances and on-premises servers using AWS CodeDeploy.

Using AWS Systems Manager Parameter Store, upload and manage the database credentials with a Secure
String data type. Create an IAM role with an attached policy that allows access and decryption of the
database credentials. Attach this role to the instance profile of the CodeDeploy-managed instances as well
as to the on-premises instances using the register-on-premises-instance command. Deploy the
application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.

(Correct)

Using AWS Systems Manager Parameter Store, upload and manage the database credentials with a Secure
String data type. Create an IAM role that allows access and decryption of the database credentials.
Associate this role to all the Amazon EC2 instances. Upload the application in AWS Elastic Beanstalk
with a Node.js platform configuration and deploy the application revisions to both on-premises servers
and EC2 instances using blue/green deployment.

Explanation

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data
management and secrets management. You can store data such as passwords, database strings, and license codes
as parameter values. You can store values as plain text or encrypted data. You can then reference values by using
the unique name that you specified when you created the parameter. Highly scalable, available, and durable,
Parameter Store is backed by the AWS Cloud.
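
For example (the parameter name and value are placeholders), storing a credential as a SecureString and reading it back with boto3 might look like this:

import boto3

ssm = boto3.client("ssm")

# Store the database password encrypted at rest with a KMS key (SecureString).
ssm.put_parameter(
    Name="/myapp/prod/db_password",  # placeholder parameter name
    Value="S3cureP@ssw0rd",          # placeholder value
    Type="SecureString",
    Overwrite=True,
)

# The application (an EC2 or on-premises instance with the right IAM role)
# retrieves and decrypts the value over TLS at launch.
password = ssm.get_parameter(
    Name="/myapp/prod/db_password",
    WithDecryption=True,
)["Parameter"]["Value"]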

Servers and virtual machines (VMs) in a hybrid environment require an IAM role to communicate with the
Systems Manager service. The role grants AssumeRole trust to the Systems Manager service. You only need to
create the service role for a hybrid environment once for each AWS account.

Users in your company or organization who will use Systems Manager on your hybrid machines must be granted
permission in IAM to call the SSM API.

You can use CodeDeploy to deploy to both Amazon EC2 instances and on-premises instances. An on-premises
instance is any physical device that is not an Amazon EC2 instance that can run the CodeDeploy agent and
connect to public AWS service endpoints. You can use CodeDeploy to simultaneously deploy an application to
Amazon EC2 instances in the cloud and to desktop PCs in your office or servers in your own data center.

To
register an on-premises instance, you must use an IAM identity to authenticate your requests. You can choose
from the following options for the IAM identity and registration method you use:

- Use an IAM User ARN to authenticate requests

- Use an IAM Role ARN to authenticate requests

For maximum control over the authentication and registration of your on-premises instances, you can use
the register-on-premises-instance command and periodically refreshed temporary credentials generated with
the AWS Security Token Service (AWS STS). A static IAM role for the instance assumes the role of these
refreshed AWS STS credentials to perform CodeDeploy deployment operations. This method is most useful
when you need to register a large number of instances. It allows you to automate the registration process with
CodeDeploy. You can use your own identity and authentication system to authenticate on-premises instances
and distribute IAM session credentials from the service to the instances for use with CodeDeploy.

Take note you cannot directly attach an IAM Role to your on-premises servers. You have to set up your on-
premises servers as "on-premises instances" in CodeDeploy with a static IAM Role that your servers can assume.
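
A simplified sketch of that registration step with boto3 is shown below; the instance name and IAM session ARN are placeholders, and the temporary session credentials would need to be refreshed periodically in a real setup:

import boto3

codedeploy = boto3.client("codedeploy")

# Register the on-premises server in CodeDeploy using an assumed-role (AWS STS)
# identity instead of long-lived IAM user credentials.
codedeploy.register_on_premises_instance(
    instanceName="onprem-web-01",  # placeholder
    iamSessionArn="arn:aws:sts::111122223333:assumed-role/CodeDeployOnPrem/onprem-web-01",
)

# Tag the instance so CodeDeploy deployment groups can target it.
codedeploy.add_tags_to_on_premises_instances(
    tags=[{"Key": "Environment", "Value": "hybrid"}],
    instanceNames=["onprem-web-01"],
)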

Hence, the correct answer is: Using AWS Systems Manager Parameter Store, upload and manage the
database credentials with a Secure String data type. Create an IAM role with an attached policy that
allows access and decryption of the database credentials. Attach this role to the instance profile of the
CodeDeploy-managed instances as well as to the on-premises instances using the register-on-premises-
instance command. Deploy the application packages to the EC2 instances and on-premises servers using
AWS CodeDeploy.

The option that says: Using AWS Systems Manager Parameter Store, upload and manage the database
credentials with a Secure String data type. Create an IAM role that allows access and decryption of the
database credentials. Associate this role to all the Amazon EC2 instances. Upload the application in AWS
Elastic Beanstalk with a Node.js platform configuration and deploy the application revisions to both on-
premises servers and EC2 instances using blue/green deployment is incorrect. You can't deploy an
application to your on-premises servers using Elastic Beanstalk. This is only applicable to your Amazon EC2
instances.

The option that says: Using AWS Systems Manager Parameter Store, upload and manage the database
credentials with a Secure String data type. Create an IAM policy that allows access and decryption of the
database credentials. Attach the IAM policy to the instance profile for CodeDeploy-managed instances as
well as to the on-premises instances using the register-on-premises-instance command. Deploy the
application packages to the EC2 instances and on-premises servers using AWS CodeDeploy is incorrect.
You have to use an IAM Role and not an IAM Policy to grant access to AWS Systems Manager Parameter
Store. Alternatively, you can use your IAM User credentials too, but this method isn't secure.

The option that says: Store database credentials in the appspec.yml configuration file. Create an IAM policy
for allowing access to only the database credentials. Attach the IAM policy to the role associated with the
instance profile for CodeDeploy-managed instances and the IAM role used for on-premises instances
registration on CodeDeploy. Deploy the application packages to the EC2 instances and on-premises
servers using AWS CodeDeploy is incorrect. It is not secure to store sensitive database credentials in
an appspec.yml configuration file. You have to use AWS Systems Manager Parameter Store instead.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-service-role.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html

https://docs.aws.amazon.com/codedeploy/latest/userguide/on-premises-instances-register.html

Check out this AWS Systems Manager Cheat Sheet:


https://tutorialsdojo.com/aws-systems-manager/

Question 60: Incorrect

You are the DevOps Engineer for an IT consulting firm that has various teams and departments. Using AWS
Organizations, these entities have been grouped into several organizational units (OUs). The IT Security team
reported that there was a suspected breach in your environment where a third-party AWS account was suddenly
added to your organization without any prior approval. The external account has high-level access privileges to
the accounts that you own but fortunately, no detrimental action was performed.

Which of the following is the MOST appropriate monitoring setup that notifies you of any changes to your
AWS accounts? (Select TWO.)

Use the AWS Systems Manager and CloudWatch Events to monitor all changes to your organization and
also to notify you of any new activities or configurations made to your account.

Launch an AWS-approved third-party monitoring tool from the AWS Marketplace that would send alerts
if a breach was detected. Analyze any possible breach using AWS GuardDuty. Use Amazon SNS to notify
the administrators.

Monitor the compliance of your AWS Organizations using AWS Config. Launch a new SNS Topic or
Amazon CloudWatch Events that will send alerts to you for any changes.

(Correct)

Create an Amazon CloudWatch Dashboard in order to monitor any changes to your organization. Launch
a new Amazon SNS topic that would send you and your team a notification.

Launch a new trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including
calls from the AWS Organizations console. Also, track all code calls to the AWS Organizations APIs.
Integrate CloudWatch Events and Amazon SNS to raise events when administrator-specified actions
occur in an organization and configure it to send a notification.

(Correct)

Explanation

AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts
into an organization that you create and centrally manage. AWS Organizations includes account management
and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs
of your business. As an administrator of an organization, you can create accounts in your organization and invite
existing accounts to join the organization.

AWS Organizations can work with CloudWatch Events to raise events when administrator-specified actions
occur in an organization. For example, because of the sensitivity of such actions, most administrators would
want to be warned every time someone creates a new account in the organization or when an administrator of a
member account attempts to leave the organization. You can configure CloudWatch Events rules that look for
these actions and then send the generated events to administrator-defined targets. Targets can be an Amazon
SNS topic that emails or text messages its subscribers. Combining this with Amazon CloudTrail, you can set an
event to trigger whenever a matching API call is received.
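
As a sketch of that wiring (the rule name and the SNS topic ARN are placeholders), a CloudWatch Events rule that matches AWS Organizations API calls recorded by CloudTrail and forwards them to an SNS topic could be created like this:

import json
import boto3

events = boto3.client("events")

# Match Organizations API calls (for example, InviteAccountToOrganization or
# CreateAccount) that CloudTrail delivers to CloudWatch Events.
events.put_rule(
    Name="org-change-alerts",  # placeholder rule name
    EventPattern=json.dumps({
        "source": ["aws.organizations"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["organizations.amazonaws.com"],
            "eventName": ["InviteAccountToOrganization", "CreateAccount",
                          "LeaveOrganization", "RemoveAccountFromOrganization"],
        },
    }),
    State="ENABLED",
)

# Send the matched events to an SNS topic that notifies the administrators.
events.put_targets(
    Rule="org-change-alerts",
    Targets=[{
        "Id": "notify-admins",
        "Arn": "arn:aws:sns:us-east-1:111122223333:org-change-alerts",  # placeholder
    }],
)
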
Multi-account, multi-region data aggregation in AWS Config enables you to aggregate AWS Config data from
multiple accounts and regions into a single account. Multi-account, multi-region data aggregation is useful for
central IT administrators to monitor compliance for multiple AWS accounts in the enterprise. An aggregator is a
new resource type in AWS Config that collects AWS Config data from multiple source accounts and regions.

Hence, the following are the correct answers in this scenario:

1. Launch a new trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including
calls from the AWS Organizations console. Also, track all code calls to the AWS Organizations APIs.
Integrate CloudWatch Events and Amazon SNS to raise events when administrator-specified actions occur in
an organization and configure it to send a notification.

2. Monitor the compliance of your AWS Organizations using AWS Config. Launch a new SNS Topic or
Amazon CloudWatch Events that will send alerts to you for any changes.

The option that says: Use the AWS Systems Manager and CloudWatch Events to monitor all changes to your
organization and also to notify you of any new activities or configurations made to your account is incorrect
because AWS Systems Manager is a collection of capabilities for configuring and managing your Amazon EC2
instances, on-premises servers and virtual machines, and other AWS resources at scale. This can't be used to
monitor the changes to the set up of AWS Organizations.

The option that says: Create an Amazon CloudWatch Dashboard in order to monitor any changes to your
organization. Launch a new Amazon SNS topic that would send you and your team a notification is incorrect
because a CloudWatch Dashboard is primarily used to monitor your AWS resources and not the configuration of
your AWS Organizations. Although you can enable the sharing of all CloudWatch Events across all accounts in
your organization, this can't be used to monitor if there is a new AWS account added to your AWS
Organizations. Most of the time, the Amazon CloudWatch Events service is primarily used to monitor your
AWS resources and the applications you run on AWS in real-time.

The option that says: Launch an AWS-approved third-party monitoring tool from the AWS Marketplace that
would send alerts if a breach was detected. Analyze any possible breach using AWS GuardDuty. Use Amazon
SNS to notify the administrators is incorrect because this option entails a lot of configuration, which is not fit
for the scenario. GuardDuty might not determine similar future incidents as malicious if it was performed by an
authenticated user already within the organization.

References:

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_monitoring.html

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_cwe.html

Check out this AWS Organizations Cheat Sheet:

https://tutorialsdojo.com/aws-organizations/

Question 61: Correct

An IT consulting firm has a Docker application hosted on an Amazon ECS cluster in their AWS VPC. It has
been encountering intermittent unavailability issues and time outs lately, which affects their production
environment. A DevOps engineer was instructed to instrument the application to detect where high latencies are
occurring and to determine the specific services and paths impacting application performance.

Which of the following steps should the Engineer do to accomplish this task properly?

Add the xray-daemon.config configuration file in your Docker image. Set up the port mappings and
network mode settings in your task definition file to allow traffic on UDP port 2000.

In the Amazon ECS container agent configuration, pass a user data script in the /etc/ecs/ecs.config file
that will install the X-Ray daemon. The script will automatically run when the Amazon ECS container
instance is launched. Configure the network mode settings and port mappings in the container agent to
allow traffic on TCP port 3000

Produce a Docker image that runs the X-Ray daemon. Upload the image to a Docker image repository,
and then deploy it to your Amazon ECS cluster. Configure the network mode settings and port mappings
in your task definition file to allow traffic on UDP port 2000.

(Correct)

Produce a Docker image that runs the X-Ray daemon. Upload the image to a Docker image repository and
then deploy it to your Amazon ECS cluster. Configure the network mode settings and port mappings in
the container agent to allow traffic on TCP port 2000.

Explanation

The AWS X-Ray SDK does not send trace data directly to AWS X-Ray. To avoid calling the service every time
your application serves a request, the SDK sends the trace data to a daemon, which collects segments for
multiple requests and uploads them in batches. Use a script to run the daemon alongside your application.

To properly instrument your applications in Amazon ECS, you have to create a Docker image that runs the X-
Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster. You can
use port mappings and network mode settings in your task definition file to allow your application to
communicate with the daemon container.

The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw
segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray
SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
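
To make the port mapping concrete, here is a hedged sketch that registers a task definition with the official X-Ray daemon image as a sidecar container listening on UDP 2000; the family name, application image, and resource sizes are assumptions:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-app-with-xray",  # placeholder family name
    networkMode="awsvpc",        # containers in the task share localhost
    requiresCompatibilities=["EC2"],
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "web-app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "essential": True,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            # Point the X-Ray SDK at the daemon container.
            "environment": [
                {"name": "AWS_XRAY_DAEMON_ADDRESS", "value": "localhost:2000"}
            ],
        },
        {
            "name": "xray-daemon",
            "image": "amazon/aws-xray-daemon",
            "essential": False,
            "portMappings": [{"containerPort": 2000, "protocol": "udp"}],
        },
    ],
)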

Hence, the correct steps to properly instrument the application is to: Produce a Docker image that runs the X-
Ray daemon. Upload the image to a Docker image repository, and then deploy it to your Amazon ECS cluster.
Configure the network mode settings and port mappings in your task definition file to allow traffic on UDP
port 2000.

The option that says: Produce a Docker image that runs the X-Ray daemon. Upload the image to a Docker
image repository, and then deploy it to your Amazon ECS cluster. Configure the network mode settings and
port mappings in the container agent to allow traffic on TCP port 2000 is incorrect because this should be done
in the task definition and not in the container agent. Moreover, X-Ray is primarily using the UDP port 2000 so
this should also be added alongside the TCP port mapping.

The option that says: In the Amazon ECS container agent configuration, pass a user data script in
the /etc/ecs/ecs.config file that will install the X-Ray daemon. The script will automatically run when the
Amazon ECS container instance is launched. Configure the network mode settings and port mappings in the
container agent to allow traffic on TCP port 3000 is incorrect because X-Ray is using the UDP port 2000 and
not TCP port 3000. In addition, it is better to configure your ECS task definitions instead of the Amazon ECS
container agent to enable X-Ray.

The option that says: Add the xray-daemon.config configuration file in your Docker image. Set up the port
mappings and network mode settings in your task definition file to allow traffic on UDP port 2000 is incorrect
because this step is not suitable for ECS. The xray-daemon.config configuration file is primarily used in Elastic
Beanstalk.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html

https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html

https://docs.aws.amazon.com/xray/latest/devguide/scorekeep-ecs.html

Check out this AWS X-Ray Cheat Sheet:

https://tutorialsdojo.com/aws-x-ray/

Instrumenting your Application with AWS X-Ray:

https://tutorialsdojo.com/instrumenting-your-application-with-aws-x-ray/

Question 62: Incorrect

A PHP web application is uploaded to the AWS Elastic Beanstalk of the company's development account to
automatically handle the deployment, capacity provisioning, load balancing, auto-scaling, and application health
monitoring. Amazon RDS is used as the database, and it is tightly coupled to the Elastic Beanstalk environment.
A DevOps Engineer noticed that if you terminate the environment, its database goes down as well. This issue
prevents you from performing seamless updates with blue-green deployments. Moreover, this poses a critical
security risk if the company decides to deploy the application in its production environment.

How can the
DevOps Engineer decouple the database instance from the environment with the LEAST amount of data loss?

Decouple the Amazon RDS instance from your Elastic Beanstalk environment using the Canary
deployment strategy. Take an RDS DB snapshot of the database and enable deletion protection. Set up a
new Elastic Beanstalk environment with the necessary information to connect to the Amazon RDS
instance and delete the old environment.

Decouple the Amazon RDS instance from your Elastic Beanstalk environment using the Canary
deployment strategy. Take an RDS DB snapshot of the database and then set up a new Elastic Beanstalk
environment with the necessary information to connect to the Amazon RDS instance.

Decouple the Amazon RDS instance from your Elastic Beanstalk environment using the blue/green
deployment strategy. Take an RDS DB snapshot of the database and enable deletion
protection. Set up a new Elastic Beanstalk environment with the necessary information to connect to the
Amazon RDS instance. Before terminating the old Elastic Beanstalk environment, remove its security
group rule first before proceeding.

(Correct)

Decouple the Amazon RDS instance from your Elastic Beanstalk environment using a blue/green
deployment strategy. Enable deletion protection and take an RDS DB snapshot of the database. Set up a
new Elastic Beanstalk environment with the necessary information to connect to the Amazon RDS
instance and delete the old environment.

Explanation

AWS Elastic Beanstalk provides support for running Amazon Relational Database Service (Amazon RDS)
instances in your Elastic Beanstalk environment. This works great for development and testing environments.
However, it isn't ideal for a production environment because it ties the lifecycle of the database instance to the
lifecycle of your application's environment.

If you haven't used a DB instance with your application before, try adding one to a test environment with the
Elastic Beanstalk console first. This lets you verify that your application is able to read environment properties,
construct a connection string, and connect to a DB instance before you add Amazon Virtual Private Cloud
(Amazon VPC) and security group configuration to the mix.

To decouple your database instance from your
environment, you can run a database instance in Amazon RDS and configure your application to connect to it on
launch. This enables you to connect multiple environments to a database, terminate an environment without
affecting the database, and perform seamless updates with blue-green deployments.

To allow the Amazon EC2 instances in your environment to connect to an outside database, you can configure
the environment's Auto Scaling group with an additional security group. The security group that you attach to
your environment can be the same one that is attached to your database instance, or a separate security group
from which the database's security group allows ingress.

You can connect your environment to a database by adding a rule to your database's security group that allows
ingress from the autogenerated security group that Elastic Beanstalk attaches to your environment's Auto Scaling
group. However, doing so creates a dependency between the two security groups. Subsequently, when you
attempt to terminate the environment, Elastic Beanstalk will be unable to delete the environment's security group
because the database's security group is dependent on it.
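
A hedged sketch of those safety steps (snapshot, deletion protection, and removal of the dependent security group rule) could look like this with boto3; all of the identifiers below are placeholders:

import boto3

rds = boto3.client("rds")
ec2 = boto3.client("ec2")

DB_ID = "my-beanstalk-db"  # placeholder DB instance identifier

# 1. Snapshot the database and turn on deletion protection before touching the
#    old Elastic Beanstalk environment.
rds.create_db_snapshot(
    DBSnapshotIdentifier=DB_ID + "-pre-decouple",
    DBInstanceIdentifier=DB_ID,
)
rds.modify_db_instance(
    DBInstanceIdentifier=DB_ID,
    DeletionProtection=True,
    ApplyImmediately=True,
)

# 2. Remove the ingress rule that references the old environment's
#    auto-generated security group so the environment can be terminated.
ec2.revoke_security_group_ingress(
    GroupId="sg-0db11111111111111",  # database security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0old2222222222222"}],  # old environment SG (placeholder)
    }],
)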

Hence, the correct answer is: Decouple the Amazon RDS instance from your Elastic Beanstalk environment
using the blue/green deployment strategy. Take an RDS DB snapshot of the database and
enable deletion protection. Set up a new Elastic Beanstalk environment with the necessary information to
connect to the Amazon RDS instance. Before terminating the old Elastic Beanstalk environment, remove
its security group rule first before proceeding.

The option that says: Decouple the Amazon RDS instance from your Elastic Beanstalk environment using a
blue/green deployment strategy. Enable deletion protection and take an RDS DB snapshot of the database.
Set up a new Elastic Beanstalk environment with the necessary information to connect to the Amazon
RDS instance and delete the old environment is incorrect. Although the deployment strategy being used here
is valid, the existing security group rule is not yet removed which hinders the deletion of the old environment.
The option that says: Decouple the Amazon RDS instance from your Elastic Beanstalk environment using
the Canary deployment strategy. Take an RDS DB snapshot of the database and enable deletion
protection. Set up a new Elastic Beanstalk environment with the necessary information to connect to the
Amazon RDS instance and delete the old environment is incorrect because there is no Canary deployment
configuration in Elastic Beanstalk. This type of deployment strategy is usually used in Lambda.

The option that says: Decouple the Amazon RDS instance from your Elastic Beanstalk environment using
the Canary deployment strategy. Take an RDS DB snapshot of the database and then set up a new Elastic
Beanstalk environment with the necessary information to connect to the Amazon RDS instance is incorrect
because you should use a blue/green deployment strategy instead. This will also cause data loss since the
deletion protection for the database is not enabled. Moreover, there is no Canary deployment configuration in
Elastic Beanstalk.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy:

https://tutorialsdojo.com/elastic-beanstalk-vs-cloudformation-vs-opsworks-vs-codedeploy/

Question 63: Correct

A stock trading company has recently built a Node.js web application with a GraphQL API-backed service that
stores and retrieves financial transactions. The application is currently hosted in an on-premises server with a
local MySQL database as its data store. For testing purposes, the app will have a series of updates based on the
feedback of the QA and UX teams. There should be no downtime or degraded performance in the application
while the DevOps team is building, testing, and deploying the new release versions. The architecture should also
be scalable to meet the surge of application requests.

Which of the following actions will allow the DevOps team to quickly migrate the application to AWS?

Migrate the application source code to CodeCommit and use CodeBuild to set up the automatic unit and
functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon RDS database with
Multi-AZ deployments configuration. Deploy the current application version on the two environments.
Configure CodeBuild to deploy the succeeding application revision to Elastic Beanstalk. Use a blue/green
strategy for deployment.

(Correct)

Migrate the application source code to CodeCommit and use CodeBuild to set up the automatic unit and
functional tests. Set up two stacks in Elastic Beanstalk with a separate Amazon RDS database with Multi-
AZ deployments configuration for each stack. Deploy the current application version on the two
environments. Configure CodeBuild to deploy the succeeding application revision to Elastic Beanstalk.
Use a blue/green strategy for deployment.

Migrate the application source code to Amazon ECR and use CodeDeploy to set up the automatic unit and
functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon RDS database with
Multi-AZ deployments configuration. Deploy the current application version on the two environments.
Configure CodeBuild to deploy the succeeding application revision to Elastic Beanstalk. Use an in-place
strategy for deployment.

Migrate the application source code to CodeCommit and use CodeBuild to set up the automatic unit and
functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon RDS database with
Multi-AZ deployments configuration. Deploy the current application version on the two environments.
Configure CodeBuild to deploy the succeeding application revision to Elastic Beanstalk. Use an in-place
strategy for deployment.

Explanation

Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your
application can become unavailable to users for a short period of time. You can avoid this downtime by
performing a blue/green deployment, where you deploy the new version to a separate environment, and then
swap CNAMEs of the two environments to redirect traffic to the new version instantly. A blue/green deployment
is also required when you want to update an environment to an incompatible platform version.
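
For instance (the environment names are placeholders), the CNAME swap itself is a single API call once the green environment has passed its tests:

import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the blue (current) and green (new) environments so that
# traffic shifts to the new application version almost instantly.
eb.swap_environment_cnames(
    SourceEnvironmentName="portal-blue",
    DestinationEnvironmentName="portal-green",
)

Running the same call again swaps the CNAMEs back, which is what gives blue/green deployments their quick rollback path.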

Blue/green deployments require that your environment run independently of your production database, if your
application uses one. If your environment has an Amazon RDS DB instance attached to it, the data will not
transfer over to your second environment, and will be lost if you terminate the original environment.

AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. You
can use CodeBuild together with the EB CLI to automate building your application from its source code.
Environment creation and each deployment thereafter start with a build step, and then deploy the resulting
application.

Hence, the correct answer is: Migrate the application source code to CodeCommit and use CodeBuild to set
up the automatic unit and functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon
RDS database with Multi-AZ deployments configuration. Deploy the current application version on the
two environments. Configure CodeBuild to deploy the succeeding application revision to Elastic
Beanstalk. Use a blue/green strategy for deployment.

The option that says: Migrate the application source code to CodeCommit and use CodeBuild to set up the
automatic unit and functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon RDS
database with Multi-AZ deployments configuration. Deploy the current application version on the two
environments. Configure CodeBuild to deploy the succeeding application revision to Elastic Beanstalk.
Use an in-place strategy for deployment is incorrect because it is better to use a blue/green deployment
configuration to ensure the high-availability of your application even during deployment.

The option that says: Migrate the application source code to CodeCommit and use CodeBuild to set up the
automatic unit and functional tests. Set up two stacks in Elastic Beanstalk with a separate Amazon RDS
database with Multi-AZ deployments configuration for each stack. Deploy the current application version
on the two environments. Configure CodeBuild to deploy the succeeding application revision to Elastic
Beanstalk. Use a blue/green strategy for deployment is incorrect because it is not appropriate to launch a
separate Amazon RDS database with Multi-AZ deployments configuration on each Elastic Beanstalk
environment. It is recommended to decouple your database from your Elastic Beanstalk environment.

The option that says: Migrate the application source code to Amazon ECR and use CodeDeploy to set up
the automatic unit and functional tests. Set up two stacks in Elastic Beanstalk with an external Amazon
RDS database with Multi-AZ deployments configuration. Deploy the current application version on the
two environments. Configure CodeBuild to deploy the succeeding application revision to Elastic
Beanstalk. Use an in-place strategy for deployment is incorrect because Amazon ECR is a container image
registry for storing Docker images, not a source code repository. You also have to use CodeBuild to set up the automatic unit and functional
tests and not CodeDeploy. In addition, it is better to use a blue/green deployment configuration to ensure the
high availability of your application even during deployment.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli-codebuild.html

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 64: Correct

A leading digital consultancy company has two teams in its IT department: the DevOps team and the Security
team, that are working together on different components of its cloud architecture. AWS CloudFormation is used
to manage its resources across all of its AWS accounts, including AWS Config for configuration management.
The Security team applies the operating system-level updates and patches while the DevOps team manages
application-level dependencies and updates. The DevOps team must use the latest AMI when launching new
EC2 instances and deploying its flagship application.

Which of the following options is the MOST scalable
method for integrating the two processes and teams?

Instruct the Security team to use a CloudFormation stack that launches an AWS CodePipeline pipeline
that builds new AMIs then store the latest AMI ARNs in an encrypted S3 object as part of the pipeline
output. Order the DevOps team to use a cross-stack reference within their own CloudFormation template
to get that S3 object location and obtain the most recent AMI ARNs to use when deploying their
application.

Instruct the Security team to set up an AWS CloudFormation template that creates new versions of their
AMIs and lists the Amazon Resource names (ARNs) of the AMIs in an encrypted S3 object as part of the
stack output section. Direct the DevOps team to use the cross-stack reference to load the encrypted S3
object and obtain the most recent AMI ARNs.

Instruct the Security team to set up an AWS CloudFormation stack that creates an AWS CodePipeline
pipeline that builds new Amazon Machine Images. Then, store the AMI ARNs as parameters in AWS
Systems Manager Parameter Store as part of the pipeline output. Order the DevOps team to use
the AWS::SSM::Parameter section in their CloudFormation stack to obtain the most recent AMI ARN from
the Parameter Store.

(Correct)

Instruct the Security team to maintain a nested stack in AWS CloudFormation that includes both the OS
and the templates from the DevOps team. Order the Security team to use the stack update action to deploy
updates to the application stack whenever the DevOps team changes the application code.

Explanation

Dynamic references provide a compact, powerful way for you to specify external values that are stored and
managed in other services, such as the Systems Manager Parameter Store, in your stack templates. When you use
a dynamic reference, CloudFormation retrieves the value of the specified reference when necessary during stack
and change set operations.

CloudFormation currently supports the following dynamic reference patterns:

- ssm, for plaintext values stored in AWS Systems Manager Parameter Store

- ssm-secure, for secure strings stored in AWS Systems Manager Parameter Store

- secretsmanager, for entire secrets or specific secret values that are stored in AWS Secrets Manager

Some considerations when using dynamic references:

- You can include up to 60 dynamic references in a stack template.

- For transforms, such as AWS::Include and AWS::Serverless, AWS CloudFormation does not resolve dynamic
references prior to invoking any transforms. Rather, AWS CloudFormation passes the literal string of the
dynamic reference to the transform. Dynamic references (including those inserted into the processed template as
the result of a transform) are resolved when you execute the change set using the template.

- Dynamic references for secure values, such as ssm-secure and secretsmanager, are not currently supported in
custom resources.
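
As a sketch under these assumptions (hypothetical parameter name and AMI ID), the Security team's pipeline could publish the latest AMI to Parameter Store like this, and the DevOps team's template would then resolve it at deploy time through an ssm dynamic reference or a parameter of type AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>:

import boto3

ssm = boto3.client("ssm")

# Final step of the AMI-building pipeline: publish the newest AMI so that other
# stacks can look it up at deploy time (name and value are placeholders).
ssm.put_parameter(
    Name="/golden-ami/windows/latest",
    Value="ami-0abcdef1234567890",
    Type="String",
    Overwrite=True,
)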

Hence, the correct answer is: Instruct the Security team to set up an AWS CloudFormation stack that
creates an AWS CodePipeline pipeline that builds new Amazon Machine Images. Then, store the AMI
ARNs as parameters in AWS Systems Manager Parameter Store as part of the pipeline output. Order the
DevOps team to use the AWS::SSM::Parameter section in their CloudFormation stack to obtain the most
recent AMI ARN from the Parameter Store.

The option that says: Instruct the Security team to set up an AWS CloudFormation template that creates
new versions of their AMIs and lists the Amazon Resource names (ARNs) of the AMIs in an encrypted S3
object as part of the stack output section. Direct the DevOps team to use the cross-stack reference to load
the encrypted S3 object and obtain the most recent AMI ARNs is incorrect because it is better to store the
parameters in AWS Systems Manager Parameter Store.

The option that says: Instruct the Security team to maintain a nested stack in AWS CloudFormation that
includes both the OS and the templates from the DevOps team. Order the Security team to use the stack
update action to deploy updates to the application stack whenever the DevOps team changes the
application code is incorrect because using a nested stack will not decouple the responsibility of the two teams.
Integrating AWS Systems Manager Parameter Store to store the ARN of the AMIs is a better solution.

The option that says: Instruct the Security team to use a CloudFormation stack that launches an AWS
CodePipeline pipeline that builds new AMIs, then store the latest AMI ARNs in an encrypted S3 object as
part of the pipeline output. Order the DevOps team to use a cross-stack reference within their own
CloudFormation template to get that S3 object location and obtain the most recent AMI ARNs to use
when deploying their application is incorrect. Although this is a valid solution, it entails a lot of effort to set up
a cross-stack reference within the DevOps team's own CloudFormation template to get that S3 object location
and obtain the most recent AMI ARNs. You also have to ensure that the AMI ARN on Amazon S3 is the latest
one.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html

https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cloudformation/

AWS CloudFormation - Templates, Stacks, Change Sets:

https://www.youtube.com/watch?v=9Xpuprxg7aY

Question 65: Correct

A company has an innovative mobile payment app that enables the users to easily pay their bills and transfer
money without the hassle of logging in to their online banking, entering the account details of the other party and
spending time going through other security verification processes. Anyone can easily pay another person, split
the bill with their friends or pay for their coffee in an instant with just a few taps in their revolutionary mobile
app. It is available on both Android and iOS devices, including a web portal that is deployed in AWS using
OpsWorks Stacks and EC2 instances. The company has over 10 million users nationwide and close to 1,000
transactions every hour. A new feature is ready to be released which will enable the users to store their credit
card information in their systems. However, the new version of the APIs and web portal cannot be deployed to
the existing application stack due to the PCI-DSS compliance rule.

How should the DevOps Engineer deploy the new web portal for the mobile app without having any impact on
the company's 10 million users?

Launch a brand new stack that contains the latest PCI-DSS compliant version of the web portal. Direct all
the incoming traffic to the new application stack at once using the Amazon Route 53 service so that all the
viewers can get to access new features.

Use a Blue/Green deployment strategy to deploy the new PCI-DSS compliant web portal using AWS
CodeDeploy, AWS SAM, and Lambda. The green environment represents the current web portal version
serving production traffic while the blue environment is running the new version of the web portal.

Upgrade the existing application stack in the Production environment in order to make it PCI-DSS
compliant. Deploy the new version of the web portal on the existing application stack.

Launch a brand new OpsWorks stack that contains a new layer with the latest PCI-DSS compliant web
portal version. Use a Blue/Green deployment strategy with Amazon Route 53 that shifts traffic between
the existing stack and new stack. Route only a small portion of incoming production traffic to use the new
application stack while maintaining the old application stack. Check the features of the new portal and
once it's 100% validated, slowly increase incoming production traffic to the new stack. Change the Route
53 to revert to old stack if there are issues on the new stack.

(Correct)

Explanation

Blue/green deployments provide near zero-downtime release and rollback capabilities. The fundamental idea
behind blue/green deployment is to shift traffic between two identical environments that are running different
versions of your application. The blue environment represents the current application version serving production
traffic. In parallel, the green environment is staged running a different version of your application. After the
green environment is ready and tested, production traffic is redirected from blue to green. If any problems are
identified, you can roll back by reverting traffic back to the blue environment.
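
To illustrate the traffic-shifting step used in the correct answer, the following is a minimal boto3 sketch (not part of the official answer) that uses Route 53 weighted records to split traffic between the old and new OpsWorks stacks. The hosted zone ID, record name, and stack endpoints are placeholders.

    import boto3

    route53 = boto3.client("route53")

    def shift_traffic(old_weight, new_weight):
        # Weighted records with the same name and type split traffic by weight,
        # e.g. 90/10 at first, then 0/100 once the new stack is fully validated.
        route53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
            ChangeBatch={
                "Changes": [
                    {
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": "portal.example.com",
                            "Type": "CNAME",
                            "SetIdentifier": "blue-old-stack",
                            "Weight": old_weight,
                            "TTL": 60,
                            "ResourceRecords": [{"Value": "old-stack-elb.example.com"}],
                        },
                    },
                    {
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": "portal.example.com",
                            "Type": "CNAME",
                            "SetIdentifier": "green-new-stack",
                            "Weight": new_weight,
                            "TTL": 60,
                            "ResourceRecords": [{"Value": "new-stack-elb.example.com"}],
                        },
                    },
                ]
            },
        )

    shift_traffic(old_weight=90, new_weight=10)  # start with a small portion of traffic
    # ...validate the new stack, then shift_traffic(0, 100); revert with shift_traffic(100, 0) if issues appear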

Hence, the correct answer in this scenario is the option that says: Launch a brand new OpsWorks stack that
contains a new layer with the latest PCI-DSS compliant web portal version. Use a Blue/Green deployment
strategy with Amazon Route 53 that shifts traffic between the existing stack and new stack. Route only a
small portion of incoming production traffic to use the new application stack while maintaining the old
application stack. Check the features of the new portal and once it's 100% validated, slowly increase
incoming production traffic to the new stack. Change the Route 53 to revert to old stack if there are issues
on the new stack.

The option that says: Upgrade the existing application stack in the Production environment in order to
make it PCI-DSS compliant. Deploy the new version of the web portal on the existing application stack is
incorrect because if you forcibly deploy the new web portal to the existing application stack and there is an issue
on the deployment, there would be some inevitable downtime in order for you to fix the system and revert back
to the original version. It is better to use blue/green deployments instead.

The option that says: Launch a brand new stack that contains the latest PCI-DSS compliant version of the
web portal. Direct all the incoming traffic to the new application stack at once using the Amazon Route 53
service so that all the viewers can get to access new features is incorrect. Although the current application
stack can still be used if something goes wrong, the risk of service disruption is quite high when you direct all
incoming traffic to the new portal. If something did go wrong, then there would be downtime in order for you to
switch back to the old stack.

The option that says: Use a Blue/Green deployment strategy to deploy the new PCI-DSS compliant web
portal using AWS CodeDeploy, AWS SAM and Lambda. The green environment represents the current
web portal version serving production traffic while the blue environment is running the new version of the
web portal is incorrect. Although it mentions a Blue/Green deployment, the application stack does not involve
serverless computing at all, so Lambda and AWS SAM are irrelevant in this situation. Moreover, the
descriptions of the blue and green environments are switched: the blue environment represents the
current application while the green one runs the new version.

References:

https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

https://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html

https://docs.aws.amazon.com/opsworks/latest/userguide/best-deploy.html

Question 66: Correct

You are developing a mobile quiz app hosted on AWS, and you're using AWS CodeDeploy to deploy the
application on your cluster of EC2 instances. There is a hotfix that needs to be applied after the scheduled
deployment. These files are not included in the current application revision, so you uploaded them manually to
the target instances. On the next application release, the hotfix files were included in the new revision, but the
deployment failed, so CodeDeploy automatically rolled back to the previous version. However, you have noticed
that the hotfix files that you manually added are missing, and the application is not working properly.

Which of the following is the possible cause and how will you prevent it from happening in the future?

By default, CodeDeploy retains all files on the deployment location but the auto rollback will deploy the
old revision files cleanly. You should choose “Overwrite the content” option for future deployments so
that only the files included in the old app revision will be overwritten and the existing contents will be
retained.

By default, CodeDeploy removes all files on the deployment location and the auto rollback will deploy
the old revision files cleanly. You should choose “Retain the content” option for future deployments so
that only the files included in the old app revision will be deployed and the existing contents will be
retained.

(Correct)

By default, CodeDeploy retains all files on the deployment location but the auto rollback will deploy the
old revision files cleanly. You should choose “Fail the deployment” option for future deployments so that
only the files included in the old app revision will be deployed and the existing contents will be retained.

By default, CodeDeploy removes all files on the deployment location and the auto rollback will deploy
the old revision files cleanly. You should choose “Overwrite the content” option for future deployments
so that only the files included in the old app revision will be overwritten and the existing contents will be
retained.

Explanation

As part of the deployment process, the CodeDeploy agent removes from each instance all the files installed by
the most recent deployment. If files that weren’t part of a previous deployment appear in target deployment
locations, you can choose what CodeDeploy does with them during the next deployment:

Fail the deployment — An error is reported and the deployment status is changed to Failed.

Overwrite the content — The version of the file from the application revision replaces the version already on
the instance.
Retain the content — The file in the target location is kept and the version in the application revision is not
copied to the instance.
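
For future deployments, this behavior can be set per deployment through the fileExistsBehavior parameter (RETAIN in this scenario). Below is a minimal, hedged boto3 sketch; the application, deployment group, bucket, and revision names are placeholders.

    import boto3

    codedeploy = boto3.client("codedeploy")

    codedeploy.create_deployment(
        applicationName="quiz-app",            # placeholder names
        deploymentGroupName="quiz-app-prod",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-revisions-bucket",
                "key": "quiz-app-v2.zip",
                "bundleType": "zip",
            },
        },
        # Keep files (such as manually applied hotfixes) that already exist in the
        # deployment location but were not part of the previous revision.
        fileExistsBehavior="RETAIN",
        autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
    )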

Hence, the correct answer is: By default, CodeDeploy removes all files on the deployment location and the
auto rollback will deploy the old revision files cleanly. You should choose “Retain the content” option for
future deployments so that only the files included in the old app revision will be deployed and the existing
contents will be retained.

The option that says: By default, CodeDeploy removes all files on the deployment location and the auto
rollback will deploy the old revision files cleanly. You should choose “Overwrite the content” option for
future deployments so that only the files included in the old app revision will be overwritten and the existing
contents will be retained is incorrect. Since you want to retain the already existing hotfix files, you should use
the “Retain the content” option instead.

The option that says: By default, CodeDeploy retains all files on the deployment location but the auto rollback
will deploy the old revision files cleanly. You should choose “Overwrite the content” option for future
deployments so that only the files included in the old app revision will be overwritten and the existing contents
will be retained is incorrect. The “Overwrite the content” option actually replaces the existing files with the
versions from the application revision rather than retaining them. In addition, as part of the deployment process,
the CodeDeploy agent removes from each instance all the files installed by the most recent deployment; it does
not retain them by default. Moreover, the “Retain the content” option is the more suitable choice for future
deployments in this scenario.

The option that says: By default, CodeDeploy retains all files on the deployment location but the auto rollback
will deploy the old revision files cleanly. You should choose "Fail the deployment" option for future
deployments so that only the files included in the old app revision will be deployed and the existing contents
will be retained is incorrect because, as part of the deployment process, the CodeDeploy agent removes from
each instance all the files installed by the most recent deployment rather than retaining them. You should also
choose the “Retain the content” option for future deployments.

References:

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-rollback-and-redeploy.html#deployments-rollback-and-redeploy-content-options

https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments.html

Check out this AWS CodeDeploy Cheat Sheet:

https://tutorialsdojo.com/aws-codedeploy/

Question 67: Correct

A DevOps Engineer has recently imported a Linux VM hosted on-premises to Amazon EC2. This instance is
running a legacy application that is difficult to replicate or back up. However, you are still required to create
backups for this instance since it holds important data. Your solution is to take an EBS snapshot of the instance's
volumes every day.

Which of the following is the EASIEST way to automate the backup process?

Create a CloudWatch Events rule that is scheduled to run at midnight. Set the target to directly call the
EC2 CreateSnapshot API to create a snapshot of the needed EBS volumes.

(Correct)

Create a CloudWatch Events rule that is scheduled every midnight. Set the target to a Lambda function
that will run the EC2 CreateSnapshot API to your defined EBS volumes.

Create a script from inside the instance that will run the aws ec2 create-snapshot command. Schedule it
to run every midnight using crontab.

Create a CloudWatch Events rule that is scheduled every midnight. Set the target to a Systems Manager
Run command to run the EC2 CreateSnapshot API call of the EBS volumes from inside the EC2 instance.

Explanation

You can call the EC2 CreateSnapshot API directly as a target from CloudWatch Events. You can also run
CloudWatch Events rules according to a schedule and create an automated snapshot of an existing Amazon
Elastic Block Store (Amazon EBS) volume on a schedule. You can choose a fixed rate to create a snapshot at
fixed intervals or use a cron expression to specify that the snapshot is made at a specific time of day.

Creating rules with built-in targets is supported only in the AWS Management Console. From the CloudWatch
Events console, create a new rule scheduled to run at midnight every day. Then on the Targets section, choose
EC2 CreateSnapshot API call and input the EBS volume ID that you need to snapshot. You can specify multiple
targets if you need to snapshot multiple EBS volumes.
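
A hedged boto3 sketch of the scheduled rule itself is shown below; the built-in EC2 CreateSnapshot target is then attached from the CloudWatch Events console as described above. The rule name and schedule are illustrative only.

    import boto3

    events = boto3.client("events")

    # Run every day at midnight UTC
    events.put_rule(
        Name="daily-ebs-snapshot",
        ScheduleExpression="cron(0 0 * * ? *)",
        State="ENABLED",
        Description="Triggers the built-in EC2 CreateSnapshot target once a day",
    )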

Alternatively, you can also use the Amazon Data Lifecycle Manager (DLM) for EBS Snapshots. This service
provides a simple, automated way to back up data stored on Amazon EBS volumes. You can define backup and
retention schedules for EBS snapshots by creating lifecycle policies based on tags. With this feature, you no
longer have to rely on custom scripts to create and manage your backups.
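
If the Amazon DLM route is preferred instead, a lifecycle policy for tagged volumes might look roughly like the following sketch; the role ARN, tag key/value, and retention count are assumptions.

    import boto3

    dlm = boto3.client("dlm")

    dlm.create_lifecycle_policy(
        ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
        Description="Daily snapshots of the legacy instance's volumes",
        State="ENABLED",
        PolicyDetails={
            "ResourceTypes": ["VOLUME"],
            "TargetTags": [{"Key": "Backup", "Value": "daily"}],  # tag the instance's volumes accordingly
            "Schedules": [
                {
                    "Name": "DailyMidnightSnapshots",
                    "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["00:00"]},
                    "RetentionRule": {"Count": 7},  # keep the last 7 snapshots
                }
            ],
        },
    )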

Hence, the correct answer is: Create a CloudWatch Events rule that is scheduled to run at midnight. Set the
target to directly call the EC2 CreateSnapshot API to create a snapshot of the needed EBS volumes.

The option that says: Create a CloudWatch Events rule that is scheduled every midnight. Set the target to a
Lambda function that will run the EC2 CreateSnapshot API to your defined EBS volumes is incorrect because
although this solution is valid, you actually don't need to launch a custom Lambda function for this scenario.
Remember that the scenario asks for the solution that is easiest to implement.

The option that says: Create a CloudWatch Events rule that is scheduled every midnight. Set the target to a
Systems Manager Run command to run the EC2 CreateSnapshot API call of the EBS volumes from inside
the EC2 instance is incorrect because although this solution meets the requirement, it entails an extra step of
using Systems Manager Run command. This is not required at all since you can directly call the EC2
CreateSnapshot API from CloudWatch Events.

The option that says: Create a script from inside the instance that will run the aws ec2 create-
snapshot command. Schedule it to run every midnight using crontab is incorrect because it entails a lot of
manual scripting and configuration. A better solution is to use CloudWatch Events or use the Amazon Data
Lifecycle Manager (DLM) for EBS Snapshots instead.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 68: Incorrect

A leading e-commerce company has a payment portal that handles the payment and refund transactions of its
online platform. The portal is hosted in an Auto Scaling group of On-Demand EC2 instances across three
Availability Zones in the US West (N. California) region. There is a new requirement to improve the system
monitoring of the application as well as to track the number of payment and refund transactions being done
every minute. The DevOps team should also be notified if this metric breaches the specified threshold.

Which of the following options provides the MOST cost-effective and automated solution that will satisfy the
above requirement?

Set up an ELK stack that is composed of Elasticsearch, Logstash, and Kibana for log processing. Store the
payments and refund transactions in each instance and configure Logstash to send the logs to Amazon ES.
Set up a Kibana dashboard to view the data and the metric graphs.

Configure the instances to push the entire log of each payments and refund transactions to CloudWatch
Events as a custom metric. Set up a CloudWatch alarm to notify the DevOps team using SNS when the
threshold is breached. View statistical graphs of your published metrics with the AWS Management
Console.

Configure the instances to push the number of payments and refund transactions to CloudWatch as a
custom metric. Set up a CloudWatch alarm to notify the DevOps team using SNS when the threshold is
breached. View statistical graphs of your published metrics with the AWS Management Console.

(Correct)

Configure the instances to push the number of payments and refund transactions to Amazon CloudWatch
Logs as a custom metric. Develop a custom monitoring application using a Python-based Flask web
application to view the metrics and host it in an Amazon EC2 instance.

Explanation

Metrics are data about the performance of your systems. By default, several services provide free metrics for
resources (such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS DB instances). You can
also enable detailed monitoring for some resources, such as your Amazon EC2 instances, or publish your own
application metrics. Amazon CloudWatch can load all the metrics in your account (both AWS resource metrics
and application metrics that you provide) for search, graphing, and alarms.

Metric data is kept for 15 months, enabling you to view both up-to-the-minute data and historical data. You can
publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your
published metrics with the AWS Management Console.
CloudWatch stores data about a metric as a series of data points. Each data point has an associated timestamp.
You can even publish an aggregated set of data points called a statistic set.

You can aggregate your data before you publish it to CloudWatch. When you have multiple data points per
minute, aggregating data minimizes the number of calls to put-metric-data. For example, instead of calling put-
metric-data multiple times for three data points that are within 3 seconds of each other, you can aggregate the
data into a statistic set that you publish with one call, using the --statistic-values parameter.
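
As a minimal sketch of the correct approach, the per-minute transaction count could be published as a custom metric and alarmed on with boto3 as shown below; the namespace, metric and topic names, dimensions, and threshold are assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish the number of transactions processed in the last minute as a custom metric
    cloudwatch.put_metric_data(
        Namespace="PaymentPortal",
        MetricData=[
            {
                "MetricName": "TransactionsPerMinute",
                "Dimensions": [{"Name": "Environment", "Value": "production"}],
                "Value": 42,
                "Unit": "Count",
            }
        ],
    )

    # Notify the DevOps team through SNS when the metric breaches the threshold
    cloudwatch.put_metric_alarm(
        AlarmName="HighTransactionVolume",
        Namespace="PaymentPortal",
        MetricName="TransactionsPerMinute",
        Dimensions=[{"Name": "Environment", "Value": "production"}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-west-1:123456789012:devops-alerts"],  # placeholder SNS topic
    )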

Hence, the correct answer is: Configure the instances to push the number of payments and refund
transactions to CloudWatch as a custom metric. Set up a CloudWatch alarm to notify the DevOps team using
SNS when the threshold is breached. View statistical graphs of your published metrics with the AWS
Management Console.

The option that says: Set up an ELK stack that is composed of Elasticsearch, Logstash, and Kibana for log
processing. Store the payments and refund transactions in each instance and configure Logstash to send the
logs to Amazon ES. Set up a Kibana dashboard to view the data and the metric graphs is incorrect because
although this solution may work, it entails a lot of effort to set up and costs more to maintain. A more cost-
effective solution is to use custom metrics in CloudWatch instead.

The option that says: Configure the instances to push the number of payments and refund transactions to
Amazon CloudWatch Logs as a custom metric. Develop a custom monitoring application using a Python-
based Flask web application to view the metrics and host it in an Amazon EC2 instance is incorrect because it
will take you a considerable amount of time to set up the Flask web app. Running this solution also entails an
added monthly cost.

The option that says: Configure the instances to push the entire log of each payment and refund transactions
to CloudWatch Events as a custom metric. Set up a CloudWatch alarm to notify the DevOps team using SNS
when the threshold is breached. View statistical graphs of your published metrics with the AWS Management
Console is incorrect because you can't push a custom metric using CloudWatch Events.

References:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudwatch/

Question 69: Incorrect

A DevOps Engineer is designing a service that aggregates clickstream data in real-time. The service should also
deliver a report to its subscribers once a week via email only. The data being handled are geographically
distributed, high volume, and unpredictable. The service should identify and create sessions from real-time
clickstream events with a feature to do an ad hoc analysis.

Which among the options below is the MOST suitable solution that the Engineer should implement with the
LEAST amount of cost?

Collect the real-time clickstream data using a custom Amazon EMR application hosted on extra-large
EC2 instances then build and analyze the sessions using Kinesis Data Analytics. The aggregated analytics
will trigger the real-time events on Lambda and then send them to SQS which in turn, sends data to an S3
bucket. The clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon
Athena for running queries and ad hoc analysis.

Collect the real-time clickstream data using Amazon Kinesis Data Stream then build and analyze the
sessions using Kinesis Data Analytics. The aggregated analytics will trigger the real-time events on
Lambda and then send them to Kinesis Data Firehose which in turn sends data to an S3 bucket. The
clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon Athena for
running queries and ad hoc analysis.

(Correct)

Collect the real-time clickstream data using Amazon CloudWatch Events then build and analyze the
sessions using Kinesis Data Analytics. The aggregated analytics will trigger the real-time events on
Lambda and then send them to Kinesis Data Firehose which in turn sends data to an S3 bucket. The
clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon Athena for
running queries and ad hoc analysis.

Collect the real-time clickstream data using Amazon SQS then build and analyze the sessions using
Kinesis Data Analytics. The aggregated analytics will trigger the real-time events on Lambda and then
send them to Kinesis Data Firehose which in turn, sends data to an Instance Store-backed EC2 instance.
The clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon Athena
for running queries and ad hoc analysis.

Explanation

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely
insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process
streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your
application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website
clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis
enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your
data is collected before the processing can begin.
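
As a rough sketch of the ingestion side only (the stream name and payload shape are assumptions), each clickstream event could be pushed to the Kinesis data stream like this:

    import boto3
    import json
    import time

    kinesis = boto3.client("kinesis")

    event = {
        "user_id": "user-123",
        "page": "/checkout",
        "timestamp": int(time.time()),
    }

    # Partitioning by user keeps a user's clicks ordered for sessionization downstream
    kinesis.put_record(
        StreamName="clickstream-events",  # placeholder stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user_id"],
    )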

Hence, the correct answer is: Collect the real-time clickstream data using Amazon Kinesis Data Stream then
build and analyze the sessions using Kinesis Data Analytics. The aggregated analytics will trigger the real-
time events on Lambda and then send them to Kinesis Data Firehose which in turn, sends data to an S3
bucket. The clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon
Athena for running queries and ad hoc analysis.

The option that says: Collect the real-time clickstream data using Amazon CloudWatch Events then build and
analyze the sessions using Kinesis Data Analytics. The aggregated analytics will trigger the real-time events
on Lambda and then send them to Kinesis Data Firehose which in turn, sends data to an S3 bucket. The
clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon Athena for
running queries and ad hoc analysis is incorrect because you can't collect real-time clickstream data using
Amazon CloudWatch Events. You have to use Amazon Kinesis Data Streams instead.
The option that says: Collect the real-time clickstream data using a custom Amazon EMR application hosted
on extra-large EC2 instances then build and analyze the sessions using Kinesis Data Analytics. The
aggregated analytics will trigger the real-time events on Lambda and then send them to SQS which in turn,
sends data to an S3 bucket. The clickstream data is ingested to a table by an AWS Glue crawler that will be
used by Amazon Athena for running queries and ad hoc analysis is incorrect because although it is possible to
collect the clickstream using Amazon EMR, it entails a significant amount of cost due to the use of extra-large
EC2 instances. Using Amazon Kinesis is a more cost-effective option.

The option that says: Collect the real-time clickstream data using Amazon SQS then build and analyze the
sessions using Kinesis Data Analytics. The aggregated analytics will trigger the real-time events on Lambda
and then send them to Kinesis Data Firehose which in turn, sends data to an Instance Store-backed EC2
instance. The clickstream data is ingested to a table by an AWS Glue crawler that will be used by Amazon
Athena for running queries and ad hoc analysis is incorrect because Amazon SQS is not the appropriate
service for collecting clickstream data in real time.

References:

https://aws.amazon.com/blogs/big-data/create-real-time-clickstream-sessions-and-run-analytics-with-amazon-kinesis-data-analytics-aws-glue-and-amazon-athena/

https://docs.aws.amazon.com/solutions/latest/real-time-web-analytics-with-kinesis/architecture.html

Check out these Amazon Kinesis and Athena Cheat Sheets:

https://tutorialsdojo.com/amazon-kinesis/

https://tutorialsdojo.com/amazon-athena/

Question 70: Correct

A business wants to leverage AWS CloudFormation to deploy its infrastructure. The business would like to
restrict deployment to two particular regions and wants to implement a strict tagging requirement. Developers
are expected to deploy various versions of the same application, and the business wants to guarantee that
resources are deployed in compliance with its policy while still enabling the developers to deploy different
versions of the application.

Which of the following is the MOST suitable solution?

Utilize approved CloudFormation templates and launch CloudFormation StackSets.

Detect and remediate unapproved CloudFormation Stacksets by creating AWS Trusted Advisor checks.

Utilize AWS Service Catalog and create products with approved CloudFormation templates.

(Correct)

Detect and remediate unapproved CloudFormation Stacksets by creating a CloudFormation drift detection
operation.

Explanation
With AWS Service Catalog, cloud resources can be centrally managed to achieve infrastructure as code (IaC)
template governance at scale, whether they were written in CloudFormation or Terraform. Compliance
requirements can be met while ensuring that customers can efficiently deploy the necessary cloud resources.

When limiting the options available to end-users during a product launch, template constraints can be applied.
This is done to ensure that compliance requirements of the organization are not breached. To apply template
constraints, a product must be present within a Service Catalog portfolio. A template constraint includes rules
that narrow the allowable values for parameters in the underlying AWS CloudFormation template of the product.
These parameters define the set of values available to users when creating a stack. For instance, an instance type
parameter can be defined to limit the types of instances that users can choose from when launching a stack
containing EC2 instances.
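
As an illustration only, a template constraint that narrows an InstanceType parameter could be attached to a Service Catalog product roughly as follows; the portfolio and product IDs, parameter name, and allowed values are assumptions.

    import boto3
    import json

    servicecatalog = boto3.client("servicecatalog")

    constraint_rules = {
        "Rules": {
            "AllowedInstanceTypes": {
                "Assertions": [
                    {
                        "Assert": {
                            "Fn::Contains": [["t3.micro", "t3.small"], {"Ref": "InstanceType"}]
                        },
                        "AssertDescription": "Only t3.micro and t3.small are allowed",
                    }
                ]
            }
        }
    }

    servicecatalog.create_constraint(
        PortfolioId="port-exampleid",  # placeholder IDs
        ProductId="prod-exampleid",
        Type="TEMPLATE",
        Parameters=json.dumps(constraint_rules),
        Description="Restrict instance types to comply with business policy",
    )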

Hence, the correct answer is the option that says: Utilize AWS Service Catalog and create products with
approved CloudFormation templates.

The following options are incorrect because the use of CloudFormation StackSets would not be suitable for the
scenario's requirements, as it does not provide a way to prevent the use of unsupported tags or to restrict users:

- Utilize approved CloudFormation templates and launch CloudFormation StackSets

- Detect and remediate unapproved CloudFormation Stacksets by creating AWS Trusted Advisor checks

- Detect and remediate unapproved CloudFormation Stacksets by creating a CloudFormation drift detection operation

References:

https://aws.amazon.com/servicecatalog/

https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_constraints_template-constraints.html

https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/

Check out this AWS Service Catalog Cheat Sheet:

https://tutorialsdojo.com/aws-service-catalog/

Tutorials Dojo's AWS Certified DevOps Engineer Professional Exam Study Guide:

https://tutorialsdojo.com/aws-certified-devops-engineer-professional/

Question 71: Correct

A company is planning to launch a mobile marketplace using AWS Amplify and AWS Mobile Hub which will
serve millions of users worldwide. The backend APIs will be launched to multiple AWS regions to process the
sales and financial transactions in the region closest to the users to lower the latency. A DevOps Engineer was
instructed to design the system architecture to ensure that the transactions made in one region are automatically
replicated to other regions. In the coming months ahead, it is expected that the marketplace will have millions of
users across North America, South America, Europe, and Asia.
Which of the following is the MOST scalable, cost-effective, and highly available architecture that the Engineer
should implement?

In your preferred AWS region, set up a Global DynamoDB table and then enable the DynamoDB Streams
option. Set up replica tables in the other AWS regions where you want to replicate your data. In each local
region, store the individual transactions to a DynamoDB replica table in the same region.

(Correct)

Set up a Global DynamoDB table in your preferred region which will automatically create new replica
tables on all AWS regions. Store the individual transactions to a DynamoDB replica table in each local
region. Any changes made in one of the replica tables will be automatically replicated across all other
tables worldwide.

Set up an Aurora Multi-Master database on all of the required AWS regions. Store the individual
transactions to the Amazon Aurora instance in the local region and replicate the transactions across
regions using Aurora replication. Any changes made in one of the tables will be automatically replicated
across all other tables.

Store the individual transactions in each local region to a DynamoDB table. Use a Lambda function to
read recent writes from the primary DynamoDB table and replay the data to tables in all other regions.

Explanation

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at
any scale. It's a fully managed, multiregion, multimaster, durable database with built-in security, backup and
restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion
requests per day and can support peaks of more than 20 million requests per second.

Global Tables builds upon DynamoDB’s global footprint to provide you with a fully managed, multi-region, and
multi-master database that provides fast, local, read and write performance for massively scaled, global
applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of
AWS regions.

Global Tables eliminates the difficult work of replicating data between regions and resolving update conflicts,
enabling you to focus on your application’s business logic. In addition, Global Tables enables your applications
to stay highly available even in the unlikely event of isolation or degradation of an entire region.

For example, suppose that you have a large customer base spread across three geographic areas—the US East
Coast, the US West Coast, and Western Europe. Customers can update their profile information using your
application. To satisfy this use case, you need to create three identical DynamoDB tables named
CustomerProfiles, in three different AWS Regions where the customers are located. These three tables would be
entirely separate from each other. Changes to the data in one table would not be reflected in the other tables.
Without a managed replication solution, you could write code to replicate data changes among these tables.
However, doing this would be a time-consuming and labor-intensive effort.

Instead of writing your own code, you could create a global table consisting of your three Region-specific
CustomerProfiles tables. DynamoDB would then automatically replicate data changes among those tables so that
changes to CustomerProfiles data in one Region would seamlessly propagate to the other Regions. In addition, if
one of the AWS Regions were to become temporarily unavailable, your customers could still access the same
CustomerProfiles data in the other Regions.

DynamoDB global tables are ideal for massively scaled applications with globally dispersed users. In such an
environment, users expect very fast application performance. Global tables provide automatic multi-master
replication to AWS Regions worldwide. They enable you to deliver low-latency data access to your users no
matter where they are located.
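
Under the global tables version described here, a hedged sketch of the setup would be to create an identical table with DynamoDB Streams enabled in each region and then link them into a global table; the table name, key schema, and region list are assumptions.

    import boto3

    TABLE_NAME = "Transactions"  # placeholder table name
    REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]

    # 1. Create an identical table with DynamoDB Streams enabled in every region
    for region in REGIONS:
        dynamodb = boto3.client("dynamodb", region_name=region)
        dynamodb.create_table(
            TableName=TABLE_NAME,
            AttributeDefinitions=[{"AttributeName": "TransactionId", "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "TransactionId", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",
            StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
        )
        # Wait until the table is ACTIVE before linking the replicas
        dynamodb.get_waiter("table_exists").wait(TableName=TABLE_NAME)

    # 2. Join the regional replicas into a single global table
    boto3.client("dynamodb", region_name=REGIONS[0]).create_global_table(
        GlobalTableName=TABLE_NAME,
        ReplicationGroup=[{"RegionName": r} for r in REGIONS],
    )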

Hence, the correct answer is: In your preferred AWS region, set up a Global DynamoDB table and then
enable the DynamoDB Streams option. Set up replica tables in the other AWS regions where you want to
replicate your data. In each local region, store the individual transactions to a DynamoDB replica table in
the same region.

The option says: Store the individual transactions in each local region to a DynamoDB table. Use a
Lambda function to read recent writes from the primary DynamoDB table and replay the data to tables in
all other regions is incorrect because using an AWS Lambda function to replicate all data across regions is not a
scalable solution. Remember that there will be millions of people who will use the mobile app around the world,
and this entails a lot of replication and compute capacity for a single Lambda function. In this scenario, the best
way is to use Global DynamoDB tables with the DynamoDB Stream option enabled, to automatically handle the
replication process.

The option that says: Set up a Global DynamoDB table in your preferred region which will automatically
create new replica tables on all AWS regions. Store the individual transactions to a DynamoDB replica
table in each local region. Any changes made in one of the replica tables will be automatically replicated
across all other tables worldwide is incorrect because even though the option correctly uses Global
DynamoDB Tables, it will not automatically create new replica tables on all AWS regions. You have to
manually specify and create the replica tables in the specific AWS regions where you want to replicate your
data. Take note as well that the DynamoDB Stream option must be enabled in order for the Global DynamoDB
Table to work.

The option says: Set up an Aurora Multi-Master database on all of the required AWS regions. Store the
individual transactions to the Amazon Aurora instance in the local region and replicate the transactions
across regions using Aurora replication. Any changes made in one of the tables will be automatically
replicated across all other tables is incorrect because using a Multi-Master Amazon Aurora database does not
work across multiple regions by default. This database architecture is not the most cost-effective either compared
with DynamoDB. In addition, DynamoDB provides better global scalability for mobile applications compared
with Amazon Aurora.

References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_HowItWorks.html

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html

https://aws.amazon.com/dynamodb/global-tables/

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/amazon-dynamodb/
Question 72: Correct

An international tours and travel company is planning to launch a multi-tier Node.js web portal with a MySQL
database to AWS. The portal must be highly available during the deployment of new portal versions in the future
and have the ability to roll back the changes if necessary, to improve user experience. There are third-party
applications that will also use the same MySQL database that the portal is using. The architecture should allow
the IT Operations team to centrally view all the server logs from various EC2 instances and store the data for
three months. It should also have a feature that allows the team to search and filter server logs in near-real-time
for monitoring purposes. The solution should be cost-effective and preferably has less operational overhead.

As a DevOps Engineer, which of the following is the BEST solution that you should implement to satisfy the
above requirements?

Using AWS Elastic Beanstalk, host the multi-tier Node.js web portal in a load-balancing and autoscaling
environment. Set up an Amazon RDS MySQL database with a Multi-AZ deployments configuration that
is decoupled from the Elastic Beanstalk stack. Set the log options to stream the application logs to
Amazon CloudWatch Logs with 90-day retention.

(Correct)

Using AWS Elastic Beanstalk, host the multi-tier Node.js web portal in a load-balancing and autoscaling
environment. Set up an Amazon RDS MySQL instance with a Multi-AZ deployments configuration
within the Elastic Beanstalk stack. Set the log options to stream the application logs to Amazon
CloudWatch Logs with 90-day data retention.

Host the multi-tier Node.js web portal in an Auto Scaling group of EC2 instances behind an Application
Load Balancer. Set up an Amazon RDS MySQL database instance. For log monitoring and analysis,
configure CloudWatch Logs to fetch the application logs from the instances and send them to an Amazon
ES cluster. Purge the contents of the Amazon ES domain every 90 days and then recreate it again.

Host the multi-tier Node.js web portal in an Auto Scaling group of EC2 instances behind an Application
Load Balancer. Set up an Amazon RDS MySQL database instance. Configure the CloudWatch Log agent
to send the application logs to Amazon CloudWatch Logs with 90-day data retention.

Explanation

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to
learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity
without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically
handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When
you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one
or more AWS resources, such as Amazon EC2 instances, to run your application.

You can interact with Elastic Beanstalk by using the AWS Management Console, the AWS Command Line
Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk.
An Amazon RDS instance attached to an Elastic Beanstalk environment is ideal for development and testing
environments. However, it's not ideal for production environments because the lifecycle of the database instance
is tied to the lifecycle of your application environment. If you terminate the environment, then you will lose your
data because the Amazon RDS instance is deleted by the environment.
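
For the log requirement, instance log streaming to CloudWatch Logs with 90-day retention can be turned on through the aws:elasticbeanstalk:cloudwatch:logs namespace. The following is a hedged boto3 sketch with a placeholder environment name, not a definitive implementation.

    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.update_environment(
        EnvironmentName="travel-portal-prod",  # placeholder environment name
        OptionSettings=[
            # Stream instance logs to CloudWatch Logs
            {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs", "OptionName": "StreamLogs", "Value": "true"},
            # Keep the log data for 90 days
            {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs", "OptionName": "RetentionInDays", "Value": "90"},
            # Keep the log groups even if the environment is terminated
            {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs", "OptionName": "DeleteOnTerminate", "Value": "false"},
        ],
    )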

Hence, the correct answer is: Using AWS Elastic Beanstalk, host the multi-tier Node.js web portal in a load-
balancing and autoscaling environment. Set up an Amazon RDS MySQL database with a Multi-AZ
deployments configuration that is decoupled from the Elastic Beanstalk stack. Set the log options to
stream the application logs to Amazon CloudWatch Logs with a 90-day retention.

The option that says: Using AWS Elastic Beanstalk, host the multi-tier Node.js web portal in a load-
balancing and autoscaling environment. Set up an Amazon RDS MySQL instance with a Multi-AZ
deployments configuration within the Elastic Beanstalk stack. Set the log options to stream the application
logs to Amazon CloudWatch Logs with 90-day data retention is incorrect because it's not ideal to place the
database within the Elastic Beanstalk environment because the lifecycle of the database instance is tied to the
lifecycle of your application environment in production.

The option that says: Host the multi-tier Node.js web portal in an Auto Scaling group of EC2 instances
behind an Application Load Balancer. Set up an Amazon RDS MySQL database instance. Configure the
CloudWatch Log agent to send the application logs to Amazon CloudWatch Logs with 90-day data
retention is incorrect because it is easier to upload the web portal to Elastic Beanstalk instead. Moreover, you
have to enable Multi-AZ deployments in RDS to improve the availability of your database tier.

The option that says: Host the multi-tier Node.js web portal in an Auto Scaling group of EC2 instances
behind an Application Load Balancer. Set up an Amazon RDS MySQL database instance. For log
monitoring and analysis, configure CloudWatch Logs to fetch the application logs from the instances and
send them to an Amazon ES cluster. Purge the contents of the Amazon ES domain every 90 days and then
recreate it again is incorrect. Although this solution may work, it entails a lot of operational overhead to
maintain the Amazon ES cluster.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-elastic-beanstalk/

Question 73: Correct

A leading software development company is using AWS CodeCommit as the source control repository of one of
its cloud-based applications. There is a new requirement to automate its continuous integration and continuous
deployment pipeline for its development, testing, and production environments. A security code review must be
done to ensure that there is no leaked Personally Identifiable Information (PII) or other sensitive data. Each
change should go through both unit testing and functional testing. Any code push to CodeCommit should
automatically trigger the CI/CD pipeline, and an email notification to [email protected]
should be sent in the event of build or deployment failures. In addition, after the tests, an approval step should
also be performed before the assets are staged to Amazon S3. The solution must strictly follow CI/CD best practices.

Which among the following options should a DevOps Engineer implement to satisfy all of the requirements
above?

Configure AWS CodePipeline to have a trigger to start off the pipeline when a new code change is
committed on a certain branch. Add the required stages in the pipeline for security review, unit tests,
functional tests, and manual approval. Use CloudWatch Events to detect changes in pipeline stages. Send
an email notification to [email protected] using Amazon SES.

Configure the AWS CodePipeline to have a trigger to start off the pipeline when a new code change is
committed on a certain code branch. In the pipeline, set up the required stages for security review, unit
tests, functional tests, and manual approval. Use Amazon CloudWatch Events to detect changes in
pipeline stages. Send an email notification to [email protected] using Amazon
SNS.

(Correct)

Configure the AWS CodePipeline to have a trigger to start off the pipeline when a new code change is
committed on the master branch. In the pipeline, set up the required stages for security review, unit tests,
functional tests, and manual approval. Use CloudTrail to detect changes in pipeline stages. Send an email
notification to [email protected] using Amazon SNS.

Configure AWS CodePipeline to have a trigger to start off the pipeline when a new code change is
committed on the master branch. Add the required stages in the pipeline for security review, unit tests,
functional tests, and manual approval. Use CloudWatch Logs to detect changes in pipeline stages. Send an
email notification to [email protected] using Amazon SES.

Explanation

Monitoring is an important part of maintaining the reliability, availability, and performance of AWS
CodePipeline. You should collect monitoring data from all parts of your AWS solution so that you can more
easily debug a multi-point failure if one occurs.

You can use the following tools to monitor your CodePipeline pipelines and their resources:

Amazon CloudWatch Events — Use Amazon CloudWatch Events to detect and react to pipeline execution
state changes (for example, send an Amazon SNS notification or invoke a Lambda function).

AWS CloudTrail — Use CloudTrail to capture API calls made by or on behalf of CodePipeline in your AWS
account and deliver the log files to an Amazon S3 bucket. You can choose to have CloudWatch publish Amazon
SNS notifications when new log files are delivered so you can take quick action.

Console and CLI — You can use the CodePipeline console and CLI to view details about the status of a
pipeline or a particular pipeline execution.

Amazon CloudWatch Events is a web service that monitors your AWS resources and the applications you run
on AWS. You can use Amazon CloudWatch Events to detect and react to changes in the state of a pipeline,
stage, or action. Then, based on rules you create, CloudWatch Events invokes one or more target actions when a
pipeline, stage, or action enters the state you specify in a rule. Depending on the type of state change, you might
want to send notifications, capture state information, take corrective action, initiate events, or take other actions.
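
A minimal sketch of such a rule, reacting only to failed pipeline executions and publishing to an SNS topic that emails the team, is shown below; the pipeline name, rule name, and topic ARN are placeholders.

    import boto3
    import json

    events = boto3.client("events")

    pattern = {
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"pipeline": ["my-app-pipeline"], "state": ["FAILED"]},
    }

    events.put_rule(
        Name="pipeline-failure-notifications",
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )

    # The SNS topic has an email subscription for the team's address
    events.put_targets(
        Rule="pipeline-failure-notifications",
        Targets=[{"Id": "notify-devops", "Arn": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts"}],
    )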

Hence, the correct answer is: Configure the AWS CodePipeline to have a trigger to start off the pipeline
when a new code change is committed on a certain code branch. In the pipeline, set up the required stages
for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch Events to
detect changes in pipeline stages. Send an email notification to [email protected]
using Amazon SNS.

The option that says: Configure AWS CodePipeline to have a trigger to start off the pipeline when a new
code change is committed on a certain branch. Add the required stages in the pipeline for security review,
unit tests, functional tests, and manual approval. Use CloudWatch Events to detect changes in pipeline
stages. Send an email notification to [email protected] using Amazon SES is
incorrect because you have to use Amazon SNS in order to send an email notification and not Amazon SES.

The option that says: Configure the AWS CodePipeline to have a trigger to start off the pipeline when a
new code change is committed on the master branch. In the pipeline, set up the required stages for
security review, unit tests, functional tests, and manual approval. Use CloudTrail to detect changes in
pipeline stages. Send an email notification to [email protected] using Amazon
SNS is incorrect because you have to use CloudWatch Events to detect the changes in the pipeline stages and not
CloudTrail.

The option that says: Configure AWS CodePipeline to have a trigger to start off the pipeline when a new
code change is committed on the master branch. Add the required stages in the pipeline for security
review, unit tests, functional tests, and manual approval. Use CloudWatch Logs to detect changes in
pipeline stages. Send an email notification to [email protected] using Amazon
SES is incorrect because using CloudWatch Logs is not the appropriate service to use in this scenario. In order to
automatically detect the changes in the pipeline stages, the most suitable service that you have to use is Amazon
CloudWatch Events.

References:

https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html

https://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring.html

Check out this AWS CodePipeline Cheat Sheet:

https://tutorialsdojo.com/aws-codepipeline/

Question 74: Incorrect

You are using CloudFormation Stacks to provision your cloud infrastructure. It is hosted on a Git repository on
AWS CodeCommit. Since the stacks are an integral part of your infrastructure, you have designated your team
leader as the only authority to commit changes to the master branch of your repository. You want to give full
access permissions to team members on the DEV and STG branches while the team leader approves each merge
request to the master branch.

Which of the following options will you implement to achieve this?

Create an IAM group for team members and another IAM group for the team leader, both with
AWSCodeCommitReadOnly policy attached. Attach another IAM policy to the team leaders' group that
allows Push, Delete, and Merge APIs of CodeCommit on the master branch.

Create an IAM group for team members with AWSCodeCommitReadOnly policy attached and another
IAM group for the team leader with AWSCodeCommitFullAccess policy attached. Attach another IAM
policy to the team members' group that allows only Pull, Push, Delete and Merge APIs of CodeCommit
on the DEV and STG branch.

Create an IAM group for team members with AWSCodeCommitPowerUser policy attached and another
IAM group for the team leader with AWSCodeCommitFullAccess policy attached. By default, the
AWSCodeCommitPowerUser does not allow Delete and Merge actions on the master branch.

(Incorrect)

Create an IAM group for team members and another IAM group for the team leader, both with
AWSCodeCommitPowerUser policy attached. Attach another IAM policy to the team members' group
that denies Push, Delete, and Merge APIs of CodeCommit on the master branch.

(Correct)

Explanation

You want to create a policy in IAM that will deny API actions if certain conditions are met. You want to prevent
team members from updating the master branch, but you don’t want to prevent them from viewing the branch,
cloning the repository, or creating pull requests that will merge to that branch.

Additionally, you want the team members to have full access to the DEV and STG branches. For this scenario, it
is recommended to start with the FullAccess or PowerUser policy, which gives the users the necessary permissions
for all repositories and branches, and then include an additional policy that restricts certain APIs that you don't
want them to call, such as the Push, Delete, and Merge APIs.
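
The additional deny policy attached to the team members' group could look roughly like the sketch below; the account ID, region, repository name, and group name are placeholders, and the action list follows the AWS blog post referenced at the end of this explanation.

    import boto3
    import json

    deny_master_updates = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": [
                    "codecommit:GitPush",
                    "codecommit:DeleteBranch",
                    "codecommit:PutFile",
                    "codecommit:MergeBranchesByFastForward",
                    "codecommit:MergeBranchesBySquash",
                    "codecommit:MergeBranchesByThreeWay",
                    "codecommit:MergePullRequestByFastForward",
                    "codecommit:MergePullRequestBySquash",
                    "codecommit:MergePullRequestByThreeWay",
                ],
                "Resource": "arn:aws:codecommit:us-east-1:123456789012:infrastructure-stacks",
                "Condition": {
                    # Only deny when the request targets the master branch reference
                    "StringEqualsIfExists": {"codecommit:References": ["refs/heads/master"]},
                    "Null": {"codecommit:References": "false"},
                },
            }
        ],
    }

    iam = boto3.client("iam")
    iam.put_group_policy(
        GroupName="team-members",
        PolicyName="DenyChangesToMaster",
        PolicyDocument=json.dumps(deny_master_updates),
    )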

Hence, the correct answer is: Create an IAM group for team members and another IAM group for the team
leader, both with AWSCodeCommitPowerUser policy attached. Attach another IAM policy to the team
members' group that denies Push, Delete and Merge APIs of CodeCommit on the master branch.

The option that says: Create an IAM group for team members and another IAM group for the team leader,
both with AWSCodeCommitReadOnly policy attached. Attach another IAM policy to the team leaders' group
that allows Push, Delete and Merge APIs of CodeCommit on the master branch is incorrect because the
AWSCodeCommitReadOnly policy will grant only read-only permissions to the team members' group, which
restricts what they can do on the repository and its branches.

The option that says: Create an IAM group for team members with AWSCodeCommitReadOnly policy
attached and another IAM group for the team leader with AWSCodeCommitFullAccess policy attached.
Attach another IAM policy to the team members' group that allows only Pull, Push, Delete and Merge APIs
of CodeCommit on the DEV and STG branch is incorrect because this only allows a few specific actions, which
is not sufficient to give the team members full access to the DEV and STG branches. More actions are needed to
grant them full access.

The option that says: Create an IAM group for team members with AWSCodeCommitPowerUser policy
attached and another IAM group for the team leader with AWSCodeCommitFullAccess policy attached. By
default, the AWSCodeCommitPowerUser does not allow Delete and Merge actions on the master branch is
incorrect because, technically, the AWSCodeCommitPowerUser policy only disallows repository deletion; it does
not restrict branch-level actions. It still allows merge actions on any branch, including the master branch.

References:

https://aws.amazon.com/blogs/devops/refining-access-to-branches-in-aws-codecommit/

https://aws.amazon.com/about-aws/whats-new/2018/05/aws-codecommit-supports-branch-level-permissions/

https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-permissions-reference.html

Question 75: Incorrect

A JavaScript-based online salary calculator hosted on-premises is slated to be migrated to AWS. The application
has no server-side code and is just composed of a UI powered by Vue.js and Bootstrap. Since the online
calculator may contain sensitive financial data, HTTP response headers such as X-Content-Type-Options,
X-Frame-Options, and X-XSS-Protection should be added to comply with the Open Web
Application Security Project (OWASP) standards.

Which of the following is the MOST suitable solution that you should implement?

Host the application on an S3 bucket configured for website hosting then set up server access logging on
the Amazon S3 bucket to track user activity. Enable Amazon S3 client-side encryption and configure it to
return the required security headers.

Host the application on an S3 bucket configured for website hosting then set up server access logging on
the S3 bucket to track user activity. Configure the bucket policy of the S3 bucket to return the required
security headers.

Host the application on an S3 bucket configured for website hosting. Set up a CloudFront web distribution
and set the S3 bucket as the origin with the origin response event set to trigger a Lambda@Edge function.
Add the required security headers in the HTTP response using the Lambda function.

(Correct)

Host the application on an S3 bucket configured for website hosting. Set up a CloudFront web distribution
and set the S3 bucket as the origin. Set a custom Request and Response Behavior in CloudFront that
automatically adds the required security headers in the HTTP response.

(Incorrect)

Explanation
Security headers are a group of headers in the HTTP response from a server that tell your browser how to behave
when handling your site’s content. For example, X-XSS-Protection is a header that Internet Explorer and
Chrome respect to stop the pages from loading when they detect cross-site scripting (XSS) attacks.

The following are some examples of security headers:

Strict Transport Security

Content-Security-Policy

X-Content-Type-Options

X-Frame-Options

X-XSS-Protection

Referrer-Policy

Whenever you navigate to a website, your browser requests a web page, and the server responds with the content
along with HTTP headers. Headers such as cache-control are used by the browser to determine how long to
cache content for, while others such as content-type are used to indicate the media type of a resource and
therefore how to interpret it.

You can set up a solution that uses a simple single-page website, hosted in an Amazon S3 bucket and using
Amazon CloudFront. You can create a new Lambda@Edge function and associate it with your CloudFront
distribution. Then configure the origin response trigger to execute the Lambda@Edge function and add the
security headers to the HTTP response.
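
A hedged sketch of such an origin-response Lambda@Edge function (Python runtime) that adds the headers mentioned in the scenario could look like this; the exact header values are illustrative.

    def lambda_handler(event, context):
        # CloudFront passes the origin's response in the event payload
        response = event["Records"][0]["cf"]["response"]
        headers = response["headers"]

        # Add the security headers before CloudFront caches and returns the object
        headers["x-content-type-options"] = [{"key": "X-Content-Type-Options", "value": "nosniff"}]
        headers["x-frame-options"] = [{"key": "X-Frame-Options", "value": "DENY"}]
        headers["x-xss-protection"] = [{"key": "X-XSS-Protection", "value": "1; mode=block"}]
        headers["strict-transport-security"] = [
            {"key": "Strict-Transport-Security", "value": "max-age=63072000; includeSubDomains"}
        ]

        return response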

Hence, the correct answer is: Host the application on an S3 bucket configured for website hosting. Set up a
CloudFront web distribution and set the S3 bucket as the origin with the origin response event set to trigger a
Lambda@Edge function. Add the required security headers in the HTTP response using the Lambda
function.

The option that says: Host the application on an S3 bucket configured for website hosting then set up server
access logging on the Amazon S3 bucket to track user activity. Enable Amazon S3 client-side encryption and
configure it to return the required security headers is incorrect because you have to integrate the S3 bucket
with CloudFront and use Lambda@Edge to add the required headers. Take note that you can't return an HTTP
response with the required security headers by just using client-side encryption alone.

The option that says: Host the application on an S3 bucket configured for website hosting then set up server
access logging on the S3 bucket to track user activity. Configure the bucket policy of the S3 bucket to return
the required security headers is incorrect because you can't return an HTTP response with the required security
headers by simply configuring the bucket policy of the S3 bucket.

The option that says: Host the application on an S3 bucket configured for website hosting. Set up a
CloudFront web distribution and set the S3 bucket as the origin. Set a custom Request and Response
Behavior in CloudFront that automatically adds the required security headers in the HTTP response is
incorrect because configuring a custom Request and Response Behavior in CloudFront is not enough to
automatically add the required security headers to the HTTP response. You have to use Lambda@Edge to add
the headers to satisfy the requirement of this scenario.

References:

https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-headers-behavior

Check out this Amazon CloudFront Cheat Sheet:

https://tutorialsdojo.com/amazon-cloudfront/
