AWS Certified Cloud Practitioner Exams Notes CLF-C02
When selecting a cloud strategy, a company must consider factors such as required
cloud application components, preferred resource management tools, and any legacy
IT infrastructure requirements.
The three cloud computing deployment models are cloud-based, on-premises, and
hybrid.
1.2 On-Premises Deployment:
For example, you might have applications that run on technology that is fully kept in
your on-premises data center. Though this model is much like legacy IT infrastructure,
its incorporation of application management and virtualization technologies helps to
increase resource utilization.
1.3 Hybrid Deployment:
Connect cloud-based resources to on-premises infrastructure.
Integrate cloud-based resources with legacy IT applications.
The benefits of cloud computing fall into the following six categories.
2.1 Trade upfront expense for variable expense:
Upfront expense refers to data centers, physical servers, and other resources that you
would need to invest in before using them. Variable expense means you only pay for
computing resources you consume instead of investing heavily in data centers and
servers before you know how you’re going to use them.
By taking a cloud computing approach that offers the benefit of variable expense,
companies can implement innovative solutions while saving on costs.
Computing in data centers often requires you to spend more money and time
managing infrastructure and servers.
A benefit of cloud computing is the ability to focus less on these tasks and more on
your applications and customers.
With cloud computing, you don’t have to predict how much infrastructure capacity you
will need before deploying an application.
For example, you can launch Amazon EC2 instances when needed, and pay only for
the compute time you use. Instead of paying for unused resources or having to deal
with limited capacity, you can access only the capacity that you need. You can also
scale in or scale out in response to demand.
This flexibility provides you with more time to experiment and innovate. When
computing in data centers, it may take weeks to obtain new resources that you need.
By comparison, cloud computing enables you to access new resources within minutes.
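The pay-for-what-you-use idea above can be sketched as a quick calculation. This is a conceptual illustration only; the hourly rate below is hypothetical, not an actual AWS price.

```python
# Illustrative comparison of variable (pay-per-use) expense.
# HOURLY_RATE is an assumed figure, not a real AWS price.
HOURLY_RATE = 0.10  # assumed cost per instance-hour


def variable_expense(hours_used: float, rate: float = HOURLY_RATE) -> float:
    """Pay only for the compute time actually consumed."""
    return hours_used * rate


# A workload that runs 8 hours a day for 30 days costs only for those hours:
monthly_cost = variable_expense(8 * 30)
print(round(monthly_cost, 2))  # 24.0
```

With an upfront model, you would pay for the server around the clock whether or not the workload was running; with the variable model, stopped hours cost nothing.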
Later in this course, you will explore the AWS global infrastructure in greater detail.
You will examine some of the services that you can use to deliver content to customers
around the world.
By comparison, with an Amazon EC2 instance you can use a virtual server to run
applications in the AWS Cloud.
You can provision and launch an Amazon EC2 instance within minutes.
You can stop using it when you have finished running a workload.
You pay only for the compute time you use when an instance is running, not
when it is stopped or terminated.
You can save costs by paying only for server capacity that you need or want.
Amazon EC2 instance types are optimized for different tasks.
When selecting an instance type, consider the specific needs of your workloads and
applications. This might include requirements for compute, memory, or storage
capabilities.
Amazon EC2 instance types are grouped into the following five categories.
3.1 General purpose instances:
General purpose instances provide a balance of compute, memory, and networking
resources. You can use them for a variety of workloads, such as:
application servers
gaming servers
backend servers for enterprise applications
small and medium databases
Suppose that you have an application in which the resource needs for compute,
memory, and networking are roughly equivalent. You might consider running it on a
general-purpose instance because the application does not require optimization in any
single resource area.
3.2 Compute optimized instances:
Compute optimized instances are ideal for compute-bound applications that benefit
from high-performance processors. Like general purpose instances, you can use
compute optimized instances for workloads such as web, application, and gaming
servers.
However, the difference is that compute optimized instances are ideal for high-
performance web servers, compute-intensive application servers, and dedicated
gaming servers. You can also use compute optimized instances for batch processing
workloads that require processing many transactions in a single group.
3.3 Memory optimized instances:
Memory optimized instances are designed to deliver fast performance for workloads
that process large datasets in memory. In computing, memory is a temporary storage
area. It holds all the data and instructions that a central processing unit (CPU) needs
to be able to complete actions. Before a computer program or application is able to
run, it is loaded from storage into memory. This preloading process gives the CPU
direct access to the computer program.
Suppose that you have a workload that requires large amounts of data to be preloaded
before running an application. This scenario might be a high-performance database or
a workload that involves performing real-time processing of a large amount of
unstructured data. In these types of use cases, consider using a memory optimized
instance. Memory optimized instances enable you to run workloads with high memory
needs and receive great performance.
3.4 Accelerated computing instances:
Accelerated computing instances use hardware accelerators, or coprocessors, to
perform some functions more efficiently than is possible in software running on CPUs.
Examples of these functions include floating-point number calculations, graphics
processing, and data pattern matching.
In computing, a hardware accelerator is a component that can expedite data
processing. Accelerated computing instances are ideal for workloads such as graphics
applications, game streaming, and application streaming.
3.5 Storage optimized instances:
Storage optimized instances are designed for workloads that require high,
sequential read and write access to large datasets on local storage. Examples of
workloads suitable for storage optimized instances include distributed file systems,
data warehousing applications, and high-frequency online transaction processing
(OLTP) systems.
In computing, the term input/output operations per second (IOPS) is a metric that
measures the performance of a storage device. It indicates how many different input
or output operations a device can perform in one second. Storage optimized instances
are designed to deliver tens of thousands of low-latency, random IOPS to
applications.
You can think of input operations as data put into a system, such as records entered
into a database. An output operation is data generated by a server. An example of
output might be the analytics performed on the records in a database. If you have an
application that has a high IOPS requirement, a storage optimized instance can
provide better performance over other instance types not optimized for this kind of use
case.
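The five categories above can be summarized as a rough decision sketch. The function and the requirement names below are invented for illustration; the returned values are instance family categories, not specific instance types.

```python
# A rough decision sketch mapping workload needs to EC2 instance families.
# The dictionary keys (e.g. "compute_bound") are hypothetical labels.
def suggest_instance_family(needs: dict) -> str:
    if needs.get("hardware_accelerator"):     # graphics, game/app streaming
        return "accelerated computing"
    if needs.get("high_sequential_io"):       # data warehousing, OLTP
        return "storage optimized"
    if needs.get("large_in_memory_dataset"):  # high-performance databases
        return "memory optimized"
    if needs.get("compute_bound"):            # batch processing, gaming servers
        return "compute optimized"
    return "general purpose"                  # roughly balanced resource needs


print(suggest_instance_family({"compute_bound": True}))  # compute optimized
print(suggest_instance_family({}))                       # general purpose
```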
4 Amazon EC2 Pricing:
You can purchase Standard Reserved and Convertible Reserved Instances for a 1-
year or 3-year term. You realize greater cost savings with the 3-year option.
Standard Reserved Instances: This option is a good fit if you know the EC2 instance
type and size you need for your steady-state applications and in which AWS Region
you plan to run them. Reserved Instances require you to state the following
qualifications:
Instance type and size: For example, m5.xlarge.
Platform description (operating system): For example, Microsoft Windows
Server or Red Hat Enterprise Linux.
Tenancy: Default tenancy or dedicated tenancy.
You have the option to specify an Availability Zone for your EC2 Reserved Instances.
If you make this specification, you get EC2 capacity reservation. This ensures that
your desired amount of EC2 instances will be available when you need them.
At the end of a Reserved Instance term, you can continue using the Amazon EC2
instance without interruption. However, you are charged On-Demand rates until you
do one of the following:
Terminate the instance.
Purchase a new Reserved Instance that matches the instance attributes
(instance family and size, Region, platform, and tenancy).
The EC2 Instance Savings Plans are a good option if you need flexibility in your
Amazon EC2 usage over the duration of the commitment term. You have the benefit
of saving costs on running any EC2 instance within an EC2 instance family in a chosen
Region (for example, M5 usage in N. Virginia) regardless of Availability Zone, instance
size, OS, or tenancy. The savings with EC2 Instance Savings Plans are similar to the
savings provided by Standard Reserved Instances.
Unlike Reserved Instances, however, you don't need to specify up front what EC2
instance type and size (for example, m5.xlarge), OS, and tenancy to get a discount.
Further, you don't need to commit to a certain number of EC2 instances over a 1-year
or 3-year term. Additionally, the EC2 Instance Savings Plans don't include an EC2
capacity reservation option.
Later in this course, you'll review AWS Cost Explorer, which you can use to visualize,
understand, and manage your AWS costs and usage over time. If you're considering
your options for Savings Plans, you can use AWS Cost Explorer to analyze your
Amazon EC2 usage over the past 7, 30, or 60 days. AWS Cost Explorer also provides
customized recommendations for Savings Plans. These recommendations estimate
how much you could save on your monthly Amazon EC2 costs, based on previous
Amazon EC2 usage and the hourly commitment amount in a 1-year or 3-year Savings
Plan.
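A savings estimate of the kind Cost Explorer produces boils down to comparing past on-demand spend against a committed rate. The rates below are made up for the example; real rates vary by Region, instance family, and term.

```python
# Hypothetical illustration of a Savings Plans estimate.
# Both rates are invented; real pricing differs.
ON_DEMAND_RATE = 0.10   # assumed $/hour at on-demand pricing
COMMITTED_RATE = 0.07   # assumed $/hour under a 1-year commitment


def estimated_monthly_savings(hours_per_month: float) -> float:
    """Difference between on-demand and committed spend for the same usage."""
    on_demand = hours_per_month * ON_DEMAND_RATE
    committed = hours_per_month * COMMITTED_RATE
    return round(on_demand - committed, 2)


# Roughly one month of continuous usage (730 hours):
print(estimated_monthly_savings(730))  # 21.9
```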
Suppose that you have a background processing job that can start and stop as needed
(such as the data processing job for a customer survey). You want to start and stop
the processing job without affecting the overall operations of your business. If you
make a Spot request and Amazon EC2 capacity is available, your Spot Instance
launches. However, if you make a Spot request and Amazon EC2 capacity is
unavailable, the request is not successful until capacity becomes available. The
unavailable capacity might delay the launch of your background processing job.
After you have launched a Spot Instance, if capacity is no longer available or demand
for Spot Instances increases, your instance may be interrupted. This might not pose
any issues for your background processing job. However, in the earlier example of
developing and testing applications, you would most likely want to avoid unexpected
interruptions. Therefore, choose a different EC2 instance type that is ideal for those
tasks.
You can use your existing per-socket, per-core, or per-VM software licenses to help
maintain license compliance. You can purchase On-Demand Dedicated Hosts and
Dedicated Hosts Reservations. Of all the Amazon EC2 options that were covered,
Dedicated Hosts are the most expensive.
5 Scalability:
Scalability involves beginning with only the resources you need and designing your
architecture to automatically respond to changing demand by scaling out or in. As a
result, you pay for only the resources you use. You don’t have to worry about a lack of
computing capacity to meet your customers’ needs.
If you wanted the scaling process to happen automatically, which AWS service would
you use? The AWS service that provides this functionality for Amazon EC2 instances
is Amazon EC2 Auto Scaling.
5.1 Amazon EC2 Auto Scaling:
If you’ve ever tried to access a website that wouldn’t load or that frequently timed out,
the website might have received more requests than it was able to handle. This situation
is similar to waiting in a long line at a coffee shop, when there is only one barista
present to take orders from customers.
Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2
instances in response to changing application demand. By automatically scaling your
instances in and out as needed, you can maintain a greater sense of application
availability.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and
predictive scaling.
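The core of dynamic scaling is choosing an instance count so that the average load per instance stays near a target. The sketch below is a minimal illustration of that idea; the target, minimum, and maximum values are invented, not Amazon EC2 Auto Scaling defaults.

```python
import math


# A minimal sketch of dynamic scaling: pick an instance count so that the
# average load per instance stays near a target. All thresholds are invented.
def desired_capacity(total_load: float, target_per_instance: float = 100.0,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    desired = math.ceil(total_load / target_per_instance)
    # Clamp to the configured bounds of the Auto Scaling group.
    return max(min_instances, min(max_instances, desired))


print(desired_capacity(450))  # 5 instances -> scale out under heavier demand
print(desired_capacity(120))  # 2 instances -> scale in when demand drops
```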
A load balancer acts as a single point of contact for all incoming web traffic to your
Auto Scaling group. This means that as you add or remove Amazon EC2 instances in
response to the amount of incoming traffic, these requests route to the load balancer
first. Then, the requests spread across multiple resources that will handle them. For
example, if you have multiple Amazon EC2 instances, Elastic Load Balancing
distributes the workload across the multiple instances so that no single instance has
to carry the bulk of it.
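The distribution step can be sketched with simple round-robin routing. Elastic Load Balancing uses its own routing algorithms; this is only a conceptual stand-in showing that no single instance carries the bulk of the traffic.

```python
from collections import Counter
from itertools import cycle

# Three instances behind a load balancer; round-robin is one simple strategy.
instances = ["instance-a", "instance-b", "instance-c"]
rr = cycle(instances)

# Route six incoming requests through the "load balancer".
assigned = [next(rr) for _ in range(6)]

# Each instance ends up handling 2 of the 6 requests.
print(Counter(assigned))
```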
Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate
services, they work together to help ensure that applications running in Amazon EC2
can provide high performance and availability.
6 Messaging and Queuing:
Suppose that you have an application with tightly coupled components. These
components might include databases, servers, the user interface, business logic, and
so on. This type of architecture can be considered a monolithic application.
To help maintain application availability when a single component fails, you can design
your application through a microservices approach.
In a microservices approach, application components are loosely coupled. In this case,
if a single component fails, the other components continue to work because they are
communicating with each other. The loose coupling prevents the entire application
from failing.
When designing applications on AWS, you can take a microservices approach with
services and components that fulfill different functions. Two services facilitate
application integration: Amazon Simple Notification Service (Amazon SNS) and
Amazon Simple Queue Service (Amazon SQS).
6.1 Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service.
Using Amazon SNS topics, a publisher publishes messages to subscribers. In Amazon
SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or
several other options.
In the next lesson, you will learn more about AWS Lambda.
The coffee shop newsletter scenario provides an example of how to use Amazon SNS.
After a while, some customers express that they would prefer to receive separate
newsletters for only the specific topics that interest them. The coffee shop owners
decide to try this approach.
Now, instead of having a single newsletter for all topics, the coffee shop has broken it
up into three separate newsletters. Each newsletter is devoted to a specific topic:
coupons, coffee trivia, and new products.
Subscribers will now receive updates immediately for only the specific topics to which
they have subscribed.
It is possible for subscribers to subscribe to a single topic or to multiple topics. For
example, the first customer subscribes to only the coupons topic, and the second
subscriber subscribes to only the coffee trivia topic. The third customer subscribes to
both the coffee trivia and new products topics.
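The newsletter scenario above maps directly onto the topic model: each subscriber receives only messages published to topics it subscribed to. This is a local pub/sub sketch in plain Python, not Amazon SNS API code; the subscriber and topic names come from the example.

```python
from collections import defaultdict

# topic -> list of subscriber inboxes; each subscriber has its own inbox.
subscriptions = defaultdict(list)
inboxes = {"customer1": [], "customer2": [], "customer3": []}


def subscribe(subscriber: str, topic: str) -> None:
    subscriptions[topic].append(inboxes[subscriber])


def publish(topic: str, message: str) -> None:
    # Fan the message out to every subscriber of this topic only.
    for inbox in subscriptions[topic]:
        inbox.append(message)


subscribe("customer1", "coupons")
subscribe("customer2", "coffee trivia")
subscribe("customer3", "coffee trivia")
subscribe("customer3", "new products")

publish("coffee trivia", "Did you know ...?")
print(inboxes["customer1"])  # []  (not subscribed to this topic)
print(inboxes["customer3"])  # ['Did you know ...?']
```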
6.2 Amazon Simple Queue Service (Amazon SQS)
Amazon Simple Queue Service (Amazon SQS) is a message queuing service.
Using Amazon SQS, you can send, store, and receive messages between software
components, without losing messages or requiring other services to be available. In
Amazon SQS, an application sends messages into a queue. A user or service retrieves
a message from the queue, processes it, and then deletes it from the queue.
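The send, receive, process, delete flow just described can be simulated with a local queue. Amazon SQS exposes the same pattern through its API; the sketch below is a stand-in using Python's `deque`, not real SQS code.

```python
from collections import deque

queue = deque()


def send_message(body: str) -> None:
    """A producer puts a message into the queue."""
    queue.append(body)


def receive_message():
    """A consumer reads the next message without removing it."""
    return queue[0] if queue else None


def delete_message() -> None:
    """After processing, the consumer deletes the message from the queue."""
    if queue:
        queue.popleft()


send_message("order: 1 latte")
msg = receive_message()   # process the message ...
delete_message()          # ... then delete it from the queue
print(msg, len(queue))    # order: 1 latte 0
```

Because receive and delete are separate steps, a message stays in the queue until a consumer has successfully processed it.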
The coffee shop ordering process provides an example of how to use Amazon SQS.
Suppose that the coffee shop has an ordering process in which a cashier takes orders,
and a barista makes the orders. Think of the cashier and the barista as two separate
components of an application.
First, the cashier takes an order and writes it down on a piece of paper. Next, the
cashier delivers the paper to the barista. Finally, the barista makes the drink and gives
it to the customer.
When the next order comes in, the process repeats. This process runs smoothly as
long as both the cashier and the barista are coordinated.
What might happen if the cashier took an order and went to deliver it to the barista,
but the barista was out on a break or busy with another order? The cashier would need
to wait until the barista is ready to accept the order. This would cause delays in the
ordering process and require customers to wait longer to receive their orders.
As the coffee shop has become more popular and the ordering line is moving more
slowly, the owners notice that the current ordering process is time consuming and
inefficient. They decide to try a different approach that uses a queue.
Recall that the cashier and the barista are two separate components of an application.
A message queuing service, such as Amazon SQS, enables messages to be passed
between decoupled application components.
In this example, the first step in the process remains the same as before: a customer
places an order with the cashier.
The cashier puts the order into a queue. You can think of this as an order board that
serves as a buffer between the cashier and the barista. Even if the barista is out on a
break or busy with another order, the cashier can continue placing new orders into the
queue.
Next, the barista checks the queue and retrieves the order.
The barista then removes the completed order from the queue.
While the barista is preparing the drink, the cashier is able to continue taking new
orders and add them to the queue.
With virtual servers, you think about both servers and code; with serverless
computing, you think only about code.
The term “serverless” means that your code runs on servers, but you do not need to
provision or manage these servers. With serverless computing, you can focus more
on innovating new products and features instead of maintaining servers.
AWS Lambda is a service that lets you run code without needing
to provision or manage servers.
While using AWS Lambda, you pay only for the compute time that you consume.
Charges apply only when your code is running. You can also run code for virtually any
type of application or backend service, all with zero administration.
For example, a simple Lambda function might automatically resize images that are
uploaded to the AWS Cloud. In this case, the function triggers when a new image is
uploaded.
How AWS Lambda works
You upload your code to Lambda.
You set your code to trigger from an event source, such as AWS services,
mobile applications, or HTTP endpoints.
Lambda runs your code only when triggered.
You pay only for the compute time that you use. In the previous example of
resizing images, you would pay only for the compute time that you use when
uploading new images. Uploading the images triggers Lambda to run code for
the image resizing function.
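A Lambda function for the image-resizing example would have roughly the shape below. The event fields and the halving "resize" are invented for illustration; a real function would fetch the uploaded object and resize it with an imaging library.

```python
# The general shape of a Lambda handler for the resizing example.
# The event structure ("image_name", "size") is a hypothetical assumption.
def handler(event, context=None):
    """Runs only when triggered, for example by an image upload event."""
    name = event["image_name"]
    width, height = event["size"]
    new_size = (width // 2, height // 2)  # stand-in for real resize logic
    return {"image_name": name, "resized_to": new_size}


# Simulating the trigger: an upload event invokes the handler.
result = handler({"image_name": "photo.png", "size": (800, 600)})
print(result["resized_to"])  # (400, 300)
```

You would be charged only for the compute time each such invocation consumes, not for idle time between uploads.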
7 AWS Fargate:
AWS Fargate is a serverless compute engine for containers. When using AWS
Fargate, you do not need to provision or manage servers. AWS Fargate manages your
server infrastructure for you. You can focus more on innovating and developing your
applications, and you pay only for the resources that are required to run your
containers.
8 AWS Global Infrastructure:
To understand the AWS global infrastructure, consider the coffee shop. If an event
such as a parade, flood, or power outage impacts one location, customers can still get
their coffee by visiting a different location only a few blocks away.
When determining the right Region for your services, data, and applications, consider
the following four business factors.
8.1.1 Compliance with data governance and legal requirements
Depending on your company and location, you might need to run your applications
and store data in a specific area for compliance reasons. Not all companies have
location-specific data regulations, so you might need to focus more on the other
three factors.
8.1.2 Proximity to your customers
Selecting a Region that is close to your customers will help you to get content to them
faster. For example, your company is based in Washington, DC, and many of your
customers live in Singapore. You might consider running your infrastructure in the
Northern Virginia Region to be close to company headquarters, and run your
applications from the Singapore Region.
8.1.3 Available services within a Region
Sometimes, the closest Region might not have all the features that you want to offer
to customers. AWS is frequently innovating by creating new services and expanding
on features within existing services. However, making new services available around
the world sometimes requires AWS to build out physical hardware one Region at a
time.
Suppose that your developers want to build an application that uses Amazon Braket
(the AWS quantum computing platform). As of this course, Amazon Braket is not yet
available in every AWS Region around the world, so your developers would have to
run it in one of the Regions that already offers it.
8.1.4 Pricing:
Suppose that you are considering running applications in both the United States and
Brazil. The way Brazil’s tax structure is set up, it might cost 50% more to run the same
workload out of the São Paulo Region compared to the Oregon Region. You will learn
in more detail that several factors determine pricing, but for now know that the cost of
services can vary from Region to Region.
9 Availability Zone
An Availability Zone is a single data center or a group of data centers within a Region.
Availability Zones are located tens of miles apart from each other. This is close enough
to have low latency (the time between when content is requested and when it is
received) between Availability Zones. However, if a disaster occurs in one part of the
Region, they are distant enough to reduce the chance that multiple Availability Zones
are affected.
9.1 Amazon EC2 instances in a single Availability Zone
Suppose that you’re running an application on a single Amazon EC2 instance in the
Northern California Region. The instance is running in the us-west-1a Availability
Zone. If us-west-1a were to fail, you would lose your instance.
9.2 Amazon EC2 instances in multiple Availability Zones
A best practice is to run applications across at least two Availability Zones in a Region.
In this example, you might choose to run a second Amazon EC2 instance in us-west-
1b.
An edge location is a site that Amazon CloudFront uses to store cached copies of
your content closer to your customers for faster delivery.
10.1 AWS Management Console:
The AWS Management Console is a web-based interface for accessing and managing
AWS services. You can also use the AWS Console mobile application to perform tasks
such as monitoring resources, viewing alarms, and accessing billing information.
Multiple identities can stay logged into the AWS Console mobile app at the same time.
10.2 AWS Command Line Interface (AWS CLI):
By using the AWS CLI, you can automate the actions that your services and
applications perform through scripts. For example, you can use commands to launch
an Amazon EC2 instance, connect an Amazon EC2 instance to a specific Auto Scaling
group, and more.
10.3 Software development kits (SDKs):
Another option for accessing and managing AWS services is the software
development kits (SDKs). SDKs make it easier for you to use AWS services through
an API designed for your programming language or platform. SDKs enable you to use
AWS services with your existing applications or create entirely new applications that
will run on AWS.
To help you get started with using SDKs, AWS provides documentation and sample
code for each supported programming language. Supported programming languages
include C++, Java, .NET, and more.
With AWS Elastic Beanstalk, you provide code and configuration settings, and
Elastic Beanstalk deploys the resources necessary to perform the following tasks:
Adjust capacity.
Balance load.
Scale automatically.
Monitor application health.
With AWS CloudFormation, you can treat your infrastructure as code. This means
that you can build an environment by writing lines of code instead of using the AWS
Management Console to individually provision resources.
12 Connectivity to AWS:
12.1 Amazon Virtual Private Cloud (Amazon VPC):
Imagine the millions of customers who use AWS services. Also, imagine the millions
of resources that these customers have created, such as Amazon EC2 instances.
Without boundaries around all of these resources, network traffic would be able to flow
between them unrestricted.
A networking service that you can use to establish boundaries around your AWS
resources is Amazon Virtual Private Cloud (Amazon VPC).
Amazon VPC enables you to provision an isolated section of the AWS Cloud. In this
isolated section, you can launch resources in a virtual network that you define. Within
a virtual private cloud (VPC), you can organize your resources into subnets.
A subnet is a section of a VPC that can contain resources such as Amazon EC2
instances.
12.2 Internet gateway:
To allow public traffic from the internet to access your VPC, you attach an internet
gateway to the VPC.
An internet gateway is a connection between a VPC and the internet. You can think of
an internet gateway as being similar to a doorway that customers use to enter the
coffee shop. Without an internet gateway, no one can access the resources within
your VPC.
What if you have a VPC that includes only private resources?
12.3 Virtual private gateway
To access private resources in a VPC, you can use a virtual private gateway.
Here’s an example of how a virtual private gateway works. You can think of the internet
as the road between your home and the coffee shop. Suppose that you are traveling
on this road with a bodyguard to protect you. You are still using the same road as other
customers, but with an extra layer of protection.
The bodyguard is like a virtual private network (VPN) connection that encrypts (or
protects) your internet traffic from all the other requests around it.
The virtual private gateway is the component that allows protected internet traffic to
enter into the VPC. Even though your connection to the coffee shop has extra
protection, traffic jams are possible because you’re using the same road as other
customers.
A virtual private gateway enables you to establish a virtual private network (VPN)
connection between your VPC and a private network, such as an on-premises data
center or internal corporate network. A virtual private gateway allows traffic into the
VPC only if it is coming from an approved network.
12.4 AWS Direct Connect
AWS Direct Connect is a service that lets you establish a dedicated private
connection between your data center and a VPC.
Suppose that there is an apartment building with a hallway directly linking the building
to the coffee shop. Only the residents of the apartment building can travel through this
hallway.
This private hallway provides the same type of dedicated connection as AWS Direct
Connect. Residents are able to get into the coffee shop without needing to use the
public road shared with other customers.
A corporate data center routes network traffic to an AWS Direct Connect location. That
traffic is then routed to a VPC through a virtual private gateway. All network traffic
between the corporate data center and VPC flows through this dedicated private
connection.
The private connection that AWS Direct Connect provides helps you to reduce network
costs and increase the amount of bandwidth that can travel through your network.
To learn more about the role of subnets within a VPC, review the following example
from the coffee shop.
First, customers give their orders to the cashier. The cashier then passes the orders
to the barista. This process allows the line to keep running smoothly as more
customers come in.
Suppose that some customers try to skip the cashier line and give their orders directly
to the barista. This disrupts the flow of traffic and results in customers accessing a part
of the coffee shop that is restricted to them.
To fix this, the owners of the coffee shop divide the counter area by placing the cashier
and the barista in separate workstations. The cashier’s workstation is public facing
and designed to receive customers. The barista’s area is private. The barista can still
receive orders from the cashier but not directly from customers.
This is similar to how you can use AWS networking services to isolate resources and
determine exactly how network traffic flows.
In the coffee shop, you can think of the counter area as a VPC. The counter area
divides into two separate areas for the cashier’s workstation and the barista’s
workstation. In a VPC, subnets are separate areas that are used to group together
resources.
13.1 Subnets
A subnet is a section of a VPC in which you can group resources based on security or
operational needs. Subnets can be public or private.
Public subnets contain resources that need to be accessible by the public, such as an
online store’s website.
Private subnets contain resources that should be accessible only through your private
network, such as a database that contains customers’ personal information.
In a VPC, subnets can communicate with each other. For example, you might have
an application that involves Amazon EC2 instances in a public subnet
communicating with databases that are located in a private subnet.
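Carving a VPC address range into subnets can be illustrated with Python's standard `ipaddress` module. The CIDR blocks below are example values, not requirements.

```python
import ipaddress

# An example VPC address range, split into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet, private_subnet, *_ = vpc.subnets(new_prefix=24)

print(public_subnet)    # 10.0.0.0/24  (e.g. public-facing web servers)
print(private_subnet)   # 10.0.1.0/24  (e.g. databases)
print(public_subnet.subnet_of(vpc))  # True: both subnets live inside the VPC
```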
13.2 Network traffic in a VPC
When a customer requests data from an application hosted in the AWS Cloud, this
request is sent as a packet. A packet is a unit of data sent over the internet or a
network.
A packet enters a VPC through an internet gateway. Before a packet can enter or exit
a subnet, its permissions are checked. These permissions indicate who sent the
packet and how the packet is trying to communicate with the resources in a subnet.
The VPC component that checks packet permissions for subnets is a network access
control list (ACL).
13.3 Network ACLs
A network ACL is a virtual firewall that controls inbound and outbound traffic at the
subnet level.
For example, step outside of the coffee shop and imagine that you are in an airport. In
the airport, travelers are trying to enter into a different country. You can think of the
travelers as packets and the passport control officer as a network ACL. The passport
control officer checks travelers’ credentials when they are both entering and exiting
out of the country. If a traveler is on an approved list, they are able to get through.
However, if they are not on the approved list or are explicitly on a list of banned
travelers, they cannot come in.
Each AWS account includes a default network ACL. When configuring your VPC, you
can use your account’s default network ACL or create custom network ACLs.
By default, your account’s default network ACL allows all inbound and outbound
traffic, but you can modify it by adding your own rules. For custom network ACLs, all
inbound and outbound traffic is denied until you add rules to specify which traffic to
allow. Additionally, all network ACLs have an explicit deny rule. This rule ensures that
if a packet doesn’t match any of the other rules on the list, the packet is denied.
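The evaluation just described, where rules are checked in order and a packet that matches nothing is denied, can be sketched as follows. The rule contents are made up for the example; real network ACL rules also carry rule numbers, protocols, and CIDR ranges.

```python
# A sketch of network ACL evaluation: the first matching rule decides,
# and the explicit deny applies when no rule matches. Rules are invented.
rules = [
    {"port": 443, "action": "allow"},
    {"port": 80,  "action": "allow"},
    {"port": 22,  "action": "deny"},
]


def evaluate(packet_port: int) -> str:
    for rule in rules:
        if rule["port"] == packet_port:
            return rule["action"]
    return "deny"  # the explicit deny rule: no match means denied


print(evaluate(443))   # allow
print(evaluate(3306))  # deny (matched no rule)
```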
13.4 Stateless packet filtering
Network ACLs perform stateless packet filtering. They remember nothing and check
packets that cross the subnet border each way: inbound and outbound.
Recall the previous example of a traveler who wants to enter into a different country.
This is similar to sending a request out from an Amazon EC2 instance and to the
internet.
When a packet response for that request comes back to the subnet, the network ACL
does not remember your previous request. The network ACL checks the packet
response against its list of rules to determine whether to allow or deny.
After a packet has entered a subnet, it must have its permissions evaluated for
resources within the subnet, such as Amazon EC2 instances.
The VPC component that checks packet permissions for an Amazon EC2 instance is
a security group.
13.5 Security groups
A security group is a virtual firewall that controls inbound and outbound traffic for an
Amazon EC2 instance.
By default, a security group denies all inbound traffic and allows all outbound traffic.
You can add custom rules to configure which traffic should be allowed; any other traffic
would then be denied.
For this example, suppose that you are in an apartment building with a door attendant
who greets guests in the lobby. You can think of the guests as packets and the door
attendant as a security group. As guests arrive, the door attendant checks a list to
ensure they can enter the building. However, the door attendant does not check the
list again when guests are exiting the building.
If you have multiple Amazon EC2 instances within the same VPC, you can associate
them with the same security group or use different security groups for each instance.
13.6 Stateful packet filtering
Security groups perform stateful packet filtering. They remember previous decisions
made for incoming packets.
Consider the same example of sending a request out from an Amazon EC2 instance
to the internet.
When a packet response for that request returns to the instance, the security group
remembers your previous request. The security group allows the response to proceed,
regardless of inbound security group rules.
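The stateful behaviour can be contrasted with the stateless check using a small connection table. Again, this is only an illustrative sketch; the class name and fields are hypothetical, not part of any AWS API.

```python
# Simplified sketch of stateful filtering (illustrative only). Outbound
# requests are recorded in a connection table; a returning response is
# allowed because the earlier request is remembered, regardless of the
# configured inbound rules.

class SecurityGroup:
    def __init__(self, inbound_allowed_ports):
        self.inbound_allowed_ports = set(inbound_allowed_ports)
        self.connections = set()  # remembered outbound requests

    def send_request(self, remote_host, remote_port):
        # Default behaviour: all outbound traffic is allowed; remember it.
        self.connections.add((remote_host, remote_port))

    def receive(self, remote_host, remote_port, dest_port=None):
        # A response to a remembered request is always allowed.
        if (remote_host, remote_port) in self.connections:
            return "allow"
        # Unsolicited inbound traffic must match an inbound rule.
        return "allow" if dest_port in self.inbound_allowed_ports else "deny"

sg = SecurityGroup(inbound_allowed_ports=[443])
sg.send_request("203.0.113.10", 80)
print(sg.receive("203.0.113.10", 80))                 # allow: remembered response
print(sg.receive("198.51.100.5", 80, dest_port=22))   # deny: unsolicited, port 22 closed
```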
With both network ACLs and security groups, you can configure custom rules for the
traffic in your VPC. As you continue to learn more about AWS security and networking,
make sure to understand the differences between network ACLs and security groups.
A packet travels over the internet from a client to the internet gateway and into the
VPC. The packet then goes through the network access control list and accesses the
public subnet, where two EC2 instances are located.
14 Global Networking:
14.1 Domain Name System (DNS)
Suppose that AnyCompany has a website hosted in the AWS Cloud. Customers enter
the web address into their browser, and they are able to access the website. This
happens because of Domain Name System (DNS) resolution. DNS resolution
involves a customer DNS resolver communicating with a company DNS server.
You can think of DNS as being the phone book of the internet. DNS resolution is the
process of translating a domain name to an IP address.
A client connects to a DNS resolver looking for a domain. The resolver forwards the
request to the DNS server, which returns the IP address to the resolver.
For example, suppose that you want to visit AnyCompany’s website.
1. When you enter the domain name into your browser, the request is sent to a
customer DNS resolver.
2. The customer DNS resolver asks the company DNS server for the IP address
that corresponds to AnyCompany’s website.
3. The company DNS server responds by providing the IP address for
AnyCompany’s website, 192.0.2.0.
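The steps above can be modeled with a toy resolver. This is purely illustrative: the “company DNS server” here is a plain dictionary, whereas real DNS resolution walks a hierarchy of root, TLD, and authoritative name servers.

```python
# Toy model of the DNS resolution steps above (illustrative only).
# Domain and IP address come from the example in the text.

COMPANY_DNS_SERVER = {
    "anycompany.example": "192.0.2.0",
}

def customer_dns_resolver(domain, dns_server=COMPANY_DNS_SERVER):
    """Forward the browser's request to the DNS server and return the IP."""
    ip_address = dns_server.get(domain)
    if ip_address is None:
        raise LookupError(f"No record found for {domain}")
    return ip_address

print(customer_dns_resolver("anycompany.example"))  # 192.0.2.0
```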
14.2 Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and
businesses a reliable way to route end users to internet applications hosted in AWS.
Another feature of Route 53 is the ability to manage the DNS records for domain
names. You can register new domain names directly in Route 53. You can also
transfer DNS records for existing domain names managed by other domain registrars.
This enables you to manage all of your domain names within a single location.
In the previous module, you learned about Amazon CloudFront, a content delivery
service. The following example describes how Route 53 and Amazon CloudFront work
together to deliver content to customers.
Example: How Amazon Route 53 and Amazon CloudFront deliver content
15 Amazon Elastic Block Store (Amazon EBS):
Amazon Elastic Block Store (Amazon EBS) is a service that
provides block-level storage volumes that you can use with Amazon EC2 instances. If
you stop or terminate an Amazon EC2 instance, all the data on the attached EBS
volume remains available.
To create an EBS volume, you define the configuration (such as volume size and type)
and provision it. After you create an EBS volume, it can attach to an Amazon EC2
instance.
Because EBS volumes are for data that needs to persist, it’s important to back up the
data. You can take incremental backups of EBS volumes by creating Amazon EBS
snapshots.
Incremental backups of EBS volumes with Amazon EBS snapshots. On Day 1, two
volumes are backed up. Day 2 adds one new volume and the new volume is backed
up. Day 3 adds two more volumes for a total of five volumes. Only the two new volumes
are backed up.
An EBS snapshot is an incremental backup. This means that the
first backup taken of a volume copies all the data. For subsequent backups, only the
blocks of data that have changed since the most recent snapshot are saved.
Incremental backups are different from full backups, in which all the data in a storage
volume copies each time a backup occurs. The full backup includes data that has not
changed since the most recent backup.
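The incremental behaviour can be illustrated by modeling a volume as a dictionary of blocks. This is a sketch of the concept only, not of the EBS snapshot API.

```python
# Sketch of incremental backups (illustrative only). A volume is modeled as a
# dict of block_id -> content; a snapshot saves only blocks that changed since
# the previous snapshot, exactly as described in the text.

def take_snapshot(volume_blocks, previous_snapshot=None):
    """Save all blocks on the first snapshot, changed blocks afterwards."""
    if previous_snapshot is None:
        return dict(volume_blocks)  # first snapshot copies all the data
    return {
        block_id: content
        for block_id, content in volume_blocks.items()
        if previous_snapshot.get(block_id) != content
    }

volume = {"b1": "data1", "b2": "data2", "b3": "data3"}
first = take_snapshot(volume)                             # copies all 3 blocks
volume["b2"] = "data2-changed"
second = take_snapshot(volume, previous_snapshot=first)   # copies only b2

print(len(first), len(second))  # 3 1
```

A full backup, by contrast, would copy all three blocks every time, including the two that never changed.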
16 Amazon Simple Storage Service (Amazon S3):
In object storage, each object consists of data, metadata, and a key.
The data might be an image, video, text document, or any other type of file. Metadata
contains information about what the data is, how it is used, the object size, and so on.
An object’s key is its unique identifier.
You can upload any type of file to Amazon S3, such as images, videos, text files, and
so on. For example, you might use Amazon S3 to store backup files, media files for a
website, or archived documents. Amazon S3 offers unlimited storage space. The
maximum file size for an object in Amazon S3 is 5 TB.
When you upload a file to Amazon S3, you can set permissions to control visibility and
access to it. You can also use the Amazon S3 versioning feature to track changes to
your objects over time.
16.2 Amazon S3 storage classes
With Amazon S3, you pay only for what you use. You can choose from a range of
storage classes to select a fit for your business and cost needs.
When selecting an Amazon S3 storage class, consider these two factors: how often
you plan to retrieve your data, and how available you need your data to be.
The following are some of the Amazon S3 storage classes.
16.3 S3 Standard
Designed for frequently accessed data
Stores data in a minimum of three Availability Zones
Amazon S3 Standard provides high availability for objects. This makes it a good choice
for a wide range of use cases, such as websites, content distribution, and data
analytics. Amazon S3 Standard has a higher cost than other storage classes intended
for infrequently accessed data and archival storage.
16.6 S3 Intelligent-Tiering
Ideal for data with unknown or changing access patterns
Requires a small monthly monitoring and automation fee per object
16.8 S3 Glacier Flexible Retrieval
S3 Glacier Flexible Retrieval is a low-cost storage class that is ideal for data archiving.
For example, you might use this storage class to store archived customer records or
older photos and video files. You can retrieve your data from S3 Glacier Flexible
Retrieval from 1 minute to 12 hours.
16.9 S3 Glacier Deep Archive
S3 Glacier Deep Archive supports long-term retention and digital preservation for data
that might be accessed once or twice a year. This storage class is the lowest-cost
storage in the AWS Cloud, with data retrieval from 12 to 48 hours. All objects from this
storage class are replicated and stored across at least three geographically dispersed
Availability Zones.
16.10 S3 Outposts
Creates S3 buckets on Amazon S3 Outposts
Makes it easier to retrieve, store, and access data on AWS Outposts
Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts
environment. Amazon S3 Outposts is designed to store data durably and redundantly
across multiple devices and servers on your Outposts. It works well for workloads with
local data residency requirements that must satisfy demanding performance needs by
keeping data close to on-premises applications.
17 Amazon Elastic File System (Amazon EFS):
Amazon Elastic File System (Amazon EFS) is a scalable file
system used with AWS Cloud services and on-premises resources. As you add and
remove files, Amazon EFS grows and shrinks automatically. It can scale on demand
to petabytes without disrupting applications.
Amazon EFS is a regional service. It stores data in and across multiple Availability
Zones.
The duplicate storage enables you to access data concurrently from all the Availability
Zones in the Region where a file system is located. Additionally, on-premises servers
can access Amazon EFS using AWS Direct Connect.
18 Relational Databases:
In a relational database, data is stored in a way that relates it to other pieces of data.
An example of a relational database might be the coffee shop’s inventory management
system. Each record in the database would include data for a single item, such as
product name, size, price, and so on.
Relational databases use structured query language (SQL) to store and query data.
This approach allows data to be stored in an easily understandable, consistent, and
scalable way. For example, the coffee shop owners can write a SQL query to identify
all the customers whose most frequently purchased drink is a medium latte.
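Python’s built-in sqlite3 module can illustrate the kind of query described. The table and column names here are hypothetical, and the query is a simplified take on “most frequently purchased drink,” using a window function (requires SQLite 3.25 or later).

```python
# Hedged sketch of the SQL query described above, run against an in-memory
# SQLite database. The "orders" table and its columns are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, drink TEXT);
    INSERT INTO orders VALUES
        ('Ana', 'medium latte'), ('Ana', 'medium latte'), ('Ana', 'espresso'),
        ('Ben', 'espresso'),     ('Ben', 'espresso'),     ('Ben', 'medium latte');
""")

# Count purchases per customer and drink, rank each customer's drinks by
# frequency, then keep customers whose top drink is a medium latte.
rows = conn.execute("""
    WITH drink_counts AS (
        SELECT customer, drink, COUNT(*) AS n
        FROM orders
        GROUP BY customer, drink
    ),
    ranked AS (
        SELECT customer, drink,
               RANK() OVER (PARTITION BY customer ORDER BY n DESC) AS rnk
        FROM drink_counts
    )
    SELECT customer FROM ranked WHERE rnk = 1 AND drink = 'medium latte'
""").fetchall()

print(rows)  # [('Ana',)]
```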
Amazon Relational Database Service (Amazon RDS) is a service that enables you to
run relational databases in the AWS Cloud.
Amazon RDS provides a number of different security options. Many Amazon RDS
database engines offer encryption at rest (protecting data while it is stored) and
encryption in transit (protecting data while it is being sent and received).
Amazon RDS is available on several database engines, including:
PostgreSQL
MySQL
MariaDB
Oracle Database
19 Nonrelational databases:
In a nonrelational database, you create tables. A table is a place where you can store
and query data.
Nonrelational databases are sometimes referred to as “NoSQL databases” because
they use structures other than rows and columns to organize data. One type of
structural approach for nonrelational databases is key-value pairs. With key-value
pairs, data is organized into items (keys), and items have attributes (values). You can
think of attributes as being different features of your data.
In a key-value database, you can add or remove attributes from items in the table at
any time. Additionally, not every item in the table has to have the same attributes.
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond
performance at any scale.
DynamoDB is serverless, which means that you do not have to provision, patch, or
manage servers. You also do not have to install, maintain, or operate software.
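The flexibility of key-value items can be sketched with a plain dictionary. This illustrates the concept only; a real DynamoDB table is accessed through the DynamoDB API and requires a declared key schema.

```python
# Sketch of a key-value table with flexible attributes (illustrative only).
# Each item is identified by a key; items need not share the same attributes,
# and attributes can be added or removed at any time.

table = {}

def put_item(key, **attributes):
    table[key] = attributes

def update_item(key, **attributes):
    table[key].update(attributes)  # add or change attributes later

# Items in the same table can have different attributes.
put_item("customer#1", name="Ana", favourite_drink="medium latte")
put_item("customer#2", name="Ben")                 # no favourite_drink attribute

update_item("customer#2", loyalty_points=10)       # attribute added afterwards

print(sorted(table["customer#2"].keys()))  # ['loyalty_points', 'name']
```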
AWS Database Migration Service (AWS DMS) enables you to
migrate relational databases, nonrelational databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target
database. The source and target databases can be of the same
type or different types. During the migration, your source database remains
operational, reducing downtime for any applications that rely on the database.
For example, suppose that you have a MySQL database that is stored on premises in
an Amazon EC2 instance or in Amazon RDS. Consider the MySQL database to be
your source database. Using AWS DMS, you could migrate your data to a target
database, such as an Amazon Aurora database.
Other use cases for AWS DMS include:
Enabling developers to test applications against production data without affecting
production users.
Combining several databases into a single database.
Sending ongoing copies of your data to other target sources instead of performing a
one-time migration.
Blockchain is a distributed ledger system that lets multiple parties run transactions and
share data without a central authority.
23 The Shared Responsibility Model:
Throughout this course, you have learned about a variety of resources that you can
create in the AWS Cloud. These resources include Amazon EC2 instances, Amazon
S3 buckets, and Amazon RDS databases. Who is responsible for keeping these
resources secure: you (the customer) or AWS?
The answer is both. The reason is that you do not treat your AWS environment as a
single object. Rather, you treat the environment as a collection of parts that build upon
each other. AWS is responsible for some parts of your environment and you (the
customer) are responsible for other parts. This concept is known as the shared
responsibility model.
You can think of this model as being similar to the division of responsibilities between
a homeowner and a homebuilder. The builder (AWS) is responsible for constructing
your house and ensuring that it is solidly built. As the homeowner (the customer), it is
your responsibility to secure everything in the house by ensuring that the doors are
closed and locked.
The shared responsibility model divides responsibilities into two categories: security
in the cloud (handled by the customer) and security of the cloud (handled by AWS).
23.1 Customers: Security in the cloud
Customers are responsible for the security of everything that they create and put in the
AWS Cloud.
When using AWS services, you, the customer, maintain complete control over your
content. You are responsible for managing security requirements for your content,
including which content you choose to store on AWS, which AWS services you use,
and who has access to that content. You also control how access rights are granted,
managed, and revoked.
The security steps that you take will depend on factors such as the services that you
use, the complexity of your systems, and your company’s specific operational and
security needs. Steps include selecting, configuring, and patching the operating
systems that will run on Amazon EC2 instances, configuring security groups, and
managing user accounts.
23.2 AWS: Security of the cloud
AWS is responsible for protecting the global infrastructure that runs all of the services
offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability
Zones, and edge locations.
AWS manages the security of the cloud, specifically the physical infrastructure that
hosts your resources, which includes:
Physical security of data centres
Hardware and software infrastructure
Network infrastructure
Virtualization infrastructure
Although you cannot visit AWS data centres to see this protection firsthand, AWS
provides several reports from third-party auditors. These auditors have verified its
compliance with a variety of computer security standards and regulations.
24 AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to manage access to AWS
services and resources securely.
IAM gives you the flexibility to configure access based on your company’s specific
operational and security needs. You do this by using a combination of IAM features,
which are explored in detail in this lesson:
IAM users, groups, and roles
IAM policies
Multi-factor authentication
You will also learn best practices for each of these features.
When you first create an AWS account, you begin with an identity known as the root
user.
The root user is accessed by signing in with the email address and password that you
used to create your AWS account. You can think of the root user as being similar to
the owner of the coffee shop. It has complete access to all the AWS services and
resources in the account.
Best practice:
Do not use the root user for everyday tasks. Instead, use the root user to create your
first IAM user and assign it permissions to create other users.
Then, continue to create other IAM users, and access those identities for performing
regular tasks throughout AWS. Only use the root user when you need to perform a
limited number of tasks that are only available to the root user. Examples of these
tasks include changing your root user email address and changing your AWS support
plan. For more information, see “Tasks that require root user credentials” in the AWS
Account Management Reference Guide.
A) IAM users
An IAM user is an identity that you create in AWS. It represents the person or
application that interacts with AWS services and resources. It consists of a name and
credentials.
By default, when you create a new IAM user in AWS, it has no permissions associated
with it. To allow the IAM user to perform specific actions in AWS, such as launching an
Amazon EC2 instance or creating an Amazon S3 bucket, you must grant the IAM user
the necessary permissions.
Best practice:
We recommend that you create individual IAM users for each person who needs to
access AWS.
Even if you have multiple employees who require the same level of access, you should
create individual IAM users for each of them. This provides additional security by
allowing each IAM user to have a unique set of security credentials.
B) IAM policies
An IAM policy is a document that allows or denies permissions to AWS services and
resources.
IAM policies enable you to customize users’ levels of access to resources. For
example, you can allow users to access all of the Amazon S3 buckets within your AWS
account, or only a specific bucket.
Best practice:
When you create IAM policies, follow the security principle of least privilege, which
means granting only the permissions needed to perform a task. By following this
principle, you help to prevent users or roles from having more permissions than
needed to perform their tasks.
For example, if an employee needs access to only a specific bucket, specify the bucket
in the IAM policy. Do this instead of granting the employee access to all of the buckets
in your AWS account.
Here’s an example of how IAM policies work. Suppose that the coffee shop owner has
to create an IAM user for a newly hired cashier. The cashier needs access to the
receipts kept in an Amazon S3 bucket with the ID: AWSDOC-EXAMPLE-BUCKET.
This example IAM policy allows permission to access the objects in the Amazon S3
bucket with ID: AWSDOC-EXAMPLE-BUCKET.
In this example, the IAM policy is allowing a specific action within Amazon S3:
ListObject. The policy also mentions a specific bucket ID: AWSDOC-EXAMPLE-
BUCKET. When the owner attaches this policy to the cashier’s IAM user, it will allow
the cashier to view all of the objects in the AWSDOC-EXAMPLE-BUCKET bucket.
If the owner wants the cashier to be able to access other services and perform other
actions in AWS, the owner must attach additional policies to specify these services
and actions.
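A policy like the one described might look as follows. The course’s exact JSON is not reproduced here; this sketch follows the standard IAM policy grammar, using the action and bucket named in the text.

```python
# Hedged sketch of the IAM policy described above, built as a Python dict and
# serialized to JSON, since IAM policies are attached as JSON documents.

import json

cashier_policy = {
    "Version": "2012-10-17",  # current IAM policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListObject",                       # the action named in the text
            "Resource": "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET",  # the bucket from the example
        }
    ],
}

policy_document = json.dumps(cashier_policy, indent=2)
print(policy_document)
```

Attaching this policy to the cashier’s IAM user grants only this one action on this one bucket; any other service or action would need an additional policy statement.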
Now, suppose that the coffee shop has hired a few more cashiers. Instead of assigning
permissions to each individual IAM user, the owner places the users into an IAM
group.
C) IAM groups
An IAM group is a collection of IAM users. When you assign an IAM policy to a group,
all users in the group are granted permissions specified by the policy.
Here’s an example of how this might work in the coffee shop. Instead of assigning
permissions to cashiers one at a time, the owner can create a “Cashiers” IAM group.
The owner can then add IAM users to the group and then attach permissions at the
group level.
Assigning IAM policies at the group level also makes it easier to adjust permissions
when an employee transfers to a different job. For example, if a cashier becomes an
inventory specialist, the coffee shop owner removes them from the “Cashiers” IAM
group and adds them into the “Inventory Specialists” IAM group. This ensures that
employees have only the permissions that are required for their current role.
What if a coffee shop employee hasn’t switched jobs permanently, but instead, rotates
to different workstations throughout the day? This employee can get the access they
need through IAM roles.
D) IAM roles
In the coffee shop, an employee rotates to different workstations throughout the day.
Depending on the staffing of the coffee shop, this employee might perform several
duties: work at the cash register, update the inventory system, process online orders,
and so on.
When the employee needs to switch to a different task, they give up their access to
one workstation and gain access to the next workstation. The employee can easily
switch between workstations, but at any given point in time, they can have access to
only a single workstation. This same concept exists in AWS with IAM roles.
An IAM role is an identity that you can assume to gain temporary access to
permissions.
Before an IAM user, application, or service can assume an IAM role, they must be
granted permissions to switch to the role. When someone assumes an IAM role, they
abandon all previous permissions that they had under a previous role and assume the
permissions of the new role.
Best practice:
IAM roles are ideal for situations in which access to services or resources needs to be
granted temporarily, instead of long-term.
E) Multi-factor authentication:
Have you ever signed in to a website that required you to provide multiple pieces of
information to verify your identity? You might have needed to provide your password
and then a second form of authentication, such as a random code sent to your phone.
This is an example of multi-factor authentication.
In IAM, multi-factor authentication (MFA) provides an extra layer of security for your
AWS account.
25 AWS Organizations
Suppose that your company has multiple AWS accounts. You can use AWS
Organizations to consolidate and manage multiple AWS accounts within a central
location.
In AWS Organizations, you can centrally control permissions for the accounts in your
organization by using service control policies (SCPs). SCPs enable you to place
restrictions on the AWS services, resources, and individual API actions that users and
roles in each account can access.
Consolidated billing is another feature of AWS Organizations. You will learn about
consolidated billing in a later module.
Organizational units
In AWS Organizations, you can group accounts into organizational units (OUs) to
make it easier to manage accounts with similar business or security requirements.
When you apply a policy to an OU, all the accounts in the OU automatically inherit the
permissions specified in the policy.
By organizing separate accounts into OUs, you can more easily isolate workloads or
applications that have specific security requirements. For instance, if your company
has accounts that can access only the AWS services that meet certain regulatory
requirements, you can put these accounts into one OU. Then, you can attach a policy
to the OU that blocks access to all other AWS services that do not meet the regulatory
requirements.
The following example shows how a company might use AWS Organizations.
Imagine that your company has separate AWS accounts for the finance, information
technology (IT), human resources (HR), and legal departments. You decide to
consolidate these accounts into a single organization so that you can administer them
from a central location. When you create the organization, this establishes the root.
In designing your organization, you consider the business, security, and regulatory
needs of each department. You use this information to decide which departments
group together in OUs.
The HR and legal departments need to access the same AWS services and resources,
so you place them into an OU together. Placing them into an OU empowers you to
attach policies that apply to both the HR and legal departments’ AWS accounts.
26 Compliance:
AWS Artifact
Depending on your company’s industry, you may need to uphold specific standards.
An audit or inspection will ensure that your company has met those standards.
AWS Artifact is a service that provides on-demand access to AWS security and
compliance reports and select online agreements. AWS Artifact consists of two main
sections: AWS Artifact Agreements and AWS Artifact Reports.
Suppose that your company needs to sign an agreement with AWS regarding your use
of certain types of information throughout AWS services. You can do this through AWS
Artifact Agreements.
In AWS Artifact Agreements, you can review, accept, and manage agreements for an
individual account and for all your accounts in AWS Organizations. Different types of
agreements are offered to address the needs of customers who are subject to specific
regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
AWS Artifact Reports provide compliance reports from third-party auditors. These
auditors have tested and verified that AWS is compliant with a variety of global,
regional, and industry-specific security standards and regulations. AWS Artifact
Reports remain up to date with the latest reports released. You can provide the AWS
audit artifacts to your auditors or regulators as evidence of AWS security controls.
The following are some of the compliance reports and regulations that you can find
within AWS Artifact. Each report includes a description of its contents and the reporting
period for which the document is valid.
AWS Artifact provides access to AWS security and compliance documents, such as
AWS ISO certifications, Payment Card Industry (PCI) reports, and Service
Organization Control (SOC) reports. To learn more about the available compliance
reports, visit AWS Compliance Programs.
In the Customer Compliance Center, you can read customer compliance stories to
discover how companies in regulated industries have solved various compliance,
governance, and audit challenges.
You can also access compliance whitepapers and documentation on topics such as:
AWS answers to key compliance questions
Additionally, the Customer Compliance Center includes an auditor learning path. This
learning path is designed for individuals in auditing, compliance, and legal roles who
want to learn more about how their internal operations can demonstrate compliance
using the AWS Cloud.
27 Denial-of-Service Attacks:
Customers can call the coffee shop to place their orders. After answering each call, a
cashier takes the order and gives it to the barista.
However, suppose that a prankster is calling in multiple times to place orders but is
never picking up their drinks. This causes the cashier to be unavailable to take other
customers’ calls. The coffee shop can attempt to stop the false requests by blocking
the phone number that the prankster is using.
Denial-of-service attacks
A denial-of-service (DoS) attack is a deliberate attempt to make a website or
application unavailable to users.
For example, an attacker might flood a website or application with excessive network
traffic until the targeted website or application becomes overloaded and is no longer
able to respond. If the website or application becomes unavailable, this denies service
to users who are trying to make legitimate requests.
Distributed denial-of-service attacks
Now, suppose that the prankster has enlisted the help of friends.
The prankster and their friends repeatedly call the coffee shop with requests to place
orders, even though they do not intend to pick them up. These requests are coming in
from different phone numbers, and it’s impossible for the coffee shop to block them all.
Additionally, the influx of calls has made it increasingly difficult for customers to be
able to get their calls through. This is similar to a distributed denial-of-service
attack.
To help minimize the effect of DoS and DDoS attacks on your applications, you can
use AWS Shield. AWS Shield provides two levels of protection: Standard and
Advanced.
27.1.1 AWS Shield Standard
AWS Shield Standard automatically protects all AWS customers at no cost. It protects
your AWS resources from the most common, frequently occurring types of DDoS
attacks.
As network traffic comes into your applications, AWS Shield Standard uses a variety
of analysis techniques to detect malicious traffic in real time and automatically
mitigates it.
27.1.2 AWS Shield Advanced
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and
the ability to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53,
and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF
by writing custom rules to mitigate complex DDoS attacks.
28.1 AWS Key Management Service (AWS KMS)
You must also ensure that your applications’ data is secure while in storage
(encryption at rest) and while it is transmitted, known as encryption in transit.
AWS Key Management Service (AWS KMS) enables you to
perform encryption operations through the use of cryptographic keys. A
cryptographic key is a random string of digits used for locking (encrypting) and
unlocking (decrypting) data. You can use AWS KMS to create, manage, and use
cryptographic keys. You can also control the use of keys across a wide range of
services and in your applications.
With AWS KMS, you can choose the specific levels of access control that you need
for your keys. For example, you can specify which IAM users and roles are able to
manage keys. Alternatively, you can temporarily disable keys so that they are no
longer in use by anyone. Your keys never leave AWS KMS, and you are always in
control of them.
28.2 AWS WAF
AWS WAF is a web application firewall that lets you monitor
network requests that come into your web applications.
AWS WAF works together with Amazon CloudFront and an Application Load Balancer.
Recall the network access control lists that you learned about in an earlier module.
AWS WAF works in a similar way to block or allow traffic. However, it does this by
using a web access control list (ACL) to protect your AWS
resources.
Here’s an example of how you can use AWS WAF to allow and block specific requests.
Suppose that your application has been receiving malicious network requests from
several IP addresses. You want to prevent these requests from continuing to access
your application, but you also want to ensure that legitimate users can still access it.
You configure the web ACL to allow all requests except those from the IP addresses
that you have specified.
When a request comes into AWS WAF, it checks against the list of rules that you have
configured in the web ACL. If a request does not come from one of the blocked IP
addresses, it allows access to the application.
However, if a request comes from one of the blocked IP addresses that you have
specified in the web ACL, AWS WAF denies access.
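The web ACL behaviour described above can be sketched as follows. This is illustrative only; real AWS WAF rules can match on many request properties (headers, bodies, request rates), not just source IP, and the IP addresses here are hypothetical.

```python
# Sketch of a web ACL whose default action allows requests and whose only
# rule blocks a set of known-malicious IP addresses (illustrative only).

BLOCKED_IPS = {"198.51.100.23", "203.0.113.77"}  # hypothetical attacker addresses

def web_acl_check(source_ip, blocked_ips=BLOCKED_IPS):
    """Block requests from listed IPs; allow everything else (the default action)."""
    if source_ip in blocked_ips:
        return "block"
    return "allow"

print(web_acl_check("198.51.100.23"))  # block: on the blocked list
print(web_acl_check("192.0.2.44"))     # allow: legitimate user
```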
28.3 Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS
infrastructure and resources. It identifies threats by continuously monitoring the
network activity and account behavior within your AWS environment.
After you have enabled GuardDuty for your AWS account, GuardDuty begins
monitoring your network and account activity. You do not have to deploy or manage
any additional security software. GuardDuty then continuously analyzes data from
multiple AWS sources, including VPC Flow Logs and DNS logs.
If GuardDuty detects any threats, you can review detailed findings about them from
the AWS Management Console. Findings include recommended steps for
remediation. You can also configure AWS Lambda functions to take remediation steps
automatically in response to GuardDuty’s security findings.
29 Amazon CloudWatch:
Amazon CloudWatch is a web service that enables you to
monitor and manage various metrics and configure alarm actions based on data from
those metrics.
CloudWatch uses metrics to represent the data points for your
resources. AWS services send metrics to CloudWatch. CloudWatch then uses these
metrics to create graphs automatically that show how performance has changed over
time.
CloudWatch alarms
With CloudWatch, you can create alarms that automatically
perform actions if the value of your metric has gone above or below a predefined
threshold.
For example, suppose that your company’s developers use Amazon EC2 instances
for application development or testing purposes. If the developers occasionally forget
to stop the instances, the instances will continue to run and incur charges.
In this scenario, you could create a CloudWatch alarm that automatically stops an
Amazon EC2 instance when the CPU utilization percentage has remained below a
certain threshold for a specified period. When configuring the alarm, you can specify
to receive a notification whenever this alarm is triggered.
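The alarm logic in this scenario can be sketched as a simple threshold check over a set of metric datapoints. This illustrates the concept only; CloudWatch’s actual alarm evaluation (periods, datapoints-to-alarm, missing-data handling) is considerably richer.

```python
# Sketch of threshold-based alarm evaluation (illustrative only). The alarm
# fires when every datapoint in the evaluation period breaches the threshold,
# such as low CPU on a development instance someone forgot to stop.

def evaluate_alarm(datapoints, threshold, comparison="below"):
    """Return 'ALARM' if all datapoints breach the threshold, else 'OK'."""
    if comparison == "below":
        breached = all(value < threshold for value in datapoints)
    else:
        breached = all(value > threshold for value in datapoints)
    return "ALARM" if breached else "OK"

# CPU utilization (%) sampled over the evaluation period.
idle_instance = [2.1, 1.7, 3.0, 2.4]
busy_instance = [2.1, 45.0, 3.0, 2.4]

print(evaluate_alarm(idle_instance, threshold=5))  # ALARM -> stop the instance
print(evaluate_alarm(busy_instance, threshold=5))  # OK -> leave it running
```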
CloudWatch dashboard
The CloudWatch dashboard feature enables you to access all
the metrics for your resources from a single location. For example, you can use a
CloudWatch dashboard to monitor the CPU utilization of an Amazon EC2 instance,
the total number of requests made to an Amazon S3 bucket, and more. You can even
customize separate dashboards for different business purposes, applications, or
resources.
30 AWS CloudTrail:
AWS CloudTrail records API calls for your account. The recorded
information includes the identity of the API caller, the time of the API call, the source
IP address of the API caller, and more. You can think of CloudTrail as a “trail” of
breadcrumbs (or a log of actions) that someone has left behind them.
Recall that you can use API calls to provision, manage, and configure your AWS
resources. With CloudTrail, you can view a complete history of user activity and API
calls for your applications and resources.
Events are typically updated in CloudTrail within 15 minutes after an API call. You can
filter events by specifying the time and date that an API call occurred, the user who
requested the action, the type of resource that was involved in the API call, and more.
In the CloudTrail Event History section, the account owner applies a filter to display
only the events for the “CreateUser” API action in IAM. The owner locates the event for the API
call that created an IAM user for Mary. This event record provides complete details
about what occurred:
On January 1, 2020 at 9:00 AM, IAM user John created a new IAM user (Mary) through
the AWS Management Console.
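A rough sketch of this kind of event filtering, using simplified stand-in records (the real CloudTrail event schema has many more fields than shown here):

```python
# Illustrative sketch of filtering CloudTrail event records the way the
# console's Event history filter does. Record fields are simplified.

def filter_events(events, event_name):
    return [e for e in events if e.get("eventName") == event_name]

events = [
    {"eventName": "CreateUser", "userIdentity": "John",
     "requestParameters": {"userName": "Mary"},
     "eventTime": "2020-01-01T09:00:00Z"},
    {"eventName": "RunInstances", "userIdentity": "John",
     "eventTime": "2020-01-01T10:30:00Z"},
]

create_user_events = filter_events(events, "CreateUser")
```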
CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This
optional feature allows CloudTrail to automatically detect unusual API activities in your
AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2
instances than usual have recently launched in your account. You can then review the
full event details to determine which actions you need to take next.
31 AWS Trusted Advisor:
AWS Trusted Advisor is a web service that inspects your AWS
environment and provides real-time recommendations in accordance with AWS best
practices.
Trusted Advisor compares its findings to AWS best practices in five categories: cost
optimization, performance, security, fault tolerance, and service limits. For the checks
in each category, Trusted Advisor offers a list of recommended actions and additional
resources to learn more about AWS best practices.
The guidance provided by AWS Trusted Advisor can benefit your company at all
stages of deployment. For example, you can use AWS Trusted Advisor to assist you
while you are creating new workflows and developing new applications. You can also
use it while you are making ongoing improvements to existing applications and
resources.
When you access the Trusted Advisor dashboard on the AWS Management Console,
you can review completed checks for cost optimization, performance, security, fault
tolerance, and service limits.
The green check indicates the number of items for which it detected no problems.
For each service, you pay for exactly the amount of resources that you actually use,
without requiring long-term contracts or complex licensing.
Some services offer reservation options that provide a significant discount compared
to On-Demand Instance pricing.
For example, suppose that your company is using Amazon EC2 instances for a
workload that needs to run continuously. You might choose to run this workload on
Amazon EC2 Instance Savings Plans, because the plan allows you to save up to 72%
over the equivalent On-Demand Instance capacity.
Some services offer tiered pricing, so the per-unit cost is incrementally lower with
increased usage.
For example, the more Amazon S3 storage space you use, the less you pay for it per
GB.
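Tiered pricing can be sketched as a small calculation; the tier boundary and per-GB rates below are made-up illustration values, not actual Amazon S3 prices.

```python
# Sketch of tiered (volume) pricing: the per-GB rate drops as monthly
# usage grows. Tier sizes and rates are illustrative, not real prices.

def tiered_cost(gb_used, tiers):
    """tiers: list of (tier_size_gb, price_per_gb); the last tier may be (None, rate)."""
    cost, remaining = 0.0, gb_used
    for size, price in tiers:
        if remaining <= 0:
            break
        portion = remaining if size is None else min(remaining, size)
        cost += portion * price
        remaining -= portion
    return round(cost, 2)

# First 100 GB at $0.03/GB, everything above at $0.02/GB (illustrative).
example_tiers = [(100, 0.03), (None, 0.02)]
```

Storing 150 GB under these made-up rates costs $3.00 for the first 100 GB plus $1.00 for the remaining 50 GB, so the average per-GB price falls as usage grows.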
32.1 AWS Pricing Calculator
The AWS Pricing Calculator lets you explore AWS services and
create an estimate for the cost of your use cases on AWS. You can organize your
AWS estimates by groups that you define. A group can reflect how your company is
organized, such as providing estimates by cost center.
When you have created an estimate, you can save it and generate a link to share it
with others.
Suppose that your company is interested in using Amazon EC2. However, you are not
yet sure which AWS Region or instance type would be the most cost-efficient for your
use case. In the AWS Pricing Calculator, you can enter details, such as the kind of
operating system you need, memory requirements, and input/output (I/O)
requirements. By using the AWS Pricing Calculator, you can review an estimated
comparison of different EC2 instance types across AWS Regions.
32.2 AWS Lambda:
For AWS Lambda, you are charged based on the number of requests for your
functions and the time that it takes for them to run.
AWS Lambda allows 1 million free requests and up to 3.2 million seconds of
compute time per month.
You can save on AWS Lambda costs by signing up for a Compute Savings Plan. A
Compute Savings Plan offers lower compute costs in exchange for committing to a
consistent amount of usage over a 1-year or 3-year term. This is an example
of paying less when you reserve.
If you have used AWS Lambda in multiple AWS Regions, you can view the itemized
charges by Region on your bill.
In this example, all the AWS Lambda usage occurred in the Northern Virginia
Region. The bill lists separate charges for the number of requests for functions and
their duration.
Both the number of requests and the total duration of requests in this example are
under the thresholds in the AWS Free Tier, so the account owner would not have to
pay for any AWS Lambda usage in this month.
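A rough sketch of the billing model described above. The free-tier figures (1 million requests; 400,000 GB-seconds, which is the 3.2 million seconds quoted above at a 128 MB function size) come from the text; the per-unit prices are assumed illustration values, not an official price list.

```python
# Sketch of AWS Lambda's pay-per-use model: charges accrue per request
# and per GB-second of compute, after the monthly free tier.

FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed illustration rate
PRICE_PER_GB_SECOND = 0.0000166667  # assumed illustration rate

def lambda_monthly_cost(requests, gb_seconds):
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_compute = max(0, gb_seconds - FREE_GB_SECONDS)
    request_cost = billable_requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = billable_compute * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)
```

Usage that stays under both free-tier thresholds, as in the example bill above, produces a charge of zero.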
32.3 Amazon EC2:
With Amazon EC2, you pay for only the compute time that you use while your
instances are running.
For some workloads, you can significantly reduce Amazon EC2 costs by using Spot
Instances. For example, suppose that you are running a batch processing job that is
able to withstand interruptions. Using a Spot Instance would provide you with up to
90% cost savings while still meeting the availability requirements of your workload.
You can find additional cost savings for Amazon EC2 by considering Savings Plans
and Reserved Instances.
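A back-of-the-envelope comparison of the discount figures quoted above (up to 72% with Savings Plans, up to 90% with Spot Instances). The On-Demand hourly rate is an assumed example value, not a real instance price.

```python
# Rough comparison of EC2 purchase options using the maximum discount
# percentages from the text. The hourly rate is an assumed example.

ON_DEMAND_HOURLY = 0.10  # assumed example rate, USD per hour

def monthly_cost(hourly_rate, discount, hours=730):
    """Approximate monthly cost at a given fractional discount."""
    return round(hourly_rate * (1 - discount) * hours, 2)

on_demand = monthly_cost(ON_DEMAND_HOURLY, 0.0)    # baseline
savings_plan = monthly_cost(ON_DEMAND_HOURLY, 0.72)  # up to 72% off
spot = monthly_cost(ON_DEMAND_HOURLY, 0.90)          # up to 90% off
```

At these assumed numbers, a month that would cost $73.00 On-Demand drops to $20.44 under the maximum Savings Plan discount and $7.30 on Spot, which is why interruption-tolerant batch jobs are a natural Spot workload.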
The service charges in this example include details for the following items:
In this example, all the usage amounts are under the thresholds in the AWS Free
Tier, so the account owner would not have to pay for any Amazon EC2 usage in this
month.
32.4 Amazon S3:
Storage - You pay for only the storage that you use. You are charged the rate
to store objects in your Amazon S3 buckets based on your objects’ sizes,
storage classes, and how long you have stored each object during the month.
Requests and data retrievals - You pay for requests made to your Amazon
S3 objects and buckets. For example, suppose that you are storing photo files
in Amazon S3 buckets and hosting them on a website. Every time a visitor
requests the website that includes these photo files, this counts towards
requests you must pay for.
Data transfer - There is no cost to transfer data between different Amazon S3
buckets or from Amazon S3 to other services within the same AWS Region.
However, you pay for data that you transfer into and out of Amazon S3, with a
few exceptions. There is no cost for data transferred into Amazon S3 from the
internet or out to Amazon CloudFront. There is also no cost for data transferred
out to an Amazon EC2 instance in the same AWS Region as the Amazon S3
bucket.
Management and replication - You pay for the storage management
features that you have enabled on your account’s Amazon S3 buckets. These
features include Amazon S3 inventory, analytics, and object tagging
The AWS account in this example has used Amazon S3 in two Regions: Northern
Virginia and Ohio. For each Region, itemized charges are based on the following
factors:
All the usage for Amazon S3 in this example is under the AWS Free Tier limits, so the
account owner would not have to pay for any Amazon S3 usage in this month.
33 Billing Dashboard:
Use the AWS Billing & Cost Management dashboard to pay
your AWS bill, monitor your usage, and analyze and control your costs.
Compare your current month-to-date balance with the previous month, and get
a forecast of the next month based on current usage.
View month-to-date spend by service.
View Free Tier usage by service.
Access Cost Explorer and create budgets.
Purchase and manage Savings Plans.
Publish AWS Cost and Usage Reports.
34 Consolidated billing:
The consolidated billing feature of AWS Organizations enables you to receive a single
bill for all AWS accounts in your organization. By consolidating, you can easily track
the combined costs of all the linked accounts in your organization. The default
maximum number of accounts allowed for an organization is 4, but you can contact
AWS Support to increase your quota, if needed.
On your monthly bill, you can review itemized charges incurred by each account. This
enables you to have greater transparency into your organization’s accounts while still
maintaining the convenience of receiving a single monthly bill.
Another benefit of consolidated billing is the ability to share bulk discount pricing,
Savings Plans, and Reserved Instances across the accounts in your organization. For
instance, one account might not have enough monthly usage to qualify for discount
pricing. However, when multiple accounts are combined, their aggregated usage may
result in a benefit that applies across all accounts in the organization.
Step 1
Suppose that you are the business leader who oversees your company’s AWS billing.
Your company has three AWS accounts used for separate departments. In this
example, Account 1 owes $19.64, Account 2 owes $19.96, and Account 3 owes
$20.06. Instead of paying each location’s monthly bill separately, you decide to create
an organization and add the three accounts.
Step 2
Continuing the example, each month AWS charges your primary payer account for all
the linked accounts in a consolidated bill. Through the primary account, you can also
get a detailed cost report for each linked account.
The monthly consolidated bill also includes the account usage costs incurred by the
primary account. In this case, the primary account incurred $14.14. This cost is not a
premium charge for having a primary account.
The consolidated bill shows the costs associated with any actions of the primary
account (such as storing files in Amazon S3 or running Amazon EC2 instances). The
total charged to the paying account, including the primary account and accounts one
through three, is $73.80.
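The arithmetic behind the consolidated bill in this example, using the figures stated above:

```python
# Consolidated billing: the paying account is charged for its own usage
# plus all linked accounts, on a single monthly bill.

linked_account_charges = {
    "Account 1": 19.64,
    "Account 2": 19.96,
    "Account 3": 20.06,
}
primary_account_charge = 14.14  # the primary account's own usage

total_bill = round(sum(linked_account_charges.values()) + primary_account_charge, 2)
```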
Step 3
Consolidated billing also enables you to share volume pricing discounts across
accounts.
Some AWS services, such as Amazon S3, provide volume pricing discounts that give
you lower prices the more that you use the service. In Amazon S3, after customers
have transferred 10 TB of data in a month, they pay a lower per-GB transfer price for
the next 40 TB of data transferred.
In this example, there are three separate AWS accounts that have transferred different
amounts of data in Amazon S3 during the current month:
Because no single account has passed the 10 TB threshold, none of them would qualify
for the lower per-GB transfer price on its own. With consolidated billing, however,
usage is aggregated across the organization, so the combined total can cross the 10 TB
threshold and earn the lower per-GB price for all accounts in the organization.
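To make the tiering concrete, the sketch below uses hypothetical per-account usage amounts and made-up per-GB rates (the text gives only the 10 TB threshold), and shows how aggregating usage under one bill can cross the discount tier:

```python
# Sketch of consolidated billing aggregating usage toward a volume
# discount tier: after 10 TB in a month, the next data transferred is
# charged at a lower per-GB rate. Rates and usage are hypothetical.

TIER_THRESHOLD_TB = 10
HIGH_RATE = 0.09  # assumed USD per GB, first 10 TB
LOW_RATE = 0.085  # assumed USD per GB, beyond 10 TB

def transfer_cost(total_tb):
    gb = total_tb * 1024
    tier1_gb = min(gb, TIER_THRESHOLD_TB * 1024)
    tier2_gb = max(0, gb - tier1_gb)
    return round(tier1_gb * HIGH_RATE + tier2_gb * LOW_RATE, 2)

per_account_tb = [2, 5, 7]  # hypothetical usage for three accounts

# Billed separately, every account stays in the higher-priced tier.
separate_total = round(sum(transfer_cost(tb) for tb in per_account_tb), 2)

# Billed together, the combined 14 TB crosses the 10 TB threshold,
# so 4 TB is charged at the lower rate.
consolidated_total = transfer_cost(sum(per_account_tb))
```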
35 AWS Budgets:
In AWS Budgets, you can create budgets to plan your service
usage, service costs, and instance reservations.
The information in AWS Budgets updates three times a day. This helps you to
accurately determine how close your usage is to your budgeted amounts or to the
AWS Free Tier limits.
In AWS Budgets, you can also set custom alerts when your usage exceeds (or is
forecasted to exceed) the budgeted amount.
Suppose that you have set a budget for Amazon EC2. You want to ensure that your
company’s usage of Amazon EC2 does not exceed $200 for the month.
In AWS Budgets, you could set a custom budget to notify you when your usage has
reached half of this amount ($100). This setting would allow you to receive an alert
and decide how you would like to proceed with your continued use of Amazon EC2.
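The alert condition in this example reduces to a simple threshold check against the budgeted amount:

```python
# Budget-alert check from the example: notify when month-to-date
# Amazon EC2 spend reaches half of a $200 monthly budget.

BUDGET = 200.00
ALERT_FRACTION = 0.5  # alert at 50% of the budget

def should_alert(month_to_date_spend):
    return month_to_date_spend >= BUDGET * ALERT_FRACTION
```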
36 AWS Cost Explorer:
AWS Cost Explorer is a tool that lets you visualize, understand,
and manage your AWS costs and usage over time.
AWS Cost Explorer includes a default report of the costs and usage for your top five
cost-accruing AWS services. You can apply custom filters and groups to analyze your
data. For example, you can view resource usage at the hourly level.
Example: AWS Cost Explorer
This example of the AWS Cost Explorer dashboard displays monthly costs for Amazon
EC2 instances over a 6-month period. The bar for each month separates the costs for
different Amazon EC2 instance types, such as t2.micro or m3.large.
By analyzing your AWS costs over time, you can make informed decisions about future
costs and how to plan your budgets.
AWS offers five different Support plans to help you troubleshoot
issues, lower costs, and use AWS services efficiently.
You can choose from the following Support plans to meet your company’s needs:
Basic
Developer
Business
Enterprise On-Ramp
Enterprise
The information in this course highlights only a selection of details for each
Support plan. A complete overview of what is included in each Support plan,
including pricing for each plan, is available on the AWS Support site.
In general, for pricing, the Developer plan has the lowest cost, the Business
and Enterprise On-Ramp plans are in the middle, and the Enterprise plan has
the highest cost.
37.3 Developer Support
Customers in the Developer Support plan have access to features such as:
Best practice guidance
Client-side diagnostic tools
Building-block architecture support, which consists of guidance for how to use
AWS offerings, features, and services together.
For example, suppose that your company is exploring AWS services. You’ve heard
about a few different AWS services. However, you’re unsure of how to potentially use
them together to build applications that can address your company’s needs. In this
scenario, the building-block architecture support that is included with the Developer
Support plan could help you to identify opportunities for combining specific services
and features.
37.4 Business Support
Suppose that your company has the Business Support plan and wants to install a
common third-party operating system onto your Amazon EC2 instances. You could
contact AWS Support for assistance with installing, configuring, and troubleshooting
the operating system. For advanced topics such as optimizing performance, using
custom scripts, or resolving security issues, you may need to contact the third-party
software provider directly.
37.5 Enterprise On-Ramp Support
In November 2021, AWS opened enrollment in the AWS Enterprise On-Ramp Support
plan. In addition to all the features included in the Basic, Developer, and Business
Support plans, customers with an Enterprise On-Ramp Support plan have access to:
A pool of Technical Account Managers to provide proactive guidance and
coordinate access to programs and AWS experts
A Cost Optimization workshop (one per year)
A Concierge support team for billing and account assistance
Tools to monitor costs and performance through Trusted Advisor and Health
API/Dashboard
The Enterprise On-Ramp Support plan also provides access to a specific set of
proactive support services, which are delivered by a pool of Technical Account Managers:
Consultative review and architecture guidance (one per year)
Infrastructure Event Management support (one per year)
Support automation workflows.
30 minutes or less response time for business-critical issues
The Enterprise On-Ramp and Enterprise Support plans include access to a Technical
Account Manager (TAM).
The TAM is your primary point of contact at AWS. If your company subscribes to
Enterprise Support or Enterprise On-Ramp, your TAM educates, empowers, and
evolves your cloud journey across the full range of AWS services. TAMs provide
expert engineering guidance, help you design solutions that efficiently integrate AWS
services, assist with cost-effective and resilient architectures, and provide direct
access to AWS programs and a broad community of experts.
For example, suppose that you are interested in developing an application that uses
several AWS services together. Your TAM could provide insights into how to best use
the services together. They achieve this, while aligning with the specific needs that
your company is hoping to address through the new application.
38 AWS Marketplace
AWS Marketplace is a digital catalog that includes thousands of
software listings from independent software vendors. You can use AWS Marketplace
to find, test, and buy software that runs on AWS.
For each listing in AWS Marketplace, you can access detailed information on pricing
options, available support, and reviews from other AWS customers.
You can also explore software solutions by industry and use case. For example,
suppose your company is in the healthcare industry. In AWS Marketplace, you can
review use cases that software helps you to address, such as implementing solutions
to protect patient records or using machine learning models to analyze a patient’s
medical history and predict possible health risks.
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes
guidance into six areas of focus, called Perspectives. Each
Perspective addresses distinct responsibilities. The planning process helps the right
people across the organization prepare for the changes ahead.
Use the Business Perspective to create a strong business case for cloud adoption and
prioritize cloud adoption initiatives. Ensure that your business strategies and goals
align with your IT strategies and goals.
Use the People Perspective to evaluate organizational structures and roles, new skill
and process requirements, and identify gaps. This helps prioritize training, staffing,
and organizational changes.
The Governance Perspective focuses on the skills and processes to align IT strategy
with business strategy. This ensures that you maximize the business value and
minimize risks.
Use the Governance Perspective to understand how to update the staff skills and
processes necessary to ensure business governance in the cloud. Manage and
measure cloud investments to evaluate business outcomes.
The Security Perspective ensures that the organization meets security objectives for
visibility, auditability, control, and agility.
Use the AWS CAF to structure the selection and implementation of security controls
that meet the organization’s needs.
The Operations Perspective helps you to enable, run, use, operate, and recover IT
workloads to the level agreed upon with your business stakeholders.
40 Migration Strategies:
6 strategies for migration
When migrating applications to the cloud, six of the most common migration
strategies that you can implement are:
Rehosting
Replatforming
Refactoring/re-architecting
Repurchasing
Retaining
Retiring
a) Rehosting
Rehosting, also known as “lift-and-shift,” involves moving applications into the cloud
with no changes to the applications themselves.
b) Replatforming.
Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud
optimizations to realize a tangible benefit. Optimization is achieved without changing
the core architecture of the application.
c) Refactoring/re-architecting:
Refactoring (also known as re-architecting) involves reimagining how an application is
architected and developed by using cloud-native features. It is typically driven by a
strong business need to add features, scale, or performance that would otherwise be
difficult to achieve in the application’s existing environment.
d) Repurchasing
Repurchasing involves replacing an existing application with a cloud-based version,
such as moving from a traditional license to a software-as-a-service (SaaS) model.
e) Retaining:
Retaining consists of keeping applications that are critical for the business in the
source environment. This might include applications that require major refactoring
before they can be migrated, or, work that can be postponed until a later time.
f) Retiring
Retiring is the process of removing applications that are no longer needed.
The AWS Snow Family is a collection of physical devices that
help to physically transport up to exabytes of data into and out of AWS.
AWS Snow Family is composed of AWS Snowcone, AWS Snowball, and AWS
Snowmobile.
These devices offer different capacity points, and most include built-in computing
capabilities. AWS owns and manages the Snow Family devices and integrates with
AWS security, monitoring, storage management, and computing capabilities.
Snowball Edge Storage Optimized devices are well suited for large-scale
data migrations and recurring transfer workflows, in addition to local computing
with higher capacity needs.
o Storage: 80 TB of hard disk drive (HDD) capacity for block volumes and
Amazon S3 compatible object storage, and 1 TB of SATA solid state
drive (SSD) for block volumes.
o Compute: 40 vCPUs, and 80 GiB of memory to support Amazon EC2
sbe1 instances (equivalent to C5).
Snowball Edge Compute Optimized provides powerful computing resources
for use cases such as machine learning, full motion video analysis, analytics,
and local computing stacks.
o Storage: 80-TB usable HDD capacity for Amazon S3 compatible object
storage or Amazon EBS compatible block volumes and 28 TB of usable
NVMe SSD capacity for Amazon EBS compatible block volumes.
o Compute: 104 vCPUs, 416 GiB of memory, and an optional NVIDIA
Tesla V100 GPU. Devices run Amazon EC2 sbe-c and sbe-g instances,
which are equivalent to C5, M5a, G3, and P3 instances.
41.3 AWS Snowmobile:
AWS Snowmobile is an exabyte-scale data transfer service used to move large
amounts of data to AWS.
You can transfer up to 100 petabytes of data per Snowmobile, a 45-foot long
ruggedized shipping container pulled by a semitrailer truck.
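A quick estimate shows why physical transfer wins at this scale: moving Snowmobile's stated 100 PB capacity over a network link would take years. The 1 Gbps link speed is an assumed example; real-world throughput would typically be lower still.

```python
# Rough estimate of how long 100 PB would take over a dedicated
# network link, versus shipping it on a Snowmobile.

def transfer_days(petabytes, gbps):
    bits = petabytes * 1e15 * 8          # decimal petabytes to bits
    seconds = bits / (gbps * 1e9)
    return seconds / 86_400              # seconds per day

days_over_network = transfer_days(100, 1)  # ~9,259 days, roughly 25 years
```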
When examining how to use AWS services, it is important to focus on the desired
outcomes. You are properly equipped to drive innovation in the cloud if you can
clearly articulate the following conditions: the current state, the desired state,
and the problems you are trying to solve.
Consider some of the paths you might explore in the future as you continue on your
cloud journey.
With AWS, serverless refers to applications that don’t require you to provision,
maintain, or administer servers. You don’t need to worry about fault tolerance or
availability. AWS handles these capabilities for you.
AWS Lambda is an example of a service that you can use to run serverless
applications. If you design your architecture to trigger Lambda functions to run your
code, you can bypass the need to manage a fleet of servers.
You can use ML to analyze data, solve complex problems, and predict outcomes
before they happen.
43 The AWS Well-Architected Framework:
The AWS Well-Architected Framework helps you understand how to design and operate
reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It is based
on six pillars:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
Sustainability
43.1 Operational excellence
Operational excellence is the ability to run and monitor systems to deliver business
value and to continually improve supporting processes and procedures.
Design principles for operational excellence in the cloud include performing operations
as code, annotating documentation, anticipating failure, and frequently making small,
reversible changes.
43.2 Security
The Security pillar is the ability to protect information, systems, and assets while
delivering business value through risk assessments and mitigation strategies.
When considering the security of your architecture, apply these best practices:
Automate security best practices when possible.
Apply security at all layers.
Protect data in transit and at rest.
43.3 Reliability
Reliability is the ability of a system to do the following:
Recover from infrastructure or service disruptions
Dynamically acquire computing resources to meet demand
Mitigate disruptions such as misconfigurations or transient network issues
43.6 Sustainability
In December 2021, AWS introduced the Sustainability pillar as part of the AWS
Well-Architected Framework.
Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid
environments.
In this section, you will learn about six advantages of cloud computing.
Upfront expenses include data centers, physical servers, and other resources that you would
need to invest in before using computing resources.
Instead of investing heavily in data centers and servers before you know how you’re going to
use them, you can pay only when you consume computing resources.
By using cloud computing, you can achieve a lower variable cost than you can get on your
own.
Because usage from hundreds of thousands of customers aggregates in the cloud, providers
such as AWS can achieve higher economies of scale. Economies of scale translate into
lower pay-as-you-go prices.
With cloud computing, you don’t have to predict how much infrastructure capacity you will
need before deploying an application.
For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances
when needed and pay only for the compute time you use. Instead of paying for resources
that are unused or dealing with limited capacity, you can access only the capacity that you
need, and scale in or out in response to demand.
The flexibility of cloud computing makes it easier for you to develop and deploy applications.
This flexibility also provides your development teams with more time to experiment and
innovate.
Computing in on-premises data centers often requires you to spend more money and
time managing infrastructure and servers.
A benefit of cloud computing is the ability to focus less on these tasks and more on your
applications and customers.
6. Go global in minutes.
The AWS Cloud global footprint enables you to quickly deploy applications to customers
around the world, while providing them with low latency.
Cloud Concepts
Security and Compliance
Technology
Billing and Pricing
The areas covered describe each domain in the Exam Guide for the AWS Certified
Cloud Practitioner certification. For a description of each domain, review the AWS
Certified Cloud Practitioner website. You are encouraged to read the information in
the Exam Guide as part of your preparation for the exam.
Each domain in the exam is weighted. The weight represents the percentage of questions in
the exam that correspond to that particular domain. These are approximations, so the
questions on your exam may not match these percentages exactly. The exam does not
indicate the domain associated with a question. In fact, some questions can potentially fall
under multiple domains.
Total 100%
You are encouraged to use these benchmarks to help you determine how to allocate your
time studying for the exam.
Recommended experience
Candidates for the AWS Certified Cloud Practitioner exam should have a basic
understanding of IT services and their uses in the AWS Cloud platform.
We recommend that you have at least six months of experience with the AWS Cloud in any
role, including project managers, IT managers, sales managers, decision makers, and
marketers. These roles are in addition to those working in finance, procurement, and legal
departments.
Exam details
Two types of questions are included on the exam: multiple choice and multiple response.
A multiple-choice question has one correct response and three incorrect responses,
or distractors.
A multiple-response question has two or more correct responses out of five or more
options.
On the exam, there is no penalty for guessing. Any questions that you do not answer are
scored as incorrect. If you are not sure of what the correct answer is, it’s always best for you
to guess instead of leaving any questions unanswered.
The exam enables you to flag any questions that you’d like to review before submitting the
exam. This helps you to use your time during the exam efficiently, knowing that you can
always go back and review any questions that you were initially unsure of.
As part of your preparation for the AWS Certified Cloud Practitioner exam, we recommend
that you review the following whitepapers and resources:
46.Exam Strategies
This section explores several strategies that can help you to increase the probability of
passing the exam.
To learn more about exam strategies, expand each of the following three categories.
First, make sure that you read each question in full. A question can contain key words
or phrases that, if left unread, could result in you selecting an incorrect response option.
Next, try to predict the correct answer before looking at any of the response options.
This strategy helps you to draw directly from your knowledge and skills without distraction
from incorrect response options. If your prediction turns out to be one of the response
options, this can be helpful for knowing whether you’re on the right track. However, make
sure that you review all the other response options for that question.
Finally, eliminate incorrect response options.
Before selecting your response to a question, eliminate any options that you believe to be
incorrect.
This strategy helps you to focus on the correct option (or options, for multiple-response
questions) and ensure that you have fulfilled all the requirements of the