Quiz 2
Chapter 6 Containers
Introduction
• The previous chapter examines one form of server virtualization: virtual machines (VMs).
• This chapter considers an alternative form of server virtualization that has become popular for use in data centers:
container technology. The chapter explains the motivation for containers and describes the underlying
technology.
• VM technology has two disadvantages: creating a VM entails the overhead of hosting an operating system, and
running multiple VMs on a server imposes computational overhead because each operating system schedules apps
and runs background processes.
Figure 6.1 Illustration of four VMs each running their own operating system.
• All the overhead is unnecessary if a user only runs a single application and does not need all the facilities in an
operating system.
• A conventional operating system includes a facility that satisfies most of the need: support for concurrent
processes. In a multi-tenant cloud service, however, conventional processes do not provide sufficient isolation
because processes share facilities, such as the network address and the file system, that allow an app to obtain
information about other apps.
• User IDs provide additional isolation by assigning an owner to each running process and each file, and forbidding
a process owned by one user from accessing or removing an object owned by another user. However, the user ID
mechanism in most operating systems does not scale to handle a cloud service with arbitrary numbers of
customers.
Linux Namespaces Used For Isolation
• Significant advances in isolation mechanisms have arisen in the open-source community. Under various names,
such as jails, the community has incorporated isolation mechanisms into the Linux operating system. Known as
namespaces, a current set of mechanisms can be used to isolate various aspects of an application. Seven major
namespaces are used with containers: Mount, Process ID, Network, IPC, UTS (host names), User ID, and Cgroup (control group).
• The mechanisms control various aspects of isolation. For example, the process ID namespace allows each
isolated app to use its own set of process IDs 0, 1, 2, and so on.
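The process ID namespace idea can be illustrated with a toy model (a hedged sketch that simulates the concept; it is not the Linux kernel mechanism, and the global PID counter is hypothetical):

```python
# Toy model: each isolated app numbers its processes independently,
# while the host keeps a distinct global ID for every process.
class PidNamespace:
    def __init__(self):
        self.next_pid = 1           # each namespace numbers processes on its own
        self.local_to_global = {}   # local PID -> global PID kept by the host

_next_global = [1000]               # hypothetical global PID counter

def spawn(ns):
    """Create a process in namespace ns; return its namespace-local PID."""
    local = ns.next_pid
    ns.next_pid += 1
    ns.local_to_global[local] = _next_global[0]
    _next_global[0] += 1
    return local

ns_a, ns_b = PidNamespace(), PidNamespace()
```

Two namespaces can hand out the same local PID without conflict because the host keeps the global IDs distinct.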
• Instead of all processes sharing the single Internet address that the host owns, the network namespace allows a
process to be assigned a unique Internet address.
• Isolation facilities in an operating system make it possible to run multiple apps on a computer without
interference and without allowing an app to learn about other isolated apps.
• Industry uses the term container to capture the idea of an environment that surrounds and protects an app while
the app runs.
• At any time, a server computer runs a set of containers, each of which contains an app.
Figure 6.3 The conceptual organization of software when a server computer runs a set of containers.
• A container consists of an isolated environment in which an application can run. A container runs on a
conventional operating system, and multiple containers can run on the same operating system concurrently. Each
container provides isolation, which means an application in one container cannot interfere with an application in
another container.
Docker Containers
• The Docker approach has become prominent for use with cloud systems for four main reasons:
o Tools that enable rapid and easy development of containers
o An extensive registry of software for use with containers
o Techniques that allow rapid instantiation of an isolated app
o Reproducible execution across hosts
• Development tools - The Docker technology provides an easy way to develop apps that can be deployed in an
isolated environment. Docker uses a high-level approach that allows a programmer to combine large pre-built
code modules into a single image that runs when a container is deployed.
• Extensive registry of software - The Docker project has produced Docker Hub, an extensive registry of open
source software that is ready to use and enables a programmer to share deployable apps, such as a web server,
without writing code. More important, a user (or an operator) can combine pieces from the registry in the same
way that a conventional program uses modules from a library.
• Rapid instantiation - Because a container does not require a full operating system, a container is much smaller
than a VM. Consequently, the time required to download a container can be an order of magnitude less than the
time required to download a VM. In addition, unlike a conventional app that may require an operating system to
load one or more libraries dynamically, the early binding approach means a Docker container does not need the
operating system to perform extra work when the container starts.
• Reproducible execution - Once a Docker container has been built, the container image becomes immutable —
the image remains unchanged, independent of the number of times the image runs in a container. Furthermore,
because all the necessary software components have been built in, a container image performs the same on any
system. As a result, container execution always gives reproducible results.
• The Docker model does not separate a container from its contents. A programmer creates all the software needed
for a container, including an app to run, and places the software in an image file. A separate image file must be
created for each app. When an image file runs, a container is created to run the app. We say the app has been
containerized.
Docker Terminology And Development Tools
• In addition to tools used to create and launch an app, the extended environment includes tools used to deploy and
control copies of running containers.
• A Docker image is a file that can be executed, and a container is the execution of an image. An image can have
two forms: a partial image that forms a building block, and a container image that includes all the software
needed for a particular app. Partial images form a useful part of the Docker programming environment analogous
to library functions in a conventional programming environment.
• Docker uses the term layer to refer to each piece of code a programmer includes in a container image. We say that
the programmer starts by specifying a base image and then adds one or more layers of software, which can include
code written from scratch or pre-built building blocks downloaded from a registry.
• Docker uses a special build facility analogous to Linux’s make. A programmer creates a text file with items that
specify how to construct a container image. By default Docker expects the specifications to be placed in a text file
named Dockerfile. A programmer runs docker build which reads Dockerfile, follows the specified steps,
constructs a container image, and issues a log of steps taken (along with error messages, if any).
• A Docker container image is not an executable file, and cannot be launched the same way one launches a
conventional app. Instead, one must use the command docker run to specify that an image is to be run as a Docker
container.
• The table in Figure 6.4 lists a few terms that Docker uses, along with items from conventional computing
systems that provide an analogy to help clarify the concept.
Figure 6.5 Illustration of the Docker daemon, dockerd, which manages both containers and images.
• The dockerd program runs in the background at all times and contains several key subsystems. In addition to a
subsystem that launches and terminates containers, dockerd contains a subsystem used to build images and a
subsystem used to download items from a registry.
• A user does not interact with dockerd directly. Instead, dockerd provides two interfaces through which a user
can make requests and obtain information:
o A RESTful interface - intended for applications
o A Command Line Interface (CLI) - intended for humans
• RESTful Interface - Customers do not typically create containers manually but instead use orchestration
software to deploy and manage sets of containers. When it needs to create, terminate, or otherwise manage
containers, orchestration software uses dockerd’s RESTful interface. As expected, the RESTful interface uses
the HTTP protocol, and allows orchestration software to send requests and receive responses.
• Command Line Interface (CLI) - To accommodate situations when a human needs to manage containers
manually, dockerd offers an interactive command-line interface that allows a user to enter one command at a
time and receive a response. To send a command to dockerd, a user can type the following in a terminal window:
Figure 6.6 Examples of Docker commands a user can enter through the CLI.
• To construct a container image, a programmer creates a Dockerfile, places the file in the current directory, and
runs:
docker build .
where the dot specifies that Docker should look in the current directory for a Dockerfile that specifies how to build
the image.
• To run an image as a container, a programmer invokes the run command and supplies the image name. For
example:
docker run f34cd9527ae6
• Docker stores images until the user decides to remove them. To review the list of all saved images, a user can
enter:
docker images
which will print a list of all the saved images along with their names and the time at which each was created.
• Apps running in the container make calls to the base operating system which then makes calls to the host
operating system. When building a container image, a programmer starts by specifying a base operating system.
From a programmer’s perspective, most of the distinctions between a base operating system and the host
operating system are unimportant. For example, when a user runs a container interactively in a terminal window
and an app running in the container writes to standard output, the output will appear in the user’s terminal window
and keystrokes entered in the window will be sent to the app running in the container.
• One aspect of containers differs from conventional apps: the file system. Apps running in a container can read
files and create new files. However, because a container image is immutable, changes made to local files when a
container runs will not be saved for subsequent invocations of the image unless a programmer connects the
container to permanent storage.
Items In A Dockerfile
• A Dockerfile specifies a sequence of steps to be taken to build a container image.
o FROM. The FROM keyword, which specifies a base operating system to use for the image, must appear
as the first keyword in a Dockerfile. For example, to use the alpine Linux base, one specifies:
FROM alpine
o RUN. The RUN keyword specifies that a program should be run to add a new layer to the image. The
name is unfortunately confusing because the “running” takes place when the image is built rather than
when the container runs. For example, to execute the apk program during the build and request that it
add Python and Pip to the image, a programmer uses:
RUN apk add python3 py3-pip
o ENTRYPOINT. A programmer must specify where execution begins when a container starts running.
To do so, a programmer uses the ENTRYPOINT keyword followed by the name of an executable program
in the image file system and arguments that will be passed to the program. ENTRYPOINT has two
forms: one in which arguments are passed as literal strings and one that uses a shell to invoke the initial
program. Using a shell allows variable substitution, but the non-shell option is preferred because the
program will run with process ID 1 and will receive Unix signals.
o CMD. The CMD keyword, which is related to ENTRYPOINT, has multiple forms and multiple purposes.
As a minor use, CMD can be used as an alternative to ENTRYPOINT to specify a command to be run
by a shell when the container starts; a programmer cannot specify an initial command with both CMD
and ENTRYPOINT. The main purpose of CMD focuses on providing a set of default arguments for
ENTRYPOINT that can be overridden when the container is started.
o COPY and ADD. Both the COPY and the older ADD keywords can be used to add directories and files
to the file system being constructed for the image (i.e., the file system that will be available when the
image runs in a container). COPY is straightforward because it copies files from the computer where the
image is being built into the image file system. As an example, a programmer might build a Python
script, test the script on the local computer, and then specify that the script should be copied into an
image.
o The ADD keyword allows a programmer to specify local files or give a URL that specifies a remote file.
Furthermore, ADD understands how to decompress and open archives. In many cases, however, opening
an archive results in extra, unneeded files in the image, making the image larger than necessary.
Therefore, the recommended approach has become: before building an image, download remote files to
the local computer, open archives, remove unneeded files, and use the COPY keyword to copy the files
into the image.
o EXPOSE. Although EXPOSE deals with Internet access and VOLUME deals with file systems, both
specify a way to connect a container to the outside world. EXPOSE specifies protocol port numbers that
the container is designed to use. For example, an image that contains a web server might specify that the
container is designed to use port 80:
EXPOSE 80
o VOLUME specifies a mount point in the image file system where an external file system can connect.
That is, VOLUME provides a way for a container to connect to persistent storage in a data center.
VOLUME does not specify a remote storage location, nor does it specify how the connection to a
remote location should be made. Instead, such details must be specified when a user starts a container.
An Example Dockerfile
• Figure 6.9 contains a trivial example of a Dockerfile that specifies alpine to be the base operating system and
specifies that file /bin/echo (i.e., the Linux echo command) should be run when the container starts. When it runs
as a container, the image prints hi there. Once it runs, the echo command exits, which causes the container to exit.
FROM alpine
ENTRYPOINT ["/bin/echo", "hi there"]
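A fuller Dockerfile can combine the keywords from the previous section. The sketch below is hypothetical (the package name and the script app.py are assumptions, not part of the example above):

```dockerfile
# Hypothetical sketch combining the keywords described earlier.
FROM alpine
RUN apk add python3            # add a layer during the build
COPY app.py /app/app.py        # copy a locally tested script into the image
EXPOSE 80                      # the container is designed to use port 80
ENTRYPOINT ["python3", "/app/app.py"]
CMD ["--default-arg"]          # default argument; overridden by: docker run image --other-arg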
Summary
• Container technology provides an alternate form of virtualization that avoids the overhead incurred when a guest
operating system runs in each VM. Although it operates much like a conventional process, a container uses
namespace mechanisms in the host operating system to remain isolated from other containers.
• To build a container for a Docker project, a programmer writes specifications in a text file (Dockerfile).
Specifications include a base operating system to use, other software that should be added to the container, an
initial program to execute when the container starts, and external network and file system connections that can
be used when an image runs in a container. To construct a container image, a programmer invokes Docker’s
build mechanism, which follows the specifications and produces an image. Once it has been created, an image
can be run in a container.
• Docker software includes two main pieces: a daemon named dockerd that runs in background, and a command
named docker that a user invokes to interact with dockerd through a command-line interface. The docker
interface allows a user to build an image or run and manage containers.
Part II Cloud Infrastructure And Virtualization
Chapter 7 Virtual Networks
Introduction
• The first chapter in Part II (Chapter 4) describes networks used in data centers, defines the east-west
traffic pattern, and explains the leaf-spine architecture that handles such traffic.
• This chapter explores the complex topic of network virtualization, focusing on the motivation
for virtualized networks, the general concepts, the use of SDN, and the ways cloud providers
can employ virtualization technologies.
• Overlay - a virtual network that does not actually exist but which in effect has been created by
configuring switches to restrict communication.
• Underlay network - the underlying physical network that provides connections among entities
in a virtual network.
Virtual Local Area Networks (VLANs)
• One of the earliest technologies is known by the name Virtual Local Area Network (VLAN).
• VLANs are part of the Ethernet standard, and the Ethernet switches used in data centers support
the use of VLANs.
• A traditional switch without VLANs forms a single network. A set of computers connect to
ports on the switch, and all the computers can communicate.
• When a network administrator uses a VLAN switch, the administrator assigns each port on the
switch a small integer known as the port’s VLAN tag.
• VLAN technology makes the switch hardware behave like a set of smaller switches. For
example, when a computer on VLAN X broadcasts a packet, only computers on VLAN X
receive a copy instead of all computers attached to the switch receiving a copy.
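The broadcast behavior described above can be sketched with a toy model (a simulation of the concept, not switch hardware; port names and tags are hypothetical):

```python
# Toy model of VLAN isolation: a frame broadcast from one port is
# delivered only to other ports that carry the same VLAN tag.
def broadcast(port_tags, sender):
    """port_tags maps port name -> VLAN tag; returns ports receiving the frame."""
    tag = port_tags[sender]
    return [p for p, t in port_tags.items() if t == tag and p != sender]

# Four ports: two on VLAN 10, two on VLAN 20.
ports = {"p1": 10, "p2": 10, "p3": 20, "p4": 20}
```

A broadcast from p1 reaches only p2; the VLAN-20 ports never see the frame.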
• Although sufficient for smaller data centers, three limits prevent the technology from handling
large cloud data centers:
o A VLAN tag consists of twelve bits, which limits the technology to 4096 VLANs.
o The VLAN scheme assigns VLAN tags to ports on switches, not to VMs or containers.
o VLAN switches run a Spanning Tree Protocol that does not scale to large data
centers.
• The addresses used on Internet packets are known as IP addresses (Internet Protocol
Addresses).
• Switches throughout each data center are configured to examine IP addresses and forward
packets to their correct destination, either within the data center or on the global Internet.
• A technology known as Virtual Extensible LAN (VXLAN) that many data centers use extends
VLAN technology to scale to the equivalent of more than sixteen million virtual networks —
enough for even the largest cloud providers.
• VXLAN uses multicast technology to make delivery of packets efficient and reduce the amount
of traffic on the network. Instead of sending packets from VM to VM, VXLAN sends a single
copy across the data center which is then distributed to only the recipients.
• Connecting each VM to a virtual switch allows the VM to send and receive
packets, just as if the VM were connected to a real network switch.
• A virtual switch can be configured to follow the same forwarding rules as other data
center switches.
• A container can clone the host’s IP address. The container uses the same IP address as the host OS, which
means the container’s network use may conflict with use by normal apps or by other containers
that have cloned the address, so this is not a good approach.
• A container can receive a new IP address. Each container is assigned a unique IP address,
and the host operating system can use a virtual switch to provide connectivity.
• A container can use address translation. NAT (Network Address Translation) allows multiple
computers to share a single IP address. When used with containers, NAT software runs in
the host operating system. When a container that uses NAT begins execution, the container
requests an IP address, and the NAT software responds to the request by assigning an IP address
from a set of reserved, private IP addresses that cannot be used on the Internet.
• To communicate over the Internet, the NAT software intercepts each outgoing packet and
replaces the private IP address with the IP address of the host OS. When receiving a packet,
NAT replaces the host’s address with the private address assigned to the container and
forwards the reply to the container.
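The NAT behavior described in the two bullets above can be sketched as a toy model. Real NAT also rewrites port numbers and keys its table on both addresses and ports; this simplified version maps only addresses, one flow per remote destination, and all addresses are hypothetical:

```python
# Toy model of container NAT: outgoing packets get the host's address;
# replies are forwarded back to the container's private address.
HOST_ADDR = "203.0.113.5"           # hypothetical public address of the host OS

class Nat:
    def __init__(self):
        self.next_host = 2          # next private address to hand out
        self.flows = {}             # remote destination -> private address

def assign_address(nat):
    """Assign a reserved, private address to a new container."""
    priv = "10.0.0.%d" % nat.next_host
    nat.next_host += 1
    return priv

def outgoing(nat, private_src, dst):
    """Intercept an outgoing packet: replace the private source address."""
    nat.flows[dst] = private_src    # remember which container talks to dst
    return (HOST_ADDR, dst)

def incoming(nat, src):
    """Intercept a reply: forward it to the container's private address."""
    return (src, nat.flows[src])
```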
o Arbitrary placement of addressable entities and migration - VMs and containers are
placed on arbitrary physical servers so the IP addresses that belong to a given tenant
may be spread across the data center and whatever system is used to connect all the
virtual entities owned by a tenant must accommodate VM migration.
• The Spanning Tree Protocol (STP) – solves the problem of a broadcast packet being forwarded
repeatedly among switches when the switches are connected in a cycle: the protocol restricts
forwarding to a cycle-free tree of switches.
• Standard routing protocols - Data center networks employ standard routing protocols that
propagate routing information automatically. The protocols, including OSPF (Open Shortest
Path First) and BGP (Border Gateway Protocol), learn about possible destinations inside the
data center and on the global Internet, compute a shortest path to each destination, and install
forwarding rules in switches to send packets along the shortest paths. More important, routing
protocols monitor networks, and when they detect that a switch or link has failed, they
automatically change forwarding to route packets around the failure.
• Software can accommodate a large data center, can handle multiple levels of virtualization, and
can update forwarding rules when VMs migrate.
• The SDN approach uses a dedicated computer to run SDN software. The computer runs a
conventional operating system, typically Linux, an SDN controller app, and a management app
(or apps).
• The controller forms a logical connection to a set of switches and communicates appropriate
forwarding rules to each switch.
• The logical connections between an SDN controller and each switch employ bidirectional
communication that allows data to flow in either direction.
• The controller can monitor the status of the switch itself and the links to other switches.
• A controller configures each switch to send the controller any packet that cannot be handled
by the switch’s forwarding rules, so that the controller can handle exceptions.
• OpenFlow controls packet forwarding by having a controller install a set of forwarding rules in
each switch. Each rule describes a particular type of packet and specifies the output port
over which the packet should be sent. When a packet arrives, the switch hardware checks each
of the rules, finds one that matches the packet, and sends the packet over the specified port. A
forwarding rule uses items in a packet header to decide where the packet should be sent.
• Alternatively, a forwarding rule can examine the packet’s destination, directing packets destined
for the global Internet out one port and packets destined for servers inside the data center out
another port.
• A forwarding rule can use the sender’s address to direct traffic from some senders along one
path and traffic from other senders along another path (e.g., to give some customers priority).
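The rule-matching loop described in the bullets above can be sketched as follows (a toy model, not the OpenFlow wire protocol; the rules, addresses, and port names are hypothetical):

```python
# Toy model of SDN forwarding rules: each rule names header-field values
# to match and an output port; a packet that matches no rule is sent to
# the controller as an exception.
def forward(rules, packet):
    for match, out_port in rules:
        if all(packet.get(field) == value for field, value in match.items()):
            return out_port
    return "controller"             # exception: no forwarding rule matched

# Hypothetical rules: one destination-based, one sender-based (priority).
rules = [
    ({"dst": "198.51.100.7"}, "uplink_port"),
    ({"src": "10.0.0.9"},     "priority_port"),
]
```

Rules are checked in order, so a destination on the global Internet is matched before the sender-priority rule.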
Programmable Networks
• The first generation of SDN software uses static forwarding rules, where each rule specifies a
set of header fields in a packet and an output port for packets that match the rule.
• The chief limitation of static forwarding rules lies in the overhead needed to handle exceptions:
a packet that does not match any rule must be sent to the controller.
• A second generation of SDN called a programmable network eliminates the need for a
controller to handle each exception by placing a computer program in each switch.
Summary
• Data center networks use virtualization technologies to give each tenant the illusion of having
an isolated network.
• A virtual network forms an overlay and the physical network on which it has been built forms
an underlay.
• For larger data centers with many tenants, an extended VLAN technology named VXLAN must
be used. VXLAN uses Internet packets, which means it builds an overlay on top of the Internet
protocol.
• Virtual switch technology and NAT are used to handle forwarding among addressable entities
(VMs and containers) that run on a given server.
• Data center networks also use automated configuration and management technologies,
including the Spanning Tree Protocol (STP) and standard routing protocols. Software Defined
Networking (SDN) allows a manager to run a computer program, called a controller, that
configures and monitors network switches. A second generation of SDN, called a programmable
network, uses the P4 language to program switches directly.
Part II Cloud Infrastructure And Virtualization
Chapter 8 Virtual Storage
Introduction
• A previous chapter (Chapter 4) describes the infrastructure used in data centers, including the idea of separating
disks from servers.
• This chapter completes the description of virtualization technologies by examining the virtual storage facilities
used in data centers. The chapter introduces the concepts of remote block storage, remote file storage, and the
facilities used to provide them. The chapter also discusses object storage (also known as key-value storage).
• The term persistent storage (or non-volatile storage) refers to a data storage mechanism that retains data after
the power has been removed. We can distinguish between two forms of persistent storage:
o Persistent storage devices - For decades the computer industry adopted electromechanical devices called
disks that use magnetized particles on a surface to store data. The industry now uses Solid State Disk
(SSD) technology with no moving parts.
o Persistent storage abstractions - An operating system provides two abstractions: named files and
hierarchical directories, which are also known as folders. The file abstraction offers two important
properties: files can vary in size and can store arbitrary data. The hierarchical directory abstraction allows
a user to name each file and to organize files into meaningful groups.
• To store data on a disk, the operating system must pass two items to the disk device: a block of data and a block
number.
• The disk hardware uses the block number to find the location on the disk, and replaces the contents with the new
values. We say the hardware writes the data to the specified block on disk. The hardware cannot write a partial
block.
• When an operating system instructs a disk device to retrieve a copy of a previously stored block, the operating
system must specify a block number. The disk hardware always fetches a copy of an entire block.
• Each operating system defines many details, such as the format of file names, file ownership, protections, and the
exact set of file operations. Nevertheless, most systems follow an approach known as open-close-read-write.
Operation Meaning
open Obtain access to use a file and move to the first byte
close Stop using a previously-opened file
read Fetch data from the current position of an open file
write Store data at the current position of an open file
seek Move to a new position in an open file
• The chief difference between the operations used with files and those used with disk hardware arises from the
transfer size. A file system provides a byte-oriented interface which allows an application to move to an
arbitrary byte position in a file and transfer an arbitrary number of bytes starting at that position.
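The contrast with the block interface can be seen in the byte-oriented open-close-read-write operations, sketched here with Python's low-level file calls (the file name is hypothetical):

```python
# The byte-oriented file interface: unlike disk hardware, a file lets an
# app seek to an arbitrary byte position and transfer any number of bytes.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.dat")
fd = os.open(path, os.O_CREAT | os.O_RDWR)   # open: obtain access to the file
os.write(fd, b"hello cloud storage")          # write: store data at the position
os.lseek(fd, 6, os.SEEK_SET)                  # seek: move to byte position 6
data = os.read(fd, 5)                         # read: fetch exactly 5 bytes
os.close(fd)                                  # close: stop using the file
# data is now b"cloud"
```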
• The connection occurs over a piece of hardware known as an I/O bus, and all interaction between the processor
and the disk occurs over the bus.
• Both electromechanical disks and solid-state disks connect over a bus, and either can provide local storage.
• A remote storage device is a persistent storage mechanism that is not attached directly to a computer, but is
instead reachable over a computer network.
• A disk device cannot connect directly to a network. Instead, the remote disk connects to a storage server that
connects to a network and runs software that handles network communication.
o One of the first remote file access systems is known as the Network File System (NFS). Originally
devised by Sun Microsystems, Inc., NFS became an Internet standard, and is still used, including in data
centers. NFS integrates the remote files into the user’s local file system. That is, some folders on the
user’s computer correspond to a remote file system. We say that NFS provides transparent remote file
access because a user cannot tell the location of a file, and an app can access a remote file as easily as a
local file.
o When the workstation sends a write request, the diskless workstation specifies both a block number and
the data to be written to the block; the reply from the server either announces success or contains an error
code to tell why the operation could not be performed.
o When it sends a read request, the diskless workstation only specifies a block number; the reply either
contains a copy of the requested block or an error code to tell why the operation could not be performed.
• Host-based - A storage server consists of any computer that has network access, directly attached
local storage, and file sharing software to handle requests that arrive over the network. A host-based server has the
advantage of not needing expensive hardware, but the disadvantages of low performance and limited scale.
• Server-based – Using dedicated, high-speed server hardware can increase performance and allow a storage server
to scale beyond the capabilities of a host-based system. Increased processing power (i.e., more cores) allows a
server to handle many requests per second and a large memory allows the server to cache data in memory. In
addition, a server with a high-speed bus can accommodate more I/O, including multiple local disks.
• Specialized hardware - Network Attached Storage (NAS) systems are specialized systems that provide scalable
remote file storage suitable for a data center. The hardware used in a NAS system is ruggedized to withstand
heavy use. The hardware and software in a NAS system are both optimized for high performance.
• One technique used to help satisfy the goal of durability involves the use of parallel, redundant disks known as
a RAID array (Redundant Array of Independent Disks). The technology places redundant copies of data on
multiple disks, allowing the system to continue to operate correctly if a disk fails, and allows a failed disk to be
replaced while the array continues to operate (known as hot swapping).
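The redundancy idea can be sketched as a toy mirrored array (a simulation of the concept only; real RAID levels also use striping and parity, which this sketch omits):

```python
# Toy model of RAID-style mirroring: each write places a redundant copy
# on every disk, so reads still succeed after one disk fails.
class MirroredArray:
    def __init__(self, num_disks=2):
        self.disks = [{} for _ in range(num_disks)]   # block_num -> data
        self.failed = [False] * num_disks

def raid_write(array, block_num, data):
    for disk in array.disks:
        disk[block_num] = data      # redundant copy on each disk

def raid_read(array, block_num):
    for i, disk in enumerate(array.disks):
        if not array.failed[i]:
            return disk[block_num]  # any surviving disk can satisfy the read
    raise OSError("all disks have failed")
```

A failed disk can then be replaced and re-copied from a surviving disk while the array continues to operate.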
Storage Area Network (SAN) Technology
• Storage Area Network (SAN) describes a remote storage system that employs a block-oriented interface to
provide a remote disk interface.
• Analogous to NAS, the term SAN implies that the system has optimized, ruggedized hardware and software that
provide the performance and durability needed in a data center.
• Early SAN technology includes two components: a server and a network optimized for storage traffic.
• Some of the motivation for a special network arose because early data centers used a hierarchical network
architecture optimized for web traffic (i.e., north-south traffic).
• In contrast to web traffic, communication between servers and remote storage facilities produces east-west traffic.
Thus, having a dedicated network used to access remote storage keeps storage traffic off the main data center
network.
• A second motivation for a separate storage network arose from capacity and latency requirements:
o In terms of capacity, a hierarchical data center network means increasing the capacity of links at higher
levels.
o In terms of latency, a hierarchical data center network means storage traffic must flow up the hierarchy
and back down.
• The server does not merely allocate one physical disk to each client. Instead, the server provides each client with a
virtual disk. When software creates an entity that needs disk storage (e.g., when a VM is created), the software
sends a message to the SAN server giving a unique ID for the new entity and specifying a disk size measured in
blocks.
• When it receives a request to create a disk for a new entity, the server uses the client’s unique ID to form a
virtual disk map. The map has an entry for each block of the virtual disk: 0, 1, and so on. For each entry, the
server finds an unused block on one of the local disks, allocates the block to the new entity, and fills in the entry
with information about how to access the block. Blocks allocated to an entity may be on one physical disk at the
server or spread across multiple physical disks.
• To an entity using the SAN server, the server provides the abstraction of a single physical disk. Therefore, the
entity will refer to blocks as 0, 1, 2, and so on.
• When a read or write request arrives at the SAN server, the server uses the client’s ID to find the map for the
client. The server can then use the map to transform the client’s block number into the number of a local physical
disk and a block on that disk. The server performs the requested operation on the specified block on the specified
local disk.
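The mapping just described can be sketched as a small in-memory model. The following Python fragment is illustrative only: the class and method names are invented, and a real SAN server would persist its maps and serve requests over a network.

```python
# Minimal in-memory sketch of SAN-style virtual disk mapping.
# Class and method names are illustrative, not from any real SAN product.

class SanServer:
    def __init__(self, disks):
        # disks: {disk_name: number_of_blocks}; build a free list of
        # (disk, block) pairs across all local physical disks
        self.free = [(d, b) for d, nblocks in disks.items()
                     for b in range(nblocks)]
        self.maps = {}      # client ID -> list of (disk, block), one per virtual block
        self.storage = {}   # (disk, block) -> data

    def create_disk(self, client_id, nblocks):
        # allocate one physical block for each virtual block 0, 1, 2, ...
        self.maps[client_id] = [self.free.pop() for _ in range(nblocks)]

    def write(self, client_id, vblock, data):
        # translate the client's block number to (physical disk, block)
        self.storage[self.maps[client_id][vblock]] = data

    def read(self, client_id, vblock):
        return self.storage.get(self.maps[client_id][vblock])

san = SanServer({"diskA": 4, "diskB": 4})
san.create_disk("vm-17", 3)           # virtual disk with blocks 0, 1, 2
san.write("vm-17", 0, b"boot sector")
print(san.read("vm-17", 0))           # the client never sees which physical disk holds the block
```

Note how the client refers only to virtual block numbers; the blocks allocated for "vm-17" may come from either physical disk.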
Hyper-Converged Infrastructure
• The move to leaf-spine networks and the availability of much less expensive high-capacity Ethernet hardware
have changed the economics of SANs. Instead of using a special-purpose network, SAN hardware has been redesigned
to allow it to communicate over a conventional data center network.
• A converged network is a network that carries multiple types of traffic. A data center network that carries all
types of traffic, including SAN storage traffic, forms what is called a Hyper-Converged Infrastructure (HCI).
• NAS Advantages
o A NAS system allows apps to share individual files, and because it presents a normal file system interface
with the usual open-read-write-close semantics, NAS works equally well with any app.
o NAS also supports the use of containers. When constructing an image for a container, a programmer
can specify external mount points in the container’s file system. When a user creates a container to run
the image, the user can choose to connect each external mount point to a directory in the host’s local file
system or to a remote directory on a NAS server.
o Because the actual file system resides on a NAS server, only file data passes across the network. The server
maintains all metadata (e.g., directories and inodes).
• NAS Disadvantages
o Because each operating system defines its own type of file system, a NAS is inherently linked to a
particular OS, so a user cannot choose to run an arbitrary operating system unless that system supports
the NAS file system.
• SAN Advantages
o The block access paradigm has the advantage of working with any operating system. A user can create a
VM running Windows and another VM running Linux.
o Using integers to identify blocks has the advantage of making the mapping from a client’s block number
to a block on a local disk extremely efficient.
• SAN Disadvantages
o Because each entity has its own virtual disk, entities cannot share individual files easily.
o Although a block-oriented interface works well for virtual machines, containers cannot use it directly.
o Because a file system resides on an entity using a SAN, the file system must transfer metadata over the
network to the SAN site.
Object Storage
• A remote file system mechanism inherently depends on a particular operating system, and can only be accessed
by apps that can run on the operating system.
• As cloud systems emerged, object store (also called key-value store) technologies were adopted to solve this issue.
• An object store system has three characteristics that make it especially popular in a cloud environment:
o Stores arbitrary objects - An object store can contain arbitrary objects, and objects can be
grouped together into buckets, analogous to file folders.
o Offers universal accessibility and scale – An object store uses a general-purpose interface that any
app can use, and scales to allow many simultaneous accesses, including apps in containers.
Typically, an object store offers a RESTful interface.
o Remains independent of the operating system - An object store does not depend on a specific operating
system or file system. Instead, an app running on an arbitrary operating system can access the object store
to save and retrieve objects, making it useful for both VMs and containers.
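The bucket/object model described above can be sketched with a small in-memory Python class. A real object store would expose these operations through a RESTful HTTP interface (e.g., PUT and GET requests); the class and method names here are invented for illustration.

```python
# Toy in-memory model of an object store's bucket/object interface.
# Names are illustrative; a real store maps these operations to HTTP verbs.

class ObjectStore:
    def __init__(self):
        self.buckets = {}                     # bucket name -> {key: bytes}

    def create_bucket(self, bucket):
        self.buckets.setdefault(bucket, {})

    def put(self, bucket, key, data):
        self.buckets[bucket][key] = data      # store an arbitrary byte object

    def get(self, bucket, key):
        return self.buckets[bucket][key]

store = ObjectStore()
store.create_bucket("photos")                 # buckets group objects, like folders
store.put("photos", "cat.jpg", b"...jpeg bytes...")
print(store.get("photos", "cat.jpg"))
```

Because the interface deals only in buckets, keys, and opaque bytes, it does not depend on any operating system's file system, which is why apps in VMs and containers can use it equally well.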
Summary
• A conventional computer system uses two forms of persistent storage: a hardware device (disk) that stores data
and a software system that uses the hardware to store files and directories. Disk hardware uses a block interface
that allows a computer to read or write one block at a time; a file system typically uses the open-close-read-write
paradigm to provide a byte-oriented interface. Each type requires a server that has a network connection and local
storage attached. To use a remote disk server, a computer sends requests to read or write a specified block. To use
a remote file server, a computer sends requests to read or write a set of bytes to a specified file.
• Network Attached Storage (NAS) is a high-performance, scalable, ruggedized remote file server.
• Storage Area Network (SAN) describes a high-performance, scalable, ruggedized remote disk server that
provides block storage.
• Block storage systems allocate a virtual disk to each client. The SAN server maintains a mapping between the
block numbers a client uses and blocks on physical disks; the client remains unaware that blocks from its disk
may not map onto a single physical disk at the server.
• The shift to leaf-spine networks that support high volumes of east-west traffic has enabled a Hyper-Converged
Infrastructure (HCI) in which communication between VMs and a storage server uses the data center network
along with other traffic.
• Object store technology provides an alternative to NAS and SAN systems. Like a remote file system, an object
store can store an arbitrary digital object. Unlike a remote file system, object store technology uses a RESTful
interface to provide universal accessibility independent of any file system or operating system.
Part III Automation and Orchestration
Chapter 9 Automation
Introduction
• Previous parts of the text discuss the motivation for cloud computing, the physical infrastructure, and key
virtualizations that cloud systems use.
• This part of the book introduces the topics of automation and orchestration.
• This chapter examines aspects of automation and explains why so many automation systems have arisen to meet
cloud systems' need for automated support mechanisms.
• Individual customers.
o Individual subscribers often use SaaS apps, such as a document editing system that allows a set of users
to share and edit documents cooperatively.
o To make such services convenient, providers typically offer access through a web browser or a dedicated
app that the user downloads.
o The provider may create a virtual machine or container, allocate storage, configure network access, and
launch web server software. When an individual uses a point-and-click interface to access a service,
the interface must be backed by underlying automated systems that handle many chores on behalf of the
individual.
o Two types of automated tools are available for large cloud customers. One type, available from the
provider or a third party, allows a customer to download and run the tools to deploy and manage apps.
The other type consists of tools offered by a provider that allow large customers to configure, deploy,
and manage apps and services without downloading software.
o The next chapters explain examples, including Kubernetes, which automates deployment and operation
of a service built with containers, and Hadoop, which automates MapReduce computations.
• Cloud providers
o Cloud providers have devised some of the most sophisticated and advanced automation tools and use
them to manage cloud data centers.
o In addition to building tools to configure, monitor, and manage the underlying cloud infrastructure, a
provider also creates tools that handle requests from cloud customers automatically.
The Need For Automation In A Data Center
• Operating a data center is much more complex than operating IT facilities for a single organization. Four aspects
of a cloud data center stand out.
• Extreme scale – A cloud provider must accommodate thousands of tenants, including individuals and major
enterprise customers.
• Diverse services - Because a cloud data center provider allows each customer to choose software and services
to run, the provider must be able to run software for thousands of services at the same time.
• Constant change - A data center provider must handle dynamically changing requirements, with the response
time dependent on the change; for example, a tenant request for a new VM or a new container must be handled
in the moment.
• Human error - Many problems in a data center can be traced to human error.
An Example Deployment
• Below lists example steps a provider takes when deploying a single VM.
o Optimizations
Optimizations for both initial deployments and subsequent changes; the initial placement of VMs and
containers to handle balancing the load across physical servers; minimization of the latency between
applications and storage, and minimization of network traffic; VM migration, including migration to
increase performance or to minimize power consumption.
o Safety and recovery
Scheduled backups of tenant’s data; server monitoring; monitoring of network equipment and fast re-
routing around failed switches or links; monitoring of storage equipment, including detecting failures of
redundant power supplies and redundant disks (RAID systems); automated restart of VMs and containers;
auditing and compliance enforcement.
Levels Of Automation
• The basic five-level model below can help explain the extent to which automation can be applied.
o Level 1 - Automated preparation and configuration - automation of offline tasks that are performed
before installation occurs.
o Level 2 - Automated monitoring and measurement - monitoring both physical and virtual resources of a
data center and making measurements available to human operators. Monitoring often focuses on
performance, and includes the load on each server, the traffic on each link in the network, and the
performance of storage systems.
o Level 3 - Automated analysis of trends and prediction - enhances level 2 monitoring by adding analytic
capabilities by collecting measurements over a long period and using software to analyze changes and
trends. From a data center owner’s point of view, the key advantage of level 3 analysis lies in the ability
to predict needs, allowing the data center owner to plan ahead rather than waiting for a crisis to occur.
o Level 4 - Automated identification of root causes - uses data gathered from monitoring along with
knowledge of both the data center infrastructure and layers of virtualization that have been added to
deduce the cause of problems.
o Level 5 - Automated remediation of problems - extends the ideas of a level 4 system by adding
automated problem solving.
o Data center operations encompass a broad set of facilities and services, both physical and virtual, and
must manage a broad variety of computation, networking, and storage mechanisms in the presence of
continuous change. An operator may have multiple goals, some of which may conflict with each other.
Examples include:
▪ Placing active VMs on a subset of servers, making it possible to reduce power costs by powering
down some servers.
o Such tools can be especially useful if they relieve humans from tasks that involve tedious details.
Human error is a source of many problems, and a tool is less prone to making errors.
o Step 1 - A tenant signs a contract for a new VM and a new work order (i.e., a “ticket”) is created
o Step 2 - A human from the group that handles VMs configures a new VM and passes the
ticket on
o Step 3 - A human from the group that handles networking configures the network and
passes the ticket on
o Step 4 - A human from the group that handles storage configures a SAN server and passes
the ticket on
o Step 5 - The tenant is notified that the VM is ready
• Because automation tools evolved to help human operators who each have limited expertise, each tool tends to
focus on one aspect of data center operations.
• A cloud provider must configure servers, networks, storage, databases, and software systems continuously.
• Each vendor creates its own specialized configuration language, and a data center contains hardware and
software from many vendors. An automation tool can therefore allow humans to specify items in a vendor-independent
language and pass the appropriate commands to the hardware or software system being configured. The operator
does not need to interact with the system being configured.
• Zero Touch Provisioning (ZTP) (or Infrastructure as Code (IaC)) refers to a process in which a data center
operator creates a specification and uses software to read the specification and configure underlying systems.
Two approaches have been used: push and pull.
o The push version follows the traditional pattern of installing a configuration: a tool reads a specification
and performs the commands needed to configure the underlying system.
o The pull version requires an entity to initiate configuration. For example, when a new software system
starts, the system can be designed to pull configuration information from a server.
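The push and pull approaches can be contrasted in a small Python sketch. The specification contents, the `apply_config` function, and the config-server logic are all hypothetical, standing in for whatever real configuration mechanism a data center uses.

```python
# Sketch contrasting push and pull configuration under ZTP/IaC.
# All names and values are hypothetical.

SPEC = {"hostname": "web-01", "ntp_server": "10.0.0.5"}

applied = {}                       # stands in for the device's live configuration

def apply_config(spec):
    # perform the commands needed to realize the specification
    applied.update(spec)

# Push: a central tool reads the specification and configures the system.
def push(spec):
    apply_config(spec)

# Pull: the system itself fetches its specification when it starts.
def config_server(request_host):
    return SPEC if request_host == "web-01" else {}

def pull(my_hostname):
    apply_config(config_server(my_hostname))

pull("web-01")
print(applied)                     # configured without an operator touching the device
```

Either way, the operator only edits the specification; software performs the configuration.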
o Imperative specifications follow the paradigm of early binding by specifying operations for the
underlying system and values to be used. The result can be misleading and ambiguous.
o A declarative specification focuses on the result rather than the steps used to achieve it. For example,
a declarative specification for IP address assignment might have the form:
• Intent-based
o Intent-based characterizes a specification that allows a human to state the desired outcome without
giving details about how to achieve the desired outcome or specific values to be used. For example,
an intent-based specification for IP address assignment might state:
o Intent-based specifications offer generality and flexibility. Because they do not dictate steps to be taken,
intent-based specifications allow many possible implementations to be used.
o Using a declarative, intent-based configuration specification can help eliminate ambiguity and
increase both generality and flexibility. An intent-based specification gives tools freedom to
choose an implementation that produces the desired outcome.
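The declarative, intent-based idea can be illustrated with a toy reconciler that computes whatever steps move the current state to the desired state. The state keys and action names below are made up for illustration; the point is that the specification lists outcomes, not commands.

```python
# Toy reconciler: the operator declares desired state; the tool derives steps.
# Keys and action names are illustrative.

def reconcile(current, desired):
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    for key, want in desired.items():
        if current.get(key) != want:
            actions.append(("set", key, want))
    for key in current:
        if key not in desired:
            actions.append(("clear", key))
    return actions

# Declarative/intent-based: state the outcome, not the steps or exact values.
desired = {"ip_address": "auto-assign", "dns": "enabled"}
current = {"ip_address": "192.0.2.9"}
print(reconcile(current, desired))
```

Because the tool, not the human, derives the steps, many implementations can satisfy the same specification, which is the flexibility the text describes.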
• By default, Kubernetes assigns a unique IP address to each group of containers (called a pod). Doing so means
network forwarding can be arranged to allow containers in a group to communicate and run microservices, even
if the containers run on multiple servers.
• Docker software takes the approach of using a virtual layer 3 bridge to allow containers to communicate.
• Other tools are available that can configure an overlay network for containers such that each host has a separate
IP subnet and each container on a host has a unique address.
• Other tools are available to provide secure network communication for containers and microservices.
Summary
• The diversity of services, large scale, and constant change in a cloud data center mandate the use of automated
tools for the configuration and operation of hardware and software.
• Because operating a data center is complex, providers have multiple, conflicting goals.
• The first step toward automated configuration allows a human to specify configuration in a vendor-independent
form, and then uses a tool to read the specification and configure the underlying hardware and software systems
accordingly.
• An imperative specification follows a paradigm of early binding by specifying the operations and values to be
used for configuration. A declarative specification can help avoid ambiguities.
• An intent-based specification, which allows a human to specify the desired outcome without specifying how to
achieve the outcome, increases flexibility and generality.
Part III Automation and Orchestration
Chapter 10 Orchestration: Automated Replication And Parallelism
Introduction
• The previous chapter points out that the vast number of automated tools and technologies have been developed
because of the scale and complexity of a cloud data center.
• This chapter considers the topic of orchestration in which an automated system configures, controls, and manages
all aspects of a service.
• A later chapter explains the microservices programming paradigm that orchestration enables.
• Automating a manual process can lead to a clumsy, inefficient solution that mimics human actions instead
of taking an entirely new approach.
• The advent of containers required designers to find a way to build more comprehensive systems that cross
functional boundaries and handle all aspects needed for a service. In particular:
o Rapid creation - The low overhead of containers means that it takes significantly less time to create a
container than to create a VM.
o Short lifetime - Unlike a VM that remains in place semi-permanently once created, a container is
ephemeral. A typical container is created when needed, performs one application task, and then exits.
o Replication – Replication is key for containers. When demand increases for a particular service,
multiple containers for the service can be created and run simultaneously. When demand declines,
unneeded container replicas can be terminated.
• Rapid creation and termination of containers requires an automated system. In addition, the network (and
possibly a storage system) must be configured quickly when a container is created or terminated.
• An automated container management system needs a larger scope that includes communication and storage.
Industry uses the term orchestration to refer to an automated system that coordinates the many subsystems
needed to configure, deploy, operate, and monitor software systems and services.
• Unlike an automation tool that focuses on one aspect of data center operations, an orchestration system
coordinates all the subsystems needed to operate a service, including deploying containers and configuring both
network communication and storage.
• In addition to automated configuration and deployment, a container orchestrator usually handles three key
aspects of system management:
o Dynamic scaling of services - When demand increases, the orchestrator automatically increases the
number of simultaneous copies. When demand decreases, the orchestrator reduces the number of
copies, either by allowing copies to exit without replacing them or by terminating idle copies.
o Coordination across multiple servers - An orchestrator deploys copies of a container on multiple
physical servers and then monitors performance and balances the load by starting new copies on lightly
loaded servers.
o Resilience and automatic recovery - If a container fails or the containers providing a service become
unreachable, the orchestrator can either restart the failed containers or switch over to a backup set,
thereby guaranteeing that the service remains available at all times.
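The dynamic-scaling behavior described above can be sketched as a toy scaling policy. The capacity figure, function name, and load values are invented for illustration; a real orchestrator uses measured metrics and more elaborate policies.

```python
# Toy dynamic-scaling policy: replicas track demand.
# The per-replica capacity of 100 requests/sec is an arbitrary example value.

def target_replicas(requests_per_sec, capacity_per_replica=100, minimum=1):
    # enough replicas to absorb the load, never fewer than `minimum`
    need = -(-requests_per_sec // capacity_per_replica)   # ceiling division
    return max(minimum, need)

for load in (50, 420, 90):            # demand rises, then falls
    print(load, "req/s ->", target_replicas(load), "replicas")
```

When demand rises to 420 requests/sec the policy calls for 5 replicas; when demand falls back, the extra copies can be terminated, matching the scale-up/scale-down behavior in the list above.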
o Service naming and discovery - Kubernetes allows a service to be accessed through a domain name or
an IP address. Typically, names and addresses are configured to be global, allowing applications running
outside the data center to access the service.
o Load balancing - Kubernetes does not limit a service to a single container. Instead, if traffic is high,
Kubernetes can automatically create multiple copies of the container for a service and use a load balancer
to divide incoming requests among the copies.
o Storage orchestration - Kubernetes allows an operator to mount remote storage automatically when a
container runs. The system can accommodate many types of storage, including local storage and
storage from a public cloud provider.
o Optimized container placement - When creating a service, an operator specifies a cluster of servers
(called nodes) that Kubernetes can use to run containers for the service. The operator specifies the
processor and memory (RAM) that each container will need. Kubernetes places containers on nodes in
the cluster in a way that optimizes the use of servers.
o Automated Recovery -After creating a container, Kubernetes does not make the container available to
clients until the container is running and ready to provide service. Kubernetes automatically replaces a
container that fails and terminates a container that stops responding to a user-defined health check.
o Automated rollouts and rollbacks - Kubernetes allows an operator to roll out a new version of a service
at a specified rate by creating a new version of a container image and telling Kubernetes to start replacing
running containers with the new version. More important, Kubernetes allows each new container to
inherit all the resources the old container owned.
Limits On Kubernetes Scope
• Kubernetes does not handle all aspects of deploying and managing containerized software. Specifically:
• Kubernetes uses the term cluster to describe the set of containers plus the associated support software used to
create, operate, and access the containers. The number of containers in a cluster depends on demand, and
Kubernetes can increase or decrease the number as needed.
• Software in a cluster can be divided into two conceptual categories: one category contains software invoked by
the owner of the cluster to create and operate containers. The other category contains software invoked by users
of the cluster to obtain access to a container.
Figure 10.2 The conceptual model for Kubernetes in which an owner runs software to create and
operate a set of containers, and the containers provide a computation service that users
access.
• Although the terms owner and user seem to refer to humans, the roles do not have to be filled by entering
commands manually. Each of the two categories of software provides multiple APIs. Although a human can
indeed interact with the software through a command-line interface, apps can also perform the functions (e.g.,
through a RESTful interface). In the case of a container, a user typically runs an app that uses the traditional
client-server paradigm to communicate with a container.
Kubernetes Pods
• Although many applications consist of a single container, some programming paradigms, such as the
microservices paradigm, encourage a software engineer to divide an application into small autonomous pieces
that each run as a container.
• The pieces are tightly-coupled which means they are designed to work together, usually by communicating over
the network.
• To run a copy of a multicontainer application, all containers for the application must be started, and the network
must be configured to provide communication.
• Kubernetes uses the term pod to refer to an application. Thus, a pod can consist of a single container or multiple
containers; single container pods are typical.
• A pod defines the smallest unit of work that Kubernetes can deploy. When it deploys an instance of a pod,
Kubernetes places all containers for the pod on the same node.
• In terms of networking, Kubernetes assigns an IP address to each running pod. If a pod has multiple containers,
all containers in the pod share the IP address.
• Communication among containers in the pod occurs over the localhost network interface, just as if the containers
in the pod were processes running on a single computer.
• The use of a single IP address for all containers means a programmer must be careful to avoid multiple containers
attempting to use the same transport layer port. For example, if one of the containers in a pod uses port 80
for a web server, none of the other containers will be allowed to allocate port 80.
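The port-conflict constraint can be demonstrated directly with two sockets on one host, which mirrors two containers sharing a pod's IP address: once one socket owns a TCP port, a second bind to the same port is refused.

```python
# Two sockets on one host model two containers sharing a pod's IP:
# they cannot both bind the same transport-layer port.

import socket

s1 = socket.socket()
s1.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = s1.getsockname()[1]

s2 = socket.socket()
refused = False
try:
    s2.bind(("127.0.0.1", port))   # second bind to the same port fails
except OSError:
    refused = True                 # "address already in use"
finally:
    s1.close()
    s2.close()

print("second bind refused:", refused)
```

A pod designer must therefore assign distinct ports to the containers in a pod, just as processes on one computer must.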
• When it deploys a pod on a node, Kubernetes stores information from the template with the running pod. Any
changes to a template apply to any new pods that are created from the template but have no effect on already
running pods.
• The metadata specifications in the example are kept with each running pod, including labels that contain
information useful for humans. Tools allow humans to search for running pods with a label.
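Label-based search can be sketched in a few lines of Python. The pod records and label keys below are hypothetical; real tools query the control plane, but the matching logic is essentially this.

```python
# Sketch of selecting running pods by label. Records are hypothetical.

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

def select(pods, **wanted):
    # a pod matches when every requested label has the requested value
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in wanted.items())]

print(select(pods, app="web"))        # both frontend pods
print(select(pods, tier="backend"))
```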
• A template can specify many more items such as a set of environment variables and initial values to be passed
to containers in the pod, specify external storage mount points for each container and an initial command to
run when the container executes.
Init Containers
• Kubernetes allows a designer to include one or more init containers.
• An init container is intended to perform initialization that might be needed before the main containers run so all
init containers in a pod must complete successfully before the main containers are started.
• An init container is useful for testing the environment to ensure that the facilities the main containers need
are available.
• A pod designer uses a set of init containers to check the environment before the main containers of the
pod execute. Doing so guarantees that either all the main containers in a pod start successfully or none
of them start.
• Kubernetes uses the term Control Plane (also known by the term master) to refer to the software components
that an owner uses to create and operate containers. When the control plane software runs on a node, the node
is known as a master node.
• A node that Kubernetes uses to run containers is known as a Kubernetes node. Some sources use the term
worker node, which helps clarify the meaning. We will use the terms master node and worker node to make
the purpose of each node explicit.
• A typical Kubernetes cluster runs a single copy of the control plane components, and all control-plane
components run on a single master node.
• It is possible to create a cluster by running multiple copies of the control plane software on multiple master
nodes, but this text assumes that all the control plane software components run on a single master node.
o Scheduler (kube-scheduler) - The Scheduler handles the assignment of pods to nodes in the cluster.
It watches for a pod that is ready to run but has not yet been assigned to a node. It then chooses a
node on which to run the pod and binds the pod to the node. The Scheduler remains executing after
the initial deployment of pods, which allows Kubernetes to increase the number of pods in the cluster
dynamically.
o Cluster State Store - The Cluster State Store holds information about the cluster, including the set of
nodes available to the cluster, the pods that are running, and the nodes on which the pods are currently
running. When something changes in the cluster, the Cluster Store must be updated. Kubernetes uses
the etcd key-value technology as the underlying storage technology. Etcd is a reliable distributed
system that keeps multiple copies of each key-value pair and uses a consensus algorithm to ensure that
a value can be retrieved even if one copy is damaged or lost.
o Cloud Controller Manager - The Cloud Controller Manager provides an interface to the underlying
cloud provider system. Such an interface allows Kubernetes to request changes and to probe the
underlying cloud system when errors occur. The interface also allows Kubernetes to check the status of
hardware.
• In addition to the five main software components listed above, Kubernetes offers a command-line app that
allows a human to connect to the cluster and enter management commands that operate the cluster. The
command-line app, which is named kubectl, connects to the API server.
Figure 10.5 Control plane components on a master node and the communication among them.
o Service Proxy (kube-proxy) - Responsible for configuring network forwarding on the node to provide
network connectivity for the pods running on the node. Specifically, the Service Proxy configures the
Linux iptables facility.
o Kubelet - Provides the interface between the control plane and the worker node. It contacts the API
server, watches the set of pods bound to the node, and handles the details of running the pods on the
node. Kubelet sets up the environment to ensure that each pod is isolated from other pods, and interfaces
with the Container Runtime system to run and monitor containers. It also monitors the pods running
on the node and reports their status back to the API server. Kubelet includes a copy of the cAdvisor
software that collects and summarizes statistics about pods and then exports the summary through a
Summary API, making it available to monitoring software (e.g., Metrics Server).
o Container Runtime - Kubernetes does not include a Container Runtime system. Instead, it uses a
conventional container technology and assumes that each node runs conventional Container Runtime
software. When Kubernetes needs to deploy containers, Kubelet interacts with the Container Runtime
system to perform the required task.
Figure 10.7 Software components on a worker node and the communication among them.
Kubernetes Features
• Kubernetes contains many additional features and facilities, some of which are quite sophisticated and complex.
The following highlights some of the most significant.
o Replicas - When deploying a cluster, an owner can specify and control the number of replicas for the
pod. In essence, an owner can explicitly control how an application scales out to handle higher load.
o Deployments - Kubernetes uses the term Deployment to refer to a specific technology that automates
scale out. Like other Kubernetes facilities, Deployments follow the intent-based approach: an owner
specifies the desired number of replicas, and the Deployment system maintains the required number of
replicas automatically.
o StatefulSet - The StatefulSets facility allows an owner to create and manage stateful applications. A
user can deploy a set of pods with guaranteed ordering. Each pod in the set is assigned a unique identity,
and the system guarantees that the identity will persist.
o DaemonSet - The DaemonSet facility allows an owner to run a copy of a pod on all nodes in a cluster
(or some of them). As the name implies, the facility is typically used to deploy a permanently-running
daemon process that other containers on the node can use.
o Garbage Collection - Kubernetes objects have dependencies. For example, a ReplicaSet owns a set of
pods. When the owner of an object terminates, the object should also be terminated (i.e., collected).
The Garbage Collection facility allows one to set dependencies and specify how and when terminated
objects should be collected.
o TTL Controller for Finished Resources - The TTL Controller allows an owner to specify a maximum
time after an object has finished (either terminates normally or fails) before the object must be
collected. The value is known as a Time-To-Live (TTL), which leads to the name of the facility.
o Job - The job facility creates a specified number of pods, and monitors their progress. When a specified
number of the pods complete, the Job facility terminates the others. If all pods exit before the required
number has been reached, the Job facility restarts the set. As an example, the facility can be used to
guarantee that one pod runs to completion.
o CronJob - The CronJob facility allows an owner to schedule a job to be run periodically (e.g., every
hour, every night, every weekend, or once a month). The idea and the name are derived from the Unix
cron program, and the CronJob facility uses the same specification format as cron.
o Services - The Services facility allows an owner to create a set of pods and specify an access policy
for the set. In essence, the Services facility hides the details of individual pods and passes each request
to one of the pods automatically. Decoupling a Service from the underlying pods allows pods to
exit and restart without affecting any apps that use the service. The facility works well for a
microservices architecture.
• Kubernetes offers many facilities, and often allows one to choose between a mechanism that offers
explicit control and a mechanism that handles the task automatically.
Summary
• An orchestration system handles all the data center subsystems needed to deploy a service, including the network,
storage facilities, and container execution.
• Orchestration typically handles dynamic scaling, coordination across multiple physical servers, and recovery.
• Kubernetes provides the primary example of an orchestration technology. Kubernetes handles service naming
and discovery, load balancing, storage, container placement, automated container restart, management of
configurations, and automated rollouts.
• Kubernetes uses a cluster model in which each cluster consists of a master node that runs control plane
software and worker nodes that run pods. Each pod represents an application, and may consist of one or more
containers that work together on one application.
• The owner of a service uses the Kubernetes control plane to deploy and control the service. External management
apps also communicate with the API server; the kubectl app provides a command-line interface.
• Each worker node runs a Service Proxy that configures network communication among pods on the node and a
Kubelet that runs and monitors pods.
• Kubernetes contains many facilities that handle the tasks of deploying replicas, managing stateful applications,
running a background daemon on each node, controlling the collection of terminated objects, guaranteeing a job
runs to completion, running jobs periodically, and offering a service that can scale out automatically.
Module 3
© 2022, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
Module 3: AWS Global Infrastructure Overview
Module overview
Topics
• AWS Global Infrastructure
• AWS service and service category overview
Activities
• AWS Management Console clickthrough
• Knowledge check
Demo
• AWS Global Infrastructure
The module includes an educator-led demonstration that focuses on the details of the AWS
Global Infrastructure. The module also includes a hands-on activity where you will explore the
AWS Management Console.
Finally, you will be asked to complete a knowledge check that will test your understanding of the
key concepts that are covered in this module.
Module objectives
After completing this module, you should be able to:
To learn more about the AWS Regions that are currently available, use one of the following links:
• https://aws.amazon.com/about-aws/global-infrastructure/#AWS_Global_Infrastructure_Map
• https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
These resources are updated frequently to show current and planned AWS infrastructure.
Educator-Led Demo: AWS Global Infrastructure Details
The educator might now choose to conduct a live demonstration of the AWS Global
Infrastructure map introduced on the previous slide. This resource provides an interactive way to
learn about the AWS Global Infrastructure. The remaining slides in this section cover many of the
same topics and go into greater detail on some topics.
AWS Regions
• An AWS Region is a geographical area.
The AWS Cloud infrastructure is built around Regions. As of this writing, AWS has 22 Regions worldwide. An AWS
Region is a physical geographical location with one or more Availability Zones. Availability Zones
in turn consist of one or more data centers.
To achieve fault tolerance and stability, Regions are isolated from one another. Resources in one
Region are not automatically replicated to other Regions. When you store data in a specific
Region, it is not replicated outside that Region.
It is your responsibility to replicate data across Regions, if your business needs require it.
AWS Regions that were introduced before March 20, 2019 are enabled by default. Regions that
were introduced after March 20, 2019—such as Asia Pacific (Hong Kong) and Middle East
(Bahrain)—are disabled by default. You must enable these Regions before you can use them. You
can use the AWS Management Console to enable or disable a Region.
Some Regions have restricted access. An AWS (China) account provides access to the
Beijing and Ningxia Regions only. To learn more about AWS in China, see:
https://www.amazonaws.cn/en/about-aws/china/. The isolated AWS GovCloud (US) Region is
designed to allow US government agencies and customers to move sensitive workloads into the
cloud by addressing their specific regulatory and compliance requirements.
[Image: snapshot from the infrastructure.aws website showing downtown London, including Tower Bridge and the Shard; the London Region has three Availability Zones.]
Selecting a Region
Determine the right Region for your services, applications, and data based on these factors:
• Data governance and legal requirements
• Proximity to customers (latency)
• Services available within the Region
• Costs (vary by Region)
There are a few factors that you should consider when you select the optimal Region or Regions
where you store data and use AWS services.
One essential consideration is data governance and legal requirements. Local laws might require
that certain information be kept within geographical boundaries. Such laws might restrict the
Regions where you can offer content or services. For example, consider the European Union (EU)
Data Protection Directive.
All else being equal, it is generally desirable to run your applications and store your data in a
Region that is as close as possible to the users and systems that will access them. This will help
you reduce latency. CloudPing is one website that you can use to test latency between your
location and all AWS Regions. To learn more about CloudPing, see: http://www.cloudping.info/
Keep in mind that not all services are available in all Regions. To learn more, see:
https://aws.amazon.com/about-aws/global-infrastructure/regional-product-
services/?p=tgi&loc=4.
Finally, there is some variation in the cost of running services, which can depend on which Region
you choose. For example, as of this writing, running an On-Demand t3.medium size Amazon
Elastic Compute Cloud (Amazon EC2) Linux instance in the US East (Ohio) Region costs $0.0416
per hour, but running the same instance in the Asia Pacific (Tokyo) Region costs $0.0544 per hour.
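To make the Region price difference concrete, here is a rough monthly comparison using the example hourly rates quoted above. The 30-day month is a simplifying assumption, and rates change over time, so treat this as a sketch rather than current pricing:

```python
# Back-of-the-envelope monthly cost for an always-on On-Demand t3.medium
# Linux instance, using the example rates from the text.
HOURS_PER_MONTH = 24 * 30  # assume a simple 30-day month

rates_per_hour = {
    "US East (Ohio)": 0.0416,        # USD per hour, from the text
    "Asia Pacific (Tokyo)": 0.0544,  # USD per hour, from the text
}

for region, rate in rates_per_hour.items():
    print(f"{region}: ${rate * HOURS_PER_MONTH:.2f} per month")

# Difference for one always-on instance:
difference = (0.0544 - 0.0416) * HOURS_PER_MONTH
print(f"Tokyo costs about ${difference:.2f} more per month for this instance")
```

For a single small instance the gap is only a few dollars a month, but the same percentage difference applies to large fleets, which is why cost is listed as a Region-selection factor.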
Availability Zones
• Each Region has multiple Availability Zones.
• Each Availability Zone is a fully isolated partition of the AWS infrastructure.
• Availability Zones consist of discrete data centers.
• They are designed for fault isolation.
• They are interconnected with other Availability Zones by using high-speed private networking.
[Diagram: AWS Cloud > Region eu-west-1 > Availability Zone eu-west-1a, containing three data centers.]
Each AWS Region has multiple, isolated locations that are known as Availability Zones.
Each Availability Zone provides the ability to operate applications and databases that are more
highly available, fault-tolerant, and scalable than would be possible with a single data center.
Each Availability Zone can include multiple data centers (typically three), and at full-scale, they
can include hundreds of thousands of servers. They are fully isolated partitions of the AWS Global
Infrastructure. Availability Zones have their own power infrastructure, and they are physically
separated by many kilometers from other Availability Zones—though all Availability Zones are
within 100 km of each other.
All Availability Zones are interconnected with high-bandwidth, low-latency networking over fully
redundant, dedicated fiber that provides high-throughput between Availability Zones. The
network accomplishes synchronous replication between Availability Zones.
Availability Zones help build highly available applications. When an application is partitioned
across Availability Zones, companies are better isolated and protected from issues such as
lightning, tornadoes, earthquakes, and more.
You are responsible for selecting the Availability Zones where your systems will reside. Systems
can span multiple Availability Zones. AWS recommends replicating across Availability Zones for
resiliency. You should design your systems to survive the temporary or prolonged failure of an Availability Zone.
Data centers are the foundation of the AWS infrastructure. Customers do not specify a data
center for the deployment of resources. Instead, an Availability Zone is the most granular level of
specification that a customer can make. However, a data center is the location where the actual
data resides. Amazon operates state-of-the-art, highly available data centers. Although rare,
failures can occur that affect the availability of instances in the same location. If you host all your
instances in a single location that is affected by such a failure, none of your instances will be
available.
AWS uses custom network equipment sourced from multiple original design manufacturers
(ODMs). ODMs design and manufacture products based on specifications from a second
company. The second company then rebrands the products for sale.
Points of Presence
• AWS provides a global network of Points of Presence locations.
• The network consists of edge locations and a much smaller number of Regional edge caches.
• Points of Presence are used with Amazon CloudFront, a global content delivery network (CDN) that delivers content to end users with reduced latency.
Amazon CloudFront is a content delivery network (CDN) used to distribute content to end users
to reduce latency. Amazon Route 53 is a Domain Name System (DNS) service. Requests going to
either one of these services will be routed to the nearest edge location automatically in order to
lower latency.
AWS Points of Presence are located in most of the major cities around the world. By
continuously measuring internet connectivity and performance, and by computing the best
way to route requests, the Points of Presence deliver a better near-real-time user experience.
They are used by many AWS services, including Amazon CloudFront, Amazon Route 53, AWS
Shield, and AWS WAF (a web application firewall).
Regional edge caches are used by default with Amazon CloudFront. Regional edge caches are
used when you have content that is not accessed frequently enough to remain in an edge
location. Regional edge caches absorb this content and provide an alternative to that content
having to be fetched from the origin server.
[Slide fragment: benefits of the AWS Global Infrastructure — accommodates growth; fault-tolerance; no human intervention. Diagram shows multiple redundant data centers.]
Now that you have a good understanding of the major components that comprise the AWS
Global Infrastructure, let's consider the benefits provided by this infrastructure.
AWS offers a broad set of global cloud-based products that can be used as building blocks for
common cloud architectures. Here is a look at how these cloud-based products are organized.
Foundation Services
• Compute (virtual, automatic scaling, and load balancing)
• Networking
• Storage (object, block, and archive)
As discussed previously, the AWS Global Infrastructure can be broken down into three elements:
Regions, Availability Zones, and Points of Presence, which include edge locations. This
infrastructure provides the platform for a broad set of services, such as networking, storage,
compute services, and databases—and these services are delivered as an on-demand utility that
is available in seconds, with pay-as-you-go pricing.
[Diagram: infrastructure (Regions, Availability Zones, edge locations) at the bottom; Foundational Services (compute, networking, storage) above it, highlighted; platform services (databases, analytics, app services, deployment and management, mobile services) next; applications (virtual desktops, collaboration and sharing) at the top.]
AWS offers a broad set of cloud-based services. There are 23 different product or service
categories, and each category consists of one or more services. This course will not attempt to
introduce you to each service. Rather, the focus of this course is on the services that are most
widely used and offer the best introduction to the AWS Cloud. This course also focuses on
services that are more likely to be covered in the AWS Certified Cloud Practitioner exam.
The categories that this course will discuss are highlighted on the slide: Compute; Cost
Management; Database; Management and Governance; Networking and Content Delivery;
Security, Identity, and Compliance; and Storage.
If you click Amazon EC2, it takes you to the Amazon EC2 page. Each product page provides a
detailed description of the product and lists some of its benefits.
Explore the different service groups to understand the categories and services within them. Now
that you know how to locate information about different services, this module will discuss the
highlighted service categories. The next seven slides list the individual services —within each of
the categories highlighted above—that this course will discuss.
Amazon Simple Storage Service Glacier
AWS storage services include the services listed here, and many others.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability,
data availability, security, and performance. Use it to store and protect any amount of data for
websites, mobile apps, backup and restore, archive, enterprise applications, Internet of Things
(IoT) devices, and big data analytics.
Amazon Elastic Block Store (Amazon EBS) is high-performance block storage that is designed for
use with Amazon EC2 for both throughput and transaction intensive workloads. It is used for a
broad range of workloads, such as relational and non-relational databases, enterprise
applications, containerized applications, big data analytics engines, file systems, and media
workflows.
Amazon Elastic File System (Amazon EFS) provides a scalable, fully managed elastic Network File
System (NFS) file system for use with AWS Cloud services and on-premises resources. It is built to
scale on demand to petabytes, growing and shrinking automatically as you add and remove files.
It reduces the need to provision and manage capacity to accommodate growth.
Amazon Simple Storage Service Glacier is a secure, durable, and extremely low-cost Amazon S3
cloud storage class for data archiving and long-term backup. It is designed to deliver 11 9s of
durability, and to provide comprehensive security and compliance capabilities to meet stringent
regulatory requirements.
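As a rough illustration of what 11 9s (99.999999999%) of annual durability implies, the sketch below computes the expected loss rate for a hypothetical archive. The ten-million-object count is an assumption for the example, not a figure from the text:

```python
# Expected object loss under 99.999999999% ("11 9s") annual durability.
# Back-of-the-envelope estimate, not an AWS guarantee.
annual_loss_probability = 1 - 0.99999999999   # ~1e-11 per object, per year

objects_stored = 10_000_000                    # hypothetical archive size
expected_losses_per_year = objects_stored * annual_loss_probability
years_per_single_loss = 1 / expected_losses_per_year

print(f"Expected losses per year: {expected_losses_per_year:.1e}")
print(f"Roughly one object lost every {years_per_single_loss:,.0f} years")
```

With ten million objects, the expected loss works out to about one object every ten thousand years, which conveys the scale this durability class is designed for.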
AWS compute services include the services listed here, and many others.
Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity as virtual
machines in the cloud.
Amazon EC2 Auto Scaling enables you to automatically add or remove EC2 instances according to
conditions that you define.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container
orchestration service that supports Docker containers.
Amazon Elastic Container Registry (Amazon ECR) is a fully-managed Docker container registry
that makes it easy for developers to store, manage, and deploy Docker container images.
AWS Elastic Beanstalk is a service for deploying and scaling web applications and services on
familiar servers such as Apache and Microsoft Internet Information Services (IIS).
AWS Lambda enables you to run code without provisioning or managing servers. You pay only for
the compute time that you consume. There is no charge when your code is not running.
Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale
containerized applications that use Kubernetes on AWS.
AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without
having to manage servers or clusters.
Amazon DynamoDB
[Photo from https://aws.amazon.com/compliance/data-center/data-centers/]
AWS database services include the services listed here, and many others.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a
relational database in the cloud. It provides resizable capacity while automating time-consuming
administration tasks such as hardware provisioning, database setup, patching, and backups.
Amazon Redshift enables you to run analytic queries against petabytes of data that is stored
locally in Amazon Redshift, and directly against exabytes of data that are stored in Amazon S3. It
delivers fast performance at any scale.
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond
performance at any scale, with built-in security, backup and restore, and in-memory caching.
AWS networking and content delivery services
AWS networking and content delivery services include the services listed here, and many others.
Amazon Virtual Private Cloud (Amazon VPC) enables you to provision logically isolated sections
of the AWS Cloud.
Elastic Load Balancing automatically distributes incoming application traffic across multiple
targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data,
videos, applications, and application programming interfaces (APIs) to customers globally, with
low latency and high transfer speeds.
AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private
Clouds (VPCs) and their on-premises networks to a single gateway.
Amazon Route 53 is a scalable cloud Domain Name System (DNS) web service designed to give
you a reliable way to route end users to internet applications. It translates names (like
www.example.com) into the numeric IP addresses (like 192.0.2.1) that computers use to connect
to each other.
AWS Direct Connect provides a way to establish a dedicated private network connection from
your data center or office to AWS, which can reduce network costs and increase bandwidth
throughput.
AWS VPN provides a secure private tunnel from your network or device to the AWS global
network.
AWS security, identity, and compliance services include the services listed here, and many others.
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and
resources securely. By using IAM, you can create and manage AWS users and groups. You can use
IAM permissions to allow and deny user and group access to AWS resources.
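To illustrate the allow/deny model, here is a minimal sketch of an IAM identity-based policy. The bucket name and statement IDs are hypothetical placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    },
    {
      "Sid": "DenyObjectDeletion",
      "Effect": "Deny",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

An explicit Deny always overrides an Allow, so a user or group with this policy attached can read the bucket but can never delete objects from it.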
AWS Organizations allows you to restrict what services and actions are allowed in your accounts.
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile
apps.
AWS Artifact provides on-demand access to AWS security and compliance reports and select
online agreements.
AWS Key Management Service (AWS KMS) enables you to create and manage keys. You can use
AWS KMS to control the use of encryption across a wide range of AWS services and in your
applications.
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards
applications running on AWS.
AWS cost management services include the services listed here, and others.
The AWS Cost and Usage Report contains the most comprehensive set of AWS cost and usage
data available, including additional metadata about AWS services, pricing, and reservations.
AWS Budgets enables you to set custom budgets that alert you when your costs or usage exceed
(or are forecasted to exceed) your budgeted amount.
AWS Cost Explorer has an easy-to-use interface that enables you to visualize, understand, and
manage your AWS costs and usage over time.
AWS management and governance services include the services listed here, and others.
The AWS Management Console provides a web-based user interface for accessing your AWS
account.
AWS Config provides a service that helps you track resource inventory and changes.
AWS Auto Scaling provides features that allow you to scale multiple resources to meet demand.
AWS Command Line Interface provides a unified tool to manage AWS services.
AWS Well-Architected Tool provides help in reviewing and improving your workloads.
Activity: AWS Management Console clickthrough
In this educator-led activity, you will be asked to log in to the AWS Management Console. The
activity instructions are on the next slide. You will be challenged to answer five questions. The
educator will lead the class in a discussion of each question, and reveal the correct answers.
1. Launch the Sandbox hands-on environment and connect to the AWS Management Console.
2. Explore the AWS Management Console.
A. Click the Services menu.
B. Notice how services are grouped into service categories. For example, the EC2 service appears in the Compute
service category.
Question #1: Under which service category does the IAM service appear?
Question #2: Under which service category does the Amazon VPC service appear?
C. Click the Amazon VPC service. Notice that the dropdown menu in the top-right corner displays an AWS Region (for
example, it might display N. Virginia).
D. Click the Region menu and switch to a different Region. For example, choose EU (London).
E. Click Subnets (on the left side of the screen). The Region has three subnets in it. Click the box next to one of the
subnets. Notice that the bottom half of the screen now displays details about this subnet.
Question #3: Does the subnet you selected exist at the level of the Region or at the level of the Availability
Zone?
F. Click Your VPCs. An existing VPC is already selected.
Question #4: Does the VPC exist at the level of the Region or the level of the Availability Zone?
Question #5: Which services are global instead of Regional? Check Amazon EC2, IAM, Lambda, and Route 53.
The purpose of this activity is to expose you to the AWS Management Console. You will gain
experience navigating between AWS service consoles (such as the Amazon VPC console). You will
also practice navigating to services in different service categories. Finally, the console will help
you distinguish whether a given service or service resource is global or Regional.
Follow the instructions on the slide. After most or all students have completed the steps
documented above, the educator will review the questions and answers with the whole class.
• Question #1: Under which service category does the IAM service appear?
• Answer: Security, Identity, & Compliance
• Question #2: Under which service category does the Amazon VPC service appear?
• Answer: Networking & Content Delivery
• Question #3: Does the subnet that you selected exist at the level of the Region or the level of the
Availability Zone?
• Answer: Subnets exist at the level of the Availability Zone.
• Question #4: Does the VPC exist at the level of the Region or the level of the Availability Zone?
• Answer: VPCs exist at the Region level.
• Question #5: Which of the following services are global instead of Regional? Check Amazon EC2, IAM,
Lambda, and Route 53.
• Answer: IAM and Route 53 are global. Amazon EC2 and Lambda are Regional.
This slide provides an answer key to the questions that were asked in the activity on the previous
slide. The educator will use this slide to lead a discussion and debrief the hands-on activity.
Module wrap-up
Module 3: AWS Global Infrastructure Overview
It’s now time to review the module and wrap up with a knowledge check and discussion of a
practice certification exam question.
Module summary
In summary, in this module you learned how to:
[Table fragment: answer choices for the practice question — Choice A: AWS Regions; the remaining choices were not captured.]
Look at the answer choices and rule them out based on the keywords.
The following are the keywords to recognize: component of AWS global infrastructure,
CloudFront, low-latency.
Incorrect answers: A, C, and D.
Additional resources
The following resources provide more detail on the topics discussed in this module:
• AWS Global Infrastructure: https://aws.amazon.com/about-aws/global-infrastructure/
• AWS Regional Services List: https://aws.amazon.com/about-aws/global-
infrastructure/regional-product-services/
• AWS Cloud Products: https://aws.amazon.com/products/
Thank you
Module 4
Contents
Module 4: AWS Cloud Security
Security is the highest priority at Amazon Web Services (AWS). AWS delivers a scalable cloud
computing environment that is designed for high availability and dependability, while providing
the tools that enable you to run a wide range of applications. Helping to protect the
confidentiality, integrity, and availability of your systems and data is critical to AWS, and so is
maintaining customer trust and confidence. This module provides an introduction to the AWS
approach to security, which includes both the controls in the AWS environment and some of the
AWS products and features customers can use to meet their security objectives.
Module overview
Topics
• AWS shared responsibility model
• AWS Identity and Access Management (IAM)
• Securing a new AWS account
• Securing accounts
• Securing data on AWS
• Working to ensure compliance
Activities
• AWS shared responsibility model activity
Demo
• Recorded demonstration of IAM
Lab
• Introduction to AWS IAM
Knowledge check
Section one includes an educator-led activity on the AWS shared responsibility model.
Section two includes a recorded IAM demo; at the end of the same section, there is a
hands-on lab that provides you with practice configuring IAM by using the AWS Management
Console.
Finally, you will be asked to complete a knowledge check to test your understanding of the key
concepts that are covered in this module.
Module objectives
After completing this module, you should be able to:
• Recognize the shared responsibility model
• Identify the responsibility of the customer and AWS
• Recognize IAM users, groups, and roles
• Describe different types of security credentials in IAM
• Identify the steps to securing a new AWS account
• Explore IAM users and groups
• Recognize how to secure AWS data
• Recognize AWS compliance programs
Security and compliance are a shared responsibility between AWS and the customer. This shared
responsibility model is designed to help relieve the customer’s operational burden. At the same
time, to provide the flexibility and customer control that enables the deployment of customer
solutions on AWS, the customer remains responsible for some aspects of the overall security. The
differentiation of who is responsible for what is commonly referred to as security “of” the cloud
versus security “in” the cloud.
AWS operates, manages, and controls the components from the software virtualization layer
down to the physical security of the facilities where AWS services operate. AWS is responsible for
protecting the infrastructure that runs all the services that are offered in the AWS Cloud. This
infrastructure is composed of the hardware, software, networking, and facilities that run the AWS
Cloud services.
The customer is responsible for the encryption of data at rest and data in transit. The customer
should also ensure that the network is configured for security and that security credentials and
logins are managed safely. Additionally, the customer is responsible for the configuration of
security groups and the configuration of the operating systems that run on compute instances that
they launch (including updates and security patches).
AWS responsibilities:
• Physical security of data centers
• Controlled, need-based access
• Virtualization infrastructure
• Instance isolation
AWS is responsible for security of the cloud. But what does that mean?
Under the AWS shared responsibility model, AWS operates, manages, and controls the
components from the bare metal host operating system and hypervisor virtualization layer down
to the physical security of the facilities where the services operate. It means that AWS is
responsible for protecting the global infrastructure that runs all the services that are offered in
the AWS Cloud. The global infrastructure includes AWS Regions, Availability Zones, and edge
locations.
AWS is responsible for the physical infrastructure that hosts your resources, including:
• Physical security of data centers with controlled, need-based access; located in nondescript
facilities, with 24/7 security guards; two-factor authentication; access logging and review;
video surveillance; and disk degaussing and destruction.
• Hardware infrastructure, such as servers, storage devices, and other appliances that AWS
relies on.
• Software infrastructure, which hosts operating systems, service applications, and
virtualization software.
• Network infrastructure, such as routers, switches, load balancers, firewalls, and cabling. AWS
also continuously monitors the network at external boundaries, secures access points, and
provides redundant infrastructure with intrusion detection.
Protecting this infrastructure is the top priority for AWS. While you cannot visit AWS data centers
or offices to see this protection firsthand, Amazon provides several reports from third-party
auditors who have verified our compliance with a variety of computer security standards and
regulations.
Customer responsibilities:
• Customer data
• Amazon Elastic Compute Cloud (Amazon EC2) instance operating system, including patching and maintenance
While the cloud infrastructure is secured and maintained by AWS, customers are responsible for
security of everything they put in the cloud.
The customer is responsible for what is implemented by using AWS services and for the
applications that are connected to AWS. The security steps that you must take depend on the
services that you use and the complexity of your system.
Customer responsibilities include selecting and securing any instance operating systems, securing
the applications that are launched on AWS resources, security group configurations, firewall
configurations, network configurations, and secure account management.
When customers use AWS services, they maintain complete control over their content.
Customers are responsible for managing critical content security requirements, including:
• What content they choose to store on AWS
• Which AWS services are used with the content
• In what country that content is stored
• The format and structure of that content and whether it is masked, anonymized, or encrypted
• Who has access to that content and how those access rights are granted, managed, and
revoked
Customers retain control of what security they choose to implement to protect their own data,
environment, applications, IAM configurations, and operating systems.
Infrastructure as a service (IaaS) refers to services that provide basic building blocks for cloud IT,
typically including access to configure networking, computers (virtual or on dedicated hardware),
and data storage space. Cloud services that can be characterized as IaaS provide the customer
with the highest level of flexibility and management control over IT resources. IaaS services are
most similar to existing on-premises computing resources that many IT departments are familiar
with today.
AWS services—such as Amazon EC2—can be categorized as IaaS and thus require the customer
to perform all necessary security configuration and management tasks. Customers who deploy
EC2 instances are responsible for managing the guest operating system (including updates and
security patches), any application software that is installed on the instances, and the
configuration of the security groups that were provided by AWS.
Platform as a service (PaaS) refers to services that remove the need for the customer to manage
the underlying infrastructure (hardware, operating systems, etc.). PaaS services enable the
customer to focus entirely on deploying and managing applications. Customers don’t need to
worry about resource procurement, capacity planning, software maintenance, or patching.
AWS services such as AWS Lambda and Amazon RDS can be categorized as PaaS because AWS
operates the infrastructure layer, the operating system, and platforms. Customers only need to
access the endpoints to store and retrieve data. With PaaS services, customers are responsible
for managing their data, classifying their assets, and applying the appropriate permissions.
However, these services act more like managed services, with AWS handling a larger portion of
the security requirements. For these services, AWS handles basic security tasks—such as
operating system and database patching, firewall configuration, and disaster recovery.
Software as a service (SaaS) refers to services that provide centrally hosted software that is
typically accessible via a web browser, mobile app, or application programming interface (API).
The licensing model for SaaS offerings is typically subscription or pay as you go. With SaaS
offerings, customers do not need to manage the infrastructure that supports the service. Some
AWS services—such as AWS Trusted Advisor, AWS Shield, and Amazon Chime—could be
categorized as SaaS offerings, given their characteristics.
AWS Trusted Advisor is an online tool that analyzes your AWS environment and provides real-time guidance and recommendations to help you provision your resources by following AWS best
practices. The Trusted Advisor service is offered as part of your AWS Support plan. Some of the
Trusted Advisor features are free to all accounts, but Business Support and Enterprise Support
customers have access to the full set of Trusted Advisor checks and recommendations.
AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards
applications running on AWS. It provides always-on detection and automatic inline mitigations
that minimize application downtime and latency, so there is no need to engage AWS Support to
benefit from DDoS protection. AWS Shield Advanced is available to all customers. However, to
contact the DDoS Response Team, customers must have either Enterprise Support or Business
Support from AWS Support.
Amazon Chime is a communications service that enables you to meet, chat, and place business
calls inside and outside your organization, all using a single application. It is a pay-as-you-go
communications service with no upfront fees, commitments, or long-term contracts.
Activity: AWS shared responsibility model
In this educator-led activity, you will be presented with two scenarios. For each scenario, you will
be asked several questions about whose responsibility it is (AWS or the customer) to ensure
security of the item in question. The educator will lead the class in a discussion of each question
and reveal the correct answers one at a time.
Activity: Scenario 1 of 2
Consider this deployment. Who is responsible – AWS or the customer?
Diagram: an AWS Cloud with a virtual private cloud (VPC) that contains an Amazon EC2 instance running Oracle, plus an Amazon Simple Storage Service (Amazon S3) bucket, all running on the AWS Global Infrastructure.
1. Upgrades and patches to the operating system on the EC2 instance?
2. Physical security of the data center?
3. Virtualization infrastructure?
4. EC2 security group settings?
5. Configuration of applications that run on the EC2 instance?
6. Oracle upgrades or patches if the Oracle instance runs as an Amazon RDS instance?
7. Oracle upgrades or patches if Oracle runs on an EC2 instance?
8. S3 bucket access configuration?
Consider the case where a customer uses the AWS services and resources that are shown here.
Who is responsible for maintaining security? AWS or the customer?
The customer uses Amazon Simple Storage Service (Amazon S3) to store data. The customer
configured a virtual private cloud (VPC) with Amazon Virtual Private Cloud (Amazon VPC). The
EC2 instance and the Oracle database instance that they created both run in the VPC.
In this example, the customer must manage the guest operating system (OS) that runs on the EC2
instance. Over time, the guest OS will need to be upgraded and have security patches applied.
Additionally, any application software or utilities that the customer installed on the Amazon EC2
instance must also be maintained. The customer is responsible for configuring the AWS firewall
(or security group) that is applied to the Amazon EC2 instance. The customer is also responsible
for the VPC configurations that specify the network conditions in which the Amazon EC2 instance
runs. These tasks are the same security tasks that IT staff would perform, no matter where their
servers are located.
The Oracle instance in this example provides an interesting case study in terms of AWS or
customer responsibility. If the database runs on an EC2 instance, then it is the customer's
responsibility to apply Oracle software upgrades and patches. However, if the database runs as
an Amazon RDS instance, then it is the responsibility of AWS to apply Oracle software upgrades
and patches. Because Amazon RDS is a managed database offering, time-consuming database
administration tasks—which include provisioning, backups, software patching, monitoring, and
hardware scaling—are handled by AWS. To learn more, see Best Practices for Running Oracle Database on AWS at https://docs.aws.amazon.com/whitepapers/latest/oracle-database-aws-best-practices/oracle-database-aws-best-practices.html.
Activity: Scenario 2 of 2
Consider this deployment. Who is responsible – AWS or the customer?
Diagram: a user with Secure Shell (SSH) keys accesses AWS through the AWS Management Console and the AWS Command Line Interface (AWS CLI). An internet gateway connects to a VPC that contains a subnet with a web server on Amazon EC2, plus an S3 bucket with objects.
1. Ensuring that the AWS Management Console is not hacked?
2. Configuring the subnet?
3. Configuring the VPC?
4. Protecting against network outages in AWS Regions?
5. Securing the SSH keys?
6. Ensuring network isolation between AWS customers' data?
7. Ensuring low-latency network connection between the web server and the S3 bucket?
8. Enforcing multi-factor authentication for all user logins?
Now, consider this additional case where a customer uses the AWS services and resources that
are shown here. Who is responsible for maintaining security? AWS or the customer?
A customer uses Amazon S3 to store data. The customer configured a virtual private cloud (VPC)
with Amazon VPC, and is running a web server on an EC2 instance in the VPC. The customer
configured an internet gateway as part of the VPC so that the web server can be reached by using
the AWS Management Console or the AWS Command Line Interface (AWS CLI). When the
customer uses the AWS CLI, the connection requires the use of Secure Shell (SSH) keys.
Use IAM to manage:
• Which resources can be accessed and what the user can do to the resource
• How resources can be accessed
AWS Identity and Access Management (IAM) allows you to control access to compute, storage,
database, and application services in the AWS Cloud. IAM can be used to handle authentication,
and to specify and enforce authorization policies so that you can specify which users can access
which services.
IAM is a tool that centrally manages access to launching, configuring, managing, and terminating
resources in your AWS account. It provides granular control over access to resources, including
the ability to specify exactly which API calls the user is authorized to make to each service.
Whether you use the AWS Management Console, the AWS CLI, or the AWS software
development kits (SDKs), every call to an AWS service is an API call.
With IAM, you can manage which resources can be accessed by whom, and how these resources
can be accessed. You can grant different permissions to different people for different resources.
For example, you might allow some users full access to Amazon EC2, Amazon S3, Amazon
DynamoDB, Amazon Redshift, and other AWS services. However, for other users, you might allow
read-only access to only a few S3 buckets. Similarly, you might grant permission to other users to
administer only specific EC2 instances. You could also allow a few users to access only the
account billing information, but nothing else.
To understand how to use IAM to secure your AWS account, it is important to understand the role
and function of each of the four IAM components.
An IAM user is a person or application that is defined in an AWS account, and that must make API
calls to AWS products. Each user must have a unique name (with no spaces in the name) within
the AWS account, and a set of security credentials that is not shared with other users. These
credentials are different from the AWS account root user security credentials. Each user is
defined in one and only one AWS account.
An IAM group is a collection of IAM users. You can use IAM groups to simplify specifying and
managing permissions for multiple users.
An IAM policy is a document that defines permissions to determine what users can do in the
AWS account. A policy typically grants access to specific resources and specifies what the user
can do with those resources. Policies can also explicitly deny access.
An IAM role is a tool for granting temporary access to specific AWS resources in an AWS account.
Programmatic access
• Authenticate using:
  • Access key ID
  • Secret access key
• Provides AWS CLI and AWS SDK access
Authentication is a basic computer security concept: a user or system must first prove their
identity. Consider how you authenticate yourself when you go to the airport and you want to get
through airport security so that you can catch your flight. In this situation, you must present
some form of identification to the security official to prove who you are before you can enter a
restricted area. A similar concept applies for gaining access to AWS resources in the cloud.
When you define an IAM user, you select what type of access the user is permitted to use to
access AWS resources. You can assign two different types of access to users: programmatic
access and AWS Management Console access. You can assign programmatic access only, console
access only, or you can assign both types of access.
If you grant programmatic access, the IAM user will be required to present an access key ID and a
secret access key when they make an AWS API call by using the AWS CLI, the AWS SDK, or some
other development tool.
If you grant AWS Management Console access, the IAM user will be required to fill in the fields
that appear in the browser login window. The user is prompted to provide either the 12-digit
account ID or the corresponding account alias. The user must also enter their IAM user name and
password. If multi-factor authentication (MFA) is enabled for the user, they will also be
prompted for an authentication code.
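As a sketch of how programmatic access keys are typically stored on a client machine, the AWS CLI and SDKs read them from the shared credentials file. The key values below are the placeholder examples that AWS uses in its documentation, not real credentials:

```ini
# ~/.aws/credentials — read by the AWS CLI and the AWS SDKs
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Every CLI command or SDK call made with this profile is signed with this access key pair, which is how AWS authenticates the programmatic request.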
IAM MFA
• MFA provides increased security: sign-in requires a username and password plus an MFA token.
AWS services and resources can be accessed by using the AWS Management Console, the AWS
CLI, or through SDKs and APIs. For increased security, we recommend enabling MFA.
With MFA, users and systems must provide an MFA token—in addition to the regular sign-in
credentials—before they can access AWS services and resources.
Options for generating the MFA authentication token include virtual MFA-compliant applications
(such as Google Authenticator or Authy 2-Factor Authentication), U2F security key devices, and
hardware MFA devices.
After the user or application is connected to the AWS account, what are they allowed to do?
Diagram: IAM policies attached to an IAM user, IAM group, or IAM role grant, for example, full access to EC2 instances and read-only access to an S3 bucket.
IAM: Authorization
• Assign permissions by creating an IAM policy.
Note: The scope of IAM service configurations is global. Settings apply across all AWS Regions.
To assign permission to a user, group or role, you must create an IAM policy (or find an existing
policy in the account). There are no default permissions. All actions in the account are denied to
the user by default (implicit deny) unless those actions are explicitly allowed. Any actions that you
do not explicitly allow are denied. Any actions that you explicitly deny are always denied.
The principle of least privilege is an important concept in computer security. It holds that you grant users only the minimal privileges that they need to accomplish their tasks.
When you create IAM policies, it is a best practice to follow this security advice of granting least
privilege. Determine what users need to be able to do and then craft policies for them that let
the users perform only those tasks. Start with a minimum set of permissions and grant additional
permissions as necessary. Doing so is more secure than starting with permissions that are too
broad and then later trying to lock down the permissions granted.
Note that the scope of the IAM service configurations is global. The settings are not defined at an
AWS Region level. IAM settings apply across all AWS Regions.
IAM policies
• An IAM policy is a document that defines permissions
• Enables fine-grained access control
• Two types of policies: identity-based and resource-based
• Identity-based policies
  • Attach a policy to any IAM entity: an IAM user, an IAM group, or an IAM role
  • Policies specify:
    • Actions that may be performed by the entity
    • Actions that may not be performed by the entity
  • A single policy can be attached to multiple entities
  • A single entity can have multiple policies attached to it
• Resource-based policies
  • Attached to a resource (such as an S3 bucket)
An IAM policy is a formal statement of permissions that will be granted to an entity. Policies can
be attached to any IAM entity. Entities include users, groups, roles, or resources. For example,
you can attach a policy to AWS resources that will block all requests that do not come from an
approved Internet Protocol (IP) address range. Policies specify what actions are allowed, which
resources to allow the actions on, and what the effect will be when the user requests access to
the resources.
The order in which the policies are evaluated has no effect on the outcome of the evaluation. All
policies are evaluated, and the result is always that the request is either allowed or denied. When
there is a conflict, the most restrictive policy applies.
There are two types of IAM policies. Identity-based policies are permissions policies that you can
attach to a principal (or identity) such as an IAM user, role, or group. These policies control what
actions that identity can perform, on which resources, and under what conditions. Identity-based
policies can be further categorized as:
• Managed policies – Standalone identity-based policies that you can attach to multiple users,
groups, and roles in your AWS account
• Inline policies – Policies that you create and manage, and that are embedded directly into a single user, group, or role.
Resource-based policies are JSON policy documents that you attach to a resource, such as an S3
bucket. These policies control what actions a specified principal can perform on that resource,
and under what conditions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:*", "s3:*"],
      "Resource": [
        "arn:aws:dynamodb:region:account-number-without-hyphens:table/table-name",
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    },
    {
      "Effect": "Deny",
      "Action": ["dynamodb:*", "s3:*"],
      "NotResource": [
        "arn:aws:dynamodb:region:account-number-without-hyphens:table/table-name",
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}
The explicit allow gives users access to a specific DynamoDB table and Amazon S3 buckets. The explicit deny ensures that the users cannot use any other AWS actions or resources other than that table and those buckets. An explicit deny statement takes precedence over an allow statement.
The example IAM policy grants users access only to the following resources:
• The DynamoDB table whose name is represented by table-name.
• The AWS account's S3 bucket, whose name is represented by bucket-name and all the objects
that it contains.
The IAM policy also includes an explicit deny ("Effect":"Deny") element. The NotResource
element helps to ensure that users cannot use any other DynamoDB or S3 actions or resources
except the actions and resources that are specified in the policy—even if permissions have been
granted in another policy. An explicit deny statement takes precedence over an allow statement.
Resource-based policies
While identity-based policies are attached to a user, group, or role, resource-based policies are
attached to a resource, such as an S3 bucket. These policies specify who can access the resource
and what actions they can perform on it.
Resource-based policies are defined inline only, which means that you define the policy on the
resource itself, instead of creating a separate IAM policy document that you attach. For example,
to create an S3 bucket policy (a type of resource-based policy) on an S3 bucket, navigate to the
bucket, click the Permissions tab, click the Bucket Policy button, and define the JSON-formatted
policy document there. An Amazon S3 access control list (ACL) is another example of a resource-based policy.
The diagram shows two different ways that the user MaryMajor could be granted access to
objects in the S3 bucket that is named photos. On the left, you see an example of an identity-based policy. An IAM policy that grants access to the S3 bucket is attached to the MaryMajor
user. On the right, you see an example of a resource-based policy. The S3 bucket policy for the
photos bucket specifies that the user MaryMajor is allowed to list and read the objects in the
bucket.
Note that you could define a deny statement in a bucket policy to restrict access to specific IAM
users, even if the users are granted access in a separate identity-based policy. An explicit deny
statement will always take precedence over any allow statement.
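A bucket policy like the one described for MaryMajor might look roughly like the following sketch. The account ID and user ARN are placeholders, and the statement IDs are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListPhotosBucket",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/MaryMajor"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::photos"
    },
    {
      "Sid": "AllowReadPhotosObjects",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/MaryMajor"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::photos/*"
    }
  ]
}
```

Note that the bucket-level action (s3:ListBucket) targets the bucket ARN, while the object-level action (s3:GetObject) targets the objects (`/*`), and that, unlike an identity-based policy, the policy names a Principal because it is attached to the resource rather than to the user.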
IAM permissions
How IAM determines permissions: is the action explicitly denied? If yes, deny. If not, is the action explicitly allowed? If yes, allow; otherwise, apply the implicit deny.
IAM policies enable you to fine-tune privileges that are granted to IAM users, groups, and roles.
When IAM determines whether a permission is allowed, IAM first checks for the existence of any
applicable explicit denial policy. If no explicit denial exists, it then checks for any applicable
explicit allow policy. If neither an explicit deny nor an explicit allow policy exists, IAM reverts to
the default, which is to deny access. This process is referred to as an implicit deny. The user will
be permitted to take the action only if the requested action is not explicitly denied and is
explicitly allowed.
It can be difficult to figure out whether access to a resource will be granted to an IAM entity
when you develop IAM policies. The IAM Policy Simulator at
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html is a
useful tool for testing and troubleshooting IAM policies.
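The evaluation order described above can be sketched as a small function. This is a deliberate simplification for illustration only: real IAM evaluation also considers wildcards, conditions, resource-based policies, permissions boundaries, and more.

```python
def evaluate(statements, action):
    """Simplified IAM-style evaluation: explicit deny wins,
    then explicit allow; otherwise the implicit deny applies."""
    effects = {s["Effect"] for s in statements if action in s["Action"]}
    if "Deny" in effects:    # an explicit deny always takes precedence
        return "Deny"
    if "Allow" in effects:   # an explicit allow grants access
        return "Allow"
    return "Deny"            # no matching statement: implicit deny

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"]},
    {"Effect": "Deny", "Action": ["s3:DeleteObject"]},
]
print(evaluate(policy, "s3:GetObject"))      # Allow
print(evaluate(policy, "s3:DeleteObject"))   # Deny (explicit)
print(evaluate(policy, "ec2:RunInstances"))  # Deny (implicit)
```
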
IAM groups
An IAM group is a collection of IAM users. IAM groups offer a convenient way to specify
permissions for a collection of users, which can make it easier to manage the permissions for
those users.
For example, you could create an IAM group that is called Developers and attach an IAM policy or
multiple IAM policies to the Developers group that grant the AWS resource access permissions
that developers typically need. Any user that you then add to the Developer group will
automatically have the permissions that are assigned to the group. In such a case, you do not
need to attach the IAM policy or IAM policies directly to the user. If a new user joins your
organization and should be granted developer privileges, you can simply add that user to the
Developers group. Similarly, if a person changes jobs in your organization, instead of editing that
user's permissions, simply remove the user from the group.
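The convenience of groups, where a user inherits every policy attached to the groups they belong to, can be modeled in a few lines. The user names are illustrative and this is not an AWS API, though the two policy names are real AWS managed policies:

```python
# Policies attached to each group, and group membership per user (illustrative data)
group_policies = {"Developers": ["AmazonEC2FullAccess", "AmazonS3ReadOnlyAccess"]}
user_groups = {"maria": ["Developers"]}

def effective_policies(user):
    """Collect the policies a user inherits from all of their groups."""
    return [p for g in user_groups.get(user, []) for p in group_policies.get(g, [])]

# A new hire added to the Developers group immediately inherits its permissions;
# no policy needs to be attached to the user directly.
user_groups["li"] = ["Developers"]
print(effective_policies("li"))  # ['AmazonEC2FullAccess', 'AmazonS3ReadOnlyAccess']
```

Removing a user from the group (deleting the entry from their membership list) revokes those inherited permissions in one step, which mirrors the job-change scenario above.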
IAM roles
• An IAM role is an IAM identity with specific permissions
• Similar to an IAM user
• Attach permissions policies to it
• Different from an IAM user
An IAM role is an IAM identity you can create in your account that has specific permissions. An
IAM role is similar to an IAM user because it is also an AWS identity that you can attach
permissions policies to, and those permissions determine what the identity can and cannot do in
AWS. However, instead of being uniquely associated with one person, a role is intended to be
assumable by anyone who needs it. Also, a role does not have standard long-term credentials
such as a password or access keys associated with it. Instead, when you assume a role, the role
provides you with temporary security credentials for your role session.
You can use roles to delegate access to users, applications, or services that do not normally have
access to your AWS resources. For example, you might want to grant users in your AWS account
access to resources they don't usually have, or grant users in one AWS account access to
resources in another account. Or you might want to allow a mobile app to use AWS resources,
but you do not want to embed AWS keys within the app (where the keys can be difficult to rotate
and where users can potentially extract them and misuse them). Also, sometimes you may want
to grant AWS access to users who already have identities that are defined outside of AWS, such
as in your corporate directory. Or, you might want to grant access to your account to third parties
so that they can perform an audit on your resources.
For all of these example use cases, IAM roles are an essential component to implementing the
cloud deployment.
In the diagram, a developer runs an application on an EC2 instance that requires access to the S3
bucket that is named photos. An administrator creates the IAM role and attaches the role to the
EC2 instance. The role includes a permissions policy that grants read-only access to the specified
S3 bucket. It also includes a trust policy that allows the EC2 instance to assume the role and
retrieve the temporary credentials. When the application runs on the instance, it can use the
role's temporary credentials to access the photos bucket. The administrator does not need to
grant the application developer permission to access the photos bucket, and the developer never
needs to share or manage credentials.
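The trust policy on such a role would look roughly like the following sketch. It allows the EC2 service to assume the role; the read-only access to the photos bucket would be granted by a separate permissions policy attached to the same role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The trust policy answers "who may assume this role?" (here, EC2 instances), while the permissions policy answers "what may the role do once assumed?", and both must be in place for the application on the instance to receive usable temporary credentials.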
To learn more details about this example, see Using an IAM Role to Grant Permissions to
Applications Running on Amazon EC2 Instances at
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html.
Recorded demo: IAM
The demonstration shows how to configure the following resources by using the AWS
Management Console:
• An IAM role that will be used by an EC2 instance
• An IAM group
• An IAM user
When you first create an AWS account, you begin with a single sign-in identity that has complete
access to all AWS services and resources in the account. This identity is called the AWS account
root user and it is accessed by signing into the AWS Management Console with the email address
and password that you used to create the account. AWS account root users have (and retain) full
access to all resources in the account. Therefore, AWS strongly recommends that you do not use
account root user credentials for day-to-day interactions with the account.
Instead, AWS recommends that you use IAM to create additional users and assign permissions to
these users, following the principle of least privilege. For example, if you require administrator-
level permissions, you can create an IAM user, grant that user full access, and then use those
credentials to interact with the account. Later, if you need to revoke or modify your permissions,
you can delete or modify any policies that are associated with that IAM user.
Additionally, if you have multiple users that require access to the account, you can create unique
credentials for each user and define which user will have access to which resources. For example,
you can create IAM users with read-only access to resources in your AWS account and distribute
those credentials to users that require read access. You should avoid sharing the same credentials
with multiple users.
While the account root user should not be used for routine tasks, there are a few tasks that can
only be accomplished by logging in as the account root user. A full list of these tasks is detailed on
the Tasks that require root user credentials AWS documentation page at
https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html#aws_tasks-that-require-root.
To stop using the account root user, take the following steps:
1. While you are logged into the account root user, create an IAM user for yourself with AWS
Management Console access enabled (but do not attach any permissions to the user yet).
Save the IAM user access keys if needed.
2. Next, create an IAM group, give it a name (such as FullAccess), and attach IAM policies to the
group that grant full access to at least a few of the services you will use. Next, add the IAM
user to the group.
3. Disable and remove your account root user access keys, if they exist.
4. Enable a password policy for all users. Copy the IAM users sign-in link from the IAM
Dashboard page. Then, sign out as the account root user.
5. Browse to the IAM users sign-in link that you copied, and sign in to the account by using your
new IAM user credentials.
6. Store your account root user credentials in a secure place.
To view detailed instructions for how to set up your first IAM user and IAM group, see Creating
Your First IAM Admin User and Group at
https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html.
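Step 4 enables an account password policy. A toy checker, not AWS's implementation (the length and character-class thresholds here are assumptions), illustrates the kind of rules such a policy enforces:

```python
def meets_policy(password: str, min_length: int = 12) -> bool:
    """Toy check mirroring common IAM password-policy rules:
    minimum length plus upper, lower, digit, and symbol classes."""
    if len(password) < min_length:
        return False
    classes = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return all(classes)

print(meets_policy("Str0ng!Passw0rd"))  # → True
print(meets_policy("short"))            # → False
```

In the IAM console, these same knobs appear on the Account settings page as password-policy options.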
Another recommended step for securing a new AWS account is to require multi-factor
authentication (MFA) for the account root user login and for all other IAM user logins. You can
also use MFA to control programmatic access. For details, see Configuring MFA-Protected API
Access at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-
api-require.html.
You have a few options for retrieving the MFA token that is needed to log in when MFA is
enabled. Options include virtual MFA-compliant applications (such as Google Authenticator and
Authy Authenticator), U2F security key devices, and hardware MFA options that provide a key fob
or display card.
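Virtual MFA applications such as the ones listed above compute time-based one-time passwords per RFC 6238. A minimal stdlib sketch of that algorithm, checked against the RFC's published test vector:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (what a virtual MFA app shows)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # → 94287082
```

The server and the app share only the base32 secret; both derive the same short-lived code, which is why a stolen password alone is not enough to log in.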
AWS CloudTrail is a service that logs all API requests to resources in your account. In this way, it
enables operational auditing on your account.
AWS CloudTrail is enabled on account creation by default on all AWS accounts, and it keeps a
record of the last 90 days of account management event activity. You can view and download the
last 90 days of your account activity for create, modify, and delete operations of services that are
supported by CloudTrail without needing to manually create another trail. See more on the
supported services at https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-
aws-service-specific-topics.html.
To enable CloudTrail log retention beyond the last 90 days and to enable alerting whenever
specified events occur, create a new trail (which is described at a high level on the slide). For
detailed step-by-step instructions about how to create a trail in AWS CloudTrail, see creating a
trail in the AWS documentation at
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-
console-first-time.html.
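Downloaded CloudTrail records are JSON. As a hedged sketch (the field names follow the documented CloudTrail record schema, but the sample records are invented), this scans for console logins made with root credentials:

```python
def root_console_logins(records):
    """Return the timestamps of console logins made with root credentials."""
    return [
        r["eventTime"]
        for r in records
        if r.get("eventName") == "ConsoleLogin"
        and r.get("userIdentity", {}).get("type") == "Root"
    ]

sample = [
    {"eventTime": "2022-01-01T00:00:00Z", "eventName": "ConsoleLogin",
     "userIdentity": {"type": "Root"}},
    {"eventTime": "2022-01-01T01:00:00Z", "eventName": "ConsoleLogin",
     "userIdentity": {"type": "IAMUser"}},
]
print(root_console_logins(sample))  # → ['2022-01-01T00:00:00Z']
```

Since the best practice above is to avoid root for day-to-day work, any hit from a scan like this is worth investigating.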
• Billing reports provide information about your use of AWS resources and
estimated costs for that use.
• The AWS Cost and Usage Report tracks your AWS usage and provides estimated
charges associated with your AWS account, either by the hour or by the day.
An additional recommended step for securing a new AWS account is to enable billing reports,
such as the AWS Cost and Usage Report. Billing reports provide information about your use of
AWS resources and estimated costs for that use. AWS delivers the reports to an Amazon S3
bucket that you specify and AWS updates the reports at least once per day.
The AWS Cost and Usage Report tracks usage in the AWS account and provides estimated
charges, either by the hour or by the day.
For details about how to create an AWS Cost and Usage Report, see the AWS Documentation at
https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html.
The key takeaways from this section of the module are all related to best practices for securing an
AWS account. Those best practice recommendations include:
• Secure logins with multi-factor authentication (MFA).
• Delete account root user access keys.
• Create individual IAM users and grant permissions according to the principle of least privilege.
• Use groups to assign permissions to IAM users.
• Configure a strong password policy.
• Delegate using roles instead of sharing credentials.
• Monitor account activity using AWS CloudTrail.
Lab 1:
Introduction to
IAM
Lab 1: Tasks
[Lab diagram: an AWS account that contains IAM users (user-1, user-2, and user-3) and IAM groups. The groups are granted permissions through an Amazon EC2 inline IAM policy (view, start, and stop access), an Amazon EC2 read-only access IAM managed policy, and an S3 read-only access IAM managed policy.]
The diagram shows the resources that your AWS account will have after you complete the lab
steps. It also describes how the resources will be configured.
~ 40 minutes
Begin Lab 1:
Introduction to AWS
IAM
Lab debrief:
Key takeaways
The instructor will now lead a conversation about the key takeaways from the lab after you
complete it.
AWS Organizations
• AWS Organizations enables you to consolidate multiple AWS accounts
so that you centrally manage them.
• Group AWS accounts into organizational units (OUs) and attach different
access policies to each OU.
• Use service control policies to establish control over the AWS services and API actions that each AWS account can access.
AWS Organizations is an account management service that enables you to consolidate multiple
AWS accounts into an organization that you create and centrally manage. Here, the focus is on
the security features that AWS Organizations provides.
One helpful security feature is that you can group accounts into organizational units (OUs) and
attach different access policies to each OU. For example, if you have accounts that should only be
allowed to access AWS services that meet certain regulatory requirements, you can put those
accounts into one OU. You then can define a policy that blocks OU access to services that do not
meet those regulatory requirements, and then attach the policy to the OU.
Another security feature is that AWS Organizations integrates with and supports IAM. AWS
Organizations expands that control to the account level by giving you control over what users and
roles in an account or a group of accounts can do. The resulting permissions are the logical
intersection of what is allowed by the AWS Organizations policy settings and what permissions
are explicitly granted by IAM in the account for that user or role. The user can access only what is
allowed by both the AWS Organizations policies and IAM policies.
Finally, AWS Organizations provides service control policies (SCPs) that enable you to specify the
maximum permissions that member accounts in the organization can have. In SCPs, you can
restrict which AWS services, resources, and individual actions the users and roles in each member
account can access. These restrictions even override the administrators of member accounts.
When AWS Organizations blocks access to a service, resource, or API action, a user or role in that
account can't access it, even if an administrator of a member account explicitly grants such
permissions.
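The logical-intersection rule described above can be modeled as a toy set intersection. This is only a sketch: real policy evaluation also involves deny statements, resource ARNs, and conditions.

```python
def effective_actions(scp_allowed, iam_granted):
    """Effective permissions are the set intersection of what the SCP
    allows and what IAM policies in the account grant."""
    return sorted(set(scp_allowed) & set(iam_granted))

scp = {"s3:GetObject", "s3:PutObject", "ec2:DescribeInstances"}
iam = {"s3:GetObject", "ec2:RunInstances"}
print(effective_actions(scp, iam))  # → ['s3:GetObject']
```

Note that `ec2:RunInstances` is granted by IAM but filtered out: the SCP caps it even for an account administrator.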
Here is a closer look at the Service control policies (SCPs) feature of AWS Organizations.
SCPs offer central control over the maximum available permissions for all accounts in your
organization, enabling you to ensure that your accounts stay in your organization’s access control
guidelines. SCPs are available only in an organization that has all features enabled, including
consolidated billing. See more on enabling features at
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-
features.html. SCPs aren't available if your organization has enabled only the consolidated billing
features. For instructions about enabling SCPs, see Enabling and Disabling a Policy Type on a
Root at
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enabl
e_policies_on_root.
SCPs are similar to IAM permissions policies and they use almost the same syntax. However, an
SCP never grants permissions. Instead, SCPs are JSON policies that specify the maximum
permissions for an organization or OU. Attaching an SCP to the organization root or an
organizational unit (OU) defines a safeguard for the actions that accounts in the organization root
or OU can do. However, it is not a substitute for well-managed IAM configurations within each
account. You must still attach IAM policies to users and roles in your organization's accounts to
actually grant permissions to them. See more on IAM policies at
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html.
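An SCP body looks like an IAM policy but only sets a ceiling. A hypothetical example (the statement contents are illustrative, not from the source) that denies CloudTrail tampering across an OU regardless of what IAM grants inside member accounts:

```python
import json

# Hypothetical SCP: no principal in the attached accounts can stop or
# delete CloudTrail trails, even a member-account administrator.
scp_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ProtectCloudTrail",
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}
print(json.dumps(scp_policy, indent=2))
```

Because an SCP never grants anything, users in those accounts still need ordinary IAM allow policies for everything they do.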
AWS Key Management Service (AWS KMS)
• Enables you to create and manage encryption keys, and to control the use of encryption across AWS services and in your applications.
AWS Key Management Service (AWS KMS) is a service that enables you to create and manage
encryption keys, and to control the use of encryption across a wide range of AWS services and
your applications. AWS KMS is a secure and resilient service that uses hardware security modules
(HSMs) that were validated under Federal Information Processing Standards (FIPS) 140-2 (or are
in the process of being validated) to protect your keys. AWS KMS also integrates with AWS
CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance
needs.
Customer master keys (CMKs) are used to control access to data encryption keys that encrypt
and decrypt your data. You can create new keys when you want, and you can manage who has
access to these keys and who can use them. You can also import keys from your own key
management infrastructure into AWS KMS.
AWS KMS integrates with most AWS services, which means that you can use AWS KMS CMKs to
control the encryption of the data that you store in these services. To learn more, see AWS Key
Management Service features at https://aws.amazon.com/kms/features/.
Amazon Cognito
Amazon Cognito features:
• Adds user sign-up, sign-in, and access control to your web and mobile
applications.
• Supports sign-in with social identity providers, such as Facebook, Google, and
Amazon; and enterprise identity providers, such as Microsoft Active Directory
via Security Assertion Markup Language (SAML) 2.0.
Amazon Cognito
Amazon Cognito provides solutions to control access to AWS resources from your application.
You can define roles and map users to different roles so your application can access only the
resources that are authorized for each user.
Amazon Cognito uses common identity management standards, such as Security Assertion
Markup Language (SAML) 2.0. SAML is an open standard for exchanging identity and security
information with applications and service providers. Applications and service providers that
support SAML enable you to sign in by using your corporate directory credentials, such as your
username and password from Microsoft Active Directory. With SAML, you can use single sign-on
(SSO) to sign in to all of your SAML-enabled applications by using a single set of credentials.
Amazon Cognito helps you meet multiple security and compliance requirements, including
requirements for highly regulated organizations such as healthcare companies and merchants.
Amazon Cognito is eligible for use with the US Health Insurance Portability and Accountability Act
(HIPAA – see more on HIPAA at https://aws.amazon.com/compliance/hipaa-compliance/). It can
also be used for workloads that are compliant with the Payment Card Industry Data Security
Standard (PCI DSS – more on PCI DSS at https://aws.amazon.com/compliance/pci-dss-level-1-
faqs/); the American Institute of CPAs (AICPA) Service Organization Control (SOC – more on SOC
at https://aws.amazon.com/compliance/soc-faqs/); the International Organization for
Standardization (ISO) and International Electrotechnical Commission (IEC) standards. More
on ISO/IEC 27001 at https://aws.amazon.com/compliance/iso-27001-faqs/, ISO/IEC 27017 at
https://aws.amazon.com/compliance/iso-27017-faqs/, and ISO/IEC 27018 at
https://aws.amazon.com/compliance/iso-27018-faqs/; and ISO 9001 at
https://aws.amazon.com/compliance/iso-9001-faqs/.
AWS Shield
• AWS Shield features:
• Is a managed distributed denial of service (DDoS) protection service
• AWS Shield Standard is enabled for all customers at no additional cost. AWS Shield Advanced is an optional paid service.
AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards
applications that run on AWS. It provides always-on detection and automatic inline mitigations
that minimize application downtime and latency, so there is no need to engage AWS Support to
benefit from DDoS protection.
AWS Shield helps protect your website from all types of DDoS attacks, including infrastructure-layer attacks (like User Datagram Protocol, or UDP, floods), state-exhaustion attacks (like TCP SYN floods), and application-layer attacks (like HTTP GET or POST floods). For examples, see the
AWS WAF Developer Guide at https://docs.aws.amazon.com/waf/latest/developerguide/waf-
chapter.html.
AWS Shield Standard is automatically enabled for all AWS customers at no additional cost.
AWS Shield Advanced is an optional paid service. AWS Shield Advanced provides additional
protections against more sophisticated and larger attacks for your applications that run on
Amazon EC2, Elastic Load Balancing, Amazon CloudFront, AWS Global Accelerator, and Amazon
Route 53. AWS Shield Advanced is available to all customers. However, to contact the DDoS
Response Team, customers need to have either Enterprise Support or Business Support from
AWS Support.
Data encryption is an essential tool to use when your objective is to protect digital data. Data
encryption takes data that is legible and encodes it so that it is unreadable to anyone who does
not have access to the secret key that can be used to decode it. Thus, even if an attacker gains
access to your data, they cannot make sense of it.
You can create encrypted file systems on AWS so that all your data and metadata is encrypted at
rest by using the open standard Advanced Encryption Standard (AES)-256 encryption algorithm.
When you use AWS KMS, encryption and decryption are handled automatically and transparently,
so that you do not need to modify your applications. If your organization is subject to corporate
or regulatory policies that require encryption of data and metadata at rest, AWS recommends
enabling encryption on all services that store your data. You can encrypt data stored in any
service that is supported by AWS KMS. See How AWS Services use AWS KMS for a list of
supported services at https://docs.aws.amazon.com/kms/latest/developerguide/service-
integration.html.
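AES-256 itself is not in the Python standard library, so the stand-in below (a toy XOR stream cipher, NOT suitable for real encryption) only illustrates the round trip described above: the same key makes data unreadable and restores it.

```python
import hashlib, os

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """NOT AES-256 -- a stdlib-only stand-in showing the shape of
    symmetric encryption: XOR with a key-derived keystream, so the
    same call both encrypts and decrypts."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        keystream = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

key = os.urandom(32)                       # a 256-bit data key
ciphertext = toy_stream_cipher(key, b"legible data")
assert ciphertext != b"legible data"       # unreadable without the key
print(toy_stream_cipher(key, ciphertext))  # round trip restores the plaintext
```

With AWS KMS, this key-handling step is exactly what is done for you automatically and transparently, using real AES-256 in validated HSMs.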
Data in transit refers to data that is moving across the network. Encryption of data in transit is
accomplished by using Transport Layer Security (TLS) 1.2 with an open standard AES-256
cipher. TLS was formerly called Secure Sockets Layer (SSL).
AWS Certificate Manager is a service that enables you to provision, manage, and deploy SSL or
TLS certificates for use with AWS services and your internal connected resources. SSL or TLS
certificates are used to secure network communications and establish the identity of websites
over the internet, and also resources on private networks. With AWS Certificate Manager, you
can request a certificate and then deploy it on AWS resources (such as load balancers or
CloudFront distributions). AWS Certificate Manager also handles certificate renewals.
Web traffic that runs over HTTP is not secure. However, traffic that runs over Secure HTTP
(HTTPS) is encrypted by using TLS or SSL. HTTPS traffic is protected against eavesdropping and
man-in-the-middle attacks because of the bidirectional encryption of the communication.
AWS services support encryption for data in transit. Two examples of encryption for data in
transit are shown. The first example shows an EC2 instance that has mounted an Amazon EFS
shared file system. All data traffic between the instance and Amazon EFS is encrypted by using
TLS or SSL. For further details about this configuration, see Encryption of EFS Data in Transit at
https://docs.aws.amazon.com/whitepapers/latest/efs-encrypted-file-systems/encryption-of-
data-in-transit.html.
The second example shows the use of AWS Storage Gateway, a hybrid cloud storage service that
provides on-premises access to AWS Cloud storage. In this example, the storage gateway is
connected across the internet to Amazon S3, and the connection encrypts the data in transit.
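On the client side, Python's stdlib `ssl` module shows the defaults that make encryption in transit work; the context properties below are part of the documented `ssl` API.

```python
import ssl

# A client context with secure defaults: certificate validation and
# hostname checking on, which is what protects against man-in-the-middle
# attacks during the TLS handshake.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
print(ctx.check_hostname)                    # → True

# Refuse anything older than TLS 1.2, matching the requirement above.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

A context like this would then wrap the socket used for any HTTPS or TLS connection to an AWS endpoint.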
By default, all Amazon S3 buckets are private and can be accessed only by users who are explicitly
granted access. It is essential to manage and control access to Amazon S3 data. AWS provides
many tools and options for controlling access to your S3 buckets or objects, including:
• Using Amazon S3 Block Public Access. These settings override any other policies or object
permissions. Enable Block Public Access for all buckets that you don't want to be publicly
accessible. This feature provides a straightforward method for avoiding unintended exposure
of Amazon S3 data.
• Writing IAM policies that specify the users or roles that can access specific buckets and
objects. This method was discussed in detail earlier in this module.
• Writing bucket policies that define access to specific buckets or objects. This option is
typically used when the user or system cannot authenticate by using IAM. Bucket policies can
be configured to grant access across AWS accounts or to grant public or anonymous access to
Amazon S3 data. If bucket policies are used, they should be written carefully and tested fully.
You can specify a deny statement in a bucket policy to restrict access. Access will be restricted
even if the users have permissions that are granted in an identity-based policy that is attached
to the users.
• Setting access control lists (ACLs) on your buckets and objects. ACLs are less commonly used
(ACLs predate IAM). If you do use ACLs, do not set access that is too open or permissive.
• AWS Trusted Advisor provides a bucket permission check feature that is a useful tool for
discovering if any of the buckets in your account have permissions that grant global access.
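A hypothetical bucket policy with a deny statement of the kind described above (the bucket name and statement contents are illustrative). It blocks non-HTTPS access regardless of identity-based allows, using the real `aws:SecureTransport` condition key:

```python
import json

# Hypothetical bucket policy: the deny statement blocks any request to an
# example bucket that does not arrive over HTTPS, overriding allows that
# are granted in identity-based policies.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
print(json.dumps(bucket_policy, indent=2))
```

As the bullet above warns, deny statements like this should be written carefully and tested fully, because they win over every allow.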
AWS engages with external certifying bodies and independent auditors to provide customers with
information about the policies, processes, and controls that are established and operated by
AWS.
As an example of a certification for which you can use AWS services to meet your compliance
goals, consider the ISO/IEC 27001:2013 certification. It specifies the requirements for
establishing, implementing, maintaining, and continually improving an Information Security
Management System. The basis of this certification is the development and implementation of a
rigorous security program, which includes the development and implementation of an
Information Security Management System. The Information Security Management System
defines how AWS perpetually manages security in a holistic, comprehensive manner.
AWS also provides security features and legal agreements that are designed to help support
customers with common regulations and laws. One example is the Health Insurance Portability
and Accountability Act (HIPAA) regulation. Another example, the European Union (EU) General
Data Protection Regulation (GDPR) protects European Union data subjects' fundamental right to
privacy and the protection of personal data. It introduces robust requirements that will raise and
harmonize standards for data protection, security, and compliance. The GDPR Center at
https://aws.amazon.com/compliance/gdpr-center/ contains many resources to help customers
meet their compliance requirements with this regulation.
AWS Config
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your
AWS resources. AWS Config continuously monitors and records your AWS resource
configurations, and it enables you to automate the evaluation of recorded configurations against
desired configurations. With AWS Config, you can review changes in configurations and
relationships between AWS resources, review detailed resource configuration histories, and
determine your overall compliance against the configurations that are specified in your internal
guidelines. This enables you to simplify compliance auditing, security analysis, change
management, and operational troubleshooting.
As you can see in the AWS Config Dashboard screen capture shown here, AWS Config keeps an
inventory listing of all resources that exist in the account, and it then checks for configuration rule
compliance and resource compliance. Resources that are found to be noncompliant are flagged,
which alerts you to the configuration issues that should be addressed within the account.
AWS Config is a Regional service. To track resources across Regions, enable it in every Region that
you use. AWS Config offers an aggregator feature that can show an aggregated view of resources
across multiple Regions and even multiple accounts.
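The rule-compliance check can be modeled as a toy evaluator. The COMPLIANT / NON_COMPLIANT / NOT_APPLICABLE result values match what custom Config rules return; the resource-record shape here is invented for illustration.

```python
def evaluate(resource):
    """Toy version of a Config rule check: flag S3 buckets whose
    recorded configuration does not have encryption enabled."""
    if resource.get("type") != "AWS::S3::Bucket":
        return "NOT_APPLICABLE"
    return "COMPLIANT" if resource.get("encrypted") else "NON_COMPLIANT"

# A miniature stand-in for the inventory that AWS Config records.
inventory = [
    {"id": "bucket-a", "type": "AWS::S3::Bucket", "encrypted": True},
    {"id": "bucket-b", "type": "AWS::S3::Bucket", "encrypted": False},
]
flagged = [r["id"] for r in inventory if evaluate(r) == "NON_COMPLIANT"]
print(flagged)  # → ['bucket-b']
```

Flagged resources are what surface on the Config dashboard as configuration issues to address within the account.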
AWS Artifact
AWS Artifact provides on-demand downloads of AWS security and compliance documents, such
as AWS ISO certifications, Payment Card Industry (PCI), and Service Organization Control (SOC)
reports. You can submit the security and compliance documents (also known as audit artifacts) to
your auditors or regulators to demonstrate the security and compliance of the AWS infrastructure
and services that you use. You can also use these documents as guidelines to evaluate your own
cloud architecture and assess the effectiveness of your company's internal controls. AWS Artifact
provides documents about AWS only. AWS customers are responsible for developing or obtaining
documents that demonstrate the security and compliance of their companies.
You can also use AWS Artifact to review, accept, and track the status of AWS agreements such as
the Business Associate Agreement (BAA). A BAA typically is required for companies that are
subject to HIPAA to ensure that protected health information (PHI) is appropriately safeguarded.
With AWS Artifact, you can accept agreements with AWS and designate AWS accounts that can
legally process restricted information. You can accept an agreement on behalf of multiple
accounts. To accept agreements for multiple accounts, use AWS Organizations to create an
organization. To learn more, see Managing agreements in AWS Artifact at
https://docs.aws.amazon.com/artifact/latest/ug/managing-agreements.html.
Section 6 key takeaways
• AWS security compliance programs provide information about the policies, processes, and controls that are established and operated by AWS.
Module wrap-up
Module 4: AWS Cloud Security
It’s now time to review the module and wrap up with a knowledge check and discussion of a
practice certification exam question.
Module summary
In summary, in this module you learned how to:
• Recognize the shared responsibility model
• Identify the responsibility of the customer and AWS
• Recognize IAM users, groups, and roles
• Describe different types of security credentials in IAM
• Identify the steps to securing a new AWS account
• Explore IAM users and groups
• Recognize how to secure AWS data
• Recognize AWS compliance programs
Look at the answer choices and rule them out based on the keywords.
The following are the keywords to recognize: “AWS’s responsibility” and “AWS shared
responsibility model”.
This sample exam question comes from the AWS Certified Cloud Practitioner sample exam
questions document that is linked to from the main AWS Certified Cloud Practitioner exam
information page at https://aws.amazon.com/certification/certified-cloud-practitioner/.
Additional resources
• AWS Cloud Security: https://aws.amazon.com/security/
• AWS Security Resources: https://aws.amazon.com/security/security-learning/?cards-
top.sort-by=item.additionalFields.sortDate&cards-top.sort-order=desc&awsf.Types=*all
• AWS Security Blog: https://aws.amazon.com/blogs/security/
• Security Bulletins: https://aws.amazon.com/security/security-bulletins/?card-
body.sort-by=item.additionalFields.bulletinId&card-body.sort-
order=desc&awsf.bulletins-flag=*all&awsf.bulletins-year=*all
• Vulnerability and Penetration testing: https://aws.amazon.com/security/penetration-
testing/
• AWS Well-Architected Framework – Security pillar:
https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
• AWS documentation - IAM Best Practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Security is a large topic and this module has only provided an introduction to the subject. The
following resources provide more detail:
• The AWS Cloud Security home page: https://aws.amazon.com/security/
• AWS Security Resources: https://aws.amazon.com/security/security-learning/?cards-top.sort-
by=item.additionalFields.sortDate&cards-top.sort-order=desc&awsf.Types=*all
• AWS Security Blog: https://aws.amazon.com/blogs/security/
• Security Bulletins notify the customer about the latest security and privacy events with AWS
services: https://aws.amazon.com/security/security-bulletins/?card-body.sort-
by=item.additionalFields.bulletinId&card-body.sort-order=desc&awsf.bulletins-
flag=*all&awsf.bulletins-year=*all
• The Vulnerability and Penetration testing page: https://aws.amazon.com/security/penetration-
testing/
• AWS Well-Architected Framework – Security pillar:
https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
• AWS documentation – IAM Best Practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Thank you
Module 5
Contents
Module 5: Networking and Content Delivery
This module covers three fundamental Amazon Web Services (AWS) services for networking and content delivery: Amazon Virtual Private Cloud (Amazon VPC), Amazon Route 53, and Amazon CloudFront.
Module overview

Topics
• Networking basics
• Amazon VPC
• VPC networking
• VPC security
• Amazon Route 53
• Amazon CloudFront

Activities
• Label a network diagram
• Design a basic VPC architecture

Demo
• VPC demonstration

Lab
• Build your VPC and launch a web server

Knowledge check
This module includes some activities that challenge you to label a network diagram and design a
basic VPC architecture.
You will watch a recorded demonstration to learn how to use the VPC Wizard to create a VPC
with public and private subnets.
You then get a chance to apply what you have learned in a hands-on lab where you use the VPC
Wizard to build a VPC and launch a web server.
Finally, you will be asked to complete a knowledge check that tests your understanding of key
concepts that are covered in this module.
Module objectives
After completing this module, you should be able to:
• Recognize the basics of networking
• Describe virtual networking in the cloud with Amazon VPC
• Label a network diagram
• Design a basic VPC architecture
• Indicate the steps to build a VPC
• Identify security groups
• Create your own VPC and add additional components to it to produce a customized
network
• Identify the fundamentals of Amazon Route 53
• Recognize the benefits of Amazon CloudFront
In this section, you will review a few basic networking concepts that provide the necessary
foundation for understanding the AWS networking service, Amazon Virtual Private Cloud
(Amazon VPC).
Networks

[Diagram: Subnet 1 and Subnet 2 connected by a router]
A computer network is two or more client machines that are connected together to share
resources. A network can be logically partitioned into subnets. Networking requires a networking
device (such as a router or switch) to connect all the clients together and enable communication
between them.
IP addresses

192.0.2.0
Each client machine in a network has a unique Internet Protocol (IP) address that identifies it. An
IP address is a numerical label in decimal format. Machines convert that decimal number to a
binary format.
In this example, the IP address is 192.0.2.0. Each of the four dot (.)-separated numbers of the IP
address represents one octet (8 bits), written in decimal. That means each of the four numbers can be
anything from 0 to 255. The combined total of the four numbers for an IP address is 32 bits in
binary format.
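The decimal-to-binary conversion described above can be sketched with nothing but the Python standard library; the helper name `to_binary` is illustrative, not part of any AWS tooling:

```python
# Convert a dotted-decimal IPv4 address to the 32-bit binary form that
# machines actually use: four octets, each rendered as 8 binary digits.

def to_binary(ip: str) -> str:
    """Render each of the four octets as 8 binary digits."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(to_binary("192.0.2.0"))  # 11000000.00000000.00000010.00000000
```

Each octet occupies exactly 8 bits, which is why its value is capped at 255 (binary 11111111).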
IPv6 addresses, which are 128 bits, are also available. IPv6 addresses can accommodate more
user devices.
An IPv6 address is composed of eight groups of four hexadecimal digits that are separated by
colons (:). In this example, the IPv6 address is 2600:1f18:22ba:8c00:ba86:a05e:a5ba:00FF. Each
of the eight colon-separated groups of the IPv6 address represents 16 bits in hexadecimal
number format. That means each of the eight groups can be anything from 0000 to FFFF. The
combined total of the eight groups for an IPv6 address is 128 bits in binary format.
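These properties of the example address can be verified with Python's standard `ipaddress` module; `exploded` shows all eight 16-bit groups and `packed` confirms the 128-bit length:

```python
import ipaddress

# Parse the example IPv6 address from the text. The exploded form shows
# every one of the eight hexadecimal groups; packed yields the raw bytes.
addr = ipaddress.IPv6Address("2600:1f18:22ba:8c00:ba86:a05e:a5ba:00FF")

print(addr.exploded)         # 2600:1f18:22ba:8c00:ba86:a05e:a5ba:00ff
print(len(addr.packed) * 8)  # 128 (bits)
```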
192.0.2.0/24
A common method to describe networks is Classless Inter-Domain Routing (CIDR). The CIDR
address is expressed as follows:
• An IP address (which is the first address of the network)
• Next, a slash character (/)
• Finally, a number that tells you how many bits of the routing prefix must be fixed or allocated
for the network identifier
The bits that are not fixed are allowed to change. CIDR is a way to express a group of IP addresses
that are consecutive to each other.
In this example, the CIDR address is 192.0.2.0/24. The last number (24) tells you that the first 24
bits must be fixed. The last 8 bits are flexible, which means that 2^8 (or 256) IP addresses are
available for the network, which range from 192.0.2.0 to 192.0.2.255. The fourth decimal number is
allowed to change from 0 to 255.
If the CIDR were 192.0.2.0/16, the last number (16) would tell you that the first 16 bits must be fixed.
The last 16 bits are flexible, which means that 2^16 (or 65,536) IP addresses are available for the
network, ranging from 192.0.0.0 to 192.0.255.255. The third and fourth decimal numbers can each
change from 0 to 255.
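The CIDR arithmetic above can be checked with the standard `ipaddress` module; `strict=False` lets us write the /16 with the same first address as the text, and the library normalizes it to 192.0.0.0/16:

```python
import ipaddress

# A /24 leaves 8 free bits (2**8 = 256 addresses); a /16 leaves 16 free
# bits (2**16 = 65,536 addresses). Index 0 is the first address of the
# network and index -1 is the last.
net24 = ipaddress.ip_network("192.0.2.0/24")
net16 = ipaddress.ip_network("192.0.2.0/16", strict=False)

print(net24.num_addresses, net24[0], net24[-1])  # 256 192.0.2.0 192.0.2.255
print(net16.num_addresses, net16[0], net16[-1])  # 65536 192.0.0.0 192.0.255.255
```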
The OSI model (excerpt)

Layer        Number   Function                                                    Protocol/Address
Application  7        Means for an application to access a computer network       HTTP(S), FTP, DHCP, LDAP
Data link    2        Transfer data in the same LAN network (hubs and switches)   MAC
The Open Systems Interconnection (OSI) model is a conceptual model that is used to explain how
data travels over a network. It consists of seven layers and shows the common protocols and
addresses that are used to send data at each layer. For example, hubs and switches work at layer
2 (the data link layer). Routers work at layer 3 (the network layer). The OSI model can also be
used to understand how communication takes place in a virtual private cloud (VPC), which you
will learn about in the next section.
Many of the concepts of an on-premises network apply to a cloud-based network, but much of
the complexity of setting up a network has been abstracted without sacrificing control, security,
and usability. In this section, you learn about Amazon VPC and the fundamental components of a
VPC.
Amazon VPC
Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you provision a logically isolated
section of the AWS Cloud (called a virtual private cloud, or VPC) where you can launch your AWS
resources.
Amazon VPC gives you control over your virtual networking resources, including the selection of
your own IP address range, the creation of subnets, and the configuration of route tables and
network gateways. You can use both IPv4 and IPv6 in your VPC for secure access to resources and
applications.
You can also customize the network configuration for your VPC. For example, you can create a
public subnet for your web servers that can access the public internet. You can place your
backend systems (such as databases or application servers) in a private subnet with no public
internet access.
Finally, you can use multiple layers of security, including security groups and network access
control lists (network ACLs), to help control access to Amazon Elastic Compute Cloud (Amazon
EC2) instances in each subnet.
Amazon VPC enables you to provision virtual private clouds (VPCs). A VPC is a virtual network
that is logically isolated from other virtual networks in the AWS Cloud. A VPC is dedicated to your
account. VPCs belong to a single AWS Region and can span multiple Availability Zones.
After you create a VPC, you can divide it into one or more subnets. A subnet is a range of IP
addresses in a VPC. Subnets belong to a single Availability Zone. You can create subnets in
different Availability Zones for high availability. Subnets are generally classified as public or
private. Public subnets have direct access to the internet, but private subnets do not.
IP addressing
• When you create a VPC, you assign an IPv4 CIDR block (a range of private IPv4 addresses) to it.
• You cannot change the address range after you create the VPC.
• The largest IPv4 CIDR block size is /16 (65,536 addresses); the smallest is /28 (16 addresses).
• IPv6 is also supported (with a different block size limit).
• CIDR blocks of subnets cannot overlap.
IP addresses enable resources in your VPC to communicate with each other and with resources
over the internet. When you create a VPC, you assign an IPv4 CIDR block (a range of private IPv4
addresses) to it. After you create a VPC, you cannot change the address range, so it is important
that you choose it carefully. The IPv4 CIDR block might be as large as /16 (which is 2^16, or 65,536
addresses) or as small as /28 (which is 2^4, or 16 addresses).
You can optionally associate an IPv6 CIDR block with your VPC and subnets, and assign IPv6
addresses from that block to the resources in your VPC. IPv6 CIDR blocks have a different block
size limit.
The CIDR block of a subnet can be the same as the CIDR block for a VPC. In this case, the VPC and
the subnet are the same size (a single subnet in the VPC). Also, the CIDR block of a subnet can be
a subset of the CIDR block for the VPC. This structure enables the definition of multiple subnets.
If you create more than one subnet in a VPC, the CIDR blocks of the subnets cannot overlap. You
cannot have duplicate IP addresses in the same VPC.
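Both subnet rules — every subnet CIDR must fall inside the VPC block, and subnet CIDRs must not overlap each other — can be sketched with the standard `ipaddress` module; the CIDR values are the example ranges used in this module:

```python
import ipaddress

# A VPC block and two candidate subnets carved from it.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet_a = ipaddress.ip_network("10.0.1.0/24")
subnet_b = ipaddress.ip_network("10.0.2.0/24")
bad = ipaddress.ip_network("10.0.1.128/25")  # falls inside subnet_a

print(subnet_a.subnet_of(vpc))      # True  -> inside the VPC range
print(subnet_a.overlaps(subnet_b))  # False -> a valid pair of subnets
print(subnet_a.overlaps(bad))       # True  -> would be rejected
```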
Reserved IP addresses

Example: A VPC with an IPv4 CIDR block of 10.0.0.0/16 has 65,536 total IP addresses. The VPC
has four equal-sized subnets (for example, Subnet 1 is 10.0.0.0/24 and Subnet 2 is 10.0.2.0/24).
Only 251 IP addresses are available for use by each subnet.

Reserved addresses in the 10.0.0.0/24 CIDR block:

IP address   Reserved for
10.0.0.0     Network address
10.0.0.1     VPC local router
10.0.0.2     DNS resolution
10.0.0.3     Future use
10.0.0.255   Network broadcast address
When you create a subnet, it requires its own CIDR block. For each CIDR block that you specify,
AWS reserves five IP addresses within that block, and these addresses are not available for use.
AWS reserves these IP addresses for:
• Network address
• VPC local router (internal communications)
• Domain Name System (DNS) resolution
• Future use
• Network broadcast address
For example, suppose that you create a subnet with an IPv4 CIDR block of 10.0.0.0/24 (which has
256 total IP addresses). The subnet has 256 IP addresses, but only 251 are available because five
are reserved.
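The usable-address arithmetic from the example is a one-liner; the constant name is illustrative, and 5 is the number of addresses AWS reserves per subnet CIDR as listed above:

```python
import ipaddress

# AWS reserves five addresses in every subnet CIDR (network address, VPC
# router, DNS, future use, broadcast), so usable = total - 5.
AWS_RESERVED_PER_SUBNET = 5

subnet = ipaddress.ip_network("10.0.0.0/24")
usable = subnet.num_addresses - AWS_RESERVED_PER_SUBNET
print(usable)  # 251
```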
When you create a VPC, every instance in that VPC gets a private IP address automatically. You
can also request a public IP address to be assigned when you create the instance by modifying
the subnet’s auto-assign public IP address properties.
An Elastic IP address is a static and public IPv4 address that is designed for dynamic cloud
computing. You can associate an Elastic IP address with any instance or network interface for any
VPC in your account. With an Elastic IP address, you can mask the failure of an instance by rapidly
remapping the address to another instance in your VPC. Associating the Elastic IP address with
the network interface has an advantage over associating it directly with the instance. You can
move all of the attributes of the network interface from one instance to another in a single step.
Additional costs might apply when you use Elastic IP addresses, so it is important to release them
when you no longer need them.
To learn more about Elastic IP addresses, see Elastic IP Addresses in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-eips.html.
[Diagram: an elastic network interface in subnet 10.0.1.0/24 being moved between instances]
An elastic network interface is a virtual network interface that you can attach or detach from an
instance in a VPC. A network interface's attributes follow it when it is reattached to another
instance. When you move a network interface from one instance to another, network traffic is
redirected to the new instance.
Each instance in your VPC has a default network interface (the primary network interface) that is
assigned a private IPv4 address from the IPv4 address range of your VPC. You cannot detach a
primary network interface from an instance. You can create and attach an additional network
interface to any instance in your VPC. The number of network interfaces you can attach varies by
instance type.
For more information about Elastic Network Interfaces, see the AWS Documentation at
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html.
A route table contains a set of rules (called routes) that directs network traffic from your subnet.
Each route specifies a destination and a target. The destination is the destination CIDR block
where you want traffic from your subnet to go. The target is the target that the destination traffic
is sent through. By default, every route table that you create contains a local route for
communication in the VPC. You can customize route tables by adding routes. You cannot delete
the local route entry that is used for internal communications.
Each subnet in your VPC must be associated with a route table. The main route table is the route
table that is automatically assigned to your VPC. It controls the routing for all subnets that are not
explicitly associated with any other route table. A subnet can be associated with only one route
table at a time, but you can associate multiple subnets with the same route table.
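The way a route table picks a target can be sketched as a longest-prefix-match lookup; the table below mirrors the public subnet route table used later in this module, and "igw-id" is a placeholder target, not a real gateway ID:

```python
import ipaddress

# A route table: the local route covers in-VPC traffic and cannot be
# deleted; the default route sends everything else to the internet gateway.
ROUTES = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "igw-id",  # placeholder internet gateway ID
}

def lookup(dest_ip: str) -> str:
    """Return the target of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else "no route"

print(lookup("10.0.1.25"))      # local  (intra-VPC traffic)
print(lookup("93.184.216.34"))  # igw-id (internet-bound traffic)
```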
Now that you have learned about the basic components of a VPC, you can start routing traffic in
interesting ways. In this section, you learn about different networking options.
Internet gateway

[Diagram: a VPC (10.0.0.0/16) with a public subnet (10.0.1.0/24) in an Availability Zone of a Region, attached to an internet gateway]

Public subnet route table:

Destination   Target
10.0.0.0/16   local
0.0.0.0/0     igw-id
An internet gateway is a scalable, redundant, and highly available VPC component that allows
communication between instances in your VPC and the internet. An internet gateway serves two
purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform
network address translation for instances that were assigned public IPv4 addresses.
To make a subnet public, you attach an internet gateway to your VPC and add a route to the route
table to send non-local traffic through the internet gateway to the internet (0.0.0.0/0).
For more information about internet gateways, see Internet Gateways in the AWS Documentation
at https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html.
[Diagram: a NAT gateway (nat-gw-id) in the public subnet (10.0.1.0/24) of a VPC (10.0.0.0/16) in an Availability Zone of a Region; the public subnet route table contains 10.0.0.0/16 -> local and 0.0.0.0/0 -> igw-id]
A network address translation (NAT) gateway enables instances in a private subnet to connect to
the internet or other AWS services, but prevents the internet from initiating a connection with
those instances.
To create a NAT gateway, you must specify the public subnet in which the NAT gateway should
reside. You must also specify an Elastic IP address to associate with the NAT gateway when you
create it. After you create a NAT gateway, you must update the route table that is associated with
one or more of your private subnets to point internet-bound traffic to the NAT gateway. Thus,
instances in your private subnets can communicate with the internet.
You can also use a NAT instance in a public subnet in your VPC instead of a NAT gateway.
However, a NAT gateway is a managed NAT service that provides better availability, higher
bandwidth, and less administrative effort. For common use cases, AWS recommends that you use
a NAT gateway instead of a NAT instance.
VPC sharing

[Diagram: a shared VPC in a Region with subnets used by accounts B, C, and D (participants), containing EC2 instances, an RDS instance, and an Amazon Redshift cluster, plus a NAT gateway and an internet gateway]
VPC sharing enables customers to share subnets with other AWS accounts in the same
organization in AWS Organizations. VPC sharing enables multiple AWS accounts to create their
application resources—such as Amazon EC2 instances, Amazon Relational Database Service
(Amazon RDS) databases, Amazon Redshift clusters, and AWS Lambda functions—into shared,
centrally managed VPCs. In this model, the account that owns the VPC (owner) shares one or
more subnets with other accounts (participants) that belong to the same organization in AWS
Organizations. After a subnet is shared, the participants can view, create, modify, and delete their
application resources in the subnets that are shared with them. Participants cannot view, modify,
or delete resources that belong to other participants or the VPC owner.
VPC sharing enables you to decouple accounts and networks. You have fewer, larger, centrally
managed VPCs. Highly interconnected applications automatically benefit from this approach.
VPC peering

You can connect VPCs in your own AWS account, between AWS accounts, or between AWS Regions.

Restrictions:
• IP spaces cannot overlap.
• Transitive peering is not supported.
• You can only have one peering resource between the same two VPCs.

[Diagram: VPC A (10.0.0.0/16) and VPC B (10.3.0.0/16) joined by a peering connection (pcx-id)]

Route table for VPC A:
Destination   Target
10.0.0.0/16   local
10.3.0.0/16   pcx-id

Route table for VPC B:
Destination   Target
10.3.0.0/16   local
10.0.0.0/16   pcx-id
A VPC peering connection is a networking connection between two VPCs that enables you to
route traffic between them privately. Instances in either VPC can communicate with each other
as if they are within the same network. You can create a VPC peering connection between your
own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
When you set up the peering connection, you create rules in your route table to allow the VPCs
to communicate with each other through the peering resource. For example, suppose that you
have two VPCs. In the route table for VPC A, you set the destination to be the IP address of VPC B
and the target to be the peering resource ID. In the route table for VPC B, you set the destination
to be the IP address of VPC A and the target to be the peering resource ID.
For more information about VPC peering, see VPC Peering in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html.
By default, instances that you launch into a VPC cannot communicate with a remote network. To
connect your VPC to your remote network (that is, create a virtual private network or VPN
connection), you:
1. Create a new virtual gateway device (called a virtual private network (VPN) gateway) and
attach it to your VPC.
2. Define the configuration of the VPN device or the customer gateway. The customer gateway
is not a device but an AWS resource that provides information to AWS about your VPN device.
3. Create a custom route table to point corporate data center-bound traffic to the VPN gateway.
You also must update security group rules. (You will learn about security groups in the next
section.)
4. Establish an AWS Site-to-Site VPN (Site-to-Site VPN) connection to link the two systems
together.
5. Configure routing to pass traffic through the connection.
For more information about AWS Site-to-Site VPN and other VPN connectivity options, see VPN
Connections in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html.
AWS Direct Connect

[Diagram: a corporate data center connected over a dedicated 802.1q VLAN link through AWS Direct Connect to a VPC (10.0.0.0/16) with a public subnet (10.1.0.0/24) in an Availability Zone of a Region]

AWS Direct Connect (DX) lets you establish a dedicated, private network connection between your network and AWS. For more information about DX, see the AWS Direct Connect product page at https://aws.amazon.com/directconnect/.
VPC endpoints

[Diagram: an instance in a public subnet (10.0.1.0/24) of a VPC (10.0.0.0/16) in an Availability Zone of a Region reaches Amazon S3 through a VPC endpoint (vpcep-id), using a default or endpoint-specific DNS hostname; the route table contains 10.0.0.0/16 -> local and the Amazon S3 prefix list -> vpcep-id]
A VPC endpoint is a virtual device that enables you to privately connect your VPC to supported
AWS services and VPC endpoint services that are powered by AWS PrivateLink. Connection to
these services does not require an internet gateway, NAT device, VPN connection, or AWS Direct
Connect connection. Instances in your VPC do not require public IP addresses to communicate
with resources in the service. Traffic between your VPC and the other service does not leave the
Amazon network.
For more information about VPC endpoints, see VPC Endpoints in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html.
[Diagram: many point-to-point connections among Amazon VPCs, VPN connections, customer gateways, VPC peering links, and an AWS Direct Connect gateway, illustrating how pairwise connectivity becomes complex as the number of networks grows]
You can configure your VPCs in several ways, and take advantage of numerous connectivity
options and gateways. These options and gateways include AWS Direct Connect (via DX
gateways), NAT gateways, internet gateways, VPC peering, etc. It is not uncommon to find AWS
customers with hundreds of VPCs distributed across AWS accounts and Regions to serve multiple
lines of business, teams, projects, and so forth. Things get more complex when customers start to
set up connectivity between their VPCs. All the connectivity options are strictly point-to-point, so
the number of VPC-to-VPC connections can grow quickly. As you grow the number of workloads
that run on AWS, you must be able to scale your networks across multiple accounts and VPCs to
keep up with the growth.
Though you can use VPC peering to connect pairs of VPCs, managing point-to-point connectivity
across many VPCs without the ability to centrally manage the connectivity policies can be
operationally costly and difficult. For on-premises connectivity, you must attach your VPN to each
individual VPC. This solution can be time-consuming to build and difficult to manage when the
number of VPCs grows into the hundreds.
To solve this problem, you can use AWS Transit Gateway to simplify your networking model. With
AWS Transit Gateway, you only need to create and manage a single connection from the central
gateway into each VPC, on-premises data center, or remote office across your network. A transit
gateway acts as a hub that controls how traffic is routed among all the connected networks,
which act like spokes. This hub-and-spoke model significantly simplifies management and
reduces operational costs because each network only needs to connect to the transit gateway
and not to every other network. Any new VPC is connected to the transit gateway, and is then
automatically available to every other network that is connected to the transit gateway. This ease
of connectivity makes it easier to scale your network as you grow.
Activity: Label the diagram

[Activity diagram: an unlabeled network with a public subnet (10.0.1.0/24), a private subnet (10.0.2.0/24), an IP address to identify, a route table with a local route and a 0.0.0.0/0 route whose target is blank, and a connection to the internet; fill in the missing labels]
See if you can recognize the different VPC networking components that you learned about by
labeling this network diagram.
Activity: Solution

[Solution diagram: inside the AWS Cloud, a Region contains an Availability Zone with a VPC; the VPC contains a public subnet (10.0.1.0/24) whose route table sends traffic through an internet gateway to the internet]
Recorded Amazon VPC demonstration
Now that you know how to design a VPC, watch the demonstration at https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/ILT-TF-100-ACFNDS-20-EN/Module_5_VPC_Wizard+v2.0.mp4 to learn how to use the VPC Wizard to set up a VPC with public and private subnets.
You can build security into your VPC architecture in several ways so that you have complete
control over both incoming and outgoing traffic. In this section, you learn about two Amazon VPC
firewall options that you can use to secure your VPC: security groups and network access control
lists (network ACLs).
Security groups (1 of 2)

[Diagram: a security group surrounding an instance in a public subnet (10.0.1.0/24) of a VPC (10.0.0.0/16) in an Availability Zone of a Region]
A security group acts as a virtual firewall for your instance, and it controls inbound and outbound
traffic. Security groups act at the instance level, not the subnet level. Therefore, each instance in
a subnet in your VPC can be assigned to a different set of security groups.
At the most basic level, a security group is a way for you to filter traffic to your instances.
Security groups (2 of 2)
• Security groups have rules that control inbound and outbound instance traffic.
• Default security groups deny all inbound traffic (except from network interfaces in the same security group) and allow all outbound traffic.
• Security groups are stateful.

Default security group inbound rules:

Source        Protocol   Port range   Description
sg-xxxxxxxx   All        All          Allow inbound traffic from network interfaces assigned to the same security group.

Default security group outbound rules:

Destination   Protocol   Port range   Description
0.0.0.0/0     All        All          Allow all outbound IPv4 traffic.
::/0          All        All          Allow all outbound IPv6 traffic.
Security groups have rules that control the inbound and outbound traffic. When you create a
security group, it has no inbound rules. Therefore, no inbound traffic that originates from another
host to your instance is allowed until you add inbound rules to the security group. By default, a
security group includes an outbound rule that allows all outbound traffic. You can remove the
rule and add outbound rules that allow specific outbound traffic only. If your security group has
no outbound rules, no outbound traffic that originates from your instance is allowed.
Security groups are stateful, which means that state information is kept even after a request is
processed. Thus, if you send a request from your instance, the response traffic for that request is
allowed to flow in regardless of inbound security group rules. Responses to allowed inbound
traffic are allowed to flow out, regardless of outbound rules.
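The stateful behavior described above can be illustrated with a hypothetical connection-tracking sketch (the function names and flow-tuple layout are illustrative, not an AWS API): a reply to a request that was allowed out is admitted even though no inbound rule matches it.

```python
# Flows already allowed out, remembered as (peer, peer_port, our_port).
tracked = set()

def allow_outbound(our_port: int, peer: str, peer_port: int) -> bool:
    """Default security group behavior: all outbound traffic is allowed."""
    tracked.add((peer, peer_port, our_port))  # remember the flow
    return True

def allow_inbound(peer: str, peer_port: int, our_port: int,
                  inbound_rules: tuple = ()) -> bool:
    """Responses to tracked requests pass regardless of inbound rules."""
    if (peer, peer_port, our_port) in tracked:
        return True  # stateful: reply to an allowed request
    return (peer, our_port) in inbound_rules  # otherwise rules decide

allow_outbound(54321, "93.184.216.34", 443)
print(allow_inbound("93.184.216.34", 443, 54321))  # True (tracked reply)
print(allow_inbound("198.51.100.7", 40000, 22))    # False (no rule, no flow)
```

A network ACL, being stateless, would re-evaluate that reply against its inbound rules instead of consulting a flow table.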
When you create a custom security group, you can specify allow rules, but not deny rules. All
rules are evaluated before the decision to allow traffic.
[Diagram: a network ACL at the boundary of a public subnet (10.0.0.0/24) in a VPC (10.0.0.0/16) in an Availability Zone of a Region]
A network access control list (network ACL) is an optional layer of security for your Amazon VPC. It
acts as a firewall for controlling traffic in and out of one or more subnets. To add another layer of
security to your VPC, you can set up network ACLs with rules that are similar to your security
groups.
Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a
subnet with a network ACL, the subnet is automatically associated with the default network ACL.
You can associate a network ACL with multiple subnets; however, a subnet can be associated with
only one network ACL at a time. When you associate a network ACL with a subnet, the previous
association is removed.
• A network ACL has separate inbound and outbound rules, and each rule can
either allow or deny traffic.
• Default network ACLs allow all inbound and outbound IPv4 traffic.
• Network ACLs are stateless.
Inbound:
Rule  Type              Protocol  Port Range  Source     Allow/Deny
100   All IPv4 traffic  All       All         0.0.0.0/0  ALLOW
*     All IPv4 traffic  All       All         0.0.0.0/0  DENY

Outbound:
Rule  Type              Protocol  Port Range  Destination  Allow/Deny
100   All IPv4 traffic  All       All         0.0.0.0/0    ALLOW
*     All IPv4 traffic  All       All         0.0.0.0/0    DENY
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny
traffic. Your VPC automatically comes with a modifiable default network ACL. By default, it allows
all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. The table shows a default
network ACL.
Network ACLs are stateless, which means that no information about a request is maintained after
a request is processed.
You can create a custom network ACL and associate it with a subnet. By default, each custom
network ACL denies all inbound and outbound traffic until you add rules.
A network ACL contains a numbered list of rules that are evaluated in order, starting with the
lowest numbered rule. The purpose is to determine whether traffic is allowed in or out of any
subnet that is associated with the network ACL. The highest number that you can use for a rule is
32,766. AWS recommends that you create rules in increments (for example, increments of 10 or
100) so that you can insert new rules where you need them later.
For more information about network ACLs, see Network ACLs in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html.
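The ordered evaluation described above can be sketched as follows. This is a simplified model, not an AWS API; the catch-all "*" rule is modeled as the final default, and ports stand in for the full 5-tuple:

```python
# Simplified model of network ACL evaluation: numbered rules are checked in
# ascending order and the FIRST match decides; the catch-all "*" rule denies
# anything no numbered rule matched. ACLs are stateless, so the same rule
# list is consulted for every packet with no connection tracking.

def evaluate_acl(rules, protocol, port):
    # rules: list of (number, protocol, port, action); "all" is a wildcard.
    for _, p, pt, action in sorted(rules):
        if p in (protocol, "all") and pt in (port, "all"):
            return action
    return "DENY"  # the implicit "*" rule

# Custom ACL: allow HTTPS, deny all other TCP.
acl = [(100, "tcp", 443, "ALLOW"), (200, "tcp", "all", "DENY")]
print(evaluate_acl(acl, "tcp", 443))  # ALLOW (rule 100 matches first)
print(evaluate_acl(acl, "tcp", 22))   # DENY  (rule 200)
print(evaluate_acl(acl, "udp", 53))   # DENY  (falls through to "*")
```

Because evaluation stops at the first match, the rule numbers themselves carry meaning, which is why AWS recommends leaving gaps (10, 20, 30, ...) for later insertions.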
Here is a summary of the differences between security groups and network ACLs:
• Security groups act at the instance level, but network ACLs act at the subnet level.
• Security groups support allow rules only, but network ACLs support both allow and deny
rules.
• Security groups are stateful, but network ACLs are stateless.
• For security groups, all rules are evaluated before the decision is made to allow traffic. For
network ACLs, rules are evaluated in number order before the decision is made to allow
traffic.
Now, it’s your turn! In this scenario, you are a small business owner with a website that is hosted
on an Amazon Elastic Compute Cloud (Amazon EC2) instance. You have customer data that is
stored on a backend database that you want to keep private.
See if you can design a VPC that meets the following requirements:
• Your web server and database server must be in separate subnets.
• The first address of your network must be 10.0.0.0. Each subnet must have 256 IPv4
addresses.
• Your customers must always be able to access your web server.
• Your database server must be able to access the internet to make patch updates.
• Your architecture must be highly available and use at least one custom firewall layer.
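One way to sanity-check the addressing requirement is with Python's standard ipaddress module. The subnet roles below are one possible layout (two Availability Zones, web and database tiers), not the only valid answer:

```python
# Carve /24 subnets (256 addresses each) out of a VPC CIDR whose first
# address is 10.0.0.0, using only the standard library.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]  # a /24 holds 256 addresses
roles = ["public web (AZ a)", "private db (AZ a)",
         "public web (AZ b)", "private db (AZ b)"]
for role, net in zip(roles, subnets):
    print(f"{role}: {net} ({net.num_addresses} addresses)")
```

This confirms that /24 subnets satisfy the "256 IPv4 addresses" requirement and that the first subnet starts at 10.0.0.0 (note that AWS reserves five addresses in each subnet, so fewer than 256 are usable).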
Lab 2: Build Your VPC and Launch a Web Server
You will now work on Lab 2: Build Your VPC and Launch a Web Server.
Lab 2: Scenario
In this lab, you use Amazon VPC to create your own VPC and add some
components to produce a customized network. You create a security
group for your VPC. You also create an EC2 instance and configure it
to run a web server and to use the security group. You then launch the
EC2 instance into the VPC.
Lab 2: Tasks
• Create a VPC.
• Create a VPC security group.
[Diagram: a web server in a security group, a NAT gateway, private subnet 1 (10.0.1.0/24), and private subnet 2 (10.0.3.0/24). The private route table contains:

Destination  Target
10.0.0.0/16  Local
0.0.0.0/0    NAT gateway]
It is now time to start the lab. It should take you approximately 30 minutes to complete the lab.
Lab debrief: Key takeaways
Amazon Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web
service. It is designed to give developers and businesses a reliable and cost-effective way to route
users to internet applications by translating names (like www.example.com) into the numeric IP
addresses (like 192.0.2.1) that computers use to connect to each other. In addition, Amazon
Route 53 is fully compliant with IPv6. See more on the Domain Name System at
https://aws.amazon.com/route53/what-is-dns/.
You can use Amazon Route 53 to configure DNS health checks so that you can route traffic to
healthy endpoints or independently monitor the health of your application and its endpoints.
Amazon Route 53 traffic flow helps you manage traffic globally through several routing types,
which can be combined with DNS failover to enable various low-latency, fault-tolerant
architectures. You can use Amazon Route 53 traffic flow’s simple visual editor to manage how
your users are routed to your application’s endpoints—whether in a single AWS Region or
distributed around the globe.
Amazon Route 53 also offers Domain Name Registration—you can purchase and manage domain
names (like example.com), and Amazon Route 53 will automatically configure DNS settings for
your domains.
Here is the basic pattern that Amazon Route 53 follows when a user initiates a DNS request. The
DNS resolver checks with your domain in Route 53, gets the IP address, and returns it to the user.
Amazon Route 53 supports several types of routing policies, which determine how Amazon Route
53 responds to queries:
• Simple routing (round robin) – Use for a single resource that performs a given function for your
domain (such as a web server that serves content for the example.com website).
• Weighted round robin routing – Use to route traffic to multiple resources in proportions that
you specify. Enables you to assign weights to resource record sets to specify the frequency
with which different responses are served. You might want to use this capability to do A/B
testing, which is when you send a small portion of traffic to a server where you made a
software change. For instance, suppose you have two record sets that are associated with one
DNS name: one with weight 3 and one with weight 1. In this case, 75 percent of the time,
Amazon Route 53 will return the record set with weight 3, and 25 percent of the time, Amazon
Route 53 will return the record set with weight 1. Weights can be any number between 0 and
255.
• Latency routing (LBR) – Use when you have resources in multiple AWS Regions and you want
to route traffic to the Region that provides the best latency. Latency routing works by routing
your customers to the AWS endpoint (for example, Amazon EC2 instances, Elastic IP addresses,
or load balancers) that provides the fastest experience based on actual performance
measurements of the different AWS Regions where your application runs.
• Geolocation routing – Use when you want to route traffic based on the location of your users.
When you use geolocation routing, you can localize your content and present some or all of
your website in the language of your users. You can also use geolocation routing to restrict the
distribution of content to only the locations where you have distribution rights. Another
possible use is for balancing the load across endpoints in a predictable, easy-to-manage way,
so that each user location is consistently routed to the same endpoint.
• Geoproximity routing – Use when you want to route traffic based on the location of your
resources and, optionally, shift traffic from resources in one location to resources in another.
• Failover routing (DNS failover) – Use when you want to configure active-passive failover. Amazon
Route 53 can help detect an outage of your website and redirect your users to alternate locations
where your application is operating properly. When you enable this feature, Amazon Route 53
health-checking agents will monitor each location or endpoint of your application to determine
its availability. You can take advantage of this feature to increase the availability of your customer-
facing application.
• Multivalue answer routing – Use when you want Route 53 to respond to DNS queries with up to
eight healthy records that are selected at random. You can configure Amazon Route 53 to return
multiple values—such as IP addresses for your web servers—in response to DNS queries. You can
specify multiple values for almost any record, but multivalue answer routing also enables you to
check the health of each resource so that Route 53 returns only values for healthy resources. It's
not a substitute for a load balancer, but the ability to return multiple health-checkable IP
addresses is a way to use DNS to improve availability and load balancing.
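The 75/25 split in the weighted example above can be checked with a short simulation. This is a sketch of weight-proportional selection, not Route 53's actual implementation:

```python
# Pick a record set with probability weight / total, then verify that
# weights 3 and 1 produce roughly a 75/25 split over many queries.
import random

def pick_record(records, rng):
    # records: list of (name, weight)
    total = sum(w for _, w in records)
    roll = rng.uniform(0, total)
    for name, weight in records:
        roll -= weight
        if roll <= 0:
            return name
    return records[-1][0]

records = [("server-a", 3), ("server-b", 1)]
rng = random.Random(0)  # fixed seed so the demo is repeatable
picks = [pick_record(records, rng) for _ in range(100_000)]
share_a = picks.count("server-a") / len(picks)
print(f"server-a share: {share_a:.2f}")  # close to 0.75
```

Setting one weight to 0 removes that record set from rotation, which is a common way to drain traffic from a server before maintenance.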
[Diagram: Amazon Route 53 directs a user to one of two load balancers, some-elb-name.us-west-2.elb.amazonaws.com or some-elb-name.ap-southeast-2.elb.amazonaws.com, using the record:

Name         Type   Value
example.com  ALIAS  some-elb-name.us-west-2.elb.amazonaws.com]
Multi-Region deployment is an example use case for Amazon Route 53. With Amazon Route 53,
the user is automatically directed to the Elastic Load Balancing load balancer that’s closest to the
user.
Amazon Route 53 enables you to improve the availability of your applications that run on AWS
by:
• Configuring backup and failover scenarios for your own applications.
• Enabling highly available multi-Region architectures on AWS.
• Creating health checks to monitor the health and performance of your web applications, web
servers, and other resources. Each health check that you create can monitor one of the
following: the health of a specified resource, such as a web server; the status of other health
checks; or the status of an Amazon CloudWatch alarm.
This diagram indicates how DNS failover works in a typical architecture for a multi-tiered web
application. Route 53 passes traffic to a load balancer, which then distributes traffic to a fleet of
EC2 instances.
You can do the following tasks with Route 53 to ensure high availability:
1. Create two DNS records for the Canonical Name Record (CNAME) www with a routing policy
of Failover Routing. The first record is the primary route policy, which points to the load
balancer for your web application. The second record is the secondary route policy, which
points to your static Amazon S3 website.
2. Use Route 53 health checks to make sure that the primary is running. If it is, all traffic defaults
to your web application stack. Failover to the static backup site would be triggered if either
the web server goes down (or stops responding), or the database instance goes down.
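The failover decision in the steps above can be sketched as follows. Hostnames are hypothetical; in practice Route 53 health-checking agents drive the decision:

```python
# Sketch of active-passive DNS failover: the primary record (the web
# application's load balancer) is returned while its health check passes;
# otherwise the secondary record (the static S3 backup site) is returned.

def primary_is_healthy(web_server_up, database_up):
    # Failover is triggered if EITHER the web server stops responding
    # or the database instance goes down.
    return web_server_up and database_up

def resolve_www(web_server_up, database_up,
                primary="webapp-elb.example.elb.amazonaws.com",
                secondary="backup.s3-website.example.amazonaws.com"):
    if primary_is_healthy(web_server_up, database_up):
        return primary
    return secondary

print(resolve_www(True, True))    # primary: the web application stack
print(resolve_www(True, False))   # secondary: database is down
print(resolve_www(False, True))   # secondary: web server is down
```

The key point is that the health check covers the whole application path, not just the front-end host, so a database outage also triggers the failover.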
The purpose of networking is to share information between connected resources. So far in this
module, you learned about VPC networking with Amazon VPC. You learned about the different
options for connecting your VPC to the internet, to remote networks, to other VPCs, and to AWS
services.
Content delivery occurs over networks, too—for example, when you stream a movie from your
favorite streaming service. In this final section, you learn about Amazon CloudFront, which is a
content delivery network (CDN) service.
[Diagram: a request from a user's client travels hop by hop through several routers before reaching the origin server; each hop adds latency]
As explained earlier in this module when you were learning about AWS Direct Connect, one of
the challenges of network communication is network performance. When you browse a website
or stream a video, your request is routed through many different networks to reach an origin
server. The origin server (or origin) stores the original, definitive versions of the objects
(webpages, images, and media files). The number of network hops and the distance that the
request must travel significantly affect the performance and responsiveness of the website.
Further, network latency is different in various geographic locations. For these reasons, a content
delivery network might be the solution.
A content delivery network (CDN) is a globally distributed system of caching servers. A CDN
caches copies of commonly requested files (static content, such as Hypertext Markup Language,
or HTML; Cascading Style Sheets, or CSS; JavaScript; and image files) that are hosted on the
application origin server. The CDN delivers a local copy of the requested content from the cache
edge or point of presence (PoP) that provides the fastest delivery to the requester.
CDNs also deliver dynamic content that is unique to the requester and is not cacheable. Having a
CDN deliver dynamic content improves application performance and scaling. The CDN establishes
and maintains secure connections closer to the requester. If the CDN is on the same network as
the origin, routing back to the origin to retrieve dynamic content is accelerated. In addition,
content such as form data, images, and text can be ingested and sent back to the origin, thus
taking advantage of the low-latency connections and proxy behavior of the PoP.
Amazon CloudFront
Amazon CloudFront is a fast CDN service that securely delivers data, videos, applications, and
application programming interfaces (APIs) to customers globally with low latency and high
transfer speeds. It also provides a developer-friendly environment. Amazon CloudFront delivers
files to users over a global network of edge locations and Regional edge caches. Amazon
CloudFront is different from traditional content delivery solutions because it enables you to
quickly obtain the benefits of high-performance content delivery without negotiated contracts,
high prices, or minimum fees. Like other AWS services, Amazon CloudFront is a self-service
offering with pay-as-you-go pricing.
Edge locations
Amazon CloudFront delivers content through a worldwide network of data centers that are called
edge locations. When a user requests content that you serve with CloudFront, the user is routed
to the edge location that provides the lowest latency (or time delay) so that content is delivered
with the best possible performance. CloudFront edge locations are designed to serve popular
content quickly to your viewers.
As objects become less popular, individual edge locations might remove those objects to make
room for more popular content. For the less popular content, CloudFront has Regional edge
caches. Regional edge caches are CloudFront locations that are deployed globally and are close to
your viewers. They are located between your origin server and the global edge locations that
serve content directly to viewers. A Regional edge cache has a larger cache than an individual
edge location, so objects remain in the Regional edge cache longer. More of your content
remains closer to your viewers, which reduces the need for CloudFront to go back to your origin
server and improves overall performance for viewers.
For more information about how Amazon CloudFront works, see How CloudFront Delivers
Content in the AWS Documentation at
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.
html#HowCloudFrontWorksContentDelivery.
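The two-tier lookup described above can be sketched as a toy cache hierarchy. This is a simplified model of the behavior, not CloudFront's implementation:

```python
# Toy model of CloudFront's lookup order: an edge location checks its own
# small cache first, then the larger Regional edge cache, and only goes
# back to the origin server as a last resort.

class TieredCDN:
    def __init__(self):
        self.edge = {}       # small per-edge-location cache
        self.regional = {}   # larger Regional edge cache
        self.origin_fetches = 0

    def get(self, key):
        if key in self.edge:
            return self.edge[key], "edge"
        if key in self.regional:
            self.edge[key] = self.regional[key]  # re-populate the edge
            return self.edge[key], "regional edge cache"
        self.origin_fetches += 1                 # last resort: the origin
        obj = f"content:{key}"
        self.regional[key] = obj
        self.edge[key] = obj
        return obj, "origin"

cdn = TieredCDN()
print(cdn.get("/logo.png")[1])  # origin (first request)
print(cdn.get("/logo.png")[1])  # edge (now cached)
cdn.edge.clear()                # object evicted from the edge as it cools
print(cdn.get("/logo.png")[1])  # regional edge cache, not the origin
print(cdn.origin_fetches)       # 1
```

Even after the edge location evicts the object, the Regional edge cache absorbs the miss, which is why the origin sees only one fetch.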
Amazon CloudFront charges are based on actual usage of the service in four areas:
• Data transfer out – You are charged for the volume of data that is transferred out from
Amazon CloudFront edge locations, measured in GB, to the internet or to your origin (both
AWS origins and other origin servers). Data transfer usage is totaled separately for specific
geographic regions, and then cost is calculated based on pricing tiers for each area. If you use
other AWS services as the origins of your files, you are charged separately for your use of
those services, including storage and compute hours.
• HTTP(S) requests – You are charged for the number of HTTP(S) requests that are made to
Amazon CloudFront for your content.
• Invalidation requests – You are charged per path in your invalidation request. A path that is
listed in your invalidation request represents the URL (or multiple URLs if the path contains a
wildcard character) of the object that you want to invalidate from CloudFront cache. You can
request up to 1,000 paths each month from Amazon CloudFront at no additional charge.
Beyond the first 1,000 paths, you are charged per path that is listed in your invalidation
requests.
• Dedicated IP custom Secure Sockets Layer (SSL) – You pay $600 per month for each custom SSL
certificate that is associated with one or more CloudFront distributions that use the Dedicated
IP version of custom SSL certificate support. This monthly fee is prorated by the hour. For
example, if your custom SSL certificate was associated with at least one CloudFront
distribution for just 24 hours (that is, 1 day) in the month of June, your total charge for using
the custom SSL certificate feature in June is (1 day / 30 days) * $600 = $20.
For the latest pricing information, see the Amazon CloudFront pricing page at
https://aws.amazon.com/cloudfront/pricing/.
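The two worked charges above reduce to simple arithmetic. The per-path invalidation price used below is a placeholder, not a quoted AWS rate, so check the pricing page for current numbers:

```python
# Reproduce the invalidation and dedicated-IP SSL charge calculations.

def invalidation_charge(paths, price_per_path, free_paths=1000):
    # The first 1,000 paths each month are free; the rest are charged
    # per path listed in the invalidation requests.
    return max(0, paths - free_paths) * price_per_path

def dedicated_ip_ssl_charge(days_used, days_in_month=30, monthly_fee=600):
    # The $600 monthly fee is prorated by usage within the month.
    return (days_used / days_in_month) * monthly_fee

print(invalidation_charge(1200, price_per_path=0.005))  # 200 billable paths
print(dedicated_ip_ssl_charge(1))  # 20.0, the June example from the text
```

The second call reproduces the text's example: one day of use in a 30-day month is (1 / 30) * $600 = $20.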
Module wrap-up
Module 5: Networking and Content Delivery
It’s now time to review the module, and wrap up with a knowledge check and a discussion of a
practice certification exam question.
Module summary
In summary, in this module you learned how to:
• Recognize the basics of networking
• Describe virtual networking in the cloud with Amazon VPC
• Label a network diagram
• Design a basic VPC architecture
• Indicate the steps to build a VPC
• Identify security groups
• Create your own VPC and add additional components to it to produce a customized
network
• Identify the fundamentals of Amazon Route 53
• Recognize the benefits of Amazon CloudFront
Choice  Response
A       AWS Config
B       Amazon Route 53
D       Amazon VPC
Look at the answer choices and rule them out based on the keywords.
The following are the keywords to recognize: “AWS networking service” and “create a virtual
network”.
Additional resources
If you want to learn more about the topics covered in this module, you might find the following
additional resources helpful:
• Amazon VPC Overview page: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-
amazon-vpc.html
• Amazon Virtual Private Cloud Connectivity Options whitepaper:
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-
options/introduction.html
• One to Many: Evolving VPC Design AWS Architecture blog post:
https://aws.amazon.com/blogs/architecture/one-to-many-evolving-vpc-design/
• Amazon VPC User Guide: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-
amazon-vpc.html
• Amazon CloudFront overview page: https://aws.amazon.com/cloudfront/?nc=sn&loc=1
Thank you