SAA‐C02 Notes
These are my personal notes from Adrian Cantrill's (SAA-C02) course, with learning aids from aws-sa-associate-saac02. There may
be errors, so please purchase his course to get the original content and show support: https://learn.cantrill.io.
Table of Contents
Cloud‐Computing‐Fundamentals
AWS‐Fundamentals
IAM‐Accounts‐AWS‐Organizations
Simple‐Storage‐Service‐S3
Virtual‐Private‐Cloud‐VPC
Elastic‐Cloud‐Compute‐EC2
Containers‐and‐ECS
Advanced‐EC2
Route‐53
Relational‐Database‐Service‐RDS
Network‐Storage‐EFS
HA‐and‐Scaling
Serverless‐and‐App‐Services
CDN‐and‐Optimization
Advanced‐VPC
Hybrid‐and‐Migration
Security‐Deployment‐Operations
NoSQL‐and‐DynamoDB
Cloud‐Computing‐Fundamentals
Cloud computing provides five essential characteristics:
1. On‐Demand Self‐Service: Provision and terminate using a UI/CLI without human interaction.
2. Broad Network Access: Access services over any network on any device using standard protocols and methods.
3. Resource Pooling: Economies of scale, cheaper service.
4. Rapid Elasticity: Scale up and down automatically in response to system load.
5. Measured Service: Usage is measured. Pay only for what you consume.
1. On-Premises: The individual manages all components from data to facilities. Provides the most flexibility, but is also the most IT
intensive.
2. Data Center Hosting: Place equipment in a building managed by a vendor. You pay for the facilities only.
3. Infrastructure as a Service (IaaS): The vendor manages the facilities and everything else related to servers up to the OS. You pay the
vendor per second or minute for the OS you use. You lose some flexibility, but get big risk reductions.
4. Platform as a Service (PaaS): Good for running an application only. The unit of consumption is the runtime environment. You
manage the application and the data, but the vendor manages everything else.
5. Software as a Service (SaaS): You consume the software as a service. This can be Outlook or Netflix. There are almost no risks or
additional costs, but very little control.
There are additional services such as Function as a Service, Container as a Service, and Database as a Service which will be explained
later.
AWS‐Fundamentals
Public Internet: AWS is a public cloud platform and connected to the public internet. It is not on the public internet, but is next to
it.
AWS Public Zone: Attached to the Public Internet. S3 buckets are hosted in the Public Zone; not all services are. Just because you
connect to a public service, that does not mean you have permissions to access it.
AWS Private Zone: No direct connectivity is allowed between the AWS Private Zone and the public cloud unless this is
configured for that service. This is done by taking a part of the private service and projecting it into the AWS public zone which
allows public internet to make inbound or outbound connections.
Regions
AWS Region is an area of the world they have selected for a full deployment of AWS infrastructure.
Ohio
California
Singapore
Beijing
London
Paris
AWS can only deploy regions as fast as their planning allows. Regions are often not near their customers.
Edge Locations: Local distribution points. Useful for services such as Netflix so they can store data closer to customers for low latency, high speed
transfers.
If a customer wants to access data stored in Brisbane, they will stream data from the Sydney Region through an Edge Location
hosted in Brisbane.
AWS Management
Regions are connected together with high speed networking. Some services such as EC2 need to be selected in a region. Some
services are global, such as IAM.
Region's 3 Benefits
Geographical Separation
Useful for natural disasters
Provide isolated fault domain
Regions are 100% isolated
Geopolitical Separation
Different laws change how things are accessed
Stability from political events
Location Control
Tune architecture for performance
Duplicate infrastructure at closer points to customers
Availability Zones (AZs)
AWS will provide between 2 and 6 AZs per region. AZs are isolated compute, storage, networking, power, and facilities. Components
can be distributed across multiple zones for load and resilience.
AZs are connected to each other with high speed redundant networks.
Service Resilience
1. Globally Resilient: IAM or Route 53. Designed to keep operating even through major failures. Data is replicated throughout multiple regions.
2. Region Resilient: Operate as separate services in each region. Generally replicate data to multiple AZs in that region.
3. AZ Resilient: Run from a single AZ. It is possible for hardware to fail in an AZ and the service to keep running because of
redundant equipment, but should not be relied on.
One default VPC per region. Can have many custom VPCs which are all private by default.
VPC CIDR ‐ defines start and end ranges of the VPC. IP CIDR of a default VPC is always: 172.31.0.0/16
Subnets in the default VPC are each given one section of the VPC's IP range. In general do not use the Default VPC in a region because it is
not flexible.
Default VPC is large because it uses the /16 range. A subnet is smaller, such as /20. The higher the / number is, the smaller the
grouping.
Two /17's will fit into a /16, sixteen /20 subnets can fit into one /16.
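As a quick check of that prefix arithmetic, here is a minimal sketch using Python's standard ipaddress module with the default VPC CIDR from above (no AWS calls involved):

```python
import ipaddress

# Default VPC CIDR
vpc = ipaddress.ip_network("172.31.0.0/16")

# Sixteen /20 subnets fit into one /16
subnets = list(vpc.subnets(new_prefix=20))
print(len(subnets))               # 16
print(subnets[0])                 # 172.31.0.0/20
print(subnets[0].num_addresses)   # 4096 addresses per /20
```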
The unit of consumption in EC2 is an instance. An EC2 instance is configured to launch into a single VPC subnet. EC2 is a private service by default;
public access must be configured, and the VPC needs to support public access. If you use a custom VPC then you must handle the
networking on your own.
Different sizes and capabilities all use On‐Demand Billing ‐ Per second. Only pay for what you consume.
Charge for running the instance, CPU, memory and storage. Extra cost for any commercial software the instance deploys with.
CPU
Memory
Storage
Networking
Running State
Stopped State
Terminated State
An AMI (Amazon Machine Image) contains:
Permissions: control which accounts can and can't use the AMI.
Owner - Implicit allow, only the owner can use it to spin up new instances
Block Device Mapping: links the volumes that the AMI has and how they're presented to the operating system. Determines
which volume is a boot volume and which volumes are data volumes.
Connecting to EC2
Login to the instance using an SSH key pair. Private Key ‐ Stored on local machine to initiate connection. Public Key ‐ AWS places this
key on the instance.
S3 is object storage, not file or block storage. You can't mount an S3 Bucket.
Objects
Other components:
Version ID
Metadata
Access Control
Sub resources
Buckets
If the object's name starts with a slash such as /old/Koala1.jpg the UI will present this as a folder. In actuality this is not true; there
are no folders.
CloudFormation Basics
Templates can be used to create, update, and delete infrastructure.
## Can control the command line UI. The bigger your template, the more likely
## this section is needed
Metadata:
template metadata
## Prompt the user for more data. Name of something, size of instance,
## data validation
Parameters:
set of parameters
## Decision making in the template. Things will only occur if a condition is met.
## Step 1: create condition
## Step 2: use the condition to do something else in the template
Conditions:
set of conditions
Transform:
set of transforms
Resources
Resources:
  Instance: ## Logical Resource
    Type: 'AWS::EC2::Instance' ## This is what will be created
    Properties: ## Configure the resources in a particular way
      ImageId: !Ref LatestAmiId
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
Once a template is created, AWS will make a stack. This is a living and active representation of a template. One template can create
an almost infinite number of stacks.
For any Logical Resource in the stack, CloudFormation will make a corresponding Physical Resource in your AWS account.
It is CloudFormation's job to keep the logical and physical resources in sync.
A template can be updated and then used to update the same stack.
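As an illustration of that template-to-stack flow, here is a minimal boto3 sketch; the stack name, template file, and parameter value are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:      # hypothetical template file
    template_body = f.read()

# Creating a stack builds a physical resource for every logical resource
cfn.create_stack(
    StackName="demo-stack",           # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
)

# The same (or an updated) template can later be used to update the same stack:
# cfn.update_stack(StackName="demo-stack", TemplateBody=template_body)
```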
CloudWatch Basics
Collects and manages operational data on your behalf.
Namespace
Container for monitoring data. Naming can be anything so long as it's not a reserved AWS/service name such as AWS/EC2, which is used for all
metric data of that service.
Metric
CPU Usage
Network IN/OUT
Disk IO
A metric is not for a specific server; it can collect data points from many different servers.
Timestamp = 2019‐12‐03
Value = 98.3
Dimensions separate data points for different things or perspectives within the same metric
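A minimal sketch of publishing a custom data point with a dimension into a custom namespace using boto3; the namespace, metric name, and instance ID are made up for illustration:

```python
import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp",    # custom namespace; cannot start with "AWS/"
    MetricData=[{
        "MetricName": "CPUUsage",
        # Dimension separates this data point by the server it came from
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Timestamp": datetime.now(timezone.utc),
        "Value": 98.3,
        "Unit": "Percent",
    }],
)
```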
Alarms
Has two states, ok or alarm. A state change can send an SNS notification or trigger an action. A third state, insufficient data, is not necessarily a problem; just wait for more data.
High Availability (HA): Aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
Instead of diagnosing the issue, swap it out.
Redundant hardware to minimize downtime
User disruption is not ideal, but is allowed
The user might need to log back in or lose some data on their screen.
Maximizing a system's uptime
99.9% (Three 9's) = 8.76 hours downtime per year.
99.999% (Five 9's) = 5.26 minutes downtime per year.
Fault-Tolerance (FT)
A system can continue operating properly in the event of the failure of some (one or more faults within) of its
components.
Fault tolerance is much more complicated than high availability and more expensive. Outages must be minimized and the
system needs levels of redundancy.
An airplane is an example of a system that needs Fault Tolerance. It has more engines than it needs for redundancy.
Example: A patient is waiting for a life saving surgery and is under anesthetic. While being monitored, the life support system is
dosing medicine. This type of system cannot just be highly available; even a moment of interruption is deadly.
Disaster Recovery (DR): Set of policies, tools and procedures to enable the recovery or continuation of vital technology infrastructure and systems
following a natural or human-induced disaster.
DR can largely be automated to eliminate the time for recovery and errors.
This involves:
Pre‐planning
Ensure plans are in place for extra hardware
Do not store backups at the same site as the system
DR Processes
Cloud machines ready when needed
This is designed to keep the crucial and non replaceable parts of the system in place.
DNS Client: Piece of software running on the OS for a device you're using.
Resolver: Software on your device or server which queries DNS on your behalf.
Zone: A part of the DNS database.
This would be www.amazon.com
What the data is, the substance
Zonefile: physical database for a zone
How physically that data is stored
Nameserver: where zonefiles are hosted
Steps:
Find the Nameserver which hosts a particular Zonefile. Query that Nameserver for a record within that Zone. It then passes the
information back to the client.
DNS Root
The starting point of DNS. DNS names are read right to left with multiple parts separated by periods.
www.netflix.com.
The period is assumed to be there in a browser when it's not present. The DNS Root is hosted on the DNS Root Servers (13). These are
hosted by 12 major companies.
Root Hints is a pointer to the DNS Root server
Process
1. DNS client asks DNS Resolver for IP address of a given DNS name.
2. Using the Root Hints file, the DNS Resolver communicates with one or more of the root servers to access the root zone and
begin the process of finding the IP address.
The Root Zone is organized by IANA ﴾Internet Assigned Numbers Authority﴿. Their job is to manage the contents of the root zone.
IANA is in charge of the DNS system because they control the root zone.
DNS Hierarchy
Assuming a laptop is querying DNS directly for www.amazon.com and using a root hints file to know how to access a root server
and query the root zone.
The top level domain (e.g. .com) is the part of the DNS name directly to the left of the root.
A Registry maintains the zones for a TLD (e.g. .ORG). A Registrar has relationships with the TLD zone manager (e.g. .org), allowing domain
registration.
Route53 Fundamentals
Registers domains
Can Host Zone Files on managed nameservers
This is a global service, no need to pick a region
Globally Resilient
Can operate with failure in one or more regions
Register Domains
Route 53 will check with the top level domain to see if the name is available
Route 53 creates a zonefile for the domain to be registered
Allocates nameservers for that zone
Generally four of these for one individual zone
This is a hosted zone
The zone file will be put on these four managed nameservers
Route 53 will communicate with the .org registry and add the nameserver records into the zone file for the top level domain.
This is done with a nameserver record.
Route53 Details
DNS Record
Nameserver ﴾NS﴿: Allows delegation to occur in the DNS.
A and AAAA Records: Map a hostname to an IPv4 (A) or IPv6 (AAAA) address. Most of the time you will make both types of record, A and AAAA.
CNAME Record Type: Allows DNS shortcuts to reduce admin overhead. CNAMEs cannot point directly at an IP address, only at
another name.
MX records: How emails are sent. They have two main parts:
Priority: Lower values for the priority field are higher priority.
Value
If it is just a host, it will not have a dot on the right. It is assumed to be part of the same zone as the host.
If you include a dot on the right, it is a fully qualified domain name
TXT Record: Allows you to add arbitrary text to a domain. One common usage is to prove domain ownership.
TTL is a numeric setting on DNS records, in seconds. It allows the admin to specify how long the query result can be cached at the resolver
server. If you need to update the records, it is smart to lower the TTL value first.
If another client queries the same thing, they will get back a Non‐Authoritative response.
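To make the record types and TTL concrete, here is a minimal boto3 sketch that upserts an A record into a Route 53 hosted zone; the hosted zone ID, name, and IP are hypothetical:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.org.",
                "Type": "A",
                "TTL": 300,                  # a low TTL makes later changes take effect faster
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```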
IAM‐Accounts‐AWS‐Organizations
When an identity attempts to access AWS resources, that identity needs to prove who it is to AWS, a process known as
Authentication. Once authenticated, that identity is known as an authenticated identity
Statement Components
Statement ID (SID): Optional field that should help describe:
The resource you're interacting with
The actions you're trying to perform
Effect: is either allow or deny.
It is possible to be allowed and denied at the same time.
Actions are formatted service:operation . There are three options:
specific individual action
wildcard as an action
list of multiple independent actions
Resource: similar to action except for format arn:aws:s3:::catgifs
Priority Level
IAM Users
Identity used for anything requiring long‐term AWS access
Humans
Applications
Service Accounts
If you can name a thing that uses the AWS account, it is probably an IAM user.
When a principal wants to request to perform an action, it will authenticate against an identity within IAM. An IAM user is an
identity which can be used in this way.
ARNs (Amazon Resource Names) allow you to refer to a single resource or a group of resources. This prevents individual resources from the same account but in different
regions from being confused.
arn:partition:service:region:account‐id:resource‐id
arn:partition:service:region:account‐id:resource‐type/resource‐id
arn:partition:service:region:account‐id:resource‐type:resource‐id
arn:aws:s3:::catgifs
This references an actual bucket
arn:aws:s3:::catgifs/*
This refers to objects in that bucket, but not the bucket itself.
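A minimal sketch combining the statement components and ARN forms above into an inline identity policy attached with boto3; the user name, policy name, and SID are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCatGifsRead",                       # hypothetical statement ID
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],     # list of service:operation actions
        "Resource": [
            "arn:aws:s3:::catgifs",                      # the bucket itself
            "arn:aws:s3:::catgifs/*",                    # the objects in the bucket
        ],
    }],
}

iam.put_user_policy(
    UserName="sally",                                    # hypothetical IAM user
    PolicyName="CatGifsRead",
    PolicyDocument=json.dumps(policy),
)
```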
IAM FACTS
IAM Groups
Containers for users. You cannot log in to IAM groups. They have no credentials of their own. Used solely for management of IAM
users.
AWS merges all of the policies from all groups the user is in together.
GROUPS ARE NOT A TRUE IDENTITY. THEY CAN'T BE REFERENCED AS A PRINCIPAL IN A POLICY.
An S3 Resource cannot grant access to a group, it is not an identity. Groups are used to allow permissions to be assigned to IAM
users.
IAM Roles
A single thing that uses an identity is an IAM User.
IAM Roles are also identities, but are used by large groups of individuals. If you have more than 5000 principals, it could be a candidate
for an IAM Role.
IAM Users can have inline or managed policies which control which permissions the identity gets within AWS
Trust Policy: Specifies which identities are allowed to assume the role.
Permissions Policy: Specifies what the role is allowed to do.
If an identity is allowed on the Trust Policy, it is given a set of Temporary Security Credentials. Similar to access keys except
they are time limited to expire. The identity will need to renew them by reassuming the role.
Every time the Temporary Security Credentials are used, the access is checked against the Permissions Policy. If you change
the policy, the permissions of the temp credentials also change.
Roles are real identities and can be referenced within resource policies.
Security Token Service (sts:AssumeRole) is what generates the temporary security credentials (TSC).
When a role is assumed, sts:AssumeRole generates temporary keys which can then be used to access services such as CloudWatch and S3.
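A minimal sketch of that flow with boto3: assume a role via STS, then use the returned temporary security credentials with another service. The role ARN and session name are hypothetical:

```python
import boto3

sts = boto3.client("sts")

# sts:AssumeRole returns temporary security credentials (time limited)
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/EmergencyAccess",  # hypothetical role
    RoleSessionName="break-glass-session",
    DurationSeconds=3600,
)

creds = resp["Credentials"]

# Use the temporary credentials; access is checked against the role's permissions policy
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```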
Break Glass Situation - There is a key for something the team does not normally have access to. When you break the glass, you must
have a reason to do so. A role can act as an Emergency Role which will allow further access if it's really needed.
You may have an existing identity provider you are trying to allow access to. This may offer SSO ﴾Single Sign On﴿ or over 5000
identities. This is useful to reuse your existing identities for AWS. External accounts can't be used to access AWS directly. To solve this,
you allow an IAM role in the AWS account to be assumed by one of the active directories. ID Federation allowing an external
service the ability to assume a role.
Web Identity Federation uses IAM roles to allow broader access. These allow you to use an existing web identity such as google,
facebook, or twitter to grant access to the app. We can trust these web identities and allow those identities to assume an IAM role to
access web resources such as DynamoDB. No AWS Credentials are stored on the application. Can scale quickly and beyond.
You can use a role in the partner account and use that to upload objects to AWS resources.
AWS Organizations
Without an organization, each AWS account needs its own set of IAM users as well as individual payment methods. If you have
more than 5 to 10 accounts, you would want to use an org.
Take a single AWS account standard AWS account and create an org. The standard AWS account then becomes the master
account. The master account can invite other existing standard AWS accounts. They will need to approve their joining to the org.
When standard AWS accounts become part of the org, they become member accounts. Organizations can only have one master
account and zero or more member accounts.
Organization Root
This is a container that can hold AWS member accounts or the master account. It could also contain organizational units which
can contain other units or member accounts.
Consolidated billing
The individual billing for the member accounts is removed and they pass their billing to the master account. Inside an AWS
organization, you get a single monthly bill for the master account which covers the billing for every member account. Can offer a discount
with consolidation of reservations and volume discounts.
Adding accounts in an organization is easy with only an email needed. You no longer need IAM users in each account. You can use
IAM roles to switch between accounts. It is best to have a single AWS account used only for login. Larger enterprises may use a dedicated login account
while smaller ones may use the master.
Role Switching
The master account cannot be restricted by SCPs, which means it should not be used to run workloads because that is a security risk.
SCPs limit what the account, including root can do inside that account. They don't grant permissions themselves, just act as a
barrier.
When you enable SCPs on your org, AWS applies FullAWSAccess. This means SCPs have no effect because nothing is restricted. They
have zero influence by themselves.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }
}
SCPs by themselves don't grant permissions. When SCPs are enabled, there is an implicit deny.
You must then add any services you want to Deny such as DenyS3
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "*"
  }
}
Deny List is a good default because it allows for the use of growing services offered by AWS. A lot less admin overhead.
The alternative is an Allow List: remove FullAWSAccess and explicitly allow only the services needed. More secure, but more admin overhead.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "ec2:*"
      ],
      "Resource": "*"
    }
  ]
}
CloudWatch Logs
This is a public service; it can be used from AWS VPCs or on-premises environments.
Comes with some AWS integrations. Security is provided with IAM roles or service roles. Can generate metrics based on logs using a metric
filter.
Need logging sources such as external APIs or databases. This sends information as log events. These are stored in log streams.
This is a sequence of log events from the same source.
Log Groups are containers for multiple logs streams of the same type of logging. This also stores configuration settings such as
retention settings and permissions.
Once the settings are defined on a log group, they apply to all log streams in that log group. Metric filters are also applied on the log
groups.
CloudTrail Essentials
Concerned with who did what.
Logs API calls or activities as CloudTrail Event
Stores the last 90 days of events in the Event History. This is enabled by default at no additional cost.
To customize the service you need to create a new trail. There are two types of events; by default only Management Events are logged.
Management Events: Provide information about management operations performed on resources in the AWS account. Create
an EC2 instance or terminating one.
Data Events: Objects being uploaded to S3 or a Lambda function being invoked. This is not enabled by default and must be
enabled for that trail.
CloudTrail Trail
Logs events for the AWS region it is created in. It is a regional service.
Most services log events in the region they occur. The trail then must be a one region trail in that region or an all region trail to log
that event.
A small number of services log events globally to one region. Global services such as IAM or STS or CloudFront always log their
events to us‐east‐1
AWS services are largely split into regional services or global services.
When the services log, they log in the region they are created or to us‐east‐1 if they are a global service.
A trail can store events in an S3 bucket as a compressed JSON file. It can also use CloudWatch Logs to output the data.
CloudTrail can create an organizational trail. This allows a single management point for all the API and management
events for that org.
CloudTrail Pricing
https://aws.amazon.com/cloudtrail/pricing/
Simple‐Storage‐Service‐﴾S3﴿
S3 Security
S3 is private by default! The only identity which has any initial access to an S3 bucket is the account root user of the account
which owns that bucket.
S3 Bucket Policy
Each bucket can only have one policy, but it can have multiple statements.
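A minimal sketch of applying a resource (bucket) policy with boto3; the bucket name and statement are hypothetical examples of a public-read policy:

```python
import json
import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",                     # resource policies name a Principal
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::catgifs/*",
    }],
}

# Each bucket holds at most one policy, so this call replaces any existing one
s3.put_bucket_policy(Bucket="catgifs", Policy=json.dumps(bucket_policy))
```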
ACLs (Legacy)
A way to apply a subresource to objects and buckets. These are legacy and AWS does not recommend their use. They are inflexible
and only allow simple permissions.
S3 Exam PowerUp
Identity
Bucket
S3 Static Hosting
Normal access is via AWS APIs. Static website hosting allows access via HTTP using a web browser.
When you enable static website hosting you need two HTML files:
index document
default page returned from a website
entry point for most websites
error document
similar to index, but only when something goes wrong
The website endpoint name is influenced by the bucket name and the region it is in. This cannot be changed.
You can use a custom domain for a bucket, but then the bucket name matters. The name of the bucket must match the domain.
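A minimal sketch of enabling static website hosting on an existing bucket with boto3; the bucket and file names are hypothetical, and the bucket would still need a policy allowing public reads:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-static-site-bucket",                      # hypothetical bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},       # default page / entry point
        "ErrorDocument": {"Key": "error.html"},          # returned when something goes wrong
    },
)
```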
Offloading
Instead of using EC2 to host an entire website, the compute service can generate a HTML file which points to the resources hosted
on a static bucket. This ensures the media is retrieved from S3 and not EC2.
Out‐of‐band pages
This may be an error page to display maintenance if the server goes offline. We could then change our DNS and move customers to
a backup website on S3.
S3 Pricing
Versioning
The latest or current version is always returned when an object version is not requested.
When an object is deleted, AWS puts a delete marker on the object and hides all previous versions. You could delete this marker to
restore the object.
To fully delete an object, you must delete all the versions of that object using their version IDs.
MFA Delete
Enabled within version configuration in a bucket. This means MFA is required to change bucket versioning state. MFA is required to
delete versions of an object.
In order to change a version state or delete a particular version of an object, you need to provide the serial number of your MFA
token as well as the code it generates. These are concatenated and passed with any API calls.
S3 Performance Optimization
Single PUT Upload
Multipart Upload
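A minimal sketch of a multipart upload using boto3's transfer configuration, which splits large files into parts uploaded in parallel (so a failed part can be retried without restarting the whole upload); the thresholds, file, and bucket names are hypothetical:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100 MB
    multipart_chunksize=25 * 1024 * 1024,    # 25 MB parts
)

# upload_file handles the part splitting, parallelism, and retries for us
s3.upload_file("backup.tar", "catgifs", "backups/backup.tar", Config=config)
```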
S3 Accelerated Transfer
Off by default.
Uses the network of AWS edge locations to speed up transfer.
Bucket name cannot contain periods.
Name must be DNS compatible.
Benefits improve the greater the distance between the client and the bucket's region.
The worse the initial connection, the greater the performance benefit.
Encryption 101
Encryption at Rest
Encryption in Transit
Terms
Symmetric Encryption
The key is handed from one entity to another before the data. This is difficult because the key needs to be transferred securely. If the
data is time sensitive, the key needs to be arranged beforehand.
Asymmetric Encryption
The public key is uploaded to cloud storage. The data is encrypted and sent back to the original entity. The private key can decrypt
the data.
This is secure because stolen public keys can only encrypt data. Private keys must be handled securely.
Signing
Steganography
Encryption is obvious when used. There is no denying that the data was encrypted. Someone could force you to decrypt the data
packet.
A file can be hidden in an image or other file. It is difficult to find the message unless you know what to look for.
One party would take another party's public key and encrypt some data to create ciphertext. That ciphertext can be hidden in
another file so long as both parties know how the data will be hidden.
KMS does not store the DEK, once provided to a user or service, it is discarded. KMS doesn't actually perform the encryption or
decryption of data using the DEK or anything past generating them.
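A minimal sketch of that DEK flow with boto3: KMS returns the data encryption key in both plaintext and encrypted form, and it is your job (or the service's) to use the plaintext copy locally and then discard it. The CMK alias is hypothetical:

```python
import boto3

kms = boto3.client("kms")

resp = kms.generate_data_key(
    KeyId="alias/MyApp1",        # hypothetical CMK alias
    KeySpec="AES_256",
)

plaintext_dek = resp["Plaintext"]       # use locally to encrypt data, then discard
encrypted_dek = resp["CiphertextBlob"]  # store alongside the encrypted data; KMS keeps no copy
```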
Architecture
KMS can create an alias which is a shortcut to a particular CMK. Aliases are also per region. You can create a MyApp1 alias in all
regions but they would be separate aliases, and in each region it would be pointing potentially at a different CMK.
Object Encryption
Buckets aren't encrypted, objects are. Multiple objects in a bucket can use different encryption methods.
Two main methods of encryption S3 is capable of supporting. Both types are encryption at rest. Data sent from a user to S3 is
automatically encrypted in transit outside of these methods.
Client‐Side encryption
Server‐Side encryption
SSE-C (Server-Side Encryption with Customer-Provided Keys):
1. When placing an object in S3, you provide an encryption key and a plaintext object
2. Once the key and object arrive, it is encrypted.
3. A hash of the key is taken and attached to the object. The hash can identify if the specific key was used to encrypt the object.
4. The key is then discarded after the hash is taken.
5. The encrypted object and the one-way hash are stored persistently on storage.
To decrypt the object, you must tell S3 which object to decrypt and provide it with the key used to encrypt it. If the key that you
supply is correct, the proper hash, S3 will decrypt the object, discard the key, and return the plaintext version of the object.
SSE-S3 (AES256): AWS handles both the encryption and decryption process as well as the key generation and management. This provides very little
control over how the keys are used, but has little admin overhead.
Not good for regulatory environment where keys and access must be controlled.
No way to control key material rotation.
No role separation. A full S3 admin can decrypt data and open objects.
SSE-KMS: Much like SSE-S3, AWS handles both the keys and encryption process, but KMS handles the master key rather than S3. The first time
an object is uploaded, S3 works with KMS to create an AWS managed CMK. This is the default key which gets used in the future.
Every time an object is uploaded, S3 uses a dedicated key to encrypt that object and that key is a data encryption key which KMS
generates using the CMK. The CMK does not need to be managed by AWS and can be a customer managed CMK.
1. S3 is provided a plaintext version of the data encryption key as well as an encrypted version.
2. The data is encrypted with the plaintext key and the key discarded.
3. The encrypted key is stored alongside the encrypted object.
When uploading an object, you can create and use a customer managed CMK. This allows the user to control the permissions and
the usage of the key material. In regulated industries, this is reason enough to use SSE‐KMS You can also add logging and see any
calls against this key from CloudTrail.
The best benefit is the role separation. To decrypt any object, you need access to the CMK that was used to generate the unique key
that encrypted them. The CMK is used to decrypt the data encryption key for that object. That decrypted data encryption key is used
to decrypt the object itself. If you don't have access to KMS, you don't have access to the object.
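A minimal sketch of requesting each server-side encryption mode on upload with boto3; the bucket, key, and CMK alias are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 manages the keys entirely (AES256)
s3.put_object(
    Bucket="catgifs",
    Key="koala1.jpg",
    Body=open("koala1.jpg", "rb"),
    ServerSideEncryption="AES256",
)

# SSE-KMS: a KMS CMK (here a customer managed one) protects the per-object data keys
s3.put_object(
    Bucket="catgifs",
    Key="koala2.jpg",
    Body=open("koala2.jpg", "rb"),
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/MyApp1",   # hypothetical CMK alias; omit to use the AWS managed default
)
```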
S3 Standard
Default AWS storage class that's used in S3, and should be your default as well.
S3 Standard is region resilient, and can tolerate the failure of an AZ.
Objects are replicated to at least 3+ AZs when they are uploaded.
99.999999999% (11 nines) durability
99.99% availability
Offers low latency and high throughput.
No minimums, delays, or penalties.
Billing is storage fee, data transfer fee, and request based charge.
All of the other storage classes trade some of these compromises for another.
S3 Standard‐IA
Designed for data that isn't accessed often, long term storage, backups, disaster recovery files. The requirement for data to be safe is
most important.
One Zone‐IA
Designed for data that is accessed less frequently but needed quickly.
80% of the base cost of Standard‐IA.
Same minimum size and duration fee as Standard‐IA
Data is only stored in a single AZ, no 3+ AZ replication.
99.5% availability, lower than Standard‐IA
If data is easily creatable from a primary data set, this would be a great place to store the output from another data set.
S3 Glacier
Retrieval methods:
S3 Intelligent‐Tiering
This is good for objects whose access pattern is unknown.
Transition Actions
Objects must flow downwards, they can't flow in the reverse direction.
Expiration Actions
Once an object has been uploaded and changed, you can purge older versions after 90 days to keep costs down.
S3 Replication
There are two types of S3 replication available.
Architecture for both is similar, only difference is if both buckets are in the same account or different accounts.
The replication configuration is applied to the source bucket and configures S3 to replicate from this source bucket to a destination
bucket. It also configures the IAM role to use for the replication process. The role is configured to allow the S3 service to assume it
based on its trust policy. The role's permission policy allows it to read objects on the source bucket and replicate them to the
destination bucket.
When different accounts are used, the role is not by default trusted by the destination account. If configuring between accounts, you
must add a bucket policy on the destination account to allow the IAM role from the source account access to the bucket.
S3 Replication Options
SRR ‐ Log Aggregation SRR ‐ Sync production and test accounts SRR ‐ Resilience with strict sovereignty requirements CRR ‐ Global
resilience improvements CRR ‐ Latency reduction
S3 Presigned URL
A way to give another person or application access to an object inside an S3 bucket using your credentials in a safe way.
security credentials
bucket name
object key
expiry date and time
indicate how the object or bucket will be accessed
S3 will create a presigned URL and return it. This URL will have encoded inside it the details that IAM admin provided. It will be
configured to expire at a certain date and time as requested by the IAM admin user.
You can create a presigned URL for an object you do not have access to. The object will not allow access because your user
does not have access.
When using the URL, the permissions match those of the identity that generated it at the moment the object is
being accessed.
If you get an access deny it means the ID never had access, or lost it.
Don't generate presigned URLs with an IAM role.
The role will likely expire before the URL does.
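A minimal sketch of generating a presigned GET URL with boto3; the bucket, key, and expiry are hypothetical, and the URL inherits whatever permissions the generating identity has when it is used:

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "catgifs", "Key": "koala1.jpg"},
    ExpiresIn=3600,     # seconds until the URL expires
)
print(url)  # hand this to someone who has no AWS credentials of their own
```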
If you retrieve a 5TB object, it takes time and consumes 5TB of data. Filtering at the client side doesn't reduce this cost.
S3 and Glacier select lets you use SQL‐like statements to select part of the object which is returned in a filtered way. The filtering
happens at the S3 service itself saving time and data.
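A minimal sketch of S3 Select with boto3, filtering a CSV object server-side so only matching rows cross the network; the bucket, key, columns, and SQL are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="catgifs",
    Key="data/animals.csv",                  # hypothetical large CSV object
    ExpressionType="SQL",
    Expression="SELECT s.name FROM S3Object s WHERE s.species = 'koala'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; only the filtered records are returned
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```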
Virtual‐Private‐Cloud‐VPC
Networking Refresher
There are just over 4 billion IPv4 addresses. This was not very flexible because an allocation was either too small or too large for some corporations.
Some IP addresses were always left unused.
Classful Addressing
Class A range
Starts at 0.0.0.0 and ends at 127.255.255.255 .
Split into 128 class A networks
Handed out to large companies
Class B Range
Half the range of class A.
Starts at 128.0.0.0 and ends at 191.255.255.255 .
Class C Range
Half of range class B
Starts at 192.0.0.0 and ends at 223.255.255.255 .
Internet / Private IPs ‐ RFC1918
These can't communicate over the internet and are used internally only
CIDR networks are represented by the starting IP address of the network called the network address and the prefix.
10.0.0.0/16 is the equivalent of 1234 as a password. You should consider other ranges that people might use to ensure it does
not overlap.
Packets
Contains:
source IP address
destination IP address
data the source IP wants to communicate with the destination IP.
TCP/UDP Segment has a source and destination port number. This allows devices to have multiple conversations at the same time. In
AWS when data goes through network devices, filters can be set based on IP addresses and port numbers.
2001:0db8:28ac:0000:0000:82ae:3910:7334
The value is hex and there are two octets per spacing or one hextet. The redundant zeros can be removed to create:
2001:0db8:28ac:0:0:82ae:3910:7334
2001:0db8:28ac::82ae:3910:7334
Each address is 128 bits long. They are addressed by the start of the network and the prefix. Since each grouping is 16 bits, we
can multiply the number of groups by 16 to work out the prefix.
What size should the VPC be? This will limit its use.
Are there any networks we can't use?
Be mindful of ranges other VPCs use or are used in other cloud environments
Try to predict the future uses.
VPC structure with tiers and resilience ﴾availability﴿ zones
VPC min /28 network (16 IPs)
VPC max /16 (65,536 IPs)
Avoid common ranges such as 10.0 and 10.1, up to and including 10.10
Suggest starting of 10.16 for a nice clean base 2 number.
Reserve 2+ network ranges per region being used per account. Think of the highest region you will operate in and add extra as a
buffer.
A subnet is located in one availability zone. Try to split each subnet into tiers (web, application, db, spare). Since each region has at
least 3 AZs, it is a good practice to split the network 4 ways. This allows for at least one subnet in each of 3 AZs, plus
one spare. Taking a /16 range and splitting it 16 ways will make each subnet a /20.
Custom VPC
Regional Isolated and Resilient Service.
Operates from all AZs in that region
Allows isolated networks inside AWS.
Nothing IN or OUT of a VPC without explicit configuration.
Isolated blast radius. Any problems are limited to that VPC or anything connected to it.
Flexible configuration
Hybrid networking to allow connection to other cloud or on‐prem networking.
Default or Dedicated Tenancy. This refers to how the hardware is configured.
Default allows on a per resource decision later on.
Dedicated locks any resourced created in that VPC to be on dedicated hardware which comes at a cost premium.
The VPC DNS is available on the base IP address of the VPC + 2. If the VPC is 10.0.0.0 then the DNS IP will be 10.0.0.2.
If true, instances with public IPs in a VPC are given public DNS hostnames.
If false, this is not available.
VPC Subnets
AZ Resilient subnetwork of a VPC.
If the AZ fails, the subnet and services also fail.
High availability needs multiple components into different AZs.
1 subnet can only have 1 AZ.
1 AZ can have zero or many subnets.
IPv4 CIDR is a subset of the VPC CIDR block.
Cannot overlap with any other subnets in that VPC
Subnet can optionally be allocated IPv6 CIDR block.
(256 /64 subnets can fit in the /56 VPC)
Subnets can communicate with other subnets in the VPC by default.
Reserved IP addresses
There are five IP addresses within every VPC subnet that you cannot use: the network address, network +1 (VPC router), network +2 (DNS), network +3 (reserved for future use), and the last address (broadcast). Whatever the size of the subnet, the usable IP addresses are five fewer
than you expect.
DHCP Options Set: This is how computing devices receive IP addresses automatically. There is one options set applied to a VPC at a time and this
configuration flows through to subnets.
You can change which options set is associated and create new ones, but you cannot edit an existing one.
If you want to change the settings
You can create a new one
Change the VPC allocation to the new one
Delete the old one
IP allocation Options
Auto Assign public IPv4 address
This will allocate a public IP address in addition to the private IP address.
This is needed to make a subnet public.
Auto Assign IPv6 address
For this to work, the subnet and VPC need an allocation of addresses.
Route tables defines what the VPC router will do with traffic when data leaves that subnet. A VPC is created with a main route table.
If you don't associate a custom route table with a subnet, it uses the main route table of the VPC.
If you do associate a custom route table you create with a subnet, then the main route table is disassociated. A subnet can only have
one route table associated at a time, but a route table can be associated by many subnets.
Route Tables
When traffic leaves the subnet that this route table is associated with, the VPC router reviews the IP packets looking for the
destination address. The traffic will try to match the route against the route table. If there is more than one matching route, the
prefix is used as a priority. The higher the prefix, the more specific the route, thus the higher the priority. If the target says local,
that means the destination is in the VPC itself. Local routes can never be updated; they're always present and the local route always
takes priority. This is the exception to the prefix rule.
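A minimal sketch of adding a default route to a custom route table and associating it with a subnet using boto3; the route table, gateway, and subnet IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# A default route (0.0.0.0/0) has the least specific prefix, so any more specific
# route, including the local VPC route, wins over it.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",        # hypothetical custom route table
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",           # hypothetical internet gateway
)

# Associating the custom route table replaces the main route table for this subnet
ec2.associate_route_table(
    RouteTableId="rtb-0123456789abcdef0",
    SubnetId="subnet-0123456789abcdef0",
)
```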
Internet Gateway
A managed service that allows gateway traffic between the VPC and the internet or AWS Public Zone services (S3, SQS, SNS, etc.)
Using IGW
The public address is not actually configured on the EC2 instance itself. Instead, the IGW creates a record that links the instance's
private IP to the public IP. This is why when you log in to an EC2 instance it only sees the private IP address. This is IMPORTANT. For IPv4
the public address is not configured in the OS.
When the linux instance wants to communicate with the linux update service, it makes a packet of data. The packet has a source
address of the EC2 instance and a destination address of the linux update server. At this point the packet is not configured with any
public addressing and could not reach the linux update server.
The IGW sees this is from the EC2 instance and analyzes the source IP address. It changes the packet source IP address from the
linux EC2 server and puts on the public IP address that is routed from that instance. The IGW then pushes that packet on the public
internet.
On the return, the inverse happens. As far as the remote server is concerned, it does not know about the private address and instead uses the
instance's public IP address.
If the instance uses an IPv6 address, that public address is configured on the instance itself. The IGW does not translate the packet and simply
forwards it.
Bastion Host / Jumpbox: an instance in a public subnet inside a VPC. These are used to allow incoming management connections. Once connected, you
can then go on to access internal only VPC resources. Used as a management point or as an entry point for a private only VPC.
This is an inbound management point. Can be configured to only allow specific IP addresses or to authenticate with SSH. It can also
integrate with your on-premises identity service.
Network Access Control Lists (NACLs)
All VPCs have a default NACL, which is associated with all subnets of that VPC by default. NACLs are used when traffic enters or leaves
a subnet. Since they are attached to a subnet and not a resource, they only filter data as it crosses in or out. If two EC2 instances in a
VPC communicate, the NACL does nothing because it is not involved.
Rules are processed in order, the one with the lowest rule number first. As soon as one rule is matched, the processing
stops for that particular piece of traffic.
The action can be to allow or deny the traffic.
type
protocol: tcp, udp, or icmp
port range
Inbound rule: Source ‐ who traffic is from
Outbound rule: Destination ‐ who traffic is destined to
Examples:
ssh: tcp port 22
http: tcp port 80
https: tcp port 443
ping traffic: icmp
If all of those fields match, then the first rule will either allow or deny.
The rule at the bottom with * is the implicit deny. This cannot be edited and is defaulted on each rule list. If no other rules match
the traffic being evaluated, it will be denied.
NACLs are processed in order starting at the lowest rule number until it gets to the catch all. A rule with a lower rule number will be
processed before another rule with a higher rule number.
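A minimal sketch of adding an inbound NACL rule with boto3, illustrating the rule-number ordering described above; the NACL ID and rule values are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Rule 100 is evaluated before rule 200; the first match wins, and anything that
# matches no rule hits the implicit deny (*) at the bottom.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",    # hypothetical NACL
    RuleNumber=100,
    Egress=False,                            # inbound rule
    Protocol="6",                            # TCP
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",                   # source for an inbound rule
    PortRange={"From": 443, "To": 443},      # https
)
```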
Security Groups
SGs are boundaries which can filter traffic.
Attached to a resource and not a subnet.
SGs have two sets of rules like NACLs.
SGs are stateful.
Only one inbound rule is needed.
They see traffic and response as the same thing.
They understand AWS logical resources, so they're not limited to IP traffic only.
Can have a source and destination referencing the instance and not the IP.
Default SG is created in a VPC to allow all traffic.
Does so by referencing itself. Anything this SG is attached to is matched by this rule.
SGs have a hidden implicit Deny.
Anything that is not allowed in the rule set for the SG is implicitly denied.
SGs cannot explicitly deny anything.
NACLs are used in conjunction with SGs to add explicit denies.
SGs vs NACL
NACLs are used when products cannot use SGs, e.g. NAT Gateways.
NACLs are used when adding explicit deny, such as bad IPs or bad actors.
SGs are the default almost everywhere because they are stateful.
NACLs are associated with a subnet and only filter traffic that crosses that boundary. If the resource is in the same subnet, it will
not do anything.
NAT (Network Address Translation): IP masquerading hides a whole CIDR block behind one IP. This allows many IPv4 addresses to use one public IP for outgoing internet
access. Incoming connections don't work; outgoing connections can get a response returned.
Elastic‐Cloud‐Compute‐EC2
EC2 provides Infrastructure as a Service (IaaS Product)
Virtualization 101
Servers are configured in three sections without virtualization.
CPU hardware
Kernel
Operating system
Runs in privileged mode and can interact with the hardware directly.
User Mode
Runs applications.
Can make a system call to the Kernel to interact with the hardware.
If an app tries to interact with the hardware without a system call, it will cause a system error and can crash the server or at
minimum the app.
The Host OS operated on the HW and included a hypervisor (HV). SW ran in privileged mode and had full access to the HW. Guest OSes were
wrapped in a VM and had devices mapped into their OS to emulate real HW. Drivers such as graphics cards were all SW emulated to
allow the process to run properly.
The guest OS still believed they were running on real HW and tried to take control of the HW. The areas were not real and only
allocated space to them for the moment.
The HV performs binary translation. System calls are intercepted and translated in SW on the way. The guest OS needs no
modification, but slows down a lot.
Para‐Virtualization
Guest OS are modified and run in HV containers, except they do not use slow binary translation. The OS is modified to change the
system calls to user calls. Instead of calling on the HW, they call on the HV using hypercalls. Areas of the OS call the HV instead
of the HW.
The physical HW itself is virtualization aware. The CPU has specific functions so the HV can come in and support. When guest OS
tries to run privileged instructions, they are trapped by the CPU and do not halt the process. They are redirected to the HV from the
HW.
What matters for a VM is the input and output operations such as network transfer and disk IO. The problem is multiple OS try to
access the same piece of hardware but they get caught up on sharing.
SR-IOV allows a network or any card to present itself as many mini cards. As far as the HW is concerned, they are real dedicated cards for
their use. No translation needs to be done by the HV. The physical card handles it all. In EC2 this feature is called enhanced
networking.
When instances are provisioned within a specific subnet within a VPC A primary elastic network interface is provisioned in a subnet
which maps to the physical hardware on the EC2 host. Subnets are also within one specific AZ. Instances can have multiple network
interfaces, even within different subnets so long as they're within the same AZ.
An instance runs on a specific host. If you restart the instance it will stay on that host until either:
The host fails or is taken down for maintenance
The instance is stopped and then started (different than restarted)
The instance will then be relocated to another host in the same AZ. Instances cannot move to different AZs. Everything about their
hardware is locked within one specific AZ. A migration is taking a copy of an instance and moving it to a different AZ.
In general instances of the same type and generation will occupy the same host. The only difference will generally be their size.
EC2 Strengths
Long running compute needs. Many other AWS services have run time limits.
Naming Scheme
R5dn.8xlarge - the whole thing is the instance type. When in doubt, give the full instance type. R = family, 5 = generation, dn = additional capabilities, 8xlarge = size.
Storage Refresher
Instance Store
Direct ﴾local﴿ attached storage
Super fast
Ephemeral storage or temporary storage
Elastic Block Store ﴾EBS﴿
Network attached storage
Volumes delivered over the network
Persistent storage lives on past the lifetime of the instance
Block Storage: Volume presented to the OS as a collection of blocks. No structure beyond that. These are mountable and
bootable. The OS will create a file system on top of this, NTFS or EXT3 and then it mounts it as a drive or a root volume on
Linux. Spinning hard disks or SSD. This could also be delivered by a physical volume. Has no built in structure. You can mount
an EBS volume or boot off an EBS volume.
File Storage: Presented as a file share with a structure. You access the files by traversing the storage. You cannot boot from
storage, but you can mount it.
Object Storage: It is a flat collection of objects. An object can be anything with or without attached metadata. To retrieve the
object, you need to provide the key and then the value will be returned. This is not mountable or bootable. It scales very well
and can have simultaneous access.
Storage Performance
This isn't the only part of the chain, but it is a simplification. A system might have a throughput cap. The IOPS might decrease as the
block size increases.
General Purpose SSD (gp2) uses a performance bucket architecture based on the IOPS it can deliver. A gp2 volume starts with 5.4 million I/O credits allocated. It is all
available instantly.
You can consume the capacity quickly or slowly over the life of the volume. The capacity is filled back based upon the volume size.
Min of 100 IOPS added back to the bucket per second.
Above that, there are 3 IOPS/GiB of volume size. The max is 16,000 IOPS. This is the baseline performance
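A tiny sketch of that baseline calculation in pure Python, using only the figures quoted above:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floor of 100 IOPS, cap of 16,000 IOPS."""
    return max(100, min(16_000, 3 * size_gib))

print(gp2_baseline_iops(8))      # 100   (small volumes still get the 100 IOPS floor)
print(gp2_baseline_iops(1000))   # 3000
print(gp2_baseline_iops(6000))   # 16000 (capped at the maximum)
```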
Default for boot volumes and should be the default for data volumes. Can only be attached to one EC2 instance at a time.
Provisioned IOPS SSD (io1): You pay for capacity and the IOPS set on the volume. This is good if your volume size is small but you need a lot of IOPS.
50:1 IOPS to GiB ratio; 64,000 is the max IOPS per volume assuming 16 KiB I/O.
Good for latency sensitive workloads such as mongoDB. Multi‐attach allows them to attach to multiple EC2 instances at once.
great value
great for high throughput vs IOPs
500 GiB ‐ 16 TiB
Neither can be used for EC2 boot volumes.
Good for streaming data on a hard disk.
Media conversion with large amounts of storage.
Frequently accessed high throughput intensive workload
log processing
data warehouses
The access patterns should be sequential
Massive inefficiency for small reads and writes
Two types
st1
Starts at 1 TiB of credit per TiB of volume size.
40 MB/s baseline per TiB
Burst of 250 MB/s per TiB
Max t‐put of 500 MB/s
sc1
Designed for less frequently accessed data, it fills slower.
12 MB/s baseline per TiB
Burst of 80 MB/s per TiB
Max t‐put of 250 MB/s
Each instance has a collection of volumes that are locked to that specific host. If the instance moves, the data doesn't.
The number, size, and performance of instance store volumes vary based on the type of instance used. Some instances do not have
any instance store volumes at all.
Highly available and reliable in an AZ. Can self correct against HW issues.
Persist independently from EC2 instances.
Can be removed or reattached.
You can terminate an instance and keep the data.
Multi‐attach feature of io1
Can create a multi shared volume.
Region resilient backups.
Require up to 64,000 IOPS and 1,000 MiB/s per volume
Require up to 80,000 IOPS and 2,375 MB/s per instance
Snapshots are incremental volume copies to S3. The first is a full copy of data on the volume. This can take some time. EBS won't
be impacted, but will take time in the background. Future snaps are incremental, consume less space and are quicker to perform.
If you delete an incremental snapshot, it moves data to ensure subsequent snapshots will work properly.
Volumes can be created ﴾restored﴿ from snapshots. Snapshots can be used to move EBS volumes between AZs. Snapshots can be
used to migrate data between volumes.
When creating a new EBS volume without a snapshot, the performance is available immediately.
When restoring from S3, performs Lazy Restore
If you restore a volume, it will transfer it slowly in the background.
If you attempt to read data that hasn't been restored yet, it will immediately pull it from S3, but this will achieve lower
levels of performance than reading from EBS directly.
You can force a read of every block (all data) immediately using a tool like dd.
Fast Snapshot Restore ﴾FSR﴿ allows for immediate restoration. You can create 50 of these FSRs per region. When you enable it on a
snapshot, you pick the snapshot specifically and the AZ that you want to be able to do instant restores to. Each combination of
Snapshot and AZ counts as one FSR set. You can have 50 FSR sets per region. FSR is not free and can get expensive with lots of
different snapshots.
Billed using a GB/month metric. 20 GB stored for half a month, represents 10 GB‐month.
This is used data, not allocated data. If you have a 40 GB volume but only use 10 GB, you will only be charged for the 10 GB of used data.
This is not how EBS volumes themselves are billed; volumes are billed on their full allocated size.
The data is incrementally stored which means doing a snapshot every 5 minutes will not necessarily increase the charge as opposed
to doing one every hour.
EBS Encryption
When you don't have EBS encryption, the volume is not encrypted. The physical hardware itself may be performing at rest
encryption, but that is a separate thing.
When you set up an EBS volume initially, EBS uses KMS and a customer master key. This can be the EBS default CMK, which is
referred to as aws/ebs, or it could be a customer managed CMK which you manage yourself.
That key is used by EBS when an encrypted volume is created. The CMK generates an encrypted data encryption key which is stored
on the volume with the physical disk. This key can only be decrypted by KMS when a role with the proper permissions makes the
request.
When the volume is first used, EBS asks KMS to decrypt the key and stores the decrypted key in memory on the EC2 host while it's
being used. At all other times it's stored on the volume in encrypted form.
When the EC2 instance is using the encrypted volume, it can use the decrypted data encryption key to move data on and off the
volume. It is used for all cryptographic operations when data is being used to and from the volume.
If a snapshot is made of an encrypted EBS volume, the same data encryption key is used for that snapshot. Anything made from this
snapshot is also encrypted in the same way.
Every time you create a new EBS volume from scratch, it creates a new data encryption key.
When you launch an instance with Security Groups, they are on the network interface and not the instance.
MAC address
Primary IPv4 private address
From the range of the subnet the ENI is within.
Will be static and not change for the lifetime of the instance
10.16.0.10
Given a DNS name that is associated with the address.
ip‐10‐16‐0‐10.ec2.internal
Only resolvable inside the VPC and always points at private IP address
0 or more secondary private IP addresses
0 or 1 public IPv4 address
The instance must manually be set to receive an IPv4 address or spun into a subnet which automatically allocates an IPv4.
This is a dynamic IP that is not fixed. If you stop an instance the address is removed. When you start up again, it is given a
brand new IPv4 address. Restarting the instance will not change the IP address. Changing between EC2 hosts will change
the address. This will be allocated a public DNS name. The Public DNS name will resolve to the primary private IPv4
address of the instance. Outside of the VPC, the DNS will resolve to the public IP address. This allows one single DNS name
for an instance, and allows traffic to resolve to an internal address inside the VPC and the public will resolve to a public IP
address.
1 elastic IP per private IPv4 address
Can have 1 public Elastic IP address per private IP address on this interface. This is allocated to your AWS account. Can
associate with a private IP on the primary interface or secondary interface. If you are using a public IPv4 address and assign an
elastic IP, the original public IPv4 address will be lost. There is no way to recover the original address.
0 or more IPv6 address on the interface
These are by default public addresses.
Security groups
Applied to network interfaces.
Will impact all IP addresses on that interface.
If you need different IP addresses impacted by different security groups, then you need to make multiple interfaces and
apply different security groups to those interfaces.
Source / destination checks
If traffic is on the interface, it will be discarded if it is not going to or coming from one of the interface's IP addresses
Secondary interfaces function in all the same ways as primary interfaces except you can detach interfaces and move them to other
EC2 instances.
Public DNS for a given instance will resolve to the primary private IP address in a VPC. If you have instance to instance
communication within the VPC, it will never leave the VPC. It does not need to touch the internet gateway.
When you launch an EC2 instance, you are using an Amazon provided AMI.
Can be Amazon or community provided
Marketplace ﴾can include commercial software﴿
Will charge you for the instance cost and an extra cost for the AMI
AMIs are regional with a unique ID.
Controls permissions
Default only your account can use it.
Can be set to be public.
Can have specific AWS accounts on the AMI.
Can create an AMI from an existing EC2 instance to capture the current config.
AMI Lifecycle
1. Launch: EBS volumes are attached to EC2 devices using block IDs.
BOOT /dev/xvda
DATA /dev/xvdf
AMI contains:
Permissions: who can use it, is it public or private
EBS snapshots are created from attached EBS volumes
Snapshots are referenced inside the AMI using block device mapping.
Table of data that links the snapshot IDs that you've just created when making that AMI and it has for each one of
those snapshots, a device ID that the original volumes had on the EC2 instance.
4. Launch: When launching an instance, the snapshots are used to create new EBS volumes in the AZ of the EC2 instance and
contain the same block device mapping.
On‐Demand Instances
Spot Instances
Up to 90% off on‐demand, but depends on the spare capacity. You can set a maximum hourly rate in a certain AZ in a certain region.
If the max price you set is above the spot price, you pay only that spot price for the duration that you consume that instance. As the
spot price increases, you pay more. Once this price increases past your maximum, it will terminate the instance. Great for data
analytics when the process can occur later at a lower use time.
Reserved Instance
Up to 75% off on‐demand. The trade off is commitment. You're buying capacity in advance for 1 or 3 years. Flexibility on how to pay
All up front
Partial upfront
No upfront
Best discounts are for 3 years all up front. Reserved in region, or in an AZ with a capacity reservation. Reserved instances take
priority for AZ capacity. Scheduled reservations are available when you can commit to specific time windows.
Great if you have known steady state usage, e.g. an email server or domain server. Cheapest option when you have no tolerance for disruption.
Vertical Scaling
As customer load increases, the server may need to grow to handle more data. The server can increase in capacity, but this will
require a reboot.
Oftentimes vertical scaling can only occur during planned outages.
Larger instances also carry a $ premium compared to smaller instances.
Instance size is an upper cap on performance.
No application modification is needed.
Works for all applications, even monoliths ﴾all code in one app﴿
Horizontal Scaling
As the customer load increases, this adds additional capacity. Instead of one running copy of an application, you can have multiple
versions running on each server. This requires a load balancer. When customers try to access an application, the load balancer
ensures the servers get equal parts of the load.
Instance Metadata
The EC2 service provides data to instances and it is accessible inside all instances.
Memorize http://169.254.169.254/latest/meta-data/
Meta-data contains information on the environment the instance is in. You can find out about the networking or user-data among
other things. This is not authenticated or encrypted. Anyone who can gain access to the instance can see the meta-data. This can be
restricted by a local firewall.
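As a rough sketch, metadata can be queried from inside an instance with a plain HTTP request. This assumes IMDSv1-style unauthenticated access and the standard `latest/meta-data/` paths; run it on the instance itself.

```python
# Minimal sketch: query EC2 instance metadata from inside an instance (IMDSv1 style).
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def get_metadata(path):
    # Returns the metadata value at the given path as a string.
    with urllib.request.urlopen(METADATA_BASE + path, timeout=2) as response:
        return response.read().decode()

print("instance-id:", get_metadata("instance-id"))
print("private IPv4:", get_metadata("local-ipv4"))
print("public IPv4:", get_metadata("public-ipv4"))  # only present if a public IP is assigned
```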
Containers‐and‐ECS
Intro to Containers
Virtualization Problems
Using an EC2 virtual machine with Nitro Hypervisor, 4 GB ram, and 40 GB disk, the OS can consume 60‐70% of the disk and much
of the available memory. Containers leverage the similarities of multiple guest OS by removing duplicate resources. This allows
applications to run in their own isolated environments.
Image Anatomy
A Docker image is composed of multiple layers and not a monolithic disk image. Each step in a Dockerfile creates a new
filesystem layer on top of the previous one. Images are created from scratch or from a base image. Images contain read-only layers;
images are layered onto images.
A Docker container is essentially a running copy of a Docker image with one additional READ/WRITE layer added on top.
If you have lots of containers with very similar base structures, they will share the parts that overlap. The other layers are reused
between containers.
Container Registry
Registry or hub of container images. A Dockerfile is used to build a container image, which is then stored in the container registry.
Docker hosts can run many containers based on one or more images. A single image can generate Containers on many different
Docker hosts.
ECS Service is configured via Service Definition and represents how many copies of a task you want to run for scaling and HA.
EC2 mode
ECS cluster is created within a VPC. It benefits from the multiple AZs that are within that VPC. You specify an initial size which will
drive an auto scaling group.
ECS using EC2 mode is not a serverless solution, you need to worry about capacity for your cluster.
The container instances are not delivered as a managed service, they are managed as normal EC2 instances. You can use spot pricing
or prepaid EC2 servers.
Fargate mode
Removes more of the management overhead from ECS, no need to manage EC2.
Fargate uses a shared infrastructure platform; all customers run from the same pool of resources, isolated from one another.
Fargate deployment still uses a cluster with a VPC where AZs are specified.
For ECS tasks, they are injected into the VPC. Each task is given an elastic network interface which has an IP address within the VPC.
They then run like a VPC resource.
EC2 mode is good for a large workload if you are price conscious. This allows for spot pricing and prepayment.
Advanced‐EC2
In systems automation, bootstrapping allows the system to self configure. In AWS this is EC2 Build Automation.
This could perform some software installs and post install configs.
Bootstrapping is done using user data and is injected into the instance in the same way that meta‐data is. It is accessed using the
meta‐data IP.
http://169.254.169.254/latest/
EC2 doesn't validate the user data. You can tell EC2 to pass in trash data and the data will be injected. The OS needs to understand
the user data.
Bootstrapping Architecture
An AMI is used to launch an EC2 instance in the usual way to create an EBS volume that is attached to the EC2 instance. This is based
on the block mapping inside the AMI.
Now the EC2 service provides some user data through to the EC2 instance. There is SW within the OS designed to look at the
metadata IP for any user data. If it sees any user data, it executes this on launch of that instance.
This is treated like any other script the OS runs. At the end of running the script, the instance will either be running and configured as instructed, or running with a bad configuration.
EC2 doesn't know what the user data contains, it's just a block of data. The user data is not secure, anyone can see what gets passed
in. For this reason it is important not to pass passwords or long term credentials.
User data is limited to 16 KB in size. Anything larger than this will need to pass a script to download the larger set of data.
User data can be modified by stopping the instance, changing the user data, and starting it again. However, the updated user data
won't be executed automatically, since user data is only processed at the initial launch of the instance.
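As a hedged sketch, user data is just a script handed to the instance at launch. The example below passes a short shell script through boto3's run_instances; the AMI ID and instance type are placeholders, not values from the course.

```python
# Minimal sketch: launch an instance with bootstrapping user data via boto3.
# The AMI ID and instance type below are placeholders.
import boto3

user_data_script = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data_script,          # executed by the OS at first launch; boto3 base64-encodes it
)
```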
Boot‐Time‐To‐Service‐Time
How quickly after you launch an instance is it ready for service? This includes the time for EC2 to configure the instance and any
software downloads that are needed for the user. When looking at an AMI, this can be measured in minutes.
AMI baking will front load the time needed by configuring as much as possible.
AWS::CloudFormation::Init
cfn‐init is a helper script installed on EC2 OS. This is a simple configuration management system.
It is executed like any other command by being passed into the instance as part of the user data. It retrieves its directives from
the CloudFormation stack; you define this data in a CloudFormation template section called AWS::CloudFormation::Init .
cfn‐init explained
Starts off with a CloudFormation template. This has a logical resource within it which creates an EC2 instance, and that resource has a
specific section called Metadata holding the AWS::CloudFormation::Init directives. The UserData passed into the instance invokes cfn-init,
and CloudFormation passes variables (such as the stack name) into that user data.
cfn-init knows the desired state and can work towards a final configuration. Because it is desired-state based, it can be re-run as the
CloudFormation metadata for the resource changes.
If you pass in user data, there is no way for CloudFormation to know if the EC2 instance was provisioned properly. It may be marked
as complete, but the instance could be broken.
A CreationPolicy is something which is added to a logical resource inside a CloudFormation template. You create it and supply a
timeout value.
This waits for a signal from the resource itself before moving to a create complete state.
EC2 Instance Roles
IAM roles are the best practice ways for services to be granted permissions. EC2 instance roles are roles that an instance can assume
and anything running in that instance has the permissions that role grants.
Starts with an IAM role with a permissions policy. EC2 instance role allows the EC2 service to assume that role.
The instance profile is the item that allows the permissions to get inside the instance. When you create an instance role in the
console, an instance profile is created with the same name.
When IAM roles are assumed, you are provided temporary credentials based on the permissions assigned to that role. These credentials
are passed through instance meta-data.
EC2 and the Secure Token Service ensure the credentials are renewed before they expire, so valid credentials are always available inside the instance.
SSM Parameter Store
Key facts
Parameter types:
String
StringList
SecureString
Can store license codes, database strings, and full configs and passwords.
Allows for hierarchies and versioning.
Can store plaintext and ciphertext.
This integrates with KMS to encrypt passwords (SecureString parameters).
Allows for public parameters, such as the latest AMI ID, to be stored and referenced for EC2 creation.
Is a public service, so anything using it needs access to the AWS public space endpoints or needs to be an AWS public service.
Applications, EC2 instances, lambda functions can all request access to parameter store.
Tied closely to IAM, can use
Long term credentials such as access keys.
Short term use of IAM roles.
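As a rough sketch, a parameter can be read with boto3. The parameter name below is a placeholder, and WithDecryption only matters for SecureString values.

```python
# Minimal sketch: read a (possibly encrypted) parameter from SSM Parameter Store.
import boto3

ssm = boto3.client("ssm")

response = ssm.get_parameter(
    Name="/myapp/database/password",   # hypothetical hierarchical parameter name
    WithDecryption=True,               # decrypts SecureString values via KMS
)
print(response["Parameter"]["Value"])
```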
CloudWatch Agent is required for OS visible data. It sends this data into CloudWatch. For CloudWatch to function, it needs configuration and
permissions in addition to having the CW agent installed. The CW agent needs to know what information to inject into CW and CW
Logs.
The agent also needs some permissions to interact with AWS. This is done with an IAM role as best practice. The IAM role has
permissions to interact with CW logs. The IAM role is attached to the instance which provides the instance and anything running on
the instance, permissions to manage CW logs.
The data requested is then injected in CW logs. There is one log group for each individual log we want to capture. There is one log
stream for each group for each instance that needs management.
We can use parameter store to store the configuration for the CW agent.
Cluster Placement
Best practice is to launch all of the instances within that group at the same time. If you launch with 9 instances and AWS places you
in a place with capacity for 12, you are now limited in how many you can add.
Cluster placements need to be part of the same AZ. Cluster placement groups are generally the same rack, but they can even be the
same EC2 host.
All members have direct connections to each other. They can achieve 10 Gbps single stream vs 5 Gbps normally. They also have
the lowest latency and max PPS possible in AWS.
Clusters can't span AZs. The first AZ used will lock down the cluster.
They can span VPC peers.
Requires a supported instance type.
Best practice to use the same type of instance and launch all at once.
This is the only way to achieve 10Gbps SINGLE stream, other data metrics assume multiple streams.
Spread Placement
This provides the best resilience and availability. Spread groups can span multiple AZs. Information will be put on distinct racks with
their own network or power supply. There is a limit of 7 instances per AZ. The more AZs in a region, the more instances inside a
spread placement group.
Use case: small number of critical instances that need to be kept separated from each other. Several mirrors of an application
Partition Placement
If a problem occurs with one rack's networking or power, it will at most take out one instance.
The main difference is you can launch as many instances in each partition as you desire.
When you launch a partition placement group, you can either let AWS decide which partition each instance goes into, or you can specifically decide.
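As a hedged illustration, placement groups are created with a strategy and then referenced at instance launch. The group names, AMI ID, and instance type below are made up.

```python
# Minimal sketch: create placement groups with each of the three strategies,
# then launch an instance into the cluster group. All names/IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="demo-cluster", Strategy="cluster")
ec2.create_placement_group(GroupName="demo-spread", Strategy="spread")
ec2.create_placement_group(GroupName="demo-partition", Strategy="partition", PartitionCount=3)

ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",
    InstanceType="c5n.large",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "demo-cluster"},   # place the instance in the cluster group
)
```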
The host hardware has physical sockets and cores. This dictates how many instances can be run on the HW.
Hosts are designed for a specific size and family. If you purchase one host, you configure what type of instances you want to run on
it. With the older VM system you cannot mix and match instance sizes; the newer Nitro system allows mixing and matching instance sizes on the same host.
Enhanced Networking
Enhanced networking uses SR‐IOV. The physical network interface is aware of the virtualization. Each instance is given exclusive
access to one part of a physical network interface card.
There is no charge for this and it is available on most EC2 types. It allows for higher I/O and lower host CPU usage. This provides more
bandwidth and higher packets per second. In general this provides lower latency.
EBS Optimized
Historically network on EC2 was shared with the same network stack used for both data networking and EBS storage networking.
EBS optimized instance means that some stack optimizations have taken place and dedicated capacity has been provided for that
instance for EBS usage.
Most new instances support this and have this enabled by default for no charge.
Route‐53
Hosted zones are created automatically when you register a domain using R53.
Hosted zones can be created separately. If you want to register a domain elsewhere and use R53 to host the zone file and records for
that domain, then you can specifically create a hosted zone and point at an externally registered domain at that zone. There is a
monthly fee to host each hosted zone within R53 and a fee for any queries made to that service.
Hosted Zones are what the DNS system references via delegation and name server records. A hosted zone, when referenced in this
way by the DNS system, is known as being authoritative for a domain. It becomes the single source of truth for a domain.
If the bug gets fixed, the health check will pass and the server will be added back into a healthy state.
Health checks are separate from, but are used by records inside R53. You don't create health checks inside records themselves.
These are performed by a fleet of global health checkers. If you think they are bots and block them, this could cause alarms.
Checks occur every 30 seconds by default. This can be reduced to every 10 seconds for an additional cost. These intervals are per health
checker; since there are many checkers, the endpoint effectively receives a check every few seconds, and the 10 second option results
in multiple checks per second.
Checks can be one of three types:
TCP checks: R53 tries to establish a TCP connection with the endpoint within 10 seconds.
HTTP/HTTPS: Same as TCP but the connection must succeed within 4 seconds. The endpoint must respond with a 2xx or 3xx status code within
2 seconds of connecting.
String matching: Same as above, but the body must contain a user-chosen string within the first 5120 bytes.
Endpoint checks
CloudWatch alarms
Checks of checks
Failover: Create two records of the same name and the same type. One is set to be the primary and the other is the secondary.
This is the same as the simple policy except for the response. Route 53 knows the health of both instances. As long as the
primary is healthy, it will respond with this one. If the health check with the primary fails, the backup will be returned instead.
This is set to implement active ‐ passive failover.
Weighted: Create multiple records of the same name within the hosted zone. For each of those records, you provide a weight
value. Each record is returned in proportion to its weight relative to the total weight of all records with the same name. If a record
fails its health check, it is skipped and another is selected until a healthy one is hit. This can be used for migrations or to split
traffic across separate servers (a boto3 sketch of weighted records follows this list).
Latency-based: Multiple records in a hosted zone can be created with the same name and same type. When a client request
arrives, Route 53 knows which region the request comes from and responds with the record that offers the lowest latency for that client.
Geolocation: Focused on delivering results that match the location of your customers. The record is first matched based on
country if possible; if not, it is matched based on continent; finally, if nothing matches, the default record is returned. This can
be used for licensing rights. If overlapping regions occur, priority always goes to the most specific or smallest region: a US
record will be chosen over a North America record.
Multi-value: Simple routing uses one record with one name and multiple values, but those values are not health checked. With
multi-value routing, you can instead have multiple records with the same name and each of these records can have a health
check. R53 using this method will respond to queries with any and all healthy records, but it removes any records that are
marked as unhealthy from those responses. This removes the problem with simple routing where a single unhealthy value can
make it through to your customers. Great alternative to simple routing when you need to improve the reliability, and an
alternative to failover when you have more than two records to respond with but don't want the complexity or the overhead of
weighted routing.
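A hedged sketch of weighted routing, referenced from the weighted policy item above: the hosted zone ID, domain, and IP addresses are placeholders.

```python
# Minimal sketch: two weighted A records with the same name (placeholder values).
# Roughly 80% of responses should return the first record, 20% the second.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",   # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-fleet",
                    "Weight": 80,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "canary-fleet",
                    "Weight": 20,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],
                },
            },
        ]
    },
)
```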
Relational‐Database‐Service‐RDS
Database Refresher
Systems to store and manage data.
Relational ﴾SQL﴿
Every row in a table must have a value for the primary key. There must be a value stored for every attribute in the table.
SQL systems are relational so we generally define relationships between tables as well. This is defined with a join table. A join table
has a composite key which is a key formed of two parts. Composite keys together must be unique.
Keys in different tables are how the relationships between the tables are defined.
The Table schema and relationships must be defined in advance which can be hard to do.
Non‐Relational ﴾NoSQL﴿
Not a single thing, and is a catch all for everything else. There is generally no schema or a weak one.
Key‐Value databases
This is just a list of keys and value pairs. So long as every key is unique, there is no real schema or structure needed. These are really
fast and highly scalable. This is also used for in memory caching.
Each row or item has one or more keys. One key is called the partition key. You can have additional keys other than the partition key
called the sort or range key.
It can be single key ﴾only partition key﴿ or composite key ﴾partition key and sort key﴿.
Every item in a table can also have attributes, but they don't have to be the same between items. The only requirement is that
every item inside the table uses the same key structure and has a unique key.
Document
Documents are generally formatted using JSON or XML.
This is an extension of a key‐value store where each document is interacted with via an ID that's unique to that document, but the
value of the document contents are exposed to the database allowing you to interact with it.
Great for nested data items within a document structure such as user profiles.
Row store: data is stored on disk row by row. If you need to read the price of one item, you read that whole row first. If you want
to query all of the sizes of every order, you need to check each row in turn.
Great for things which deal in rows and items where they are constantly accessed, modified, and removed.
Column store: instead of storing data in rows on disk, data is grouped on disk based on column, so every order value is stored
together, and every product item, color, size, and price are grouped together.
This is bad for transactional style processing, but great for reporting or when all values for a specific attribute are required.
Graph
Relationships between things are formally defined and stored in the database itself alongside the data. They are not calculated each
and every time you run a query. These are great for relationship driven data.
Nodes are objects inside a graph database. They can have properties.
Relationships themselves can also have attached data, so name value pairs. We might want to store the start date of any
employment relationship.
Can store massive amounts of complex relationships between data or between nodes in a database.
Databases on EC2
It is always a bad idea to do this.
An RDS instance runs one of a few types of database engines and can contain multiple user created databases. You create one when you provision the
instance, but more can be created afterwards.
When you create a database instance, the way you access it is using a database host‐name, a CNAME, and this resolves to the
database instance itself.
RDS uses standard database engines so you can access an RDS instance using the same tooling as if you were accessing a self‐
managed database.
When you provision an instance, you provision dedicated storage to that instance. This is EBS storage located in the same AZ. RDS is
vulnerable to failures in that AZ.
Billing is per instance and hourly rate for that compute. You are billed for storage allocated.
RDS enables synchronous replication from the primary instance to the standby replica.
RDS Access ONLY via database CNAME. The CNAME will point at the primary instance. You cannot access the standby replica for any
reason via RDS.
If any error occurs with the primary database, AWS detects this and will failover within 60 to 120 seconds to change to the new
database.
This does not provide fault tolerance as there will be some impact during change.
RPO (Recovery Point Objective)
Time between the last backup and when the failure occurred.
Amount of maximum data loss.
Influences technical solution and cost.
Business usually provides an RPO value.
RDS Backups
First snap is full copy of the data used on the RDS volume. From then on, the snapshots are incremental and only store the change in
data.
When any snapshot occurs, there's a brief interruption to the flow of data between the compute resource and the storage. If you are
using single AZ, this can impact your application. If you are using Multi‐AZ, the snapshot occurs on the standby replica.
Manual snapshots don't expire, you have to clean them yourself. Automatic Snapshots can be configured to make things easier.
In addition to automated backup, every 5 minutes database transaction logs are saved to S3. Transaction logs store the actual data
which changes inside a database so the actual operations that are executed. This allows a database to be restored to a point in time
often with 5 minute granularity.
Automated backups can be retained from 0 to 35 days. This means you can restore to any point in that time frame. This will use
both the snapshots and the transaction logs.
When you delete the database, automated backups can be retained but they will still expire based on their retention period.
The only way to maintain backups is to create a final snapshot which will not expire automatically.
When performing a restore, RDS creates a new RDS instance with a new endpoint address.
When restoring a manual snapshot, you are setting it to a single point in time. This influences the RPO value.
Automated backups are different, they allow any 5 minute point in time.
Backups are restored and transaction logs are replayed to bring DB to desired point in time.
Restores aren't fast, think about RTO.
RDS Read‐Replicas
Kept in sync using asynchronous replication
Data is written fully to the primary and standby instances first. Once it's stored on disk, it is then pushed to the replica. This means there
could be a small lag. Replicas can be created in the same region or a different region, known as cross region replication. AWS
handles all of the encryption, configuration, and networking without intervention.
If the primary instance fails, you can promote a read‐replica to take over.
Once it is promoted, it allows for read and write.
Only works for failures.
Read‐replicas will replicate data corruption.
In this case you must default back to snapshots and backups.
Promotion cannot be reversed.
Amazon Aurora
Aurora architecture is VERY different from RDS.
There is a primary instance and a number of replicas. Read operations from applications can use the replicas.
There is a shared storage of max 64 TiB across all replicas. This uses 6 copies across AZs.
All instances have access to these storage nodes. This replication happens at the storage level. No extra resources are consumed
during replication.
By default the primary instance is the only one who can write. The replicas will have read access.
Aurora automatically detects hardware failures on the shared storage. If there is a failure, it immediately repairs that area of disk and
recreates that data with no corruption.
With Aurora you can have up to 15 replicas and any of them can be a failover target. The failover operation will be quicker because it
doesn't have to make any storage modifications.
Aurora Endpoints
Aurora clusters like RDS use endpoints, so these are DNS addresses which are used to connect to the cluster. Unlike RDS, Aurora
clusters have multiple endpoints that are available for an application.
Minimum endpoints: the cluster endpoint (always points at the writer) and the reader endpoint (load balances reads across the replicas). Each instance also gets its own instance endpoint.
Costs
No free‐tier option
Aurora doesn't support micro instances
Beyond RDS single-AZ (micro) instance sizes, Aurora provides the best value.
Compute is billed per second with a 10 minute minimum.
Storage is billed using the high watermark for the lifetime using GB‐Month.
Additional IO cost per request made to the cluster shared storage.
100% DB size in backups are included for free.
100 GB cluster will have 100 GB of storage for backups.
Backups in Aurora work in the same way as RDS. Restores create a brand new cluster.
Backtrack must be enabled on a per cluster basis. This allows you to roll back your database to a previous point in time. This helps
with data corruption.
Fast clones make a new database much faster than copying all the data. A clone references the original storage and only writes the
differences between the two. It uses a tiny amount of storage and only stores data that's changed in the clone or changed in the
original after you make the clone.
Aurora Serverless
Provides a version of Aurora database product without managing the resources. You still create a cluster, but it uses ACUs or Aurora
Capacity Units.
For a cluster, you can set a min and max ACU based on the load and can even go down to 0 to be paused. In this case you would only
be billed for storage consumed.
ACUs are stateless and shared across many AWS customers and have no local storage. They can be allocated to your Aurora
Serverless cluster rapidly when required. Once ACUs are allocated to a cluster, they have access to cluster storage in the same way
as an Aurora Provisioned cluster.
There is a shared proxy fleet. When a customer interacts with the data they are actually communicating with the proxy fleet. The
proxy fleet brokers an application with the ACU and ensures you can scale in and out without worrying about usage. This is
managed by AWS on your behalf.
Single-master Mode: the default mode, with one read/write primary instance and read-only replicas.
Multi-master Mode
Aurora Multi-master has no load balanced cluster endpoint. An application can connect to one or both of the R/W instances inside a multi-master cluster.
When one of the R/W nodes receives a write request from the application, it immediately proposes that data be committed to all of
the storage nodes in that cluster. At this point, each node that makes up the cluster either confirms or rejects the proposed change. It
will reject if this conflicts with something already in flight.
The writing instance is looking for a quorum of nodes to agree. If the group rejects the change, the write is cancelled and an error is
returned. If the group agrees, the change is committed and replicated to all storage nodes in the cluster.
If a writer goes down in a multi‐master cluster, the application will shift all future load over to a new writer with little if any
disruption.
Database Migration Service (DMS)
You need to define the source and destination endpoints. These point at the physical source and target databases. One of these endpoints must be on AWS.
Full load migration is a one off process which transfers everything at once. This requires the database to be down during this
process. This might take several days.
Instead Full Load + CDC allows for a full load transfer to occur and it monitors any changes that happens during this time. Any of
the captured changes can be applied to the target.
CDC only migration is good if you have a vendor solution that works quickly and only changes need to be captured.
Schema Conversion Tool or SCT can perform conversions between database types.
Network‐Storage‐EFS
EFS Architecture
EFS moves the instances closer to being stateless.
EFS runs inside a VPC. Inside EFS you create file systems and these use POSIX permissions. EFS is made available inside a VPC via
mount targets. Mount targets have IP addresses taken from the IP address range of the subnet they're inside. For HA, you need to
make sure that you put mount targets in each AZ the system runs in.
You can use hybrid networking to connect to the same mount targets.
HA‐and‐Scaling
A better solution is to use multiple servers. Without load balancing, this could bring additional problems.
The user connects to a load balancer that is set to listen on ports 80 and 443.
Within AWS, the configuration for which ports the load balancer listens on is called a listener.
The user is connected to the load balancer and not the actual server.
Behind the load balancer, there are application servers. At a high level, when the user connects to the load balancer, it distributes that
load across the application servers. The user's client thinks it is talking directly to the application server.
The LB will run health checks against all of the servers. If one of the servers does fail, the load balancer will realize this and stop sending
connections to that server. From the user's client, the application always works.
As long as 1+ servers are operational, the LB is operational. Clients shouldn't see errors that occur with one server.
LB Exam PowerUp
Capacity that you have as part of an ALB increases automatically based on the load which passes through that ALB. This is made of
multiple ALB nodes each running in different AZs. This makes them scalable and highly available.
Load balancing can be internet facing or internal. The difference is whether the nodes of the LB, the things which run in the AZs have
public IP addresses or not.
Internet facing LB is designed to be connected to, from public internet based clients, and load balance them across targets.
Internal load balancer is not accessible from the internet and is used to load balance inside a VPC only.
Load balancer sits between a client and one or more servers. Front end or listening side, accepts connections from a client. Back end
is used for distribution to the targets.
LBs are billed on an hourly rate plus a Load Balancer Capacity Unit (LCU) rate. The LCU that you consume is based on the highest value across all of the
individual measurements. You pay a certain number of LCUs based on your load over that hour.
Each node that is part of the load balancer is able to distribute load across all instances in all AZs that are registered with that LB,
even if it's not in the same AZ. This is the reason we can achieve a balanced distribution of connections behind a load balancer.
It can also provide health checks on the target servers. If all instances are shown as healthy, it can distribute evenly.
ALB can support a wide array of targets. Targets are grouped within target groups and an individual target can be a member of
multiple groups. It's the groups which ALBs distribute connections to. You could create rules to direct traffic to different Target
Groups based on their DNS.
A target is a single compute resource that connections are directed towards.
Target groups are groups of targets which are addressed using rules.
Rules are
path based /cat or /dog
host based if you want to use different DNS names.
Support EC2, EKS, Lambda, HTTPS, HTTP/2 and websockets.
ALB can use SNI for multiple SSL certs attached to that LB.
LB can direct individual domain names using SSL certs at different target groups.
AWS does not suggest using Classic Load Balancer ﴾CLB﴿, these are legacy.
This can only use one SSL certificate.
Launch Configurations (LCs) and Launch Templates (LTs)
LTs are newer and provide more features than LCs, such as versioning.
Both of these are not editable. You define them once and that configuration is locked. If you need to adjust a configuration, you must
make a new one and launch it.
LTs can be used to save time when provisioning EC2 instances from the console UI / CLI.
Autoscaling Groups
Automatic scaling and self‐healing for EC2
They make use of LCs or LTs to know what to provision.
Autoscaling group uses one LC or one version of a LT which it's linked with.
Three values to control
minimum
desired
maximum
Provision or terminate instances to keep the group at the desired level. Scaling Policies can trigger this based on metrics.
Autoscaling Groups will distribute EC2 instances to try and keep the AZs equal.
Scaling Policies
Manual Scaling - manually adjust the desired capacity
Scheduled Scaling - time based adjustments
Dynamic Scaling - adjusts capacity automatically in response to metrics (e.g. CPU utilization)
Cooldown Period is how long to wait at the end of a scaling action before scaling again; the default is 300 seconds. Cooldowns help
avoid the cost of rapid scaling, since there is a minimum billable duration for an EC2 instance.
Self healing occurs when an instance has failed and AWS provisions a new instance in its place. This will fix most problems that are
isolated to one instance.
ASGs can use the load balancer health checks rather than EC2 status checks. ALB checks can be much richer than EC2 checks because they
can monitor the status of HTTP and HTTPS requests. This makes them more application aware.
Autoscaling Groups are free, only billed for the resources deployed.
Always use cool downs to avoid rapid scaling.
Try and use more smaller instances to allow granularity.
You should use ALB with autoscaling groups.
ASG defines when and where, Launch Template defines what.
There is nothing stopping NLB from load balancing on HTTP just by routing data. They would do this really fast and can deliver
millions of requests per second.
Only member of the load balancing family that can be provided a static IP. There is 1 interface per AZ. Can also use Elastic IPs
﴾whitelisting﴿ and should be used for this purpose.
NLB can load balance non HTTP/S applications, doesn't care about anything above TCP/UDP. This means it can handle load balancing
for FTP or things that aren't HTTP or HTTPS.
Bridging
One or more clients make one or more connections to a load balancer. The load balancer is configured so its listener uses HTTPS;
SSL connections occur between the client and the load balancer.
The load balancer then needs an SSL certificate that matches the domain name that the application uses. AWS has access to this
certificate. If you need to be careful of where your certificates are stored, you may have a problem with this system.
The ELB then initiates a new SSL connection to the backend instances. Because HTTPS is terminated (decrypted) on the load balancer,
it can take actions based on the content of the HTTP.
The application load balancer requires an SSL certificate because it needs to decrypt any data that's being encrypted by the client.
Once decrypted, it will interpret it then create new encrypted sessions between it and the back end EC2 instances. The EC2 instance
will need matching SSL certificates.
Needs the compute for the cryptographic operations. Every EC2 instance must perform these cryptographic operations. This
overhead can be significant.
The main benefit is the elastic load balancer gets to see the unencrypted HTTP and can take actions based on what's contained in this
plain text protocol.
Pass-through
The client connects, but the load balancer passes the connection along without decrypting the data at all. The instances still need the
SSL certificates, but the load balancer does not. Specifically it's a network load balancer which is able to perform this style of
connection.
The load balancer is configured for TCP, it can see the source or destinations, but it never touches the encrypted connection. The
certificate never needs to be seen by AWS.
Negative is you don't get any load balancing based on the HTTP part because that is never exposed to the load balancer. The EC2
instances still need the compute cryptographic overhead.
Offload
Clients connect to the load balancer using HTTPS and the connections are terminated on the load balancer. The LB needs an SSL certificate to decrypt
the data, but on the backend the data is sent via HTTP. So while a certificate is required on the load balancer, none is needed on the
backend instances.
Data is in plaintext form across AWS's network. Not a problem for most.
Connection Stickiness
If there is no stickiness, each time the customer logs on they will have a stateless experience. If the state is stored on a particular
server, sessions can't be load balanced across multiple servers.
There is an option available within elastic load balancers called Session Stickiness. And within an application load balancer this is
enabled on a target group. If enabled, the first time a user makes a request, the load balancer generates a cookie called AWSALB with
a duration. A valid duration is between one second and seven days. For this time, sessions will be sent to the same backend instance.
This will happen until either the cookie expires or the backend instance fails its health check, in which case the user is moved to another instance (potentially losing any session state stored there).
This could cause backend unevenness because one user will always be forced to the same server no matter what the distributed
load is. Applications should be designed to hold session stickiness somewhere other than EC2.
Serverless‐and‐App‐Services
Architecture Evolution
Monolithic
Fails together.
One error will bring the whole system down.
Scales together.
Everything expects to be running on the same compute hardware
Bills together.
All components are always running and always incurring charges.
Tiered
Data no longer moves between tiers to be processed and instead uses a queue.
Often are FIFO ﴾first in, first out﴿
Data moves into a S3 bucket.
Detailed information is put into the next slot in the queue.
Tiers no longer expect an answer.
Upload tier sends an async message.
The upload tier can add more messages to the queue.
The queue will have an autoscaling group to increase processing capacity.
The autoscaling group will only bring up servers as they are needed.
The queue has the location of the S3 bucket and passes this onto the processing tier.
Event producers
Interact with customers or systems monitoring components.
Produce events in reaction to something.
Clicks, events, errors, actions
Event consumers
Pieces of software waiting for events to occur.
Actions are taken and the system returns to waiting
Services can be producers and consumers at once.
Resources are not waiting around to be used.
Event router is needed for event driven architecture that also manages an event bus.
Only consumes resources while handling events.
AWS Lambda
Function‐as‐a‐service ﴾FaaS﴿
Service accepts functions.
Event driven invocation ﴾execution﴿ based on an event occurring.
A Lambda function is a piece of code in one language.
Lambda functions use a runtime ﴾e.g. Python 3.6﴿
Runs in a runtime environment.
Virtual environment that is ready to go to run code in that language.
You are billed only for the duration a function runs.
There is no charge for having lambda functions waiting and ready to go.
Lambda Architecture
Best practice is to make it very small and very specialized. Lambda function code, when executed is known as being invoked. When
invoked, it runs inside a runtime environment that matches the language the script is written in. The runtime environment is
allocated a certain amount of memory and an appropriate amount of CPU. The more memory you allocate, the more CPU it gets,
and the more the function costs to invoke per second.
Lambda functions can be given an IAM role or execution role. The execution role is passed into the runtime environment.
Whenever that function executes, the code inside has access to whatever permissions the role's permission policy provides.
Lambda can be invoked in an event‐driven or manual way. Each time you invoke a lambda function, the environment provided is
new. Never store anything inside the runtime environment, it is ephemeral.
Lambda functions by default are public services and can access any websites. By default they cannot access private VPC resources,
but can be configured to do so if needed. Once configured, they can only access resources within a VPC. Unless you have configured
your VPC to have all of the configuration needed to have public internet access or access to the AWS public space endpoints, then
the Lambda will not have access.
The Lambda runtime is stateless, so you should always use AWS services for input and output. Something like DynamoDB or S3. If a
Lambda is invoked by an event, it gets details of the event given to it at startup.
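A minimal sketch of a Python Lambda handler, assuming the function is invoked by an S3 event notification; the event fields shown follow the documented S3 notification shape and are illustrative only.

```python
# Minimal sketch of a Python Lambda handler for an S3-style event.
import json

def lambda_handler(event, context):
    # The runtime environment is ephemeral: never rely on local state between invocations.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # Return value is passed back to the invoker for synchronous invocations.
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```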
Key Considerations
EventBridge is basically CloudWatch Events V2 that uses the same underlying APIs and has the same architecture, but with
additional features. Things created in one can be visible in the other for now.
Both systems have a default Event bus for a single AWS account. A bus is a stream of events which occur for any supported service
inside an AWS account. In CW Events, there is only one bus ﴾implicit﴿, this is not exposed. EventBridge can have additional event
buses for your applications or third party applications and services. These can be interacted with in the same way as the default bus.
In both services, you create rules and these rules pattern match events which occur on the buses and when they see an event which
matches, they deliver that event to a target. Alternatively you can have schedule based rules which match a certain date and time or
ranges of dates and times.
Rules match incoming events or schedules. The rule matches an event and routes that event to one or more targets as you define on
that rule.
Architecturally at the heart of event bridge is the default account event bus. This is a stream of events generated by supported
services within the AWS account. Rules are created and these are linked to a specific event bus or the default event bus. Once the
rule completes pattern matching, the rule is executed and moves that event that it matched through to one or more targets. The
events themselves are JSON structures and the data can be used by the targets.
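As a hedged sketch, a rule that pattern matches EC2 state-change events can be created with boto3 and pointed at a Lambda target; the Lambda ARN is a placeholder and the pattern follows the documented EC2 state-change event structure.

```python
# Minimal sketch: an EventBridge rule on the default bus matching EC2 "stopped"
# state-change events and routing them to a Lambda target (ARN is a placeholder).
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="ec2-stopped-rule",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["stopped"]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="ec2-stopped-rule",
    Targets=[{
        "Id": "notify-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:placeholder",
    }],
)
```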
Serverless
This is not one single thing, you manage few if any servers. This aims to remove overhead and risk as much as possible. Applications
are a collection of small and specialized functions that do one thing really well and then stop.
These functions are stateless and run in ephemeral environments. Every time they run, they obtain the data that they need, they do
something and then optionally, they store the result persistently somehow or deliver the output to something else.
Generally, everything is event driven. Nothing is running until it's required. While not being used, there should be little to no cost.
Aim is to consume as a service whatever you can, code as little as possible, and use function as a service for any general purpose
compute needs, and then use all of those building blocks together to create your application.
Example of Serverless
1. User browses to a static website that is running the uploader. The JS runs directly from the web browser.
2. Third party auth provider, Google in this case, authenticates via a token.
3. AWS cannot use tokens provided by third parties. Cognito is called to swap the third party token for AWS credentials.
4. Service uses these temporary credentials to upload a video to S3 bucket.
5. Bucket will generate an event once it has completed the upload.
6. A lambda triggers to transcode the video as needed. The transcoder will get the original S3 bucket video location and will use
this for its workload.
7. Output will be added to a new transcode bucket and will put an entry into DynamoDB.
8. User can interact with another Lambda to pull the media from the transcode bucket using the DynamoDB entry.
Step Functions
A state machine is designed to perform an activity or workflow with lots of individual components and maintain the idea of data
between those states.
Standard
Default workflow type.
Up to 1 year workflow duration.
Express
Designed for high-volume event processing; workflows can run for up to 5 minutes. Generally used for back end processing.
State machines can be started via API Gateway, IoT Rules, EventBridge, Lambda, and more.
With state machines you can use a template to create and export state machines once they're configured to your liking. This is called
Amazon States Language (ASL) and is based on JSON.
State machines are provided permission to interact with other AWS services via IAM roles.
States are the things inside a workflow, the things which occur. These states are available.
Simple Queue Service (SQS)
Billed on requests not messages. A request is a single request to SQS. One request can return 0 - 10 messages, up to 64KB of data in
total. Since requests can return 0 messages, frequently polling an SQS queue makes it less cost effective.
Short polling (immediate): uses 1 request and can return 0 or more messages. If the queue is empty, it will return 0 and try again. This
is inefficient for queues which are often empty.
Long polling (waitTimeSeconds): waits for up to 20 seconds for messages to arrive on the queue. It will sit and wait if none
currently exist.
Messages can live on an SQS queue for up to 14 days. SQS offers KMS encryption at rest (server side encryption). Data is encrypted in
transit between SQS and any clients.
Access to a queue is based on identity policies or a queue policy. Only a queue policy can allow access from an outside account; this
is a resource policy.
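A rough sketch of long polling with boto3; the queue URL is a placeholder, and WaitTimeSeconds=20 is what makes this a long (rather than short) poll.

```python
# Minimal sketch: long polling an SQS queue with boto3 (queue URL is a placeholder).
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/placeholder-queue"

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,   # a single request can return 0-10 messages
    WaitTimeSeconds=20,       # long polling: wait up to 20s for messages to arrive
)

for message in response.get("Messages", []):
    print("Body:", message["Body"])
    # Delete the message once processed, otherwise it becomes visible again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```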
Kinesis
Scalable streaming service. It is designed to inject data from lots of devices or lots of applications.
Many producers send data into a Kinesis Stream.
The stream can scale from low to near infinite data rates.
Highly available public service by design.
Streams store a 24‐hour moving window of data.
Can be increased to 7 days.
Data 24 hours + 1s is replaced by new data entering the stream.
Kinesis includes the storage costs within it for the amount of data that can be ingested during a 24 hour period. However much
you ingest during 24 hours, that's included.
Multiple consumers can access data from that moving window.
One might look at data points once per hour.
Another looks at data once per minute.
Kinesis stream starts with 1 shard and expands as needed.
Each shard can have 1MB/s for ingestion and 2MB/s consumption.
Kinesis data records ﴾1MB﴿ are stored across shards and are the blocks of data for a stream.
Kinesis Data Firehose connects to a Kinesis stream. It can move the data from a stream onto S3 or another service.
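As a hedged sketch, producers write records into a stream with a partition key that determines which shard receives the data; the stream name and record fields are placeholders.

```python
# Minimal sketch: a producer writing a record to a Kinesis stream (name is a placeholder).
import json
import boto3

kinesis = boto3.client("kinesis")

reading = {"sensor_id": "sensor-42", "temperature": 21.7}

kinesis.put_record(
    StreamName="placeholder-telemetry-stream",
    Data=json.dumps(reading).encode("utf-8"),   # a single record can be up to 1 MB
    PartitionKey=reading["sensor_id"],          # determines which shard receives the record
)
```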
SQS vs Kinesis
Kinesis
SQS
CDN‐and‐Optimization
Architecture Basics
CloudFront is a global object cache ﴾CDN﴿
Download caching only
Content is cached in locations close to customers.
If the content is not available on the local cache when requested, CloudFront will fetch the item and cache it and deliver it locally.
This provides lower latency and higher throughput for customers.
Can handle static and dynamic content.
Origin: the original location of your content; can be an S3 bucket or LB.
Distribution: the configuration unit of CloudFront.
Edge locations: global infrastructure which hosts a cache of your data.
There are over 200 edge locations.
They can be one or more racks in a third party server system.
Normally 90% storage with some small compute.
Regional Edge Cache
Larger version of an edge location.
Support a number of local edge locations.
Designed to hold more data to cache things which are accessed less often.
Provides another layer of caching.
Caching Optimization
Parameters can be passed on the url such as query string parameter. An example is ?language=en and ?language=es
CloudFront will cache each query string variation as a different object. You must use the same string parameters again to retrieve
them. If you remove them and the object is not cached, it will need to be fetched from the origin first.
If string parameters aren't involved in the caching, you can select no to forward them to the origin.
If the application does use query string parameters, you can use all of them for caching or just selected ones.
Origin Access Identity (OAI): an identity associated with a CloudFront distribution which can be referenced in an S3 bucket policy. As
long as accesses are coming from the edge locations, S3 knows they are from the OAI and allows them. Any direct attempts will not
use the OAI and will only get the implicit deny.
Best practice is to create one OAI per CloudFront distribution to manage permissions.
Advanced‐VPC
Normally when you want to access a public service through a VPC, you need infrastructure. You would create an IGW and attach it to
the VPC. Resources inside then need to be given a public IP address, or you implement one or more NAT gateways which allow instances with
private IP addresses to access these public services.
When you allocate a gateway endpoint to a subnet, a prefix list is added to the route table. The target is the gateway endpoint. Any
traffic destined for S3, goes via the gateway endpoint. The gateway endpoint is highly available for all AZs in a region by default.
With a gateway endpoint you set which subnet will be used with it and it will configure automatically. A gateway endpoint is a VPC
gateway object. Endpoint policy controls what things can be connected to by that endpoint.
Gateway endpoints can only be used to access services in the same region. Can't access cross‐region services.
S3 buckets can be set to private only by allowing access ONLY from a gateway endpoint. For anything else, the implicit deny will
apply.
Gateway endpoints work using prefix lists and route tables so they do not need changes to the applications. The application thinks
it's communicating directly with S3 or DynamoDB and all we're doing by using a gateway endpoint is influencing the route that the
traffic flow uses. Instead of using IGW, it goes via gateway endpoint and can use private IP addressing. highly available
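As a hedged sketch of the private-only bucket point above, the policy below denies any S3 access that does not arrive via a named gateway endpoint; the bucket name and vpce ID are placeholders.

```python
# Minimal sketch: S3 bucket policy that denies access unless requests arrive via a
# specific gateway VPC endpoint. Bucket name and vpce ID are placeholders.
import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromGatewayEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::placeholder-private-bucket",
                "arn:aws:s3:::placeholder-private-bucket/*",
            ],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="placeholder-private-bucket", Policy=json.dumps(bucket_policy))
```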
Interface Endpoints uses DNS and a private IP address for the interface endpoint. You can either use the endpoint specific DNS
names or you can enable PrivateDNS which overrides the default and allows unmodified applications to access the services using
the interface endpoint. This doesn't use routing and only DNS. not highly available
VPC Peering
Direct encrypted network link between two and only two VPCs. Peering connection can be in the same or cross region and in the
same or across accounts.
When you create a VPC peer, you can enable an option so that public hostnames of services in the peered VPC resolve to the private
internal IPs. You can use the same DNS names if its in peered VPCs or not. If you attempt to resolve the public DNS hostname of an
EC2 instance, it will resolve to the private IP address of the EC2 instance.
VPCs in the same region can reference each other by using security group id. You can do the same efficient referencing and nesting
of security groups that you can do if you're inside the same VPC. This is a feature that only works with VPC peers inside the same
region.
In different regions, you can utilize security groups, but you'll need to reference IP addresses or IP ranges. If VPC peers are in the
same region, then you can do the logical referencing of an entire security group.
VPC Peering does not support transitive peering. If you want to connect 3 VPCs, you need 3 connections. You can't route through
interconnected VPCs.
Hybrid‐and‐Migration
DX provides NO ENCRYPTION and needs to be managed on a per application basis. There is a common way around this limitation.
The Public VIF allows connections to AWS public services. Inside the VPC we already have a virtual private gateway, because this is
used for any private VIFs running over the Direct Connect. Creating a virtual private gateway creates end points that are located
inside the AWS public zone with public IP addresses. These end points have already been created and they already exist. We can
create a VPN and instead of using the public internet as the transit network, you can use the public VIF running over Direct Connect.
You run an IPSEC VPN over the public VIF, over the Direct Connect connection, you get all of the benefits of Direct Connect such as
high speeds, and all the benefits of IPSEC encryption.
Storage Gateway
Hybrid Storage Virtual Appliance (On-premises)
Can be run inside AWS as part of certain disaster recovery scenarios
Allows for migration of existing infrastructure into AWS slowly.
Tape Gateway ﴾VTL﴿ Mode
Virtual Tapes are stored on S3
File Mode ﴾SMB and NFS﴿
File Storage Backed by S3 Objects
Volume Mode ﴾Gateway Stored﴿
Block Storage backed by S3 and EBS
Great for disaster recovery
Data is kept locally
Awesome for migrations
Volume Mode ﴾Cache Mode﴿
Data added to the gateway is not stored locally.
Backup to EBS Snapshots
Primarily stored on AWS
Great for limited local storage capacity.
Snowball
Snowball Edge
Snowmobile
Portable data center within a shipping container on a truck. This is a special order and is not available in high volume. Ideal for single
location where 10 PB+ is required. Max is 100 PB per snowmobile.
Directory Service
Devices can join a directory so laptops, desktops, and servers can all have centralized management and authentication. You can
sign into multiple devices with the same username and password.
One common directory is Active Directory by Microsoft and its full name is Microsoft Active Directory Domain Services or
AD DS.
Directory Modes
AWS DataSync
Data transfer service TO and FROM AWS.
This is used for migrations or for large amounts of data processing transfers.
Designed to work at huge scales. Each agent can handle 10 Gbps and each job can handle 50 million files.
Transfers metadata and timestamps
Each agent is about 100 TB per day.
Can use bandwidth limiters to avoid customer impact
Supports incremental and scheduled transfer options
Compression and encryption in transit is also supported
Has built in data validation and automatic recovery from transit errors.
AWS service integration with S3, EFS, FSx for Windows servers.
Pay as you use product.
Task
job within datasync
defines what is being synced and how quickly
defines two locations involved in the job
Agent
software to read and write to on prem such as NFS or SMB
used to pull data off that store and move into AWS or vice versa
Location
every task has two locations FROM and TO
example locations:
network file systems ﴾NFS﴿, common in Linux or Unix
server message block ﴾SMB﴿, common in Windows environments
AWS storage services ﴾EFS, FSx, and S3﴿
Security‐Deployment‐Operations
Secrets Manager
Secrets are secured using KMS, so you never risk any leakage via physical access to the AWS hardware, and KMS ensures role
separation.
Shield Standard
Shield advanced
Example of Architecture
Shield Standard automatically assesses traffic before it gets past Route 53. The user is directed to the closest CloudFront
location, and Shield Standard looks at the data again before it moves on.
WAF rules are defined and included in a Web ACL which is associated with a CloudFront distribution and deployed to the edge.
Shield advanced can then intercept traffic when it reaches the load balancer. Once the data reaches the VPC, it has been filtered at
Layer 3, 4, and 7 already.
CloudHSM
KMS is the key management service within AWS. It is used for encryption within AWS and it integrates with other AWS products.
Can generate keys, manage keys, and can integrate for encryption. The problem is this is a shared service. You're using a service
which other accounts within AWS also use. Although the permissions are strict, AWS still does manage the hardware for KMS. KMS
is a hardware security module or HSM. These are industry standard pieces of hardware which are designed to manage keys and
perform cryptographic operations.
You can run your own HSM on premise. Cloud HSM is a true "single tenant" hardware security module ﴾HSM﴿ that's hosted within
the AWS cloud. AWS provisions the HW, but it is impossible for them to help. There is no way to recover data from them if access is
lost.
Fully FIPS 140-2 Level 3 compliant (KMS is Level 2 overall, though some capabilities are Level 3). If you require Level 3 overall, you MUST use CloudHSM.
KMS: all actions are performed with the AWS APIs/CLI, with permissions controlled via IAM.
CloudHSM will not integrate with AWS by design and is accessed using industry standard APIs:
PKCS#11
Java Cryptography Extensions ﴾JCE﴿
Microsoft CryptoNG ﴾CNG﴿ libraries
KMS can use CloudHSM as a custom key store, CloudHSM integrates with KMS.
A single HSM is not highly available and runs within one AZ. To be HA, you need at least two HSM devices, one in each AZ you use. Once
HSMs are in a cluster, they replicate keys and policies automatically and stay in sync.
HSM needs an endpoint in the subnet of the VPC to allow resources access to the cluster.
AWS has no access to the HSM appliances which store the keys.
No native AWS integration with AWS products. You can't use S3 SSE with CloudHSM.
Can offload the SSL/TLS processing from webservers. CloudHSM is much more efficient to do these encryption processes.
Oracle Databases can use CloudHSM to enable transparent data encryption ﴾TDE﴿
Can protect the private keys for an issuing certificate authority.
Anything that needs to interact with non AWS products.
NoSQL‐and‐DynamoDB
DynamoDB Architecture
NoSQL Database as a Service ﴾DBaaS﴿
Dynamo DB Tables
In DynamoDB, capacity means speed. If you choose on‐demand capacity model you don't have to worry about capacity. You only pay
for the operations for the table. If you choose provisioned capacity, you must set this on a per table basis.
1 WCU means you can write 1KB per second to that table.
1 RCU means you can read 4KB per second from that table.
Dynamo DB Backups
On-demand Backups: Similar to manual RDS snapshots. A full backup of the table is retained until you manually remove that
backup. It can be used to restore data in the same region or cross-region, and you can adjust indexes or encryption settings when restoring.
Point-in-time Recovery: Must be enabled on each table and is off by default. It keeps a continuous record of changes for 35 days
and lets you restore to any point in that window with 1-second granularity.
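For example, a hedged sketch of turning point-in-time recovery on for a (hypothetical) table:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Point-in-time recovery is off by default and must be enabled table by table.
dynamodb.update_continuous_backups(
    TableName="WeatherData",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```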
Dynamo DB Considerations
Access to Dynamo is from the console, CLI, or API. You don't have SQL access.
Can purchase reserved capacity with a cheaper rate for a longer term commit.
On-Demand: For unknown or unpredictable load on a table, or when you want as little admin overhead as possible. You pay a price
per million read or write units, which can be as much as 5 times the price of provisioned capacity.
1 RCU = 1 x 4 KB read operation per second (rounded up). 1 WCU = 1 x 1 KB write operation per second.
Every table has a WCU and RCU burst pool, which holds up to 300 seconds of the table's unused capacity.
Query
The PK can be the sensor unit, the Sort Key ﴾SK﴿ can be the day of the week you want to look at.
Query accepts a single PK value and optionally a SK or range. Capacity consumed is the size of all returned items. Further filtering
discards data, but capacity is still consumed.
In this example you can only query for one weather station.
If you query on a PK alone, it returns all items that match. It is always more efficient to pull as much data as possible in a single query
to save RCU.
You must query with at least the PK value and are charged for the size of the response to that query operation.
If you filter the results and only look at one attribute, you are still charged for pulling all the attributes of the items matched by that query.
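A minimal sketch of a query against the weather-station example; the table, key names, and values are hypothetical:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("WeatherData")

# Query takes a single PK value and, optionally, an SK condition (a range here).
response = table.query(
    KeyConditionExpression=Key("StationId").eq("station-42")
    & Key("Day").between("2020-06-01", "2020-06-07"),
)

# Capacity is charged on the size of all matched items,
# even if a FilterExpression later discards some of them.
items = response["Items"]
```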
Scan
Least efficient when pulling data from Dynamo, but the most flexible.
Scan moves through the table item by item, consuming the capacity of every item it touches. Even if you only need a fraction of the
table, you are charged for everything scanned: the sizes of all scanned items are added up and charged, rounding up.
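For contrast, a hedged scan sketch against the same hypothetical table; note the filter is applied only after every item has been read:

```python
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("WeatherData")  # hypothetical table

# The filter runs AFTER the scan reads each item, so capacity is consumed
# for the whole table even though only matching items are returned.
response = table.scan(FilterExpression=Attr("Conditions").eq("sunny"))
sunny_days = response["Items"]
```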
Eventually Consistent: easier to implement and scales better.
Strongly (Immediately) Consistent: more costly to achieve.
Every piece of data is replicated between storage nodes. There is one Leader storage node and every other node follows.
Writes are always directed to the leader node. Once the leader is complete, it is consistent. It then starts the process of replication.
This typically takes milliseconds and assumes the lack of any faults on the storage nodes.
Eventually consistent reads could return stale data if a node is checked before replication completes. You get a discount for accepting this risk.
A strongly consistent read always uses the leader node and is less scalable.
Not every application can tolerate eventual consistency. If you have a stock database or medical information, you must use strongly
consistent reads. If you can tolerate eventual consistency, you save cost and can scale better.
Store 10 items per second with 2.5 KB average size per item.
Calculate the WCU per item (round up), then multiply by the number of items per second.
(2.5 KB / 1 KB) = 2.5, rounded up to 3 WCU per item x 10 per second = 30 WCU.
Retrieve 10 items per second with 2.5 KB average size per item.
Calculate the RCU per item (round up), then multiply by the number of items per second.
(2.5 KB / 4 KB) = 0.625, rounded up to 1 RCU per item x 10 per second = 10 RCU for strongly consistent reads.
Eventually consistent reads need half of that: 5 RCU.
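The same arithmetic as a small Python sketch (my own helper, not from the course), useful as a sanity check:

```python
import math

def wcu(items_per_second: int, item_size_kb: float) -> int:
    # 1 WCU = one 1 KB write per second; the per-item size rounds up.
    return math.ceil(item_size_kb / 1) * items_per_second

def rcu(items_per_second: int, item_size_kb: float, eventually_consistent: bool = False) -> int:
    # 1 RCU = one 4 KB strongly consistent read per second;
    # eventual consistency halves the cost.
    units = math.ceil(item_size_kb / 4) * items_per_second
    return math.ceil(units / 2) if eventually_consistent else units

print(wcu(10, 2.5))        # 30 WCU
print(rcu(10, 2.5))        # 10 RCU (strongly consistent)
print(rcu(10, 2.5, True))  # 5 RCU (eventually consistent)
```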
Inserts
Updates
Deletes
The pre- or post-change state might be empty if the event is an insert or a delete.
Trigger Concepts
An item change generates an event that contains the data which was changed; the specifics depend on the stream view type. An
action is then taken using that data. Triggers combine the capabilities of streams and Lambda: Lambda performs some compute in
response to the change.
This is great for reporting and analytics on changes such as stock levels, and for data aggregation in stock or voting apps. It can also
send messages or notifications and eliminates the need to poll databases.
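A minimal sketch of a Lambda handler invoked by a DynamoDB stream; the SNS topic and its ARN are hypothetical, and the record shape assumes a NEW_IMAGE (or NEW_AND_OLD_IMAGES) view type:

```python
import boto3

sns = boto3.client("sns")

def handler(event, context):
    for record in event["Records"]:
        # eventName is INSERT, MODIFY or REMOVE; NewImage/OldImage availability
        # depends on the stream view type configured on the table.
        if record["eventName"] == "MODIFY":
            new_image = record["dynamodb"].get("NewImage", {})
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:111111111111:stock-alerts",
                Message=f"Item changed: {new_image}",
            )
```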
An LSI is an alternative sort key with the same partition key over the base table's data.
If an item does not have the alternative sort key, it will not appear in the index.
LSIs must be created at the same time as the base table.
They cannot be added later.
Maximum of 5 LSIs per base table.
Uses the same partition key, but different sort key.
Shares the RCU and WCU with the table.
It creates a smaller table and makes scan operations easier.
In regards to Attributes, you can use:
ALL
KEYS_ONLY
INCLUDE
Use GSIs as the default, and only use LSIs when strong consistency is required.
Indexes are designed for when data in a base table needs an alternative access pattern. This is great for a security team or data science
team that needs to look at attributes other than those used for the original purpose.
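Extending the earlier hypothetical table sketch, an LSI has to be declared at table-creation time, for example:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="WeatherData",  # hypothetical table and attribute names
    AttributeDefinitions=[
        {"AttributeName": "StationId", "AttributeType": "S"},
        {"AttributeName": "Day", "AttributeType": "S"},
        {"AttributeName": "Temperature", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "StationId", "KeyType": "HASH"},
        {"AttributeName": "Day", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "ByTemperature",
            # Same partition key as the base table, alternative sort key.
            "KeySchema": [
                {"AttributeName": "StationId", "KeyType": "HASH"},
                {"AttributeName": "Temperature", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```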
Traditional Cache: The application needs to access some data and checks the cache. If the cache doesn't have the data, this is
known as a cache miss. The application then loads directly from the database. It then updates the cache with the new data.
Subsequent queries will load data from the cache as a cache hit and will be faster.
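A minimal sketch of that traditional cache-aside flow, using a plain in-memory dict as a stand-in for an external cache such as ElastiCache; the table and key names are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("WeatherData")  # hypothetical table
cache: dict = {}

def get_station_day(station_id: str, day: str) -> dict:
    key = (station_id, day)
    if key in cache:
        return cache[key]  # cache hit: fast path, no database call
    # cache miss: load from the database, then populate the cache
    item = table.get_item(Key={"StationId": station_id, "Day": day}).get("Item", {})
    cache[key] = item
    return item
```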
DAX: The application instance has the DAX SDK added on, and DAX and DynamoDB then appear as one and the same. The application
uses the DAX SDK and makes a single call for the data, which is returned by DAX. If DAX has the data, it is returned directly. If not, DAX
talks to DynamoDB, gets the data, and then caches it for future use. The benefit of this system is there is only one set of API calls using one
SDK. It is tightly integrated, with much less admin overhead.
DAX Architecture
This runs from within a VPC and is designed to be deployed to multiple AZs in that VPC. Must be deployed across AZs to ensure it is
highly available.
DAX is a cluster service where nodes are placed into different AZs. There is a primary node which is the read and write node. This
replicates out to other nodes which are replica nodes and function as read replicas. With this architecture, we have an EC2 instance
running an application and the DAX SDK. This will communicate with the cluster. On the other side, the cluster communicates with
DynamoDB.
DAX maintains two different caches. First is the item cache, which caches individual items retrieved via the GetItem or
BatchGetItem operations. These operate on single items and must specify the item's full primary key (partition and, if present, sort key).
There is a query cache which holds data and the parameters used for the original query or scan. Whole query or scan operations
can be rerun and return the same cached data.
Every DAX cluster has an endpoint which will load balance across the cluster. If data is retrieved from DAX directly, then it's called a
cache hit and the results can be returned in microseconds.
Any cache misses, so when DAX has to consult DynamoDB, these are generally returned in single digit milliseconds. Now in writing
data to DynamoDB, DAX can use write‐through caching, so that data is written into DAX at the same time as being written into the
database.
If a cache miss occurs while reading, the data is also written to the primary node of the cluster and the data is retrieved. And then it's
replicated from the primary node to the replica nodes.
When writing data to DAX, it can use write‐through. Data is written to the database, then written to DAX.
DAX Considerations
Primary node which writes and Replicas which support read operations.
Nodes are HA: if the primary node fails there will be an election and one of the replica nodes will be made primary.
In‐memory cache allows for much faster read operations and significantly reduced costs. If you are performing the same set of
read operations on the same set of data over and over again, you can achieve performance improvements by implementing
DAX and caching those results.
With DAX you can scale up or scale out.
DAX supports write‐through. If you write data to DynamoDB, you can use the DAX SDK. DAX will handle that data being
committed to DynamoDB and also storing that data inside the cache.
DAX is not a public service and is deployed within a VPC. Any workload that reads the same data many times will benefit from DAX.
Any questions which talk about caching with DynamoDB, assume it is DAX.
Amazon Athena
You can take data stored in S3 and perform ad-hoc queries on it. You pay only for the data scanned.
Start off with structured, semi‐structured and even unstructured data that is stored in its raw form on S3.
Athena uses schema-on-read: the original data is never changed and remains on S3 in its original form.
The schema, which you define in advance, transforms the data in flight when it's read.
Normally with databases, you need to make a table and then load the data in.
With Athena you create a schema and project the data onto it on the fly, in a relational style, without changing the data.
The output of a query can be sent to other services and can be performed in an event driven fully serverless way.
Athena Explained
The source data is stored on S3 and Athena reads from it. In Athena you define how to interpret the original data and
how it should appear for what you want to see.
Tables are defined in advance in a data catalog and data is projected through when read. It allows SQL‐like queries on data without
transforming the data itself.
You can optimize the original data set to reduce the amount of space used for the data and reduce the costs of querying that data.
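A hedged boto3 sketch of running an ad-hoc Athena query; the database, table, and results bucket are hypothetical, and the table would normally be defined first in the Glue Data Catalog (or with a CREATE EXTERNAL TABLE statement):

```python
import time
import boto3

athena = boto3.client("athena")

# Start the query; results are written to the (hypothetical) S3 output location.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
execution_id = query["QueryExecutionId"]

# Queries run asynchronously, so poll until the execution finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=execution_id)
```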