AWS Cloud Practitioner Essentials
AWS Global Infrastructure is built around Regions and Availability Zones (AZs), and is
broken down into:
a) Regions:
Regions are geographical areas that host two or more availability zones. AWS
regions provide multiple, physically separated and isolated Availability Zones
which are connected with low latency, high throughput, and highly redundant
networking. When building and choosing custom services and features, you have
the opportunity to choose in what geographical region your information will be
stored.
Benefits:
- choosing the right Region helps optimize latency while minimizing costs and
adhering to whatever regulatory requirements you may have.
- easily deploy applications in multiple regions.
- regions are completely separate entities from one another; data is not
automatically replicated to other regions.
b) Availability Zones:
Availability Zones, or AZs, are a collection of data centers within a specific Region.
Each Availability Zone is a physically distinct, independent infrastructure; they
are physically and logically separated. Each zone has:
1) Its own uninterruptible power supply
2) Onsite backup generators
3) Cooling equipment
4) Networking connectivity
Isolating the zones protects each from failures in the others, ensuring high availability.
AWS best practice is to provision your data across multiple AZ’s.
c) Edge Locations:
AWS Edge locations host a content delivery network or CDN called Amazon
CloudFront. CloudFront is used to deliver content to your customers. Requests
for content are automatically routed to the nearest edge location so that the
content is delivered faster to the end users. Utilizing the global network of edge
locations and regions you have:
1) Access to quicker content delivery
2) Edge locations that are typically located in highly populated areas, similar to
Regions and AZs.
- AMAZON VPC
Amazon Virtual Private Cloud, or VPC, is AWS's networking service. It is an AWS
foundational service and integrates with numerous AWS services.
1) It allows you to create a private, virtual network within the AWS Cloud
Uses the same concepts as on-premises networking
2) Gives you complete control of the network configuration (define normal
networking configuration items such as IP address spaces, subnets, and routing
tables).
Allows you to control what you want to expose to the Internet and what you
want to isolate within the Amazon VPC.
3) Allows several layers of security controls.
Ability to allow and deny specific internet and internal traffic
4) Other AWS services deploy into your VPC
These services inherit the security built into your network
1) Builds upon the high availability of AWS global infrastructure of Regions and
Availability Zones.
An Amazon VPC lives within a Region and can span multiple AZs.
Each AWS account can create multiple VPCs.
2) Subnets
A VPC defines an IP address range, which is further divided into subnets. Subnets
are used to divide the Amazon VPC.
Subnets are deployed within AZs, causing the VPC to span Availability Zones.
By default, Subnets within a VPC can communicate with each other.
Subnets are classified as public (having direct access to the internet) and
private (not having direct access to the internet).
Amazon EC2 instances need a public IP address to route to an Internet
Gateway.
3) Route Tables
Configure route table for the subnets to control the traffic between subnets
and the internet.
4) Internet Gateway (IGW) – for a subnet to be public – Public Subnets
Allows access to the internet from Amazon VPC
5) NAT Gateway (Network Address Translation) – for a subnet to be private –
Private Subnets
Allows private subnet resources to access the internet
6) Network Access Control List (NACL)
Control access to subnets; stateless
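The subnet arithmetic described above (a VPC's CIDR range divided into smaller subnet ranges) can be sketched with Python's standard ipaddress module; the CIDR blocks below are illustrative examples, not AWS defaults:

```python
import ipaddress

# A VPC is defined by one CIDR block; subnets carve it into smaller ranges.
vpc = ipaddress.ip_network("10.0.0.0/16")  # example VPC range: 65,536 addresses

# Divide the VPC range into /24 subnets (256 addresses each),
# e.g. one public and one private subnet per Availability Zone.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))    # 256 possible /24 subnets
print(subnets[0])      # 10.0.0.0/24 - could be a public subnet in one AZ
print(subnets[1])      # 10.0.1.0/24 - could be a private subnet in the same AZ

# Every subnet must fall inside the VPC's address range.
print(all(s.subnet_of(vpc) for s in subnets))  # True
```

Whether a subnet is public or private is then decided by its route table (a route to an Internet Gateway makes it public), not by the address range itself.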
- SECURITY GROUPS
Security of the AWS Cloud is one of Amazon Web Services' highest priorities, and AWS
provides many options to help secure your data:
1) Built-in Firewalls:
AWS provides virtual firewalls that can control traffic for one or more
instances, called security groups.
Control accessibility to your instances by creating security group rules. These
can be managed on the AWS Management console.
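As a sketch, a security group rule created in the console maps to a parameter structure like the one below when using the AWS SDK for Python (boto3's `authorize_security_group_ingress` call); the group ID and CIDR here are made-up placeholders, and actually applying the rule requires AWS credentials:

```python
# Hypothetical inbound rule: allow HTTPS (TCP 443) from anywhere.
# This is the parameter shape boto3's authorize_security_group_ingress expects;
# the group ID below is a placeholder, not a real resource.
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",   # placeholder security group ID
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0",
                          "Description": "HTTPS from anywhere"}],
        }
    ],
}

# With credentials configured, you would apply it with:
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_rule)
print(ingress_rule["IpPermissions"][0]["FromPort"])  # 443
```

Note that security groups only need allow rules: anything not explicitly allowed is denied.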
- AWS COMPUTE SERVICES
AWS has a broad catalogue of compute services, from simple application services to
flexible virtual servers:
Serverless computing
Flexibility
Cost effectiveness
Pay as you go – Pay only for running instances, and only for the time they
are running
Broad selection of Hardware/Software
Global Hosting – Selection of where to host your instances
- AWS LAMBDA
AWS Lambda is a compute service that lets you run code without provisioning
or managing servers. (No servers to manage.) Run code in response to
events.
Executes code only when needed and scales automatically to thousands of
requests per second. (Continuous scaling)
Fully-managed serverless compute
Event-driven execution
Multiple languages supported, including Node.js, Java, C#, and Python.
Pay as you use – you don’t pay for compute time when the code is not
running. This makes AWS Lambda ideal for variable and intermittent
workloads. You can run code for virtually any application or backend service,
with zero administration.
AWS Lambda runs code on a highly available compute infrastructure, which
provides all administration, including server and operating system
maintenance, capacity provisioning, auto scaling, code monitoring, and
logging.
Use AWS Lambda for event-driven computing.
o You can run code in response to events, including changes to an
Amazon S3 bucket or an Amazon DynamoDB table.
o You can respond to HTTP requests using Amazon API Gateway.
o You can invoke your code using API calls made with the AWS
SDKs.
o You can build serverless applications that are triggered by AWS
Lambda functions
o Automatically deploy them using AWS CodePipeline and AWS
CodeDeploy.
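A minimal Lambda handler for the S3-event case above might look like this; the bucket and key below come from a hand-built sample event (its shape follows S3 event notifications), not a real S3 upload, so it can be invoked locally:

```python
import json

def lambda_handler(event, context):
    """Handle an S3 'object created' notification event."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch and transform the object here.
        processed.append(f"{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(processed)}

# Invoke locally with a hand-built sample event (illustrative names):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-example-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
result = lambda_handler(sample_event, None)
print(result["body"])  # ["my-example-bucket/uploads/photo.jpg"]
```

In a real deployment, Lambda itself calls `lambda_handler` whenever the configured event source fires; you never start a server.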
AWS Lambda is intended to support serverless and microservices
applications.
Use AWS Lambda to build your extract, transform, and load (ETL) pipelines.
Use AWS Lambda to perform data validation, filtering, sorting, or other
transformations for every data change in a DynamoDB table, and load the
transformed data into another data store.
Use AWS Lambda to build backends for your IoT devices. You can
combine API Gateway with AWS Lambda to easily build your mobile backend.
AWS Lambda acts as connective tissue for AWS services, from building
microservices architectures to running your applications.
With AWS Lambda, you can run code for virtually any application or backend
service. AWS Lambda use cases include automated backups, processing
objects uploaded to Amazon S3, event-driven transformations, Internet of
Things (IoT), and operating serverless websites.
To avoid creating monolithic and tightly coupled solutions, AWS Lambda
imposes the following limits:
o Disk space is limited to 512 megabytes
o Memory allocation ranges from 128 megabytes to 1536
megabytes
o AWS Lambda function execution is limited to a maximum of
five minutes
o You are constrained by deployment package size and the
maximum number of file descriptors
o Request and response body payloads cannot exceed six
megabytes
o The event request body is also limited to 128 kilobytes
o AWS Lambda is billed based on the number of times your code is
triggered, and for each millisecond of execution time.
- AWS ELASTIC BEANSTALK
Elastic Beanstalk is a platform as a service for deploying and scaling web applications.
- APPLICATION LOAD BALANCER (Layer 7)
Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic
and provides advanced request routing targeted at the delivery of modern
application architectures, including microservices and containers. Operating at the
individual request level (Layer 7), Application Load Balancer routes traffic to targets
within an Amazon VPC based on the content of the request.
- CLASSIC LOAD BALANCER (Layer 4 or Layer 7 Load Balancing) – Elastic Load
Balancing’s Original Type
Classic Load Balancer provides basic load balancing across multiple Amazon EC2
Instances and operates at both the request level and connection level. Classic Load
Balancer is intended for applications that were built within the EC2-Classic network.
We recommend Application Load Balancer for Layer 7 and Network Load Balancer
for Layer 4 when using Virtual Private Cloud.
Traffic Distribution
o Elastic Load Balancing distributes traffic depending on the type of
request you are distributing.
o If processing TCP requests, ELB uses simple round robin for
these requests
o If processing HTTP or HTTPS requests, ELB uses a least
outstanding requests algorithm to choose among the backend instances
o ELB also helps with distributing traffic across multiple Availability
Zones. If the load balancer is created within the AWS Management
Console, then this feature is enabled by default. If the ELB is
launched through the command line tools or SDK, then this will
need to be enabled as a secondary process.
o ELB provides a single exposed endpoint to provide access to
backend instances
o If cookies are used in the application, ELB provides sticky sessions.
Sticky sessions bind a user’s session to a specific instance for the
duration of that session; stickiness can be either application-controlled
or duration-based (cookie lifetime)
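The two distribution strategies mentioned above can be sketched as plain Python functions; this is a simulation of the idea, not the ELB implementation, and the instance IDs and request counts are made up:

```python
from itertools import cycle

# Round robin (used for TCP requests): rotate through backends in order.
def round_robin(backends):
    return cycle(backends)

rr = round_robin(["i-a", "i-b", "i-c"])
print([next(rr) for _ in range(4)])  # ['i-a', 'i-b', 'i-c', 'i-a']

# Least outstanding requests (used for HTTP/HTTPS): pick the backend
# currently handling the fewest in-flight requests.
def least_outstanding(in_flight):
    return min(in_flight, key=in_flight.get)

in_flight = {"i-a": 5, "i-b": 2, "i-c": 7}  # made-up in-flight request counts
print(least_outstanding(in_flight))  # i-b
```

Round robin ignores backend load, which is fine for uniform TCP connections; least outstanding requests adapts when HTTP requests vary widely in cost.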
Monitoring
o Provides many metrics by default. These metrics allow you to:
1. View HTTP responses
2. See the number of healthy and unhealthy hosts
3. Filter metrics based on the Availability Zone of the backend
instances or on the load balancer being used.
4. For health checks, the load balancer lets you see the
number of healthy and unhealthy EC2 hosts behind your
load balancer; this is achieved with a simple connection
attempt or ping request to the backend EC2 instance.
Scalability
o Provides multi-zone load balancing, which enables you to distribute
traffic across multiple Availability Zones within the VPC.
o The load balancer itself will scale based on the traffic pattern it
sees
Internet-Facing Load Balancers
o Classic Load Balancer has the ability to create different types of load
balancers
1. Internet-Facing Load Balancer or Public-Facing Load Balancer
- AUTO SCALING
Auto Scaling helps ensure that the correct number of Amazon EC2
instances is available to handle the load of the application. (It allows you to add or
remove EC2 instances based on conditions that you specify.)
Removes the guesswork of how many EC2 instances you need at a point in
time to meet your workload requirements
When you run the applications on EC2 Instances, it is critical to monitor the
performance of the workload using Amazon CloudWatch. However, Amazon
CloudWatch cannot add or remove EC2 Instances. This is where Auto Scaling
comes into the picture.
Auto Scaling allows you to add or remove EC2 instances based on the conditions
that you specify.
Auto Scaling is especially powerful in environments with fluctuating
performance requirements. It lets you maintain performance while
minimizing costs.
Solves two critical questions:
1. How can I ensure that my workload has enough EC2
resources to meet fluctuating performance requirements?
2. How can I automate EC2 resource provisioning to occur on-
demand?
Auto Scaling matches up two critical AWS best practices:
o Make the environment scalable - Scalability
o Automation
Scaling Out and Scaling In
o Adds more instances – Scaling Out
o When Auto Scaling terminates instances – Scaling In
3 Components required for Auto Scaling:
Create a Launch configuration – what to deploy
Create an auto scaling group – where to deploy
Define at least one auto scaling policy – when to deploy
o Launch Configuration:
Defines what will be launched by Auto Scaling.
Examples: all the things you would specify when you
launch an EC2 instance from the console, such as:
Amazon Machine Image (AMI)
Instance type
Security groups
Roles to apply to the instance
o Auto Scaling Group:
Defines where the deployment takes place and sets some
boundaries for the deployment.
1. VPC and Subnet(s)
2. Load Balancer
3. Minimum instances
4. Maximum instances
5. Desired capacity
VPC and Subnet(s): define which VPC and subnets to deploy instances into.
Load Balancer: which load balancer the group interacts with.
Minimum and Maximum Instances: specify the
boundaries of the group. If a minimum of two is set and the
server count goes below two, another instance will be
launched to replace it. If a maximum of eight is set, you
will never have more than eight instances in the group.
Desired Capacity: the number of instances that
you wish to start with.
o Auto Scaling Policy
This is to specify when to launch or terminate EC2 instances.
Scheduled: add or remove instances at predefined times.
On-Demand: condition-based policies that define thresholds
which trigger adding or removing instances, making Auto
Scaling dynamic and able to meet fluctuating requirements.
Scale Out Policy: best practice is to create at least one Auto
Scaling policy to specify when to scale out, and
Scale In Policy: at least one Auto Scaling policy to specify
when to scale in.
How does Dynamic Auto Scaling work?
1. One common configuration is to create CloudWatch
Alarms based on performance information from EC2
instances or a load balancer.
2. When a performance threshold is breached, a
CloudWatch alarm triggers an Auto Scaling event
which either scales out or scales in EC2 instances in
the environment.
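The scale-out/scale-in decision described above can be sketched as a small simulation; the CPU thresholds, instance counts, and group boundaries below are illustrative (real Auto Scaling evaluates CloudWatch alarms, not a local function):

```python
def desired_instances(current, cpu_percent,
                      scale_out_at=70, scale_in_at=30,
                      minimum=2, maximum=8):
    """Return the new instance count after one scaling decision.

    Mirrors the Auto Scaling group boundaries: never below `minimum`,
    never above `maximum`. Thresholds stand in for CloudWatch CPU alarms.
    """
    if cpu_percent > scale_out_at:    # high alarm breached -> scale out
        current += 1
    elif cpu_percent < scale_in_at:   # low alarm breached -> scale in
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_instances(2, cpu_percent=85))  # 3  (scale out)
print(desired_instances(3, cpu_percent=10))  # 2  (scale in)
print(desired_instances(2, cpu_percent=10))  # 2  (clamped at minimum)
print(desired_instances(8, cpu_percent=95))  # 8  (clamped at maximum)
```

The clamping step is why the minimum and maximum set on the Auto Scaling group always win over whatever the policy asks for.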
- AMAZON ELASTIC BLOCK STORE (EBS)
Overview
EBS Volumes are used as a storage unit for Amazon EC2 Instances.
Amazon EBS allows you to create storage volumes and attach them
to EC2 Instances. Once attached, you create a file system on top of
these volumes, or use them in any other way you would use block
storage.
EBS Volumes provide disk space for the instances running on AWS.
Amazon EBS provides a range of options that allow you to optimize
storage performance and cost for the workload. These options are
divided into two major categories: SSD-backed
storage for transactional workloads, such as databases and boot
volumes (performance depends primarily on IOPS), and HDD-backed
storage for throughput-intensive workloads, such as MapReduce
and log processing (performance depends primarily on MB/s).
SSD-backed volumes include the highest-performance Provisioned
IOPS SSD (io1) for latency-sensitive transactional workloads, and
General Purpose SSD (gp2), which balances price and performance for a
wide variety of transactional data.
HDD-backed volumes include Throughput Optimized HDD (st1) for
frequently accessed, throughput-intensive workloads, and the lowest-
cost Cold HDD (sc1) for less frequently accessed data.
Pay as you use – Whenever a volume is not needed, you can delete
it and stop paying for it.
Persistent and customizable block storage for EC2 instances
Offers the consistent, low-latency performance needed to run your
workloads.
EBS volumes are designed to be durable and available.
Replicated in the same Availability Zone – the data in the volume is
automatically replicated across multiple servers running in the
Availability Zone to protect from component failure, offering high
availability and durability.
Back up using Snapshots
Easy and Transparent Encryption
Elastic Volumes – a feature of Amazon EBS that allows you to
dynamically increase capacity, tune performance, and change the type
of live volumes with no downtime or performance impact. This
allows you to easily right-size your deployment and adapt to
performance changes.
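As a sketch, an Elastic Volumes change maps to the EC2 `ModifyVolume` API; with boto3 the call is `modify_volume`, and the volume ID and sizes below are placeholders (actually applying the change requires AWS credentials):

```python
# Parameters for growing an EBS volume in place (no detach, no downtime).
# The volume ID is a placeholder; the sizes are illustrative examples.
modify_params = {
    "VolumeId": "vol-0123456789abcdef0",  # placeholder volume ID
    "Size": 200,                          # new size in GiB (grow from e.g. 100)
    "VolumeType": "gp2",                  # volume type can also be changed live
}

# With credentials configured, you would apply it with:
#   import boto3
#   boto3.client("ec2").modify_volume(**modify_params)
print(modify_params["Size"])  # 200
```

After the API call, the file system on the instance still needs to be grown (e.g. with resize2fs on Linux) to use the new capacity.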
EBS is designed for application workloads that benefit from fine
tuning for performance, cost, and capacity.
EBS volume types: EBS Provisioned IOPS SSD (io1), EBS General Purpose
SSD (gp2), Throughput Optimized HDD (st1), and Cold HDD (sc1).
- AMAZON GLACIER
Enable and control access to your data in Amazon Glacier by using AWS
Identity and Access Management (IAM). Simply set up an AWS IAM policy
that specifies which users have access.
Glacier encrypts data with AES-256
Glacier will handle the key management and protection for you, but if you
need to manage your own keys, you can encrypt your data prior to
uploading it to Glacier.