Module 2
Compute Service – Amazon EC2
Types of Instances
Amazon EC2 (Elastic Compute Cloud) offers a wide variety of instance types
tailored to meet different use cases. These instance types are categorized based on
their compute, memory, storage, and networking capabilities. Each instance type
belongs to a specific instance family optimized for particular workloads.
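The instance type is simply a parameter chosen at launch. Below is a minimal sketch using boto3 (the SDK is an assumption here; the AMI ID, key pair, and region are placeholders) showing how a memory-optimized R-series instance would be requested.

```python
# Minimal sketch (assumes boto3 is installed and AWS credentials are configured).
# The AMI ID, key pair, and region below are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="r5.large",           # memory-optimized instance from the R series
    KeyName="my-key-pair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```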
3. Memory-Optimized Instances
•High memory-to-CPU ratio for memory-intensive applications.
•Ideal for in-memory databases, caching, and real-time big data analytics.
•Examples:
•R Series: r5, r6g, r7g – optimized for memory-intensive applications.
•X Series: x2idn, x2iedn
Types of Instances
4. Accelerated Computing Instances
Types of Instances
5. Storage-Optimized Instances
Types of Instances
6. Networking-Optimized Instances
•Designed for applications requiring high network bandwidth and low latency.
•Examples:
•High Network Bandwidth Instances: Instances with enhanced networking capabilities, such as c5n and m5n.
Elastic Beanstalk
Key features of Elastic Beanstalk
Easy Deployment – Upload your application code, and EB automatically
provisions the necessary infrastructure.
How Elastic Beanstalk Works
When you deploy an application using Elastic Beanstalk, it follows these steps:
1. Upload Your Code – Use the AWS Management Console, AWS CLI, or an
IDE plugin to upload your application.
How Elastic Beanstalk Works
5. Application Updates – You can deploy updates with zero downtime using
Rolling Updates.
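For illustration, here is a minimal sketch of this upload-and-deploy flow using boto3 (an assumption; the console or the EB CLI work equally well). The application, environment, bucket, and version names are hypothetical.

```python
# Minimal sketch of the Elastic Beanstalk deploy flow with boto3 (an assumption;
# the EB CLI or the console achieve the same result). All names are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# 1. Register a new application version from a source bundle already uploaded to S3.
eb.create_application_version(
    ApplicationName="my-app",                      # hypothetical application
    VersionLabel="v1.0.1",
    SourceBundle={"S3Bucket": "my-deploy-bucket",  # hypothetical bucket and key
                  "S3Key": "my-app-v1.0.1.zip"},
)

# 2. Point the running environment at the new version; EB performs the rolling update.
eb.update_environment(
    EnvironmentName="my-app-env",                  # hypothetical environment
    VersionLabel="v1.0.1",
)
```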
Web Server Tier
A Web Server Environment is designed to run web applications that handle
HTTP(S) requests from users via an Elastic Load Balancer (ELB).
How It Works
1. Elastic Beanstalk provisions Amazon EC2 instances.
2. It configures a Load Balancer (ELB) and Auto Scaling Group.
3. EC2 instances run a web server (Apache, Nginx, IIS, etc.).
4. The environment automatically manages scaling and monitoring.
Use Cases
Hosting web applications (Node.js, Django, Spring Boot, Laravel, etc.)
Running REST APIs
Serving static and dynamic content
Web Server Tier
Architecture
User Request --> Load Balancer --> EC2 Web Server --> Application Code --> Database (optional)
Worker Tier
A Worker Environment is designed for processing background jobs that
do not require immediate responses.
How It Works
1. Elastic Beanstalk provisions EC2 instances running a worker process.
2. It creates an Amazon Simple Queue Service (SQS) queue.
3. The worker tier listens to the SQS queue for tasks.
4. When a task arrives, the worker instance processes it.
Use Cases
Asynchronous task processing (e.g., sending emails, image
processing)
Scheduled jobs (e.g., database cleanup, report generation)
Long-running operations (e.g., video encoding, machine learning batch jobs)
Worker Tier
Architecture
Web Server (User Request) --> SQS Queue --> Worker EC2 Instance --> Task Execution
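The worker pattern can be sketched as a simple polling loop. This is an illustration only: in a real Elastic Beanstalk worker environment, a built-in daemon polls SQS and forwards messages to your application over HTTP. The queue URL and task handler are placeholders.

```python
# Minimal sketch of the worker pattern: poll an SQS queue and process each task.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue"  # placeholder


def process(body: str) -> None:
    # Placeholder for the actual background job (e.g., send an email, resize an image).
    print("processing task:", body)


while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)       # long polling
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url,           # remove the task once processed
                           ReceiptHandle=msg["ReceiptHandle"])
```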
AWS Local Zones
AWS Local Zones are extensions of AWS Regions that bring compute,
storage, and other select AWS services closer to major population centers,
industry hubs, and end users. They are designed to address use cases
requiring low latency (single-digit millisecond) or local data residency.
4. Seamless Integration:
•Local Zones are extensions of AWS Regions, which means management
and deployment workflows remain consistent across the AWS ecosystem.
AWS Backup
AWS Backup is a fully managed, centralized service that simplifies and
automates the process of backing up data across AWS services and
on-premises environments.
AWS Backup
5. Backup Monitoring and Alerts
•Use Amazon CloudWatch and AWS Backup Audit Manager to monitor
backup activity.
•Receive alerts for backup status and compliance violations.
6. Lifecycle Management
•Automatically transition backups to cold storage after a specified period,
reducing storage costs.
•Retain backups for as long as required based on regulatory or business
needs.
7. Point-in-Time Recovery
•Recover data to a specific point in time (supported for certain services
like RDS and DynamoDB).
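A minimal sketch of a backup plan that combines a daily schedule with the lifecycle settings described above, using boto3's AWS Backup client (an assumption; the same plan can be created in the console). The vault name, schedule, and retention periods are placeholders.

```python
# Minimal sketch of an AWS Backup plan with lifecycle rules (names are placeholders).
import boto3

backup = boto3.client("backup", region_name="us-east-1")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-with-cold-storage",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,        # transition to cold storage
                "DeleteAfterDays": 365,                  # total retention period
            },
        }],
    }
)
```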
Supported AWS Services
•Amazon EBS: Backups for block storage volumes.
Storage-Amazon EBS
AWS Storage Services: AWS offers a wide range of storage services that
can be provisioned depending on your project requirements and use case.
AWS storage services have different provisions for highly confidential data,
frequently accessed data, and infrequently accessed data.
You can choose from various storage types, namely object storage, file
storage, block storage, backups, and data migration options, all of
which fall under the AWS Storage Services list.
Storage-Amazon EBS
Elastic Block Storage (EBS): From the aforementioned list, EBS is a
durable, persistent block storage service that can be attached to EC2
instances for additional storage. Unlike EC2 instance store volumes,
which are suitable for holding temporary data, EBS volumes are highly
suitable for essential and long-term data. EBS volumes are specific to
Availability Zones and can only be attached to instances within the same
Availability Zone.
EBS volumes can be created from the EC2 dashboard in the console as well as in
Step 4 of the EC2 launch wizard. Note that when an EBS volume is created along
with an EC2 instance, it is placed in the same Availability Zone as the instance;
however, when provisioned independently, users can choose the AZ in which
the volume is required.
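A minimal sketch of provisioning an EBS volume in a chosen AZ and attaching it to an instance in the same AZ, using boto3 (an assumption; the instance ID, AZ, and size are placeholders).

```python
# Minimal sketch: create a volume in a specific AZ, then attach it to an instance
# in that same AZ (boto3 assumed; IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 20 GiB gp3 volume in us-east-1a.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")

# Wait until the volume is available, then attach it.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
                  Device="/dev/sdf")
```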
Storage-Amazon EBS
Features of EBS:
• Scalability: EBS volume sizes and features can be scaled as per the
needs of the system. This can be done in two ways (see the sketch after
this list):
• Take a snapshot of the volume and create a new volume from the
snapshot with the updated size or type.
• Modify the existing EBS volume directly from the console.
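Both scaling approaches can be sketched with boto3 (an assumption; the IDs, AZ, and sizes are placeholders, and this is an illustration rather than the only way to do it).

```python
# Minimal sketch of both scaling approaches (IDs and sizes are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
volume_id = "vol-0123456789abcdef0"  # hypothetical volume ID

# Option 1: snapshot the volume, then create a larger volume from the snapshot.
snap = ec2.create_snapshot(VolumeId=volume_id, Description="pre-resize snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
ec2.create_volume(AvailabilityZone="us-east-1a", SnapshotId=snap["SnapshotId"],
                  Size=100, VolumeType="gp3")

# Option 2: modify the existing volume in place.
ec2.modify_volume(VolumeId=volume_id, Size=100)
```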
Storage-Amazon EBS
Features of EBS (contd.)
• Charges: Unlike AWS S3, where you are charged for the storage you
consume, EBS charges you for the storage you provision. For
example, if you use 1 GB of storage in a 5 GB volume, you are still
charged for the full 5 GB EBS volume.
• EBS charges vary from region to region.
• EBS volumes are independent of the EC2 instance they are attached
to. The data in an EBS volume remains intact even if the instance is
rebooted or terminated (unless the volume is configured to be deleted
on termination).
AWS Snapshots
•In AWS, snapshots are point-in-time backups of data, typically associated
with storage services like Amazon Elastic Block Store (EBS), Amazon RDS,
and Amazon FSx. Snapshots are widely used for data protection, disaster
recovery, and migration purposes.
4. Lifecycle Management
•Automate snapshot creation, retention, and deletion using AWS Backup or
Data Lifecycle Manager (DLM).
5. Fast Recovery
•Snapshots can be used to quickly restore volumes or databases to a specific
point in time.
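A minimal sketch of creating an EBS snapshot and copying it to another region, a common disaster-recovery pattern (boto3 is assumed; the volume ID and regions are placeholders).

```python
# Minimal sketch: take a point-in-time EBS snapshot, then copy it to another
# region for disaster recovery (IDs and regions are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# copy_snapshot is called in the destination region and pulls from the source region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")
ec2_dr.copy_snapshot(SourceRegion="us-east-1",
                     SourceSnapshotId=snap["SnapshotId"],
                     Description="DR copy of nightly backup")
```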
Types of AWS Snapshots
1. EBS Snapshots
• Create backups of Amazon EBS volumes.
• Can be restored to create a new volume or attached to an instance.
• Support fast snapshot restore (FSR) for selected volumes, reducing
recovery times.
2. RDS Snapshots
• Back up Amazon RDS databases.
• Two types:
• Automated Snapshots: Automatically created based on retention
settings.
• Manual Snapshots: User-initiated and retained until explicitly
deleted.
• Used for point-in-time recovery.
Types of AWS Snapshots
3. DynamoDB Backups
• On-demand snapshots or continuous backups for point-in-time recovery.
4. Amazon FSx Snapshots
• Backup file systems like FSx for Windows, FSx for Lustre, or FSx for
OpenZFS.
• Ensures consistent recovery for shared file storage.
5. Amazon Redshift Snapshots
• Backup data in Amazon Redshift clusters.
• Can be automated or manual.
AWS Snapshots
Use Cases
1. Disaster Recovery
• Maintain backups in different regions to ensure availability during
outages.
2. Migration
• Use snapshots to migrate volumes or databases across accounts or
regions.
3. Development and Testing
• Create snapshots of production data for non-production
environments.
4. Data Retention and Archiving
• Preserve snapshots for compliance and long-term retention.
AWS Simple Storage Service (S3)
Amazon S3 is the Simple Storage Service in AWS. It stores files of different
types, such as photos, audio, and videos, as objects, providing
scalability and security.
It allows the users to store and retrieve any amount of data at any point in
time from anywhere on the web.
What is S3 used for?
Amazon S3 is used for a variety of purposes in the cloud because of its
robust scaling and data-security features. It supports use cases in all
kinds of fields, such as mobile/web applications, big data, machine
learning, and many more. The following are a few common uses of the
Amazon S3 service.
• Data Storage: Amazon S3 is a strong option for both small and large
storage applications. It helps store and retrieve data for data-intensive
applications as needed, with minimal delay.
What is S3 used for?
• Hosting Static Websites: Amazon S3 can store HTML, CSS, and other
web content from users/developers, allowing them to host static
websites that benefit from low-latency access and cost-effectiveness
(a minimal hosting sketch follows this list). For more detail, refer to
the article "How to host static websites using Amazon S3".
• This enables users to have more control over their data. Bucket names
must be unique.
• There is a limit of 100 buckets per AWS account, but it can be increased
on request through AWS Support.
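A minimal sketch of the static-website setup referenced above, using boto3 (an assumption; the bucket name is a placeholder, and the bucket policy and public-access settings must separately allow reads).

```python
# Minimal sketch of enabling static website hosting on an existing bucket
# (bucket name is a placeholder; public read access must be configured separately).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-static-site-bucket",  # hypothetical, globally unique bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page; ContentType lets browsers render it as HTML.
s3.put_object(Bucket="my-static-site-bucket", Key="index.html",
              Body=b"<h1>Hello from S3</h1>", ContentType="text/html")
```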
What is S3 used for?
Amazon S3 Objects: Objects are the fundamental entities stored in AWS S3.
You can store as many objects as you want. The maximum size of a single
S3 object is 5 TB. An object consists of the following:
• Key
• Version ID
• Value
• Metadata
• Subresources
• Access control information
• Tags
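A minimal sketch showing how several of these parts (key, value, metadata, tags) appear when uploading an object with boto3 (an assumption; bucket and key names are placeholders).

```python
# Minimal sketch of uploading an object with metadata and tags (names are placeholders).
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-example-bucket",          # hypothetical bucket
    Key="photos/2024/cat.jpg",           # the object's key
    Body=b"<binary image data>",         # the value (the data itself)
    Metadata={"camera": "phone"},        # user-defined metadata
    Tagging="project=demo&owner=me",     # tags, as a URL-encoded string
)

obj = s3.get_object(Bucket="my-example-bucket", Key="photos/2024/cat.jpg")
print(obj["Metadata"], obj["ContentLength"])
```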
Amazon S3 Versioning and Access Control
S3 Versioning: Versioning means always keeping a record of previously
uploaded files in S3. Points to note: Versioning is not enabled by default.
Once enabled, it is enabled for all objects in a bucket. Versioning keeps
all the copies of your file, so it adds cost for storing multiple copies of
your data. For example, 10 copies of a file of size 1 GB will have you
charged for using 10 GB of S3 space. Versioning is helpful to prevent
unintended overwrites and deletions. Objects with the same key can be
stored in a bucket if versioning is enabled (since they have a unique
version ID).
Access control lists (ACLs): A document for verifying access to S3
buckets from outside your AWS account. An ACL is specific to each
bucket. You can utilize S3 Object Ownership, an Amazon S3 bucket-level
feature, to manage who owns the objects you upload to your bucket and
to enable or disable ACLs.
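A minimal sketch of enabling versioning on a bucket and listing the versions of a key, using boto3 (an assumption; the bucket and key names are placeholders).

```python
# Minimal sketch: turn on versioning and list the versions of a key
# (versioning is off by default, as noted above; names are placeholders).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# After re-uploading the same key, each copy keeps a unique version ID.
versions = s3.list_object_versions(Bucket="my-example-bucket", Prefix="report.csv")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```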
Amazon S3 Storage Classes
• Amazon S3 Standard
• Amazon S3 Intelligent-Tiering
Amazon S3 Storage Classes
• Amazon S3 Standard
It is used for general purposes and offers high durability, availability, and
performance object storage for frequently accessed data. S3 Standard is
appropriate for a wide variety of use cases, including cloud applications,
dynamic websites, content distribution, mobile and gaming applications, and
big data analytics.
Characteristics of S3 Standard
• Designed for 99.99% availability and 99.999999999% (11 nines) durability.
• Low latency and high throughput, which improves the recovery of object files.
Amazon S3 Storage Classes
• Amazon S3 Intelligent-Tiering
S3 Intelligent-Tiering is the first cloud storage class that automatically
reduces the user's storage costs. It provides very cost-effective access by
moving objects between access tiers based on access frequency, without
affecting performance, and it removes operational overhead. Amazon S3
Intelligent-Tiering optimizes costs automatically at a granular object
level, and there are no retrieval charges in S3 Intelligent-Tiering.
Characteristics of S3 Intelligent-Tiering
Amazon S3 Storage Classes
S3 Standard-Infrequent Access: Cost-Effective Storage for Less
Frequently Used Data
S3 Standard-IA is used for data that is accessed less frequently but
requires rapid access when needed. It offers the high durability, high
throughput, and low latency of S3 Standard at a lower per-GB storage
price. It is a good fit for long-term backup and recovery of data, and it
acts as a data store for disaster recovery files.
Amazon S3 Storage Classes
S3 One Zone-Infrequent Access: Cost-Optimized Storage for
Single Availability Zone
Different from other S3 Storage Classes which store data in a minimum
of three Availability Zones, S3 One Zone-IA stores data in a single
Availability Zone and costs 20% less than S3 Standard-IA. It’s a very
good choice for storing secondary backup copies of on-premises data or
easily re-creatable data. S3 One Zone-IA provides you the same high
durability, high throughput, and low latency as in S3 Standard.
Amazon S3 Storage Classes
Amazon S3 Glacier Deep Archive
The Glacier Deep Archive storage class is designed to provide durable
and secure long-term storage for large amounts of data at a very low price
that is competitive with off-premises tape archival services, so you no
longer need to deal with expensive tape infrastructure. Data can be
restored within 12 hours.
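A minimal sketch of how a storage class is chosen per object at upload time, and how a lifecycle rule can transition older objects to Glacier Deep Archive (boto3 is assumed; the bucket, prefix, and 180-day threshold are placeholders).

```python
# Minimal sketch: pick a storage class at upload time, or let a lifecycle rule
# move older objects to Glacier Deep Archive (names and thresholds are placeholders).
import boto3

s3 = boto3.client("s3")

# Upload directly into an infrequent-access class.
s3.put_object(Bucket="my-example-bucket", Key="backups/2023.tar",
              Body=b"...", StorageClass="STANDARD_IA")

# Or transition objects under a prefix to Deep Archive after 180 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-old-backups",
        "Filter": {"Prefix": "backups/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)
```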
AWS Lambda Overview
AWS Lambda is a serverless computing service provided by Amazon
Web Services (AWS). It allows you to run code without provisioning or
managing servers. Lambda automatically scales and executes your code
in response to events, and you only pay for the compute time used.
Key Features
1. Serverless Architecture:
1. No need to manage infrastructure; AWS handles the backend
provisioning.
2. Focus on writing code rather than server setup or maintenance.
2. Event-Driven Execution:
1. Triggers include AWS services (e.g., S3, DynamoDB, API Gateway),
HTTP requests, or custom events.
2. Example: Run code when a file is uploaded to an S3 bucket.
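A minimal sketch of a Python handler for the S3-upload trigger mentioned above (illustrative only; it simply logs each uploaded object, with the bucket and key taken from the event payload).

```python
# Minimal sketch of a Lambda handler for an S3 upload event.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "ok"}
```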
AWS Lambda Overview
3. Automatic Scaling:
1. Automatically handles the number of instances based on the volume
of incoming requests.
2. Each function invocation is independent, ensuring high concurrency.
4. Support for Multiple Languages:
1. Native support for Python, Java, Node.js, Ruby, Go, .NET Core,
and custom runtimes (via Amazon Linux).
2. Easily extendable using container images.
5. Pay-As-You-Go Pricing:
1. Billed based on the number of requests and execution duration
(rounded to the nearest millisecond).
2. Free tier includes 1 million requests and 400,000 GB-seconds per
month.
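As a rough worked example of this pricing model: a function configured with 512 MB (0.5 GB) of memory that runs for 200 ms per invocation consumes 0.5 GB × 0.2 s = 0.1 GB-seconds per request, so one million such invocations use 100,000 GB-seconds, which fits within the free tier quoted above.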
AWS Lambda Overview
6. Integration with AWS Ecosystem:
1. Seamless integration with services like S3, DynamoDB, SNS, SQS,
CloudWatch, and more.
2. Use as part of workflows in Step Functions or as a backend for APIs
using API Gateway.
7. Security:
1. Managed through AWS IAM for fine-grained permissions.
2. Supports VPC access for private networking.
8. High Availability:
1. Runs in multiple Availability Zones for redundancy and fault
tolerance.
AWS Lambda: Advantages
The following are the advantages of the AWS Lambda function:
5. Affordable: With AWS Lambda, you pay nothing when the code isn't
running. You are charged only for the duration your code executes
(billed per millisecond, consistent with the pricing above) and the
number of times your code is actually triggered.
AWS Fargate
AWS Fargate is a serverless compute engine for containers offered by
Amazon Web Services (AWS). It allows you to run containers without the
need to provision, manage, or scale the underlying virtual machines or
clusters of EC2 instances. Fargate is tightly integrated with Amazon
Elastic Container Service (ECS) and Amazon Elastic Kubernetes
Service (EKS).
Key Features
1. Serverless Container Management:
1. No need to manage servers, clusters, or EC2 instances.
2. Focus on defining and deploying your containers.
2. Seamless Scaling:
1. Automatically scales resources to match your container workloads.
2. Eliminates the need for manual scaling or overprovisioning.
AWS Fargate
3. Resource Isolation:
1. Provides each container with its own compute resources.
2. Enhances security and performance by avoiding resource
contention.
4. Flexible Resource Configurations:
1. Specify CPU and memory independently for each task or pod.
2. Pay only for the resources your containers use.
5. Integration with ECS and EKS:
1. Supports orchestration through both ECS and Kubernetes (via
EKS).
2. Compatible with existing ECS task definitions or Kubernetes
manifests.
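A minimal sketch of registering a Fargate-compatible ECS task definition with boto3 (an assumption; the family, image, and CPU/memory sizes are placeholders, and actually running the task additionally requires a cluster, subnets, and a security group, omitted here).

```python
# Minimal sketch of a Fargate-compatible ECS task definition (values are placeholders).
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="my-web-task",                      # hypothetical task family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                      # each task gets its own ENI
    cpu="256",                                 # 0.25 vCPU
    memory="512",                              # 512 MiB, specified independently of CPU
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)
```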
AWS Fargate
6. Pay-as-You-Go Pricing:
1. Billed based on the vCPU and memory resources consumed by your
containers.
2. No upfront costs or long-term commitments.
7. Networking and Security:
1. Supports VPC networking, allowing fine-grained control over
container communication.
2. Integrated with AWS Identity and Access Management (IAM) for
secure access.
3. Each task or pod gets its own Elastic Network Interface (ENI).
8. High Availability:
1. Runs workloads across multiple Availability Zones for redundancy.
AWS Fargate: Benefits
2. Enhanced Security:
•Containers are isolated at the infrastructure level.
•Tasks/pods run in separate ENIs, ensuring network isolation.
3. Cost Efficiency:
•Pay only for the compute and memory resources consumed.
•Reduces costs by eliminating the need to overprovision resources.
AWS Fargate: Benefits
4. Improved Developer Productivity:
•Developers focus on building and deploying applications, not managing
servers.
Comparison: AWS Fargate vs. Other Options