
Module 2

The document provides an overview of AWS Cloud Practitioner Module 2, covering key services such as Amazon EC2, Elastic Beanstalk, AWS Backup, and Amazon S3. It details various EC2 instance types, their use cases, and features of Elastic Beanstalk for application deployment. Additionally, it discusses AWS Local Zones, EBS storage options, snapshots, and the functionalities of Amazon S3 for data storage and retrieval.

Uploaded by

Vandana Awasthi
Copyright
© All Rights Reserved

AWS Cloud Practitioner

Module 2

 Compute – Amazon EC2, Launching EC2 Instances using PuTTY, Types of Instances, AWS Elastic Beanstalk, Local Zones
 Storage – Amazon EBS, AWS Backup – Snapshots, Amazon S3, Storage Class overview, S3 Glacier, and Storage Gateway
 Serverless – AWS Lambda and AWS Fargate

Compute Service – Amazon EC2

Types of Instances
Amazon EC2 (Elastic Compute Cloud) offers a wide variety of instance types
tailored to meet different use cases. These instance types are categorized based on
their compute, memory, storage, and networking capabilities. Each instance type
belongs to a specific instance family optimized for particular workloads.

1. General Purpose Instances
•Balanced compute, memory, and networking resources.
•Ideal for applications with diverse workloads, such as web servers, app servers, and development environments.
•Examples:
•T Series (Burstable Performance): t2, t3, t4g – cost-effective for low to moderate workloads.
•M Series: m5, m6g, m7i – balanced performance for a wide range of applications.
Types of Instances
2. Compute-Optimized Instances
•High-performance processors for compute-intensive tasks.
•Ideal for high-performance computing (HPC), machine learning
inference, batch processing, and gaming.
•Examples:
•C Series: c5, c6g, c7g

3. Memory-Optimized Instances
•High memory-to-CPU ratio for memory-intensive applications.
•Ideal for in-memory databases, caching, and real-time big data analytics.
•Examples:
•R Series: r5, r6g, r7g
•Optimized for memory-intensive applications.
•X Series: x2idn, x2iedn
Types of Instances
4. Accelerated Computing Instances
•Hardware accelerators like GPUs and FPGAs for specialized tasks.
•Ideal for machine learning, deep learning, video processing, and scientific simulations.
•Examples:
•P Series: p3, p4 (GPU instances for ML training).
•G Series: g4, g5 (GPU instances for ML inference and graphics rendering).
Types of Instances
5. Storage-Optimized Instances
•High disk throughput and IOPS for storage-heavy applications.
•Ideal for data warehousing, distributed file systems, and NoSQL databases.
•Examples:
•I Series: i3, i4i – optimized for high-IOPS storage.
•D Series: d2, d3 – optimized for dense storage workloads.
•H Series: h1 – optimized for high storage throughput.
Types of Instances
6. Networking-Optimized Instances
•Designed for applications requiring high network bandwidth and low
latency.
•Examples:
•High Network Bandwidth Instances: Instances with enhanced
networking capabilities like c5n and m5n.

Elastic Beanstalk
Key features of Elastic Beanstalk
Easy Deployment – Upload your application code, and EB automatically provisions the necessary infrastructure.

Supports Multiple Languages – Java, Python, .NET, Node.js, Ruby, PHP, Go, and Docker.

Integrated Load Balancing & Auto Scaling – Automatically adjusts resources based on demand.

Monitoring & Logging – Built-in integration with Amazon CloudWatch and AWS X-Ray for performance tracking.

Customization – Modify configurations, set environment variables, and extend with EC2 customizations.
How does Elastic Beanstalk work?
 When you deploy an application using Elastic Beanstalk, it follows these steps:

1. Upload Your Code – Use the AWS Management Console, AWS CLI, or an
IDE plugin to upload your application.

2. Elastic Beanstalk Provisions Infrastructure – It automatically creates:
•EC2 instances
•Elastic Load Balancer (ELB)
•Auto Scaling Group
•Security Groups
•Amazon RDS (if configured)

How does Elastic Beanstalk work?

3. Application Deployment – EB configures the environment and deploys your application.

4. Auto-Scaling & Monitoring – The application is monitored, and resources are adjusted based on traffic.

5. Application Updates – You can deploy updates with zero downtime using Rolling Updates.
Web Server Tier
 A Web Server Environment is designed to run web applications that handle
HTTP(S) requests from users via an Elastic Load Balancer (ELB).

 How It Works
1. Elastic Beanstalk provisions Amazon EC2 instances.
2. It configures a Load Balancer (ELB) and Auto Scaling Group.
3. EC2 instances run a web server (Apache, Nginx, IIS, etc.).
4. The environment automatically manages scaling and monitoring.

 Use Cases
 Hosting web applications (Node.js, Django, Spring Boot, Laravel, etc.)
 Running REST APIs
 Serving static and dynamic content
Web Server Tier

 Architecture

User Request --> Load Balancer --> EC2 Web Server --> Application
Code --> Database (optional)

Worker Tier
 A Worker Environment is designed for processing background jobs that
do not require immediate responses.

 How It Works
1. Elastic Beanstalk provisions EC2 instances running a worker process.
2. It creates an Amazon Simple Queue Service (SQS) queue.
3. The worker tier listens to the SQS queue for tasks.
4. When a task arrives, the worker instance processes it.

 Use Cases
 Asynchronous task processing (e.g., sending emails, image processing)
 Scheduled jobs (e.g., database cleanup, report generation)
 Long-running operations (e.g., video encoding, machine learning batch jobs)
Worker Tier
Architecture

Web Server (User Request) --> SQS Queue --> Worker EC2 Instance --> Task
Execution
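The worker-tier flow above (web tier enqueues, worker polls the queue and processes) can be simulated with Python's standard library. In a real environment, boto3's SQS `send_message`/`receive_message` calls would replace the in-memory queue used here:

```python
import queue

# In-memory stand-in for the SQS queue sitting between the web and worker tiers.
task_queue = queue.Queue()

def web_tier_enqueue(task: str) -> None:
    """Web tier: accept a request and enqueue the background job."""
    task_queue.put(task)

def worker_tier_poll() -> list:
    """Worker tier: drain the queue and process each task in arrival order."""
    processed = []
    while not task_queue.empty():
        task = task_queue.get()
        processed.append(f"done:{task}")  # stand-in for real work (e.g. resizing an image)
        task_queue.task_done()
    return processed

web_tier_enqueue("send-email")
web_tier_enqueue("encode-video")
print(worker_tier_poll())  # ['done:send-email', 'done:encode-video']
```

The key design point mirrored here is decoupling: the web tier returns to the user immediately, and the worker consumes tasks at its own pace.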

AWS Local Zones
AWS Local Zones are extensions of AWS Regions that bring compute,
storage, and other select AWS services closer to major population centers,
industry hubs, and end users. They are designed to address use cases
requiring low latency (single-digit millisecond) or local data residency.

 Key Features of AWS Local Zones

1. Proximity to End Users:
•Provides access to AWS services closer to users for applications sensitive to latency, like gaming, live video streaming, and real-time simulations.
2. Reduced Latency:
•By hosting applications closer to users, Local Zones eliminate the latency introduced by connecting to a distant AWS Region.
AWS Local Zones
3. Subset of AWS Services:
•Offers a select set of services like Amazon EC2, Amazon EBS,
Amazon ECS, Amazon EKS, AWS Direct Connect, and Amazon
VPC.
•The parent AWS Region handles the rest of the services.

4. Seamless Integration:
•Local Zones are extensions of AWS Regions, which means management
and deployment workflows remain consistent across the AWS ecosystem.

5. Local Data Residency:
•Helps meet compliance and regulatory requirements for data residency by keeping data within a specific location.

AWS Backup
AWS Backup is a fully managed, centralized service that simplifies and automates the process of backing up data across AWS services and on-premises environments.

It provides a unified approach to data protection, enabling you to define backup policies, monitor backup activity, and ensure data compliance from a single platform.

 Key Features of AWS Backup

1. Centralized Backup Management
•Manage backups for various AWS services from a single console or API.
•Supports data stored in services like Amazon EBS, Amazon RDS, Amazon DynamoDB, Amazon EFS, AWS Storage Gateway, and more.
AWS Backup
2. Policy-Based Automation
•Define backup schedules, retention periods, and lifecycle policies using
backup plans.
•Automatically apply policies to resources based on tags.
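A backup plan of this shape could be expressed as the following structure; the field names follow AWS Backup's CreateBackupPlan API, while the plan name, vault name, and numeric values are illustrative:

```python
# Illustrative AWS Backup plan (shape based on the CreateBackupPlan API;
# the names and values here are examples, not a tested deployment).
backup_plan = {
    "BackupPlanName": "daily-to-cold",
    "Rules": [
        {
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # backup schedule: daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,  # lifecycle: transition to cold storage
                "DeleteAfterDays": 365,            # retention period
            },
        }
    ],
}
```

With boto3 this dictionary would be passed as the `BackupPlan` argument; tag-based assignment is then done with a separate backup selection.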

3. Secure and Compliant Backups
•Supports encryption of data at rest and in transit.
•Helps meet regulatory and compliance requirements like GDPR, HIPAA, and PCI DSS.

4. Cross-Region and Cross-Account Backups
•Replicate backups across AWS Regions and accounts for disaster recovery and improved resilience.
AWS Backup
5. Backup Monitoring and Alerts
•Use Amazon CloudWatch and AWS Backup Audit Manager to monitor
backup activity.
•Receive alerts for backup status and compliance violations.

6. Lifecycle Management
•Automatically transition backups to cold storage after a specified period,
reducing storage costs.
•Retain backups for as long as required based on regulatory or business
needs.

7. Point-in-Time Recovery
•Recover data to a specific point in time (supported for certain services
like RDS and DynamoDB).
Supported AWS Services
•Amazon EBS: Backups for block storage volumes.

•Amazon RDS: Automated backups for relational databases.

•Amazon DynamoDB: Point-in-time recovery for NoSQL databases.

•Amazon EFS: Backups for file systems.

•AWS Storage Gateway: Protects data stored on-premises and in AWS.

•Amazon S3: Manage S3 backups using AWS Backup.
Storage-Amazon EBS
AWS Storage Services: AWS offers a wide range of storage services that
can be provisioned depending on your project requirements and use case.

AWS storage services have different provisions for highly confidential data, frequently accessed data, and infrequently accessed data.

You can choose from various storage types, namely object storage, file storage, block storage, backups, and data migration options, all of which fall under the AWS Storage Services list.
Storage-Amazon EBS
 Elastic Block Storage (EBS): From the aforementioned list, EBS is a block-type, durable, and persistent storage that can be attached to EC2 instances for additional storage. Unlike EC2 instance store volumes, which are suitable for holding temporary data, EBS volumes are highly suitable for essential and long-term data. EBS volumes are specific to Availability Zones and can only be attached to instances within the same Availability Zone.
 EBS volumes can be created from the EC2 dashboard in the console as well as in Step 4 of the EC2 launch wizard. Note that when created with EC2, the EBS volumes are placed in the same Availability Zone as the EC2 instance; when provisioned independently, users can choose the AZ in which the EBS volume is required.
Storage-Amazon EBS

 Features of EBS:

• Scalability: EBS volume sizes and features can be scaled as per the
needs of the system. This can be done in two ways:
• Take a snapshot of the volume and create a new volume using the
Snapshot with new updated features.
• Updating the existing EBS volume from the console.

Storage-Amazon EBS
 Features of EBS (contd.)

• Backup: Users can create snapshots of EBS volumes that act as backups.
• Snapshots can be created manually at any point in time or can be scheduled.
• Snapshots are stored in Amazon S3 and are charged according to S3 storage charges.
• Snapshots are incremental in nature.
• New volumes can be created from snapshots, including across regions.
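The incremental behaviour above can be illustrated with a toy model in which only blocks changed since the previous snapshot are stored. This is a sketch of the idea, not how EBS is implemented internally:

```python
def incremental_snapshot(volume, previous=None):
    """Toy model: store only blocks that changed since the previous snapshot."""
    if previous is None:
        return dict(volume)  # the first snapshot is a full copy
    return {block: data for block, data in volume.items()
            if previous.get(block) != data}

vol = {"b0": "aaa", "b1": "bbb", "b2": "ccc"}
snap1 = incremental_snapshot(vol, None)    # full snapshot: all 3 blocks stored
vol["b1"] = "BBB"                          # only one block changes afterwards
snap2 = incremental_snapshot(vol, snap1)   # incremental: only the changed block
print(len(snap1), len(snap2))  # 3 1
```

This is why a chain of frequent snapshots costs far less than repeated full copies: unchanged blocks are referenced, not re-stored.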

Storage-Amazon EBS
 Features of EBS (contd.)

• Encryption: Encryption can be a basic requirement when it comes to storage, often due to government or regulatory compliance. EBS offers an AWS-managed encryption feature.
• Users can enable encryption when creating EBS volumes by clicking a checkbox.
• Encryption keys are managed by the AWS Key Management Service (KMS).
• Encrypted volumes can only be attached to selected instance types.
• Encryption uses the AES-256 algorithm.
• Snapshots from encrypted volumes are encrypted and, similarly, volumes created from encrypted snapshots are encrypted.
Storage-Amazon EBS
 Features of EBS (contd.)

• Charges: Unlike Amazon S3, where you are charged for the storage you consume, AWS charges for the storage you provision. For example, if you use 1 GB of storage in a 5 GB volume, you are still charged for the full 5 GB EBS volume.
• EBS charges vary from region to region.
• EBS volumes are independent of the EC2 instance they are attached to. The data in an EBS volume remains intact if the instance is rebooted or stopped (and on termination, unless the volume is set to delete on termination).
• A single EBS volume can only be attached to one EC2 instance at a time (except for io1/io2 Multi-Attach volumes). However, one EC2 instance can have more than one EBS volume attached to it.
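The provisioned-size billing described above works out as follows; the per-GB rate used is a hypothetical example, not a published AWS price:

```python
def monthly_ebs_cost(provisioned_gb, rate_per_gb_month):
    """EBS bills on the provisioned size, not on the bytes actually used."""
    return provisioned_gb * rate_per_gb_month

# A 5 GB volume with only 1 GB used is still billed for all 5 GB.
print(monthly_ebs_cost(5, 0.10))  # 0.5 (at a hypothetical $0.10/GB-month)
```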
Types of EBS
1. General Purpose SSD (gp3, gp2):
•Cost-effective and suitable for a variety of workloads, including boot volumes and low-latency applications.

2. Provisioned IOPS SSD (io2, io1):
•Designed for latency-sensitive transactional workloads, such as databases.

3. Throughput Optimized HDD (st1):
•Ideal for sequential data access workloads like big data, data warehouses, and log processing.

4. Cold HDD (sc1):
•Lowest cost; suitable for infrequently accessed data.

5. EBS Magnetic (standard):
•Legacy option for workloads that do not require high performance.
EBS – Use cases
•Hosting relational and non-relational databases.

•Storing logs and data files for big data analytics.

•Running applications that require high availability and reliability.

•Supporting disaster recovery and business continuity through snapshots and cross-region replication.
AWS Snapshots
•In AWS, snapshots are point-in-time backups of data, typically associated
with storage services like Amazon Elastic Block Store (EBS), Amazon RDS,
and Amazon FSx. Snapshots are widely used for data protection, disaster
recovery, and migration purposes.

 Key Features of Snapshots

1. Incremental Backups
•After the first full snapshot, subsequent snapshots are incremental, meaning only changes since the last snapshot are stored. This reduces storage costs and speeds up the backup process.
2. Cross-Region and Cross-Account Copying
•Snapshots can be copied to different AWS Regions for disaster recovery or closer proximity to end users.
•They can also be shared with other AWS accounts securely.
AWS Snapshots
3. Encryption
•Snapshots of encrypted volumes or databases are also encrypted.
•Data is protected using AWS Key Management Service (KMS).

4. Lifecycle Management
•Automate snapshot creation, retention, and deletion using AWS Backup or Data Lifecycle Manager (DLM).

5. Fast Recovery
•Snapshots can be used to quickly restore volumes or databases to a specific point in time.
Types of AWS Snapshots
1. EBS Snapshots
• Create backups of Amazon EBS volumes.
• Can be restored to create a new volume or attached to an instance.
• Support fast snapshot restore (FSR) for selected volumes, reducing
recovery times.
2. RDS Snapshots
• Back up Amazon RDS databases.
• Two types:
• Automated Snapshots: Automatically created based on retention
settings.
• Manual Snapshots: User-initiated and retained until explicitly
deleted.
• Used for point-in-time recovery.
Types of AWS Snapshots
3. DynamoDB Backups
• On-demand snapshots or continuous backups for point-in-time recovery.
4. Amazon FSx Snapshots
• Backup file systems like FSx for Windows, FSx for Lustre, or FSx for
OpenZFS.
• Ensures consistent recovery for shared file storage.
5. Amazon Redshift Snapshots
• Backup data in Amazon Redshift clusters.
• Can be automated or manual.

AWS Snapshots
Use Cases
1. Disaster Recovery
•Maintain backups in different regions to ensure availability during outages.
2. Migration
•Use snapshots to migrate volumes or databases across accounts or regions.
3. Development and Testing
•Create snapshots of production data for non-production environments.
4. Data Retention and Archiving
•Preserve snapshots for compliance and long-term retention.
AWS Simple Storage Service (S3)
Amazon S3 (Simple Storage Service) is an AWS service that stores files of different types, like photos, audio, and videos, as objects, providing scalability and security.

It allows users to store and retrieve any amount of data at any point in time from anywhere on the web.

It offers features such as extremely high availability, security, and simple connection to other AWS services.
What is S3 used for?
 Amazon S3 is used for various purposes in the cloud because of its robust features for scaling and securing data. It supports all kinds of use cases from fields such as mobile/web applications, big data, machine learning, and many more. The following are a few of the wide uses of the Amazon S3 service.
• Data Storage: Amazon S3 is an excellent option for scaling both small and large storage applications. It helps in storing and retrieving data for data-intensive applications as needed.

• Backup and Recovery: Many organizations use Amazon S3 to back up their critical data and maintain data durability and availability for recovery needs.
What is S3 used for?
• Hosting Static Websites: Amazon S3 can store HTML, CSS, and other web content from users/developers, allowing them to host static websites with low-latency access and cost-effectiveness.

• Data Archiving: Integration with Amazon S3 Glacier provides a cost-effective solution for long-term storage of data that is accessed less frequently.

• Big Data Analytics: Amazon S3 is often used as a data lake because of its capacity to store large amounts of both structured and unstructured data, offering seamless integration with AWS analytics and AWS machine learning services.
Amazon S3 Buckets
• Data in S3 is stored in containers called buckets. Each bucket has its own set of policies and configurations.

• This enables users to have more control over their data. Bucket names must be globally unique.

• There is a default limit of 100 buckets per AWS account, which can be increased on request through AWS Support.
Amazon S3 Objects
 Amazon S3 Objects: The fundamental entity type stored in AWS S3. You can store as many objects as you want. The maximum size of a single S3 object is 5 TB. An object consists of the following:

• Key
• Version ID
• Value
• Metadata
• Subresources
• Access control information
• Tags
Amazon S3 Versioning and Access Control
 S3 Versioning: Versioning means always keeping a record of previously uploaded files in S3. Versioning is not enabled by default; once enabled, it applies to all objects in a bucket. Versioning keeps all the copies of your file, so it adds cost for storing multiple copies of your data. For example, 10 copies of a 1 GB file will have you charged for using 10 GB of S3 space. Versioning is helpful for preventing unintended overwrites and deletions. Objects with the same key can be stored in a bucket if versioning is enabled (since they have a unique version ID).
 Access control lists (ACLs): A document for verifying access to S3 buckets from outside your AWS account. An ACL is specific to each bucket. You can utilize S3 Object Ownership, an Amazon S3 bucket-level feature, to manage who owns the objects you upload to your bucket and to enable or disable ACLs.
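The versioning cost point (10 copies of a 1 GB file billed as 10 GB) can be modeled with a toy versioned bucket; this is a sketch of the behaviour, not the S3 API:

```python
class VersionedBucket:
    """Toy model: with versioning enabled, every overwrite keeps the old copy."""
    def __init__(self):
        self.versions = []  # one (key, size_gb) entry per stored version

    def put(self, key, size_gb):
        # Old versions are retained, not replaced, so storage grows per upload.
        self.versions.append((key, size_gb))

    def billed_gb(self):
        # You pay for every retained version, not just the latest one.
        return sum(size for _, size in self.versions)

b = VersionedBucket()
for _ in range(10):
    b.put("report.csv", 1)  # the same 1 GB key uploaded 10 times
print(b.billed_gb())  # 10
```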
Amazon S3 Storage Classes
• Amazon S3 Standard

• Amazon S3 Intelligent-Tiering

• Amazon S3 Standard-Infrequent Access

• Amazon S3 One Zone-Infrequent Access

• Amazon S3 Glacier Instant Retrieval

• Amazon S3 Glacier Flexible Retrieval

• Amazon S3 Glacier Deep Archive
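Choosing among the classes above usually comes down to access frequency and how long a retrieval may take. A rough decision helper follows; the thresholds are illustrative, not official AWS guidance:

```python
def suggest_storage_class(accesses_per_month, max_retrieval_hours):
    """Rough S3 storage-class picker; thresholds are illustrative only."""
    if accesses_per_month >= 1:
        # Frequently vs. infrequently accessed, both needing immediate reads.
        return "S3 Standard" if accesses_per_month > 1 else "S3 Standard-IA"
    # Archival tiers: pick by how long a restore is allowed to take.
    if max_retrieval_hours < 1:
        return "S3 Glacier Instant Retrieval"
    if max_retrieval_hours < 12:
        return "S3 Glacier Flexible Retrieval"
    return "S3 Glacier Deep Archive"
```

S3 Intelligent-Tiering is the alternative when the access pattern is unknown or changes over time, since it re-tiers objects automatically.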

Amazon S3 Storage Classes
• Amazon S3 Standard

 It is used for general purposes and offers high-durability, high-availability, and high-performance object storage for frequently accessed data. S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

 Characteristics of S3 Standard
• Designed for 99.99% availability.
• Durability is 99.999999999%.
Amazon S3 Storage Classes
• Amazon S3 Intelligent-Tiering

 The first cloud storage class that automatically reduces the user's storage cost. It provides very cost-effective storage based on access frequency, without affecting performance, by moving objects between access tiers automatically. There are no retrieval charges in Amazon S3 Intelligent-Tiering.

 Characteristics of S3 Intelligent-Tiering

• Requires no monitoring; objects are tiered automatically.
• No minimum storage duration and no retrieval charges to access the service.
• Designed for 99.9% availability.
• Durability of S3 Intelligent-Tiering is 99.999999999%.
Amazon S3 Storage Classes
 S3 Standard-Infrequent Access: Cost-Effective Storage for Less
Frequently Used Data
 S3 Standard-IA is used for less frequently accessed data that requires rapid access when needed. It offers the high durability, high throughput, and low latency of S3 Standard with a lower per-GB storage price. It is best for storing long-term backups and as a data store for disaster recovery files.

 Characteristics of S3 Standard-Infrequent Access

• Same high throughput and low latency as S3 Standard.
• Very durable across all AZs.
• Availability is 99.9% in S3 Standard-IA.
• Durability is 99.999999999%.
Amazon S3 Storage Classes
 S3 Glacier Instant Retrieval: High-Performance Archiving
with Rapid Retrieval
 It is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed but still requires immediate retrieval. S3 Glacier Instant Retrieval delivers the fastest access to archive storage: the same milliseconds retrieval as S3 Standard.

 Characteristics of S3 Glacier Instant Retrieval
• It takes only milliseconds to recover the data.
• The minimum billable object size is 128 KB.
• Availability is 99.9% in S3 Glacier Instant Retrieval.
• Durability is 99.999999999%.
Amazon S3 Storage Classes
 S3 One Zone-Infrequent Access: Cost-Optimized Storage for
Single Availability Zone
 Unlike other S3 storage classes, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone and costs 20% less than S3 Standard-IA. It is a very good choice for storing secondary backup copies of on-premises data or easily re-creatable data. S3 One Zone-IA provides the same high throughput and low latency as S3 Standard.

 Characteristics of S3 One Zone-Infrequent Access

• Supports SSL (Secure Sockets Layer) for data in transit and encryption of data at rest.
• Destruction of the single Availability Zone can result in loss of the data.
• Availability is 99.5% in S3 One Zone-IA.
• Durability is 99.999999999%.
Amazon S3 Storage Classes
 S3 Glacier Flexible Retrieval: Balancing Cost and Retrieval
Flexibility for Archiving
 It provides lower-cost storage than S3 Glacier Instant Retrieval. It is a suitable solution for backing up data that needs to be retrieved only a few times a year; retrievals take minutes to hours.

 Characteristics of S3 Glacier Flexible Retrieval

• Free bulk retrievals.
• Best for backup and disaster-recovery use cases where large data sets must be retrieved.
• Availability is 99.99% in S3 Glacier Flexible Retrieval.
• Durability is 99.999999999%.
Amazon S3 Storage Classes
 Amazon S3 Glacier Deep Archive
 The Glacier Deep Archive storage class is designed to provide long-lasting and secure long-term storage for large amounts of data, at a price that is competitive with off-premises tape archival services. Data can be restored within 12 hours.

 Characteristics of S3 Glacier Deep Archive

• Lowest-cost storage class, suited to secure long-term archiving.
• Standard restore time is within 12 hours.
• Availability is 99.99% in S3 Glacier Deep Archive.
• Durability is 99.999999999%.
AWS Lambda Overview
 AWS Lambda is a serverless computing service provided by Amazon
Web Services (AWS). It allows you to run code without provisioning or
managing servers. Lambda automatically scales and executes your code
in response to events, and you only pay for the compute time used.
 Key Features
1. Serverless Architecture:
•No need to manage infrastructure; AWS handles the backend provisioning.
•Focus on writing code rather than server setup or maintenance.
2. Event-Driven Execution:
•Triggers include AWS services (e.g., S3, DynamoDB, API Gateway), HTTP requests, or custom events.
•Example: Run code when a file is uploaded to an S3 bucket.
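The S3-trigger example above can be sketched as a minimal handler. The event shape follows the S3 event notification format (`Records[].s3.bucket.name` and `Records[].s3.object.key`); the processing itself is a placeholder:

```python
def lambda_handler(event, context):
    """Minimal Lambda handler for an S3 'object created' notification."""
    processed = []
    for record in event.get("Records", []):
        # Each record identifies one uploaded object by bucket and key.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")  # real code would fetch/process the object
    return {"processed": processed}
```

Lambda invokes this function once per event batch; no server is provisioned or managed by the author of the handler.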

AWS Lambda Overview
3. Automatic Scaling:
•Automatically handles the number of instances based on the volume of incoming requests.
•Each function invocation is independent, ensuring high concurrency.
4. Support for Multiple Languages:
•Native support for Python, Java, Node.js, Ruby, Go, .NET Core, and custom runtimes (via Amazon Linux).
•Easily extendable using container images.
5. Pay-As-You-Go Pricing:
•Billed based on the number of requests and execution duration (rounded to the nearest millisecond).
•Free tier includes 1 million requests and 400,000 GB-seconds per month.
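The pay-as-you-go model can be made concrete with a small estimate. The default rates below mirror commonly cited Lambda prices but should be treated as illustrative placeholders, and the free tier is ignored for simplicity:

```python
def lambda_monthly_cost(requests, avg_ms, memory_gb,
                        price_per_million=0.20, price_per_gb_s=0.0000166667):
    """Estimate a monthly Lambda bill; rates are illustrative placeholders."""
    # Compute duration is billed in GB-seconds: duration x allocated memory.
    gb_seconds = requests * (avg_ms / 1000.0) * memory_gb
    return (requests / 1_000_000 * price_per_million
            + gb_seconds * price_per_gb_s)

# 2 million requests/month, 100 ms average duration, 512 MB memory:
cost = lambda_monthly_cost(2_000_000, 100, 0.5)
print(round(cost, 2))  # 2.07
```

The dominant term is usually the GB-seconds component, which is why right-sizing memory matters as much as cutting request counts.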
AWS Lambda Overview
6. Integration with AWS Ecosystem:
•Seamless integration with services like S3, DynamoDB, SNS, SQS, CloudWatch, and more.
•Use as part of workflows in Step Functions or as a backend for APIs using API Gateway.
7. Security:
•Managed through AWS IAM for fine-grained permissions.
•Supports VPC access for private networking.
8. High Availability:
•Runs in multiple Availability Zones for redundancy and fault tolerance.
AWS Lambda: Advantages
 The following are the advantages of the AWS Lambda function:

1. Zero Server Management: Since AWS Lambda automatically runs the user's code, there is no need for the user to manage the server. Simply write the code and upload it to Lambda.

2. Scalability: AWS Lambda runs code in response to each trigger, so the user's application is automatically scaled. The code also runs in parallel processes, each triggered individually, so scaling is done precisely with the size of the workload.

3. Event-Driven Architecture: An AWS Lambda function can be triggered based on events happening in other AWS services; for example, when a file or video is added to an S3 bucket, it can trigger the Lambda function.
AWS Lambda: Advantages
4. Automatic High Availability: When there is high demand or heavy incoming traffic, the AWS Lambda function will automatically scale.

5. Affordable: With AWS Lambda, one doesn't pay anything when the code isn't running. The user is charged only for the code's execution duration (billed per millisecond) and the number of times the code is actually triggered.
AWS Fargate
AWS Fargate is a serverless compute engine for containers offered by
Amazon Web Services (AWS). It allows you to run containers without the
need to provision, manage, or scale the underlying virtual machines or
clusters of EC2 instances. Fargate is tightly integrated with Amazon
Elastic Container Service (ECS) and Amazon Elastic Kubernetes
Service (EKS).

 Key Features
1. Serverless Container Management:
•No need to manage servers, clusters, or EC2 instances.
•Focus on defining and deploying your containers.
2. Seamless Scaling:
•Automatically scales resources to match your container workloads.
•Eliminates the need for manual scaling or overprovisioning.
AWS Fargate
3. Resource Isolation:
•Provides each container with its own compute resources.
•Enhances security and performance by avoiding resource contention.
4. Flexible Resource Configurations:
•Specify CPU and memory independently for each task or pod.
•Pay only for the resources your containers use.
5. Integration with ECS and EKS:
•Supports orchestration through both ECS and Kubernetes (via EKS).
•Compatible with existing ECS task definitions or Kubernetes manifests.
AWS Fargate
6. Pay-as-You-Go Pricing:
•Billed based on the vCPU and memory resources consumed by your containers.
•No upfront costs or long-term commitments.
7. Networking and Security:
•Supports VPC networking, allowing fine-grained control over container communication.
•Integrated with AWS Identity and Access Management (IAM) for secure access.
•Each task or pod gets its own Elastic Network Interface (ENI).
8. High Availability:
•Runs workloads across multiple Availability Zones for redundancy.
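A minimal ECS task definition for Fargate carries the CPU/memory pair described in point 4. The fragment below is illustrative (the family, container name, and image are examples), and `cpu`/`memory` must be one of Fargate's supported combinations:

```python
# Minimal ECS/Fargate task definition fragment (illustrative values).
task_definition = {
    "family": "web-api",                       # example task family name
    "requiresCompatibilities": ["FARGATE"],    # run on Fargate, not EC2
    "networkMode": "awsvpc",                   # each task gets its own ENI
    "cpu": "256",                              # 0.25 vCPU
    "memory": "512",                           # 512 MiB, paired with 0.25 vCPU
    "containerDefinitions": [
        {
            "name": "app",
            "image": "public.ecr.aws/nginx/nginx:latest",  # example public image
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```

With boto3, this dictionary's fields would be passed to ECS's `register_task_definition`; billing then follows the declared vCPU and memory, per point 6.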
AWS Fargate: Benefits

1. No Infrastructure Management:
•Removes the operational overhead of managing EC2 instances or clusters.

2. Enhanced Security:
•Containers are isolated at the infrastructure level.
•Tasks/pods run in separate ENIs, ensuring network isolation.

3. Cost Efficiency:
•Pay only for the compute and memory resources consumed.
•Reduces costs by eliminating the need to overprovision resources.
AWS Fargate: Benefits
4. Improved Developer Productivity:
•Developers focus on building and deploying applications, not managing servers.

5. Deep Integration with AWS Ecosystem:
•Works seamlessly with AWS services like IAM, CloudWatch, ECR, and more.
Comparison: AWS Fargate vs. Other Options
