INTERNSHIP REPORT
ON
AWS Cloud Virtual Internship

Submitted by

NAME: K. Vijay

REG. NO: 20K61A1232

Under the esteemed guidance of


Dr. K. Subhash Bhagavan
Associate Professor

Submitted to:
Dr. A. V. N. Chandra Sekhar
DEPARTMENT OF INFORMATION TECHNOLOGY

SASI INSTITUTE OF TECHNOLOGY & ENGINEERING


(Approved by AICTE, New Delhi, Permanently Affiliated to JNTUK, Kakinada and SBTET-Hyderabad, Accredited by NAAC with ‘A’ Grade, Ranked as "A" Grade by Govt. of A.P., Recognized by UGC 2(f) & 12(B))
Kadakatla, Tadepalligudem – 534 101.


DECLARATION

I, K. Vijay (20K61A1232), a student of Information Technology at Sasi Institute of Technology & Engineering, Tadepalligudem, hereby declare that the Summer Training Report entitled "AWS Cloud Virtual Internship" is an authentic record of my own work, carried out as a requirement of Industrial Training during the period from date to final date. I obtained knowledge of the AWS Cloud through the selfless efforts of the employee assigned to me by the administration. A training report was prepared on the same, and the suggestions given by the faculty were duly incorporated.

K. Vijay

20K61A1232

Academic Supervisor:
Dr. K. Subhash Bhagavan, Associate Professor

Head of the Department:
Dr. K. Subhash Bhagavan, Associate Professor

External Examiner


ACKNOWLEDGEMENT

I would like to thank EduSkills, India, which provided me with this summer training. I express my sincere thanks to Mr. P. Srinivasa Sharma, Director, for giving me a great opportunity to work in this domain.

I take immense pleasure in expressing my deep sense of gratitude to my beloved guide, Dr. K. Subhash Bhagavan, for his benevolent guidance and kind cooperation throughout my training and in completing this internship, and for the knowledge he shared at every stage.

I express my deep sense of gratitude to my beloved Principal, Prof. Mohammed Ismail, for his valuable guidance and for permitting us to carry out this internship.

I express my deep sense of gratitude to Dr. K. Subhash Bhagavan, Associate Professor and Head of the Department, for the valuable guidance and suggestions, the keen interest shown, and the thorough encouragement extended throughout the period of the internship work.

With Gratitude

K. Vijay

20K61A1232


ABSTRACT
This project aims to investigate the status of cloud computing among business and government organizations, and to understand the security concerns of organizations regarding the adoption of the cloud. The study shows that some government agencies lag behind in using cloud computing, while others are leading the way. The literature was reviewed, and much was discovered about the complexity of cloud computing. A survey was then conducted, and some participants agreed to follow-up interviews in order to clarify the status of cloud acceptance. Security issues were found to be the major reason for delay in cloud adoption. However, the literature shows that proper adoption of the cloud actually increases security. Results of the data analysis show that in the US and Canada, government lags behind industry in adopting the cloud, while in the UK, Australia, and parts of the EU, governments are leading the way.


Vision & Mission

Vision of the Institute

 To be a premier institute for professional education by creating technocrats who can address society's needs through inventions and innovations.

Mission of the Institute

 Partake in the nation's technological and industrial growth with societal responsibility.
 Provide an environment that promotes productive research.
 Meet stakeholders' expectations through continual and sustained quality improvements.

Vision of the Program

 To become a recognized Centre of Excellence for quality IT education and create professionals with the ability to solve social needs.

Mission of the Program

 To provide a quality teaching–learning environment that builds the necessary skills for employability and career development.
 To conduct trainings and events, in collaboration with partners, for the overall development of stakeholders.
 To impart value education to students so that they serve society with high integrity and good character.
 To provide state-of-the-art facilities that enable innovation and student-centric learning.


PEOs and PSOs

Program Educational Objectives

These PEOs are meant to prepare our students to thrive and to lead in their careers.

PEO1: Graduates will have strong knowledge of IT applications along with leadership qualities.

PEO2: Graduates will pursue successful careers in IT and allied industries and provide solutions for global needs.

PEO3: Graduates will have a life-long learning attitude and practice professional ethics.

Program Outcomes

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modelling, to complex engineering activities with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
Program Specific Outcomes

1. Application Development: Develop risk-free, innovative IT applications for industrial needs.

2. Successful Career and Entrepreneurship: Explore technical knowledge in diverse areas of IT and experience an environment conducive to cultivating skills for a successful career, entrepreneurship, and higher studies.


LIST OF FIGURES

Fig. 1: VPC with Public & Private Subnet


TABLE OF CONTENTS

Abstract
Vision and Mission
POs, PSOs and PEOs
List of Figures

Chapter 1: Cloud Concepts Overview
1.1 Introduction to Cloud Computing
1.2 Advantages of Cloud
1.3 Introduction to AWS
1.4 Moving to the AWS Cloud

Chapter 2: Cloud Economics and Billing
2.1 Fundamentals of Pricing
2.2 Total Cost of Ownership
2.3 AWS Organizations
2.4 AWS Billing & Cost Management
2.5 Technical Support Models

Chapter 3: AWS Global Infrastructure Overview
3.1 AWS Global Infrastructure
3.2 AWS Services & Service Categories

Chapter 4: AWS Cloud Security
4.1 AWS Shared Responsibility Model
4.2 AWS IAM
4.3 Securing a New AWS Account
4.4 Securing Accounts
4.5 Securing Data
4.6 Working to Ensure Compliance

Chapter 5: Networking and Content Delivery
5.1 Networking Basics
5.2 Amazon VPC
5.3 VPC Networking
5.4 VPC Security
5.5 Route 53
5.6 CloudFront

Chapter 6: Compute
6.1 Compute Services Overview
6.2 Amazon EC2 Part 1
6.3 Amazon EC2 Part 2
6.4 Amazon EC2 Part 3
6.5 Amazon EC2 Cost Optimization
6.6 Container Services
6.7 Introduction to AWS Lambda
6.8 Introduction to AWS Elastic Beanstalk

Chapter 7: Storage
7.1 AWS EBS
7.2 AWS S3
7.3 AWS EFS
7.4 AWS S3 Glacier

Chapter 8: Databases
8.1 Amazon RDS
8.2 Amazon DynamoDB
8.3 Amazon Redshift
8.4 Amazon Aurora

Chapter 9: Cloud Architecture
9.1 AWS Well-Architected Framework Design Principles
9.2 Operational Excellence
9.3 Security
9.4 Reliability
9.5 Performance Efficiency
9.6 Cost Optimization
9.7 Reliability & High Availability
9.8 AWS Trusted Advisor

Chapter 10: Auto Scaling and Monitoring
10.1 Elastic Load Balancing
10.2 Amazon CloudWatch
10.3 Amazon EC2 Scaling

Appendix A (Internship Evaluation Form)
Appendix B (POs' and PSOs' Relevance with Internship)


CLOUD CONCEPTS OVERVIEW

1.1 Introduction to Cloud Computing

Cloud computing is a transformative technology that has revolutionized the way businesses
and individuals store, process, and access data and applications. It represents a shift from traditional
on-premises IT infrastructure to a model that leverages the internet to deliver computing resources
on-demand. Here's an overview of cloud computing:

Definition:

Cloud computing is the delivery of various computing services over the internet, including servers,
storage, databases, networking, software, analytics, and more. These services are provided by cloud
service providers and are often paid for on a subscription or pay-as-you-go basis.

Characteristics:

On-Demand Self-Service: Users can provision and manage resources as needed, without human
intervention.

Broad Network Access: Cloud services are accessible over the internet from a variety of devices.

Resource Pooling: Resources are shared and dynamically allocated to multiple users as needed.

Rapid Elasticity: Users can quickly scale resources up or down based on demand.

Measured Service: Cloud usage is metered, and users are billed for the resources they consume.

Service Models:

Infrastructure as a Service (IaaS): Provides virtualized computing resources over the internet,
including virtual machines, storage, and networking.

Platform as a Service (PaaS): Offers a platform for developers to build, deploy, and manage
applications without worrying about the underlying infrastructure.

Software as a Service (SaaS): Delivers software applications over the internet on a subscription
basis.


Deployment Models:

Public Cloud: Resources are owned and operated by a cloud service provider, and they are made
available to the general public.

Private Cloud: Resources are used exclusively by a single organization, offering greater control and
security.

Hybrid Cloud: Combines both public and private cloud resources, allowing data and applications to
be shared between them.

Popular Cloud Service Providers:

 Amazon Web Services (AWS)

 Microsoft Azure
 Google Cloud Platform (GCP)
 IBM Cloud
 Oracle Cloud
1.2 Advantages of Cloud
 Cost-Efficiency: Pay-as-You-Go Model, Cloud services typically operate on a pay-as-you-go
or subscription basis, eliminating the need for large upfront capital investments in hardware
and software.
 Scalability: Cloud resources can be easily scaled up or down based on demand, allowing
businesses to handle fluctuations in workload without overprovisioning.
 Flexibility: Anytime, anywhere access. Cloud services are accessible from any location
with an internet connection, enabling remote work and mobile access.
 Reliability and High Availability: Cloud providers typically offer data redundancy and
backup solutions, reducing the risk of data loss due to hardware failures.
 Security: Cloud providers invest heavily in security, employing experts to protect against
threats, including data breaches and cyberattacks.
 Automatic Updates and Maintenance: Cloud providers handle software updates and
maintenance, reducing the burden on internal IT teams and ensuring that systems are up to
date and secure.


 Disaster Recovery and Business Continuity: Cloud services often include disaster recovery
solutions, enabling businesses to quickly recover data and applications in the event of a
catastrophe.
 Collaboration: Cloud-based collaboration tools facilitate real-time communication and
document sharing among team members, improving productivity and teamwork.
1.3 Introduction to AWS

Amazon Web Services (AWS) is a leading and widely used cloud computing platform provided by
Amazon.com. It offers a vast array of cloud services and solutions to individuals, organizations, and
governments, enabling them to build, deploy, and manage applications, as well as store and analyze
data securely in the cloud.

AWS was officially launched in 2006 and has since become one of the most dominant players in the
cloud computing industry. It provides a wide range of cloud services, including computing power,
storage, databases, machine learning, analytics, content delivery, and more. AWS operates from data
centers located in regions around the world, making it a global cloud provider.

Core Services:

AWS offers a multitude of services across various categories, including:

 Compute: Services such as Amazon EC2 for virtual servers and AWS Lambda for serverless
computing.
 Storage: Options like Amazon S3 for scalable object storage and Amazon EBS for block
storage.
 Databases: RDS for managed relational databases and DynamoDB for NoSQL databases.
 Networking: Amazon VPC for private cloud networks and CloudFront for content delivery.
 Machine Learning: SageMaker for building, training, and deploying machine learning
models.
 Analytics: Services like Redshift for data warehousing and Athena for querying data.
 Security: AWS Identity and Access Management (IAM) for identity and access control.

Regions and Availability Zones:

AWS operates in multiple geographic regions around the world, each of which contains
multiple Availability Zones. Availability Zones are isolated data centers designed for high availability


and fault tolerance. This architecture ensures that data and applications can be redundantly hosted for
reliability.

Pricing:

AWS uses a pay-as-you-go pricing model, allowing users to pay only for the resources and
services they consume. There are various pricing options, including on-demand, reserved instances,
and spot instances, which offer cost savings based on your usage needs.
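To make these pricing options concrete, the short Python sketch below compares a year of On-Demand usage against a Reserved Instance for a single server. The hourly rates are hypothetical placeholders, not current AWS prices:

# Hypothetical hourly rates for one instance type; real prices vary by
# region and instance type and change over time.
ON_DEMAND_RATE = 0.10   # USD per hour (placeholder)
RESERVED_RATE = 0.062   # USD per hour, 1-year reservation (placeholder)

HOURS_PER_YEAR = 24 * 365

on_demand_cost = ON_DEMAND_RATE * HOURS_PER_YEAR
reserved_cost = RESERVED_RATE * HOURS_PER_YEAR
savings = on_demand_cost - reserved_cost

print(f"On-Demand for a year: ${on_demand_cost:,.2f}")
print(f"Reserved for a year:  ${reserved_cost:,.2f}")
print(f"Savings: ${savings:,.2f} ({savings / on_demand_cost:.0%})")

With these placeholder rates, the reservation saves roughly 38% over the year, which is exactly the kind of trade-off the pay-as-you-go, reserved, and spot options are meant to expose.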

Security:

AWS takes security seriously and provides tools and features to help users secure their applications
and data. This includes encryption, identity and access management, firewalls, and DDoS protection.

1.4 Moving to the AWS Cloud

Moving to the cloud is a strategic and often complex process that involves migrating your IT infrastructure, applications, and data from on-premises or existing environments to a cloud-based platform. This transition can bring various benefits, including cost savings, scalability, flexibility, and improved accessibility. Here is a step-by-step guide to help you move to the cloud:

1 Define Your Cloud Strategy: Determine your organization's goals and objectives for
moving to the cloud. This could include cost reduction, scalability, improved agility, or
specific business needs.
2 Select a Cloud Service Provider: Research and choose a cloud service provider that
aligns with your requirements. Major providers include Amazon Web Services (AWS),
Microsoft Azure, Google Cloud, and others.
3 Assess Your Current Environment: Conduct a thorough assessment of your existing
IT infrastructure, applications, and data. Identify what can be migrated to the cloud and what
may need modification.
4 Data Classification and Security: Classify your data to identify sensitive and
compliance-related data. Implement security measures, such as encryption and access
controls, to protect your data in transit and at rest.
5 Choose Migration Methods: Decide on the migration strategy for your applications
and data. Common methods include rehosting (lift and shift), rearchitecting, refactoring, and
rebuilding.


6 Data Migration: Plan how you will transfer your data to the cloud. Use cloud-based
data transfer services or tools to ensure a smooth transition.
7 Application Migration: Assess your applications to determine how they can best run
in the cloud. This may involve adjusting configurations or making code changes.
8 Testing: Rigorously test your applications and data in the cloud environment to
ensure they perform as expected and are secure.
9 Pilot Migration: Start with a pilot migration, which may involve a non-critical
application or workload. This allows you to identify and address issues before a full-scale
migration.
10 Full Migration: Once you're confident in the success of your pilot migration, proceed
with the full-scale migration of your applications and data.
11 Monitoring and Optimization: Implement cloud monitoring tools to keep track of
your cloud resources, ensure performance, and optimize costs. Regularly adjust resource
allocation as needed.
12 Documentation: Maintain thorough documentation of your cloud environment,
configurations, and best practices to facilitate management and future growth.
13 Training and Skill Development: Invest in training for your IT staff to ensure they are
familiar with cloud services, best practices, and security procedures.
14 Disaster Recovery and Backup: Establish robust disaster recovery and backup
solutions to ensure business continuity in case of unexpected events.
15 Compliance and Governance: Ensure that your cloud environment adheres to relevant
compliance standards and regulatory requirements.
16 Review and Iterate: Continuously review your cloud setup, costs, and performance to
identify opportunities for improvement, cost optimization, and resource efficiency.
17 Security and Access Control: Regularly review and update security policies, access
controls, and identity management to protect your cloud resources.
18 Post-Migration Support: Provide ongoing support and training for your team to ensure
the effective operation and maintenance of your cloud environment.

CLOUD ECONOMICS AND BILLING

Cloud economics and billing are like the financial wizards of the digital realm. Imagine you
have a genie who can conjure up any computing power or storage you need, and you only pay for
what you use.

Cloud economics is the study of how organizations can make the most out of their cloud
investments. It's like being a chef in a kitchen where you only pay for the ingredients you actually
use. Companies can scale up or down based on their needs, which is not only convenient but also
cost-effective. Cloud billing is often pay-as-you-go. Need more power for a day? Pay for a day. Have
a slow month? Pay for less. It's this flexibility that makes cloud computing so attractive.

2.1 Fundamentals of Pricing:

Definition

Pricing fundamentals encompass the foundational principles and strategies businesses use
to determine the monetary value assigned to their products or services. This involves considering
costs, market demand, perceived value, competition, and various pricing models to set a price that is
both competitive in the market and aligned with the perceived worth of the offering. Effective
pricing requires a balance between covering costs, meeting customer expectations, and achieving
strategic business objectives.

Let's dig deeper into the fundamentals of pricing:

 Costs:

Fixed Costs: These are the costs that don't change with the level of production or service.
Think of it as the baseline expenses—rent, salaries, etc.

Variable Costs: These costs fluctuate with the level of production or service. For a
software product, it might be the cost per user or the data storage expenses.

 Value-Based Pricing: This is where you price your product or service based on the perceived
value to the customer. If your offering solves a critical problem or provides significant value,
you can charge more.
 Competitive Pricing: Benchmarking your prices against competitors. You might choose to
price your product similarly, lower (to gain market share), or higher (if you're offering
premium features).


 Customer-Centric Pricing: Understanding your customer segments and their willingness to
pay. Some customers are willing to pay a premium for extra features or faster service, while
others are more price-sensitive.
 Psychological Pricing: Playing with the customer's psyche. For example, pricing something
at $99.99 instead of $100. It seems like a small difference, but the human brain often
perceives it as significantly less.
 Elasticity of Demand: How much does the demand for your product change with a change
in price? If a small drop in price leads to a significant increase in sales, you might have an
elastic product. If not, it's inelastic. (A worked example follows this list.)
 Pricing Models:
 Hourly/Milestone Billing: Common in consulting or project-based work.
 Subscription-Based: Monthly or yearly recurring payments.
 Freemium Model: Offering a basic product for free and charging for premium
features.
 One-Time Purchase: Customers pay a single fee for perpetual access.
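As the worked elasticity example promised above, the Python sketch below computes price elasticity of demand with the midpoint method; the before/after prices and quantities are invented for illustration:

def price_elasticity(old_price, new_price, old_qty, new_qty):
    """Midpoint-method price elasticity: % change in quantity demanded
    divided by % change in price."""
    pct_qty = (new_qty - old_qty) / ((new_qty + old_qty) / 2)
    pct_price = (new_price - old_price) / ((new_price + old_price) / 2)
    return pct_qty / pct_price

# Hypothetical numbers: a 10% price cut lifts sales from 1000 to 1300 units.
e = price_elasticity(old_price=100, new_price=90, old_qty=1000, new_qty=1300)
print(f"Elasticity: {e:.2f}")  # about -2.48; |E| > 1, so demand is elastic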

2.2 Total Cost of Ownership (TCO):

TCO is a financial estimate that aims to assess the complete cost of owning and operating a
product, system, or service over its entire lifecycle. It extends beyond the initial purchase or
implementation costs and includes various factors that impact the total economic value. TCO is a
valuable tool for decision-makers, helping them make informed choices by considering the holistic
cost implications of an investment.

 Initial Costs: These are the upfront expenses associated with acquiring a product or service.
For example, in IT, this could include the purchase of hardware, software licenses,
implementation services, and training.
 Operating Costs: This encompasses the ongoing costs incurred during the normal operation
of the product or service. For IT systems, this might include maintenance, support, energy
consumption, and consumables.
 Maintenance and Support: Costs related to keeping the system in good working order. This
includes regular maintenance, updates, and technical support.


 Training and Knowledge Transfer: Expenses associated with training staff to use and
maintain the system. This is particularly relevant in the context of complex technologies or
software.
 Downtime and Productivity Loss: TCO considers the cost of any disruptions that might
occur, leading to downtime and a subsequent loss in productivity. This is critical for systems
where uptime is crucial.
 Scalability and Upgrades: TCO analysis looks at how the system can scale to meet growing
demands and what costs are associated with upgrading or expanding the system.
 Integration Costs: For systems that need to work seamlessly with existing infrastructure,
there may be costs associated with integration.
 End-of-Life Costs: The expenses related to retiring or replacing a system at the end of its
useful life. This might include decommissioning costs, data migration, and disposal.
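The sketch below simply adds up the TCO components just described for a hypothetical five-year system lifecycle; every figure is invented for illustration:

# Hypothetical 5-year cost figures (USD); the categories mirror the
# TCO components described above.
tco_components = {
    "initial (hardware, licenses, setup)": 120_000,
    "operating (energy, consumables)":      40_000,
    "maintenance and support":              60_000,
    "training":                             15_000,
    "estimated downtime loss":              10_000,
    "upgrades and scaling":                 25_000,
    "integration":                          12_000,
    "end-of-life (decommissioning)":         8_000,
}

total = sum(tco_components.values())
for name, cost in tco_components.items():
    print(f"{name:<40} ${cost:>9,}")
print(f"{'TOTAL COST OF OWNERSHIP (5 years)':<40} ${total:>9,}")

The point of the exercise is that the initial purchase is often well under half of the total; here it is about 41% of the $290,000 lifecycle cost.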
2.3 AWS Organizations

AWS Organizations is a service provided by Amazon Web Services (AWS) that helps you
consolidate multiple AWS accounts into an organization that you create and centrally manage. It
simplifies the management of billing and permissions across multiple AWS accounts, providing a
hierarchical structure for better organization and control.

 Organization: At the top level, you have an AWS organization. This is the main entity that
ties together multiple AWS accounts under a common umbrella.
 Organizational Units (OUs): OUs are containers within an organization where you can
organize and group AWS accounts. This allows you to apply policies and permissions to
multiple accounts at once. For example, you might have separate OUs for development,
testing, and production accounts.
 Accounts: AWS accounts are individual entities within the organization. Each account
operates independently, but AWS Organizations allows you to manage them collectively. This
is particularly useful for businesses with different departments or projects that require their
own isolated environments.
 Consolidated Billing: One of the key benefits of AWS Organizations is consolidated billing.
You can link all the accounts in your organization to one paying account, known as the
management account. This simplifies the billing process by providing a single bill for all
accounts, making it easier to track and manage costs.


 Service Control Policies (SCPs): SCPs are a way to set fine-grained permissions at the
organization, OU, or account level. They allow you to control what services and actions are
allowed or denied within your accounts. This adds an extra layer of security and governance.
 Policy-Based Management: With AWS Organizations, you can apply policies across your
accounts. These policies can include things like service control policies, tagging policies, and
even AWS Identity and Access Management (IAM) policies.
 Tag Policies: AWS Organizations allows you to enforce tagging standards across your
accounts. This ensures consistency in resource tagging, which can be beneficial for tracking
and managing resources.
 Service Quotas: You can use AWS Organizations to set service quotas for your accounts.
This helps in controlling resource usage and avoiding unexpected resource limitations.
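A rough sketch of how these pieces look through the boto3 SDK follows; the root and policy IDs are placeholders, and the calls must run with suitable permissions in the organization's management account:

import boto3

org = boto3.client("organizations")

# List the accounts that belong to the organization.
for account in org.list_accounts()["Accounts"]:
    print(account["Id"], account["Name"], account["Status"])

# Create an OU under a parent (the root or another OU); the parent ID
# here is a placeholder.
ou = org.create_organizational_unit(ParentId="r-examplerootid", Name="Development")
print("Created OU:", ou["OrganizationalUnit"]["Id"])

# Attach an existing Service Control Policy (placeholder policy ID) to
# the new OU so it applies to every account inside it.
org.attach_policy(PolicyId="p-examplepolicy", TargetId=ou["OrganizationalUnit"]["Id"])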
2.4 AWS Billing & Cost Management:

AWS Billing and Cost Management is a suite of tools and services provided by Amazon Web Services (AWS) to help users understand, control, and optimize their AWS costs. It's a crucial aspect for organizations leveraging cloud services to ensure efficient resource utilization and cost-effectiveness.

 AWS Billing Console: The AWS Billing Console is where users can access and manage
billing information. It provides an overview of current and historical bills, usage details, and
allows users to set up billing alerts.
 Cost Explorer: Cost Explorer is a powerful tool that allows users to visualize, understand,
and analyze their AWS costs and usage over time. It provides customizable reports and
enables users to drill down into specific cost and usage data.
 Budgets: AWS Budgets allows users to set custom cost and usage budgets that alert them
when they exceed their thresholds. This is crucial for preventing unexpected overages and
managing costs within predefined limits.
 AWS Pricing Calculator: The Pricing Calculator is a web-based tool that helps users
estimate their monthly AWS bill based on their usage patterns. It's useful for planning and
forecasting costs before deploying resources.
 Reserved Instances (RIs) Planning Tools: For users interested in purchasing Reserved
Instances for cost savings, AWS provides planning tools within the console. These tools help
users understand the potential cost savings of purchasing RIs for specific services.


 Savings Plans: AWS Savings Plans provide significant savings over On-Demand pricing, in
exchange for a commitment to a consistent amount of usage (measured in $/hr) for a 1- or
3-year period. The AWS Cost Explorer includes features to analyze and visualize Savings
Plans usage and savings.
 Cost and Usage Reports: AWS Cost and Usage Reports provide detailed data about your
AWS costs and usage. Users can use these reports for in-depth analysis, cost allocation, and
creating custom reports. The data is stored in an Amazon S3 bucket for easy access.
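As a sketch of programmatic cost reporting, the snippet below uses the Cost Explorer API via boto3 to break down three months of unblended cost by service; the date range is an arbitrary example:

import boto3

# The Cost Explorer API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"  {service}: ${amount:,.2f}")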
2.5 AWS Support

AWS (Amazon Web Services) Technical Support is a service provided by Amazon to help
customers effectively use AWS products and services. It is designed to assist customers with
technical issues, provide guidance on best practices, and offer expertise to optimize their use of AWS
resources.

 Support Plans: AWS offers different support plans to cater to the diverse needs of its
customers. These plans include Basic Support, Developer Support, Business Support, and
Enterprise Support. Each plan comes with different levels of access to AWS Support
resources, response times, and features.
 Access to AWS Trusted Advisor: AWS Trusted Advisor is a tool that provides best practices
and recommendations to improve security, optimize performance, and save costs. Users with
AWS Support plans, especially Developer, Business, and Enterprise Support, have access to
Trusted Advisor.
 24/7 Access to Cloud Support Engineers: Customers with higher-tier support plans, such as
Business and Enterprise Support, have access to Cloud Support Engineers 24/7. These
engineers are available to assist with technical issues, troubleshoot problems, and provide
guidance.
 Response Time SLAs: Each support plan comes with different response time Service Level
Agreements (SLAs). Response times can vary from a few hours to a few minutes, depending
on the support plan.
 Architectural Guidance: AWS Support can provide advice on architectural best practices.
This includes guidance on designing scalable, secure, and cost-effective architectures based
on AWS services.


 Access to AWS Documentation and Knowledge Base: Customers have access to AWS
documentation, whitepapers, and a vast knowledge base. This self-service approach allows
users to find answers to common questions and troubleshoot issues on their own.
 Case Management: AWS Support provides a case management system where customers can
submit and track support cases. This is useful for efficiently managing and resolving
technical issues.
 Operational Support: AWS Support can assist with operational issues such as configuring
services, deploying applications, and optimizing performance. They can help customers
navigate through the AWS Management Console and troubleshoot problems.
 Security Support: AWS Support includes security assistance, helping customers understand
and implement security best practices. This includes guidance on identity and access
management, encryption, and compliance.
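For illustration, a support case can also be opened programmatically. The sketch below assumes a Business or Enterprise support plan (the Support API is unavailable on Basic), and the service and category codes shown are examples; valid codes come from support.describe_services():

import boto3

# The AWS Support API endpoint lives in us-east-1 and requires a
# Business or Enterprise support plan.
support = boto3.client("support", region_name="us-east-1")

case = support.create_case(
    subject="EC2 instance unreachable",
    serviceCode="amazon-elastic-compute-cloud-linux",  # example code
    categoryCode="instance-issue",                     # example code
    severityCode="normal",
    communicationBody="Instance i-0123456789abcdef0 stopped responding.",
    issueType="technical",
    language="en",
)
print("Opened case:", case["caseId"])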

AWS GLOBAL INFRASTRUCTURE

3.1 AWS Global Infrastructure

The AWS (Amazon Web Services) Global Infrastructure is a vast and distributed network of data
centers, regions, availability zones, edge locations, and other network facilities that forms the
backbone of AWS's cloud computing platform. This infrastructure is designed to provide high
availability, low-latency access, and scalability for AWS customers worldwide.

Regions:

AWS regions are physical locations around the world where AWS has data centers. Each
region is entirely independent and isolated from other regions. AWS regions are designed to meet the
needs of customers in specific geographic areas.

AWS has regions in various parts of the world, such as North America, Europe, Asia, and
South America. Each region consists of multiple data centers and is identified by a unique name,
such as "US East (N. Virginia)" or "EU (Ireland)."

Availability Zones (AZs):

Within each AWS region, there are multiple availability zones (AZs). An availability zone is
essentially a data center or a cluster of data centers.


AZs are isolated from one another, each having its own power, cooling, and networking
infrastructure. This isolation provides redundancy and fault tolerance within a region.

Customers can deploy their applications and data across multiple AZs to ensure high
availability and resilience.
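A minimal boto3 sketch that enumerates regions and the Availability Zones of one region is shown below; it assumes credentials with EC2 describe permissions:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All regions enabled for this account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# Availability Zones within the client's own region (us-east-1 here).
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])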

Edge Locations:

AWS has a network of edge locations around the world. These edge locations are part of the
AWS Content Delivery Network (CDN) service called Amazon CloudFront.

Edge locations are used to cache and deliver content, such as web pages, images, videos, and
other assets, to end-users with low latency. They play a crucial role in improving the performance of
web applications and content delivery.

Wavelength Zones:

AWS Wavelength Zones are designed to bring AWS services closer to the edge of 5G networks. They are strategically located in cities to reduce latency for applications that require ultra-low latency, such as augmented reality and virtual reality.

Local Zones:

AWS Local Zones are extensions of existing AWS regions and are designed to place AWS
compute, storage, and other select services closer to specific population centers.

Local Zones allow customers to run applications that require low-latency access to AWS resources without the need to build their own data centers.

Data Centers and Backbone Network:

AWS's data centers are state-of-the-art facilities that house the physical infrastructure,
including servers, storage, and networking equipment.

AWS's global backbone network interconnects these data centers and enables high-speed,
low-latency communication and data transfer.

Ground Stations:

AWS Ground Station is designed to simplify satellite communication for applications such as Earth observation and data collection. Its ground stations are part of the AWS Global Infrastructure.

3.2 AWS Services & Service Categories

Amazon Web Services (AWS) offers a vast array of cloud computing services that span various
categories to address a wide range of computing needs. Here are some of the primary service
categories and examples of key AWS services within each category:

1. Compute Services:
 These services provide the capability to run applications and workloads on virtual servers
or in serverless environments.

 Key AWS Compute Services:


 Amazon Elastic Compute Cloud (EC2): Virtual servers in the cloud.
 AWS Lambda: Serverless computing for running code without provisioning or
managing servers.
 Amazon Elastic Container Service (ECS): Container orchestration service.
 AWS Elastic Beanstalk: Platform-as-a-Service (PaaS) for deploying and
managing applications.

2. Storage Services:
 These services offer scalable and durable storage solutions for data and applications.
 Key AWS Storage Services:
 Amazon Simple Storage Service (S3): Scalable object storage.
 Amazon Elastic Block Store (EBS): Block storage for EC2 instances.
 Amazon Glacier: Low-cost storage for archiving and backup.
 AWS Storage Gateway: Hybrid cloud storage integration.
3. Database Services:
 AWS provides various database services to manage, store, and analyze data.
 Key AWS Database Services:

 Amazon RDS: Managed relational database service.


 Amazon DynamoDB: Fully managed NoSQL database.
 Amazon Redshift: Data warehousing service.
 Amazon ElastiCache: In-memory data store.
4. Networking Services:
 These services enable network connectivity and control in the cloud.
 Key AWS Networking Services:
 Amazon Virtual Private Cloud (VPC): Isolated cloud networks.
 AWS Direct Connect: Dedicated network connection to AWS.
 Amazon Route 53: Scalable Domain Name System (DNS) web service.
 AWS Elastic Load Balancing: Load balancing for applications.
5. Content Delivery and CDN Services:
 These services optimize the delivery of content, applications, and media to end-users.
 Key AWS CDN Services:
 Amazon CloudFront: Content delivery network service.
 AWS Global Accelerator: Service to improve availability and performance.
6. Security and Identity Services:
 These services provide security, identity, and access control for AWS resources.
 Key AWS Security Services:
 AWS Identity and Access Management (IAM): Identity and access control for
AWS resources.
 AWS Key Management Service (KMS): Encryption key management.
 AWS WAF: Web Application Firewall.

AWS CLOUD SECURITY

4.1 AWS Shared Responsibility Model:

The AWS (Amazon Web Services) Shared Responsibility Model is a security framework that
defines the division of security responsibilities between AWS and its customers. It helps
organizations understand who is responsible for securing what within the AWS cloud environment.
This model is essential for ensuring the security and compliance of workloads and data hosted on
AWS.

The AWS Shared Responsibility Model consists of two main components:


AWS's Responsibility: AWS takes responsibility for the security "of" the cloud infrastructure. This
includes the physical data centers, network infrastructure, and the foundational AWS services. AWS
is responsible for safeguarding these components against physical threats, ensuring redundancy and
fault tolerance, and providing security measures like data encryption at rest and in transit.

Customer's Responsibility: Customers are responsible for the security "in" the cloud. This means
customers are responsible for securing their data, applications, and workloads that they run on AWS
infrastructure. They are also responsible for configuring and managing security settings for the AWS
services they use, as well as implementing access controls, encryption, and identity and access
management for their resources.

4.2 AWS IAM:

AWS Identity and Access Management (IAM) is a web service provided by Amazon Web Services
(AWS) that allows you to control access to AWS services and resources. IAM enables you to create
and manage users, groups, and roles, and define their permissions and policies. It is a critical
component for ensuring the security of your AWS infrastructure by controlling who has access to
your resources and what actions they can perform.

 Users: Users are individual identities within your AWS account. Each user has their own set
of security credentials (username and password or access keys) and permissions. Users can
represent individuals, applications, or services.
 Groups: Groups are collections of users. Instead of attaching policies and permissions
directly to individual users, you can attach them to groups. This makes it easier to manage
and maintain permissions for multiple users who share similar roles or responsibilities.
 Roles: Roles are similar to users, but they are not associated with a specific individual or
group. Instead, roles are assumed by AWS services, applications, or other entities. Roles are
commonly used for granting permissions to AWS resources like EC2 instances or Lambda
functions.
 Permissions and Policies: Permissions in IAM are defined using policies. Policies are JSON
documents that specify what actions are allowed or denied for which resources. AWS
provides a set of managed policies that cover common use cases, and you can also create
custom policies. (See the sketch after this list.)


 Access Keys: Access keys consist of an access key ID and a secret access key. They are used
for programmatic access to AWS services and resources, such as when using the AWS
Command Line Interface (CLI) or SDKs.
 Multi-Factor Authentication (MFA): IAM supports MFA, which adds an extra layer of
security by requiring users to provide a time-based one-time password in addition to their
regular credentials.
 Identity Federation: You can integrate IAM with external identity providers, such as
Microsoft Active Directory, to allow users to sign in with their existing corporate credentials.
 Access Control Lists (ACLs): IAM also allows you to create and manage resource-based
policies that control access to resources like S3 buckets and SQS queues.
 Permission Boundaries: You can set permission boundaries for users and roles to limit the
maximum permissions they can grant to others.
 Audit Logging: IAM actions are logged by AWS CloudTrail, allowing you to monitor who
has accessed your AWS resources and what actions they performed.
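To make the policy concept concrete, here is a boto3 sketch that creates a user and attaches an inline least-privilege policy; the user name and bucket ARN are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Create a user (the name is a placeholder).
iam.create_user(UserName="demo-user")

# A least-privilege policy allowing read-only access to one S3 bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam.put_user_policy(
    UserName="demo-user",
    PolicyName="S3ReadOnlyExampleBucket",
    PolicyDocument=json.dumps(policy_document),
)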

4.3 Securing a New AWS Account:

Securing a new AWS account is crucial to prevent security breaches and ensure the integrity and
confidentiality of your data and resources. Here's a checklist of steps to follow when securing a new
AWS account:

 Enable MFA (Multi-Factor Authentication):


 Enable MFA for the AWS account root user to add an extra layer of security.
 Require MFA for all users who have access to your AWS account.
 Create Individual IAM Users:
 Avoid using the root user for daily tasks; create IAM users for individuals or
applications.
 Assign appropriate permissions to each IAM user based on the principle of
least privilege.
 Use Strong Password Policies:
 Enforce strong password policies for IAM users (see the sketch after this checklist).
 Rotate passwords regularly.
 Set Up and Configure AWS Identity and Access Management (IAM):
 Create IAM roles and groups to manage permissions more effectively.
 Use IAM policies to control access to AWS services and resources.

 Implement access control based on business needs.


 Implement AWS Organizations:
 Use AWS Organizations to centrally manage multiple AWS accounts.
 Set up Service Control Policies (SCPs) to enforce policies across member
accounts.
 Enable AWS CloudTrail Logging:
 Enable AWS CloudTrail to log all API calls and create trails for the desired
AWS regions.
 Store CloudTrail logs in a secure S3 bucket with appropriate access controls.
 Encrypt Data at Rest and in Transit:
 Use AWS Key Management Service (KMS) to manage encryption keys.
 Encrypt sensitive data at rest using services like Amazon S3, RDS, and EBS.
 Enable SSL/TLS for data in transit (e.g., HTTPS for websites, SSL for
databases).
 Configure VPC Security:
 Set up Virtual Private Cloud (VPC) to isolate resources.
 Implement Network Access Control Lists (NACLs) and Security Groups to
control inbound and outbound traffic.
 Use AWS WAF and Shield for DDoS Protection:
 Implement AWS Web Application Firewall (WAF) to protect against web
application attacks.
 Consider using AWS Shield to mitigate Distributed Denial of Service (DDoS)
attacks.
 Secure Key Management:
 Securely manage and rotate encryption keys using AWS Key Management
Service (KMS).
 Implement key rotation policies.
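Parts of this checklist can be automated. The sketch below sets an account-wide password policy and flags console users without an MFA device; the policy values are a reasonable starting point, not an official AWS recommendation:

import boto3

iam = boto3.client("iam")

# Enforce a strong account-wide password policy.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
)

# Spot users who still have no MFA device attached.
for user in iam.list_users()["Users"]:
    mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
    if not mfa:
        print("No MFA device:", user["UserName"])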
4.4 Securing Accounts

Securing accounts is a critical aspect of information security, whether it's AWS accounts,
email accounts, or any other type of online account.

Use Strong, Unique Passwords:

 Create strong and unique passwords for each account.



 Use a mix of uppercase and lowercase letters, numbers, and special characters.
 Avoid using easily guessable information like birthdates or common words.
 Consider using a password manager to generate and store complex passwords securely.

Enable Multi-Factor Authentication (MFA):

 Whenever possible, enable MFA for your accounts. MFA requires an additional verification
step (e.g., a one-time code from a mobile app or hardware token) beyond the password.
 MFA adds an extra layer of security, making it significantly harder for unauthorized users to
access your accounts.

Regularly Update Passwords:

 Change your passwords periodically, especially for critical accounts.


 Set up reminders to update passwords, and avoid reusing old passwords.

Beware of Phishing:

 Be cautious when clicking on links in emails, especially if they ask for login information.
 Double-check the email sender's address and website URLs to ensure they are legitimate.
 Be skeptical of unsolicited requests for personal or login information.

Keep Software and Devices Updated:

 Keep your operating system, applications, and devices up to date with the latest security
patches.
 Vulnerabilities in outdated software can be exploited by attackers.

Secure Your Email Account:

 Your email account is often the gateway to other accounts. Secure it with a strong password
and MFA.
 Be cautious with email attachments and links, as they can deliver malware or lead to phishing
sites.

Use Different Email Addresses for Different Accounts:


 Consider using separate email addresses for different types of accounts (e.g., one for
personal, one for work, one for online shopping).
 This can help compartmentalize your online presence and protect against widespread data
breaches.

Check Account Activity Regularly:

 Review your account activity and statements regularly for any unauthorized access or
transactions.
 Report any suspicious activity to the service provider.

Log Out of Shared Devices:

 Always log out of your accounts when using public or shared computers or devices.
 Browsers may offer to save your login information, which can be a security risk.

Use Strong Security Questions:

Avoid using easily discoverable information for security questions (e.g., mother's maiden
name).

4.5 Securing Data

Securing data is of paramount importance to protect sensitive information, maintain privacy, and prevent unauthorized access, whether you're dealing with personal data, business data, or data in a cloud environment like AWS.

Data Classification: Classify your data based on its sensitivity and importance. Not all data requires
the same level of security.

Encryption: Use encryption for data at rest and in transit. This includes encrypting data stored on
devices, in databases, and during data transmission. Implement strong encryption algorithms and key
management practices. Consider using services like AWS Key Management Service (KMS) for
managing encryption keys.
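As a small sketch of encryption at rest with a customer-managed key, the boto3 call below uploads an object with SSE-KMS; the bucket name and key ARN are placeholders, and data in transit is protected because boto3 talks to S3 over HTTPS by default:

import boto3

s3 = boto3.client("s3")

# Upload an object encrypted at rest with a customer-managed KMS key
# (bucket name and key ARN are placeholders).
s3.put_object(
    Bucket="example-bucket",
    Key="reports/confidential.txt",
    Body=b"sensitive payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)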

Access Control: Implement strict access control policies. Only authorized users or applications
should have access to sensitive data. Use role-based access control (RBAC) and the principle of least
privilege (granting the minimum permissions necessary).


Data Loss Prevention (DLP): Implement DLP solutions to monitor and prevent unauthorized
access, sharing, or leakage of sensitive data. Set up policies and alerts to detect and respond to data
breaches.

Data Backup and Recovery: Regularly back up data to prevent data loss in case of accidental
deletion or hardware failure.

4.6 Working to ensure compliances:

Ensuring compliance with relevant regulations and standards is crucial for organizations to maintain
the trust of their customers, protect sensitive data, and avoid legal and financial repercussions.

 Understand Applicable Regulations and Standards: Identify and understand the specific
regulations and standards that apply to your industry and organization. Common examples
include GDPR, HIPAA, PCI DSS, SOC 2, ISO 27001, and more.
 Create a Compliance Team: Establish a dedicated compliance team or designate individuals
responsible for compliance efforts. This team should have the necessary expertise to interpret
and implement compliance requirements.
 Conduct a Compliance Gap Analysis: Assess your organization's current practices and
policies against the requirements of the relevant regulations and standards. Identify gaps and
areas of non-compliance.
 Develop a Compliance Plan: Create a comprehensive plan outlining the specific steps,
processes, and policies needed to address the identified compliance gaps. Prioritize the most
critical issues.
 Implement Security and Privacy Controls: Deploy security and privacy controls to protect
sensitive data and ensure the privacy and integrity of customer information. This includes
data encryption, access controls, monitoring, and incident response plans.

NETWORKING AND CONTENT DELIVERY

5.1 Networking Basics

Amazon Web Services (AWS) is a popular cloud computing platform that provides a wide
range of networking services to help businesses build and manage their infrastructure in the cloud.


Understanding the basics of networking in AWS is essential for setting up and managing cloud
resources effectively.

 Virtual Private Cloud (VPC): A VPC is the foundational networking construct in AWS. It
allows you to create isolated sections of the AWS cloud where you can launch resources.
 Subnets: Subnets are segments of your VPC. You can have public and private subnets for
different purposes.
 Security Groups: Security groups act as virtual firewalls for your instances. You can define
inbound and outbound rules to control the traffic to and from your resources.
 Network Access Control Lists (NACLs): NACLs are stateless network-level firewalls that
control traffic at the subnet level.
 Elastic Load Balancers (ELB): AWS ELB distributes incoming traffic across multiple
instances for improved availability and fault tolerance.
 Route 53: Amazon Route 53 is a scalable and highly available Domain Name System (DNS)
web service.
 Elastic IP: Elastic IPs are static IP addresses that you can allocate to your instances and
remap when needed.
 Direct Connect: AWS Direct Connect establishes dedicated network connections from
on-premises data centers to AWS, providing more reliable and consistent network performance.
 Virtual Private Network (VPN): AWS provides VPN options for secure communication
between on-premises networks and your VPC.
 Transit Gateway: Transit Gateway is a network transit hub that enables connectivity
between VPCs and on-premises networks.

5.2 Amazon VPC

 VPC stands for Virtual Private Cloud.


 Amazon Virtual Private Cloud (Amazon VPC) provides a logically isolated area of the AWS
cloud where you can launch AWS resources in a virtual network that you define.
 You have complete control over your virtual networking environment, including a selection
of your IP address range, the creation of subnets, and configuration of route tables and
network gateways.
 You can easily customize the network configuration for your Amazon Virtual Private Cloud.
For example, you can create a public-facing subnet for web servers that can access the
internet, and you can place backend systems such as databases or application servers in a
private-facing subnet.
 You can provide multiple layers of security, including security groups and network access
control lists, to help control access to Amazon EC2 instances in each subnet.
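A minimal boto3 sketch of the setup just described, creating a VPC with one public and one private subnet, is shown below; the CIDR blocks and Availability Zones are example choices:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with a /16 address range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve out a public and a private subnet in different AZs.
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                           AvailabilityZone="us-east-1a")["Subnet"]
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                            AvailabilityZone="us-east-1b")["Subnet"]
print("VPC:", vpc_id, "public:", public["SubnetId"], "private:", private["SubnetId"])

A subnet only becomes "public" once its route table points default traffic at an internet gateway, which is sketched in section 5.3.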

Architecture of VPC

Fig 1. VPC with Public & Private Subnet

The outer line represents the region, and the region is us-east-1. Inside the region we have the VPC, and outside the VPC we have the internet gateway and the virtual private gateway. The internet gateway and virtual private gateway are the ways of connecting to the VPC. Both of these connections go to the router in the VPC, and the router directs traffic to the route table. The route table then directs the traffic to the network ACL. A network ACL is a firewall, much like a security group, except that network ACLs are stateless and contain rules that can both allow and deny traffic; you can also block IP addresses with a network ACL. The security group is the next line of defense around each EC2 instance. The VPC has two subnets, public and private. In the public subnet, an EC2 instance can access the internet, but in the private subnet an EC2 instance cannot access the internet on its own. To connect to an instance in the private subnet, we first connect to an instance in the public subnet and then SSH from it into the private subnet; such an intermediate instance is known as a jump box. In this way, we can reach an instance in the private subnet from an instance in the public subnet.

The following ranges are reserved for private addressing (RFC 1918):

 10.0.0.0 - 10.255.255.255 (10/8 prefix)
 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

What can we do with a VPC?

 Launch instances in a subnet of your choosing. We can choose our own subnet addressing.
 We can assign custom IP address ranges in each subnet.
 We can configure route tables between subnets.
 We can create an internet gateway and attach it to our VPC.
 It provides much better security control over your AWS resources.
 We can assign security groups to individual instances.
 We also have subnet network access control lists (ACLS).

VPC Peering

 VPC Peering is a networking connection that allows you to connect one VPC with another
VPC through a direct network route using private IP addresses.
 Instances behave as if they were on the same private network.
 You can peer VPCs across AWS accounts as well as with other VPCs in the same account.
 Peering is in a star configuration, e.g., one central VPC peered with four others.
 Transitive peering is not supported: each pair of VPCs that needs to communicate requires its own peering connection. (A minimal sketch follows.)
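A minimal boto3 sketch of requesting and accepting a peering connection follows; the VPC IDs are placeholders, and for cross-account peering the accept call must be made from the peer account:

import boto3

ec2 = boto3.client("ec2")

# Request a peering connection between two VPCs (IDs are placeholders).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
)["VpcPeeringConnection"]

# The owner of the peer VPC must accept the request.
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Remember: no transitive peering. Each pair of VPCs that must talk
# needs its own peering connection plus matching route-table entries.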

5.3 VPC Networking

VPC networking, or Virtual Private Cloud networking, refers to the network architecture and
configurations within an Amazon Web Services (AWS) Virtual Private Cloud (VPC). VPC
networking is essential for setting up, managing, and controlling the communication between the
various resources and services deployed within the VPC.


1. IP Address Range (CIDR Block): When you create a VPC, you specify an IP address range
in the form of a Classless Inter-Domain Routing (CIDR) block. This IP address range defines
the address space for your VPC and all of its associated subnets. For example, you can
choose a CIDR block like 10.0.0.0/16.
2. Subnets: Subnets are subdivisions of the VPC IP address range. You can create multiple
subnets within your VPC, and each subnet is associated with a specific CIDR block. Subnets
can be categorized as public (accessible from the internet) or private (not directly accessible
from the internet). Proper subnet design is crucial for organizing your resources.
3. Route Tables: Each subnet is associated with a route table. Route tables determine how
traffic is routed within the VPC. You can configure routes to direct traffic to various
destinations, such as the internet gateway, VPC peering connections, or Virtual Private
Gateways.
4. Internet Gateway: An Internet Gateway is a horizontally scaled, highly available VPC
component that allows instances in your public subnets to connect to the internet. It enables
resources to have public IP addresses for internet access.
5. Elastic Network Interfaces (ENIs): ENIs can be attached to EC2 instances and serve as
network interfaces. You can use ENIs to create additional network connections or to use
features like network-level monitoring.
6. Security Groups: Security Groups act as stateful firewalls for your resources. You can define
inbound and outbound traffic rules to control access to instances in your VPC.
7. Network Access Control Lists (NACLs): NACLs are stateless network-level firewalls that
allow you to control traffic at the subnet level. You can specify rules to allow or deny specific
traffic.
8. VPC Peering: VPC peering allows you to connect two VPCs so that resources in one VPC
can communicate with resources in another VPC using private IP addresses. It's a way to
establish private network connections.
9. VPC Endpoints: VPC endpoints enable private connectivity to AWS services without going
over the public internet. This is useful for accessing services like S3 or DynamoDB securely
from within your VPC.
10. Virtual Private Gateway: A Virtual Private Gateway is used in conjunction with a VPN
connection to establish secure communication between your VPC and an on-premises data
center.


11. NAT Gateway: A Network Address Translation (NAT) Gateway allows resources in private
subnets to initiate outbound traffic to the internet while keeping them private from incoming
connections.
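
As a rough illustration of how the CIDR block, subnet, route table, and internet gateway pieces described above fit together, here is a minimal boto3 sketch; all CIDR values are illustrative.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a VPC with a /16 CIDR block.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Carve a subnet out of the VPC's address range.
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24"
    )["Subnet"]["SubnetId"]

    # Attach an internet gateway and route 0.0.0.0/0 through it,
    # which is what makes the subnet "public".
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id,
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)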

5.4 VPC Security

VPC security in Amazon Web Services (AWS) is a critical aspect of protecting your cloud resources
and data. It involves configuring and implementing various security measures to ensure that your
Virtual Private Cloud (VPC) and the resources within it are secure from unauthorized access, cyber
threats, and data breaches.

1. Network Isolation: Use VPCs to create isolated network environments for different
applications or workloads. Isolating resources into separate VPCs helps contain potential
security breaches.
2. Security Groups: Security Groups act as virtual firewalls at the instance level. You can
define inbound and outbound rules to control traffic to your EC2 instances. These rules
specify which IP addresses and ports are allowed or denied.
3. Network Access Control Lists (NACLs): NACLs are stateless firewall rules that operate at
the subnet level. You can use them to define rules for allowing or denying traffic to subnets.
NACLs provide an additional layer of security.
4. Public and Private Subnets: Separate resources into public and private subnets within your
VPC. Public subnets are typically used for resources that need to be accessible from the
internet, while private subnets are isolated.
5. Internet Gateway: Control access to the internet by using Internet Gateways. Ensure that
only resources in your public subnets have access to the internet, and use security groups to
limit incoming traffic.
6. Private Subnet Access: To allow resources in private subnets to access the internet for
updates or external services, you can use NAT Gateways or NAT Instances to provide
controlled outbound access.
7. VPC Peering: When peering VPCs, carefully configure and control the routing and security
groups to limit communication to only the necessary resources in the peered VPCs.
8. VPN and Direct Connect: Securely connect your VPC to on-premises networks using VPN
connections or AWS Direct Connect. Ensure that data in transit is encrypted, and use Virtual
Private Gateways for the VPN setup.


9. VPC Endpoints: When accessing AWS services from your VPC, use VPC endpoints to keep
traffic within the AWS network, preventing exposure to the public internet.
10. Logging and Monitoring: Enable CloudTrail to log API calls and events within your VPC.
Use Amazon CloudWatch to monitor network traffic, resource usage, and security group
violations.
11. Data Encryption: Encrypt data at rest and in transit. Use services like AWS Key
Management Service (KMS) for key management, and enable SSL/TLS for data in transit.
12. Regular Auditing and Patching: Regularly audit your resources, apply security patches, and
follow best practices for securing your operating systems and applications.
13. IAM (Identity and Access Management): Use AWS Identity and Access Management
(IAM) to control and manage user and resource access permissions. Follow the principle of
least privilege to restrict access.
14. Security Best Practices: Stay up-to-date with AWS security best practices and
recommendations. Regularly review your security configurations to ensure they align with
industry standards.
15. DDoS Protection: Consider enabling AWS Shield for Distributed Denial of Service (DDoS)
    protection to safeguard your resources from malicious traffic.
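
To make the security-group point above concrete, the sketch below creates a group in a hypothetical VPC and opens HTTPS to the world while restricting SSH to one admin network; the VPC ID and CIDR ranges are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg_id = ec2.create_security_group(
        GroupName="web-sg",
        Description="Allow HTTPS in; SSH only from the admin network",
        VpcId="vpc-11111111",  # placeholder
    )["GroupId"]

    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # example admin range
        ],
    )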

5.5 Route 53

Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web
service provided by Amazon Web Services (AWS). It is designed to route end-user requests to
various AWS services, such as Amazon EC2 instances, Elastic Load Balancers, and S3 buckets, as
well as resources outside of AWS. Route 53 offers domain registration, DNS routing, and health
checking services.

1. Domain Registration: Route 53 allows you to register new domain names or transfer existing
domains to AWS. It provides a user-friendly interface to manage your domain names and
configure DNS settings.
2. DNS Service: Route 53 is a highly available and globally distributed DNS service that translates
user-friendly domain names (e.g., example.com) into IP addresses. It ensures low-latency and
reliable DNS resolution for your domain.


3. Routing: Route 53 supports multiple routing policies, including simple, weighted, latency-based,
geolocation, and failover routing. You can configure these policies to direct traffic to different
resources based on your specific requirements.
4. Health Checks: You can set up health checks to monitor the availability and performance of your
resources. Route 53 can automatically route traffic away from unhealthy resources to maintain
high availability.
5. Alias Records: Alias records allow you to map your domain directly to AWS resources, such as
Elastic Load Balancers, CloudFront distributions, and S3 buckets, without needing to manage IP
addresses. This helps in creating highly available and scalable architectures.
6. Latency-Based Routing: You can use latency-based routing to direct user traffic to the AWS
region that provides the lowest latency for a better user experience. This is particularly useful for
applications with a global user base.
7. Geolocation Routing: Route 53 enables you to route traffic based on the geographical location
of the user, ensuring that users are directed to the nearest resource data center.
8. Traffic Flow: Traffic Flow is a feature of Route 53 that provides advanced traffic management
with real-time visualization and control over the DNS routing.
9. DNSSEC (Domain Name System Security Extensions): Route 53 supports DNSSEC, which
adds an additional layer of security to DNS by digitally signing DNS data to prevent DNS
spoofing and cache poisoning attacks.
10. Access Control: You can use AWS Identity and Access Management (IAM) to control who can
make changes to your Route 53 resources and configurations.
11. Logging and Monitoring: Route 53 integrates with Amazon CloudWatch for logging and
monitoring, allowing you to capture DNS query logs and track the performance of your DNS
responses.
12. Failover and Disaster Recovery: You can configure failover routing policies to direct traffic to
    a backup resource or location in case of primary resource unavailability. This is useful for
    disaster recovery scenarios.
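
As a small illustration of DNS routing with the API, the sketch below upserts an A record via boto3; the hosted zone ID, domain name, and IP address are all placeholders.

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",  # placeholder zone ID
        ChangeBatch={
            "Comment": "Point www at the web server",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                },
            }],
        },
    )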

5.6 CloudFront

Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services
(AWS). CDNs are designed to help distribute content such as web pages, images, videos, and other
resources to users globally with low latency and high data transfer speeds. Amazon CloudFront, in
particular, offers a range of features and capabilities to enhance the performance, security, and
scalability of content delivery.

• Content Distribution: CloudFront caches and distributes your content to a network of edge
locations, which are data centers located around the world. This helps reduce the latency for
users and speeds up content delivery.
• Edge Locations: Amazon CloudFront has a large number of edge locations strategically
placed worldwide. These edge locations serve as caching points for your content, ensuring
that users get content from a location closer to them, reducing the time it takes for data to
travel over the internet.
• Dynamic and Static Content: CloudFront can serve both dynamic and static content. It can
cache and serve HTML, CSS, JavaScript, images, videos, and other assets. This is valuable
for web applications, websites, and APIs.
• Security: CloudFront integrates with other AWS services like AWS WAF (Web Application
Firewall) and AWS Shield to provide protection against DDoS attacks and other security
threats. You can also configure security policies, access controls, and SSL/TLS encryption.
• Customization: You can customize the behavior of your CloudFront distribution through
settings like caching rules, origin settings, and content compression. This allows you to tailor
the CDN to your specific requirements.
• Origin Fetching: CloudFront can pull content from various origins, including Amazon S3
buckets, EC2 instances, load balancers, and other HTTP/HTTPS servers. This flexibility
allows you to use CloudFront for various use cases.
• Real-time Analytics: CloudFront provides access to real-time analytics and logs that allow
you to monitor and optimize content delivery. You can gain insights into user access patterns
and data transfer.
• Live Streaming: CloudFront supports live video and audio streaming, making it suitable for
delivering real-time content such as live events, webinars, and broadcasts.
• Global Reach: Amazon CloudFront provides global coverage, making it suitable for
businesses with a worldwide user base.
• Content Invalidation: You can invalidate or clear cached content in CloudFront to ensure
that users receive the latest version of your content when updates are made.
• Cost-Effective: Amazon CloudFront pricing is pay-as-you-go, and it can be cost-effective for
  businesses of all sizes due to its flexible pricing model.
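
The content-invalidation point above can be illustrated with a short boto3 sketch; the distribution ID is a placeholder, and CallerReference just needs to be unique per request.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    # Invalidate every cached object so viewers receive the latest content.
    cloudfront.create_invalidation(
        DistributionId="E1EXAMPLE",  # placeholder
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            "CallerReference": str(time.time()),
        },
    )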


COMPUTE

6.1 Compute Services

There are several key compute services offered by major cloud providers like Amazon Web
Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These services can include:

 Virtual Machines (VMs): Virtual machines are software-based emulations of physical
computers. They allow users to run various operating systems and applications on a single
physical server.
 Containers: Containers provide a lightweight and portable way to package, distribute, and
run applications. They encapsulate an application and its dependencies, ensuring consistent
behavior across different environments.
 Serverless Computing: Serverless computing allows developers to run code in response to
events without managing the underlying infrastructure. Developers can focus on writing code
without worrying about provisioning or managing servers.
 Functions as a Service (FaaS): FaaS is a subset of serverless computing that allows
developers to write and deploy individual functions or pieces of code that run in response to
events. It's a pay-per-execution model.
 Managed Kubernetes: Kubernetes is an open-source container orchestration platform.
Managed Kubernetes services handle the deployment, scaling, and management of
Kubernetes clusters, making it easier for users to manage their containerized applications.
 Batch Processing: Batch processing services allow users to process large amounts of data or
run specific tasks at scheduled intervals.
 High-Performance Computing (HPC): High-performance computing services are designed
to handle computationally intensive workloads that require significant processing power, such
as scientific simulations and modeling.

These compute services provide scalability, flexibility, and cost-effectiveness for businesses and
developers to deploy and manage applications and workloads in the cloud.

6.2 Amazon EC2 (Part 1)


Amazon EC2 is a web service that allows users to rent virtual computing resources, known as
instances, in the cloud. These instances can be easily scaled up or down based on demand, providing
flexibility and cost-efficiency. Here's an overview of Amazon EC2:

 Instances: Amazon EC2 instances are virtual servers in the cloud that you can configure and
manage according to your requirements. They come in various configurations based on CPU,
memory, storage, and networking capabilities.
 Instance Types: Amazon EC2 offers a wide range of instance types optimized for different
use cases, including general-purpose, compute-optimized, memory-optimized, storage-
optimized, and more. Each type is designed to meet specific performance and resource
requirements.
 AMI (Amazon Machine Image): An AMI is a pre-configured template used to create
instances. It contains the necessary information to launch an instance, including the operating
system, application software, and configuration settings.
 Regions and Availability Zones: Amazon EC2 operates in various geographic regions
worldwide. Each region is divided into multiple Availability Zones, providing redundancy
and fault tolerance.
 Elastic IP Addresses: Elastic IP addresses are static public IP addresses that can be
associated with EC2 instances. They allow for easy redirection of traffic when instances are
replaced or updated.
 Security Groups: Security groups act as virtual firewalls for EC2 instances, controlling
inbound and outbound traffic based on rules defined by the user.
 Auto Scaling: Auto Scaling automatically adjusts the number of EC2 instances to handle
changes in demand, ensuring optimal performance and cost management.
 Load Balancing: Elastic Load Balancing (ELB) distributes incoming traffic across multiple
EC2 instances to ensure high availability and fault tolerance.
 EBS (Elastic Block Store): EBS provides block-level storage volumes that can be attached
to EC2 instances. It's used for storing data that persists even when the instance is stopped or
terminated.

Amazon EC2 is a foundational service in AWS, forming the backbone of many applications and
services that run on the cloud, providing the ability to scale resources as needed and adapt to
changing workloads.
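
As a minimal sketch of launching an instance programmatically, the boto3 call below starts a single t3.micro instance; the AMI ID, key pair name, and security group ID are placeholders for values from your own account.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",        # placeholder AMI
        InstanceType="t3.micro",
        KeyName="my-key-pair",                   # placeholder key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder SG
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "demo-instance"}],
        }],
    )
    print(resp["Instances"][0]["InstanceId"])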


6.3 Amazon EC2 (Part 2)

Part 2 covers more advanced topics and features related to EC2.

 Advanced Instance Configuration: In-depth understanding of instance types, performance
characteristics, and how to choose the right instance for specific workloads. Customizing
instance launch configurations, including specifying user data, IAM roles, and advanced
networking options.
 Amazon EBS (Elastic Block Store): Detailed information on EBS volume types (e.g.,
General Purpose SSD, Provisioned IOPS, Magnetic) and when to use each.
 Amazon EC2 Auto Scaling: Advanced configuration of Auto Scaling groups, policies, and
strategies to manage application scaling based on demand. Integration with Elastic Load
Balancing for seamless scaling and handling increased traffic.
 Monitoring and Performance Optimization: Utilizing Amazon CloudWatch to monitor
EC2 instances and set up custom metrics and alarms. Best practices for optimizing
performance, monitoring, and troubleshooting common issues.
 Security and Access Control: Advanced security configurations, such as using Security
Groups and Network ACLs effectively. IAM roles and permissions for EC2 instances and
applications.
 Data Management and Backup: Strategies for data management, backups, and disaster
recovery for EC2 instances and EBS volumes. Utilizing Amazon S3 for data backup and
storage.
 Spot Instances and Reserved Instances: Explanation of Spot Instances and how to use them
to save on costs. Reserved Instances pricing models and strategies for cost optimization.

These topics provide a deeper understanding of Amazon EC2 and how to make the most of this
service, covering instance management, advanced configurations, optimization, security, and
cost-efficiency.

6.4 Amazon EC2 (Part 3)

Part 3 delves into more advanced and specialized topics.


 EC2 Networking: VPC (Virtual Private Cloud) configuration, including subnets, route
tables, and internet gateways. Private vs. public subnets and how to set up secure networking
for EC2 instances.
 Advanced Security Topics: Network Access Control Lists (NACLs) and their role in
securing traffic in and out of subnets. Security best practices, including using AWS Systems
Manager, AWS Config, and AWS Trusted Advisor.
 High Availability and Fault Tolerance: Designing and implementing highly available
architectures using multiple Availability Zones. Strategies for ensuring fault tolerance,
including load balancing, automatic failover, and health checks.
 Advanced Instance Management: Customizing and optimizing instances for specific
workloads, such as optimizing for compute, memory, or I/O. Instance metadata and user data
for advanced configuration and automation.
 Advanced Storage and Backup Strategies: EBS snapshots, lifecycle policies, and managing
EBS volumes efficiently. Utilizing Amazon Elastic File System (EFS) for scalable and shared
file storage.
 Hybrid Cloud Configurations: Integrating EC2 instances with on-premises data centers
using VPN or Direct Connect. Hybrid cloud architectures and how to securely extend your
data center to the cloud.
 Compliance and Governance: Compliance requirements for EC2 instances and how to meet
them using AWS features and services. Governance policies and practices for managing and
auditing EC2 usage.

These topics would provide a deeper understanding of Amazon EC2, focusing on more advanced
features and use cases that businesses might encounter as they scale and optimize their cloud
infrastructure.

6.5 Amazon EC2 Cost Optimization

 Understanding EC2 Pricing Models: Explanation of various EC2 pricing models, including
On-Demand Instances, Reserved Instances, Spot Instances, and Savings Plans. Comparing
and contrasting these pricing models to choose the most cost-effective option for different
workloads.
 Usage Analysis and Cost Monitoring: Utilizing AWS Cost Explorer and AWS Budgets to
analyze usage patterns and track costs associated with EC2 instances. Setting up cost and
usage alerts to manage and control expenses effectively.

 Right Sizing and Instance Optimization: Identifying underutilized or overprovisioned instances
and optimizing them for cost efficiency. Utilizing tools like AWS Trusted Advisor or third-
party solutions for right-sizing recommendations.
 Spot Instances for Cost Savings: Understanding Spot Instances, their use cases, and
strategies to utilize them for significant cost savings. Implementing fault-tolerant and
interruption-tolerant applications with Spot Instances.
 Reserved Instances and Savings Plans: Strategies for effectively using Reserved Instances
and Savings Plans to reduce long-term costs. Best practices for optimizing RI and Savings
Plans purchases based on workload requirements.
 Automated Scaling and Scheduling: Implementing automation for scaling EC2 instances
based on demand to avoid overprovisioning. Scheduling instances to run only during specific
timeframes to save on costs.
 Data Transfer and Data Storage Optimization: Managing and optimizing costs associated
with data transfer into and out of EC2 instances. Strategies for efficient data storage,
including utilizing EBS snapshots and lifecycle policies.
 Cost Allocation and Tagging: Implementing proper tagging and resource grouping for
accurate cost allocation and tracking. Analyzing cost allocation data to identify areas for cost
optimization.
These topics provide practical strategies and best practices for optimizing the costs associated
with Amazon EC2 instances, ultimately leading to more efficient and cost-effective usage of
AWS cloud resources.

6.6 Container Services

1. Introduction to Containers:
• What are containers and how they differ from virtual machines.
• Benefits of using containers, including portability, efficiency, and resource isolation.
2. Docker Overview:
• Introduction to Docker, a popular containerization platform.
• Basics of creating, managing, and running Docker containers.
3. Container Orchestration:
• Overview of container orchestration tools like Kubernetes, Docker Swarm, and
Amazon ECS (Elastic Container Service).


• How container orchestration helps in managing and scaling containerized
applications.
4. Amazon ECS (Elastic Container Service):
• Explanation of ECS, a fully managed container orchestration service by AWS.
• How to create, manage, and deploy Docker containers using ECS.
5. Amazon EKS (Elastic Kubernetes Service):
• Overview of EKS, a managed Kubernetes service by AWS.
• Setting up and managing Kubernetes clusters on AWS using EKS.
6. Container Networking and Storage:
• Understanding networking within containers and between containers and the host system.
• How to manage storage for containers, including persistent and shared storage options.
7. Serverless Containers:
• Overview of serverless container services, like AWS Fargate.
• How to run containers without managing the underlying infrastructure.
8. Security and Compliance for Containers:
• Best practices for securing containerized applications and the container environment.
• Compliance considerations and strategies for container deployments.
9. CI/CD Pipelines for Containers:
• Integrating containers into continuous integration and continuous deployment
(CI/CD) pipelines.
• Automating the build, test, and deployment of containerized applications.

These topics would provide a comprehensive understanding of container services, their management,
orchestration, integration, and security in a cloud computing environment, with a focus on Amazon
ECS and Amazon EKS as examples.

6.7 Introduction to AWS Lambda

• Introduction to Serverless Computing: Explanation of the serverless computing paradigm
and its benefits, such as reduced operational overhead and automatic scaling.
• Overview of AWS Lambda: Explanation of what AWS Lambda is and how it fits into the
serverless architecture. Description of key features, including event-driven programming,
automatic scaling, and pay-as-you-go pricing.


• Lambda Functions: Understanding Lambda functions, which are the pieces of code that run
in response to events. Creating and configuring Lambda functions using the AWS
Management Console or other development tools.
• Event Sources and Triggers: Explanation of event sources that can trigger Lambda
functions, such as changes to data in an S3 bucket, an update to a DynamoDB table, or an
HTTP request through Amazon API Gateway.
• Programming Languages and Runtimes: Overview of the programming languages
supported by AWS Lambda, including Node.js, Python, Java, C#, Go, and custom runtimes.
Understanding how to select the appropriate runtime for a specific use case.
• Deployment and Configuration: Steps to deploy Lambda functions and configure the
necessary triggers and permissions. Demonstrations of best practices for function
configuration and environment variables.
• Integration with Other AWS Services: Overview of integrating Lambda functions with
other AWS services, such as Amazon S3, DynamoDB, API Gateway, and more. How to set up
permissions and access control for Lambda functions.
• Error Handling and Monitoring: Strategies for error handling and logging within Lambda
functions. Using Amazon CloudWatch to monitor Lambda function performance and
troubleshoot issues.

This introduction provides a solid foundation for understanding AWS Lambda and its
capabilities, enabling you to start building serverless applications using this service.
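
A minimal Python handler, assuming a function wired to an S3 event source, might look like the sketch below; the event shape follows the standard S3 notification format.

    import json

    def lambda_handler(event, context):
        # For an S3 trigger, each record describes one object-level event.
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"Object {key} changed in bucket {bucket}")

        # The returned value is visible to synchronous callers
        # (e.g., API Gateway test invocations).
        return {"statusCode": 200, "body": json.dumps("ok")}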

6.8 Introduction to AWS Elastic Beanstalk

AWS Elastic Beanstalk is a fully managed service by Amazon Web Services that simplifies
the deployment and management of applications. It abstracts away the underlying infrastructure
complexities, allowing developers to focus on writing code and deploying applications without
getting bogged down by server management.

• Introduction to AWS Elastic Beanstalk: Explanation of what AWS Elastic Beanstalk is and
how it simplifies the deployment and scaling of applications.
• Supported Platforms and Languages: Overview of the supported platforms (e.g., Java,
.NET, Python, Node.js, etc.) and languages in Elastic Beanstalk. How to choose the
appropriate platform for a specific application.


• Application Environments: Understanding the concept of environments in Elastic Beanstalk
and how they relate to different versions and configurations of an application. Creating and
managing different environments for an application.
• Deployment Options: Explanation of the various deployment options available in Elastic
Beanstalk, including single instance, load balanced, and auto-scaling deployments. Steps to
deploy an application using Elastic Beanstalk.
• Configuration and Customization: Overview of configuration settings for Elastic Beanstalk
environments, including scaling options, environment variables, and more. How to customize
and tailor the environment to specific application requirements.
• Integration with Other AWS Services: Demonstrations of integrating an Elastic Beanstalk
application with other AWS services such as Amazon RDS, Amazon S3, and Amazon
CloudWatch. Leveraging AWS services to enhance application functionality.
• Monitoring and Management: Using AWS Management Console and AWS CLI to monitor
and manage applications in Elastic Beanstalk. How to monitor application performance, view
logs, and troubleshoot issues.

This introduction would provide a foundational understanding of AWS Elastic Beanstalk, enabling
viewers to start using this service to deploy and manage their applications on AWS with ease.
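
As a rough sketch of creating an application and environment via boto3, consider the example below. The solution stack name is illustrative only; valid names for your account can be listed with eb.list_available_solution_stacks().

    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    # Create an application, then a managed environment for it.
    eb.create_application(ApplicationName="demo-app")
    eb.create_environment(
        ApplicationName="demo-app",
        EnvironmentName="demo-env",
        # Assumed/illustrative stack name; check list_available_solution_stacks().
        SolutionStackName="64bit Amazon Linux 2 v3.5.9 running Python 3.8",
        OptionSettings=[{
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "IamInstanceProfile",
            "Value": "aws-elasticbeanstalk-ec2-role",
        }],
    )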


STORAGE

7.1 AWS EBS:

Amazon Elastic Block Store (EBS) is a cloud-based block storage service provided by
Amazon Web Services (AWS). EBS is designed to provide highly available and reliable block-level
storage volumes for use with Amazon Elastic Compute Cloud (EC2) instances.

1. Block Storage: EBS provides block-level storage volumes that can be attached to EC2
instances. These volumes can be used to store data, run databases, and operate file systems.
2. Persistence: EBS volumes are persistent, which means data stored on them remains intact
even if the associated EC2 instance is stopped or terminated. You can also take snapshots of
EBS volumes to create backups.
3. Elasticity: You can easily increase or decrease the size of EBS volumes as your storage needs
change. This makes it a scalable solution to accommodate growing data.
4. Types of EBS Volumes:
a. Standard EBS: Also known as Magnetic, this provides low-cost storage but with
lower performance. It's suitable for workloads with less I/O demand.
b. Provisioned IOPS (io1): These volumes are designed for high I/O performance and
can be used for database workloads, for example.
c. General Purpose (gp2): These volumes provide a balance of cost and performance,
suitable for a wide range of workloads.
d. Cold HDD (sc1): These are designed for infrequently accessed, colder data.
e. Throughput Optimized (st1): These volumes are optimized for streaming workloads
with high throughput.
5. Snapshots: EBS volumes can be snapshotted. Snapshots are incremental backups of the data
on the volume. You can use these snapshots to create new volumes or to migrate data to other
regions.
6. Data Encryption: EBS volumes can be encrypted at rest using AWS Key Management
Service (KMS) keys. This provides an extra layer of security for your data.
7. Availability and Durability: EBS volumes are designed to be highly available and durable.
Amazon replicates the data within an Availability Zone (AZ), and you can also create multi-
AZ setups for increased fault tolerance.
8. Use Cases: EBS volumes are used for a wide range of applications, including database
storage, file storage, boot volumes for EC2 instances, and more.

9. Performance Optimization: For applications with high I/O demands, you can optimize EBS
performance by choosing the right volume type, size, and using RAID configurations.
10. Cost Structure: You pay for the provisioned capacity of EBS volumes. The pricing depends
on the type of volume and the amount of storage.

In summary, AWS Elastic Block Store (EBS) is a versatile storage solution that plays a crucial role in
many AWS deployments by providing scalable, durable, and performant block-level storage for EC2
instances. It's a fundamental component for various applications and workloads running on the AWS
cloud platform.
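
The create/attach/snapshot lifecycle described above can be sketched in a few boto3 calls; the instance ID is a placeholder, and the volume must be in the same Availability Zone as the instance.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a 100 GiB General Purpose volume.
    vol_id = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,
        VolumeType="gp2",
    )["VolumeId"]

    # Wait until the volume is available, then attach it as /dev/sdf.
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
    ec2.attach_volume(VolumeId=vol_id,
                      InstanceId="i-0123456789abcdef0",  # placeholder
                      Device="/dev/sdf")

    # Snapshots are incremental, point-in-time backups of the volume.
    ec2.create_snapshot(VolumeId=vol_id, Description="nightly backup")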

7.2 AWS S3

Amazon Simple Storage Service (Amazon S3) is a widely used object storage service provided
by Amazon Web Services (AWS). It offers scalable and highly available storage for a wide range of
data types, including documents, images, videos, application backups, and more.

1. Object Storage: Amazon S3 is an object storage service, meaning it stores data as objects
rather than traditional file systems. Each object consists of data, a unique key (or URL), and
metadata.
2. Scalability: S3 is highly scalable, and you can store an unlimited amount of data. You don't
need to worry about provisioning or managing the underlying infrastructure.
3. Data Durability and Availability: Amazon S3 is designed for 99.999999999% (11 nines)
durability, which means data stored in S3 is highly resilient. It also provides high availability
through data replication across multiple Availability Zones (AZs) within a region.
4. Data Lifecycle Management: S3 supports data lifecycle policies that enable automatic data
movement between storage classes or deletion after a specified time.
5. Data Versioning: S3 allows you to enable versioning, which keeps multiple versions of an
object over time. This can help with data protection and recovery.


6. Security: S3 provides a variety of security features, including bucket policies, access control
lists (ACLs), and integration with AWS Identity and Access Management (IAM) for fine-
grained access control. You can also encrypt data at rest and in transit.
7. Storage Classes: S3 offers various storage classes to optimize costs and performance for
different use cases, such as Standard, Intelligent-Tiering, Glacier, and Glacier Deep Archive.
8. Data Transfer Acceleration: You can enable Amazon S3 Transfer Acceleration to speed up
uploading and downloading of objects using a content delivery network (CDN).
9. Static Website Hosting: S3 can be used to host static websites by configuring a bucket for
static website hosting. This is a cost-effective way to host websites with low traffic.
10. Data Integration: Amazon S3 integrates seamlessly with other AWS services, including
Amazon EC2, Lambda, Redshift, and more. It is also used as a data lake or data warehouse
for analytics and big data processing.
11. Event Notifications: You can configure event notifications for specific S3 events, such as
object creation or deletion, to trigger actions in other AWS services.
12. Access Logging: S3 allows you to log all access requests to your objects, which can be
useful for security and auditing.
13. Pricing: Amazon S3 charges for storage used, data transfer, and certain operations like GET
requests. The pricing depends on the storage class, the region, and the volume of data stored
and transferred.

In summary, Amazon S3 is a versatile and highly reliable object storage service that serves as the
foundation for many cloud-based applications and data storage needs. It offers scalability, durability,
security, and a wide range of features to meet various storage requirements in the AWS ecosystem.
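
A minimal boto3 sketch of the object model described above writes an object and reads it back; the bucket name and key are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Upload an object, encrypted at rest with S3-managed keys.
    s3.put_object(
        Bucket="my-example-bucket",          # placeholder
        Key="reports/2023/summary.txt",
        Body=b"hello from S3",
        ServerSideEncryption="AES256",
    )

    # Read the object back.
    obj = s3.get_object(Bucket="my-example-bucket",
                        Key="reports/2023/summary.txt")
    print(obj["Body"].read().decode())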

7.3 AWS EFS

Amazon Elastic File System (Amazon EFS) is a fully managed, scalable file storage service
provided by Amazon Web Services (AWS). EFS is designed to provide scalable and shared file
storage for use with AWS cloud services and on-premises resources.

1. Network File System (NFS) Protocol: Amazon EFS uses the NFSv4 protocol, which is a
widely supported network file system protocol. This allows multiple EC2 instances to mount
an EFS file system concurrently, enabling shared access to data.


2. Managed and Scalable: EFS is a fully managed service, meaning AWS takes care of
administrative tasks like hardware provisioning, patching, and maintenance. It can
automatically scale up or down as needed to accommodate the amount of data and the level
of throughput required.
3. Performance Modes: EFS offers two performance modes:
 General Purpose (default): Suitable for a wide range of workloads with a balance of
throughput and latency.
 Max I/O: Optimized for high I/O operations, which is beneficial for applications with a
high level of small, random I/O requests.
4. Storage Classes: EFS offers two storage classes:
 Standard: Provides low-latency and general-purpose storage.
 One Zone: Provides lower-cost storage but only stores data within a single Availability
Zone, making it suitable for workloads that don't require multi-AZ redundancy.
5. Lifecycle Management: EFS supports data lifecycle management, allowing you to
automatically move files to lower-cost storage classes based on access patterns.
6. Cross-AZ Data Replication: By default, EFS data is replicated across multiple Availability
Zones within a region to ensure data availability. This makes it highly durable.
7. Access Control: EFS allows you to control access to your file system using AWS Identity
and Access Management (IAM) policies and Network ACLs, providing fine-grained access
control to your data.
8. Integration: EFS can be seamlessly integrated with a wide range of AWS services, including
Amazon EC2, AWS Lambda, ECS, EKS, and more, making it suitable for various use cases
like content management, data sharing, and application data storage.
9. Pricing: You pay for the storage capacity you use with EFS, and pricing is based on the
storage class (Standard or One Zone) and the volume of data stored.
10. Use Cases: EFS is ideal for workloads that require shared file storage, such as web content,
data analytics, shared application storage, and database backups.

In summary, Amazon Elastic File System (EFS) is a versatile and fully managed file storage service
that provides scalable and highly available shared file storage for various AWS workloads and
applications. It simplifies the management of file data and allows multiple instances to access and
share data within your AWS infrastructure.
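
As a rough sketch, the boto3 calls below create a file system and expose it in one subnet through a mount target; the subnet and security group IDs are placeholders.

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Create the file system; CreationToken makes the call idempotent.
    fs_id = efs.create_file_system(
        CreationToken="demo-efs",
        PerformanceMode="generalPurpose",
    )["FileSystemId"]

    # A mount target gives instances in this subnet an NFS endpoint.
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId="subnet-0123456789abcdef0",      # placeholder
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder
    )

    # On an EC2 instance the file system is then mounted over NFSv4, e.g.:
    #   sudo mount -t nfs4 <file-system-dns-name>:/ /mnt/efs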


7.4 AWS S3 Glacier

Amazon S3 Glacier is a storage class within Amazon S3 that provides secure, durable, and
extremely low-cost storage for data archiving and long-term backup. It is designed for data that you
want to archive and access less frequently but still need to retain for compliance or data retention
purposes.

1. Low-Cost Archiving: S3 Glacier is a cost-effective storage solution for data archiving. It
   offers significantly lower storage costs compared to standard S3 storage classes.
2. Data Durability: Like other Amazon S3 storage classes, S3 Glacier is designed for high
   durability. Your data is stored across multiple Availability Zones within an AWS region.
3. Data Retrieval: Retrieving data from S3 Glacier is not as immediate as standard S3 storage.
   Instead, it involves a retrieval time ranging from a few minutes to several hours, depending
   on the retrieval option you choose.
4. Retrieval Options:
    Expedited: Data can be retrieved within 1-5 minutes.
    Standard: Data can be retrieved within 3-5 hours.
    Bulk: Data can be retrieved within 5-12 hours and is the most cost-effective option.
5. Integration: S3 Glacier is integrated with Amazon S3, making it easy to move data between
   the two storage classes. You can use lifecycle policies to automatically transition objects from
   S3 to S3 Glacier based on your defined criteria.
6. Data Encryption: S3 Glacier encrypts your data at rest using AES-256 encryption. You can
   also choose to manage your own encryption keys using AWS Key Management Service
   (KMS).
7. Data Retrieval Policies: You can set retrieval policies to manage access and control costs by
   specifying how often and when data should be retrieved from S3 Glacier.
8. Audit Logging: S3 Glacier allows you to audit and log access to your archived data,
   providing security and compliance capabilities.
9. Use Cases: S3 Glacier is suitable for long-term data archiving and backup, regulatory
   compliance, and data retention requirements where rapid access to data is not a primary
   concern.
10. Pricing: S3 Glacier pricing is based on the volume of data stored and the amount of data
    retrieved. Retrieval costs vary depending on the retrieval option chosen.


In summary, Amazon S3 Glacier is a cost-effective and durable solution for archiving and retaining
data that you don't need to access frequently but must keep for extended periods. It provides a way to
securely store and manage long-term data, with flexible retrieval options to accommodate various
data access requirements.
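
The lifecycle-policy transition mentioned above can be sketched with boto3 as follows; the bucket name, prefix, and day counts are illustrative.

    import boto3

    s3 = boto3.client("s3")

    # Transition objects under logs/ to S3 Glacier after 90 days and
    # expire them after roughly 7 years.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }],
        },
    )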


DATABASES

8.1 Amazon Relational Database Service (RDS):

Amazon Relational Database Service (Amazon RDS) is a fully managed relational database
service offered by Amazon Web Services (AWS). It simplifies the setup, operation, and scaling of a
relational database, allowing developers and businesses to focus on their applications without the
burden of managing the underlying database infrastructure. Amazon RDS supports various database
engines, making it a versatile choice for different application needs.

Amazon RDS is an ideal choice for businesses and developers who need a reliable and
managed relational database service. It simplifies database management, offers high availability and
scalability, and provides support for various database engines, making it a versatile solution for a
wide range of applications, from small web applications to large enterprise systems.

Key features and aspects of Amazon RDS:

• Managed Service: Amazon RDS automates many database management tasks, including
hardware provisioning, database setup, patching, backups, and ongoing maintenance. This
reduces the administrative overhead and allows you to concentrate on your applications and
data.
• Supported Database Engines: Amazon RDS supports multiple database engines, including
MySQL, PostgreSQL, MariaDB, Oracle, Microsoft SQL Server, and Amazon Aurora (a
MySQL and PostgreSQL-compatible database engine developed by AWS). This allows you
to choose the database engine that best fits your application requirements.
• High Availability: Amazon RDS provides high availability options through Multi-AZ
(Availability Zone) deployments. In a Multi-AZ configuration, your database is replicated in
a secondary Availability Zone to ensure failover in case of hardware or software issues.
• Automated Backups: Amazon RDS automatically performs regular database backups,
ensuring data durability. You can also set retention periods for backups, enabling point-in-
time recovery.
• Scalability: Amazon RDS supports vertical scaling (resizing your database instance) and
horizontal scaling (read replicas) to accommodate increased workloads. This allows you to
adapt to changing traffic patterns and maintain database performance.


• Security: Amazon RDS offers various security features, including encryption at rest and in
transit, database security groups, and parameter groups for fine-tuning database settings. You
can also leverage AWS Identity and Access Management (IAM) for access control.
• Performance Monitoring: Amazon RDS integrates with Amazon CloudWatch, allowing you
to monitor database performance, set up alarms, and gain insights into resource utilization
and query performance.
• Maintenance and Patching: Routine maintenance tasks, such as applying patches and
updates to the database software, are managed by Amazon RDS, reducing downtime and
administrative effort.
• Database Event Notifications: Amazon RDS can send event notifications to alert you about
database events, such as scaling operations, backups, and maintenance activities.
• Database Engine Options: Amazon RDS provides a range of features specific to each
supported database engine, allowing you to leverage the full capabilities of your chosen
database.
• Global Databases: For multi-region redundancy and low-latency global access, Amazon
RDS allows you to set up global databases with read replicas in different AWS regions.
• Database Migration: Amazon RDS supports easy database migration, making it simpler to
move your existing on-premises or cloud-based databases to AWS RDS.
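
Tying several of the features above together (Multi-AZ, automated backups, encryption), here is a minimal boto3 sketch of creating a small MySQL instance; the identifier and credentials are placeholders, and in practice the password should come from a secrets store rather than source code.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="demo-db",       # placeholder
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,                   # GiB
        MasterUsername="admin",                # placeholder
        MasterUserPassword="replace-me-123",   # placeholder; use Secrets Manager
        MultiAZ=True,                          # standby replica in a second AZ
        BackupRetentionPeriod=7,               # days of automated backups
        StorageEncrypted=True,
    )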

8.2 Amazon DynamoDB:

Amazon DynamoDB is a fully managed NoSQL database service provided by Amazon Web
Services (AWS). It is designed to offer fast and flexible performance for both read and write
operations while providing seamless scalability, making it an excellent choice for applications that
require high availability and low latency. DynamoDB is especially well-suited for web and mobile
applications, gaming, Internet of Things (IoT) devices, and various other use cases that require
consistent and predictable performance.

Amazon DynamoDB is a powerful and versatile database service that can handle a wide
range of workloads, from simple key-value storage to complex query-intensive applications. Its
combination of managed infrastructure, high availability, and low-latency performance makes it a
popular choice for developers and businesses looking to build scalable, responsive, and highly
available applications.


• Managed Service: DynamoDB is a fully managed service, which means AWS takes care of
the infrastructure, provisioning, setup, patching, and scaling of the database. This relieves
developers and administrators from many database management tasks, allowing them to
focus on application development.
• NoSQL Database: DynamoDB is a NoSQL database, which means it does not rely on the
traditional relational database model. Instead, it stores data in flexible, schema-less JSON-
like documents, making it well-suited for rapidly evolving data requirements.
• Performance: DynamoDB is designed for high-performance read and write operations. It
offers single-digit millisecond response times, which ensures low latency and consistent
performance for applications with high request rates.
• Scalability: DynamoDB provides automatic and seamless scaling to handle changes in traffic
patterns and workloads. You can start with a small database and easily scale it to support
large-scale applications.
• Serverless Architecture: DynamoDB can be used in a serverless architecture, allowing you
to connect it to AWS Lambda, Amazon API Gateway, and other AWS services. This enables
you to build applications that automatically scale based on demand without managing server
infrastructure.
• Global Tables: DynamoDB supports global replication, allowing you to create multi-region,
multi-master tables for redundancy and disaster recovery.
• Security: DynamoDB provides fine-grained access control using AWS Identity and Access
Management (IAM). You can control who can access your tables and perform operations
using IAM policies. Data can also be encrypted at rest and in transit.
• Backup and Restore: DynamoDB supports on-demand and continuous backups, as well as
point-in-time recovery. This ensures that your data is protected from accidental deletion or
corruption.
• Auto Scaling: You can enable Auto Scaling for your DynamoDB tables to automatically
adjust capacity in response to changes in traffic. This feature helps you avoid over-
provisioning and reduces costs.
• Global Secondary Indexes (GSI): GSIs allow you to efficiently query data in ways other
than the primary key. This provides flexibility for complex queries.
• Time to Live (TTL): DynamoDB offers TTL to automatically delete items from a table after
a specified expiration time. This is useful for managing data that has a finite lifespan.


• Event-Driven Triggers: You can set up event-driven triggers using AWS Lambda to execute
custom code in response to data changes in DynamoDB tables.
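
A minimal boto3 sketch of the key-value model described above writes and reads one item; the table name and partition key are hypothetical.

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("Users")  # assumed table with partition key "user_id"

    # Write an item; DynamoDB is schema-less beyond the key attributes.
    table.put_item(Item={
        "user_id": "u-001",
        "name": "Ada",
        "plan": "free",
    })

    # Read it back with GetItem.
    resp = table.get_item(Key={"user_id": "u-001"})
    print(resp.get("Item"))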

8.3 Amazon Redshift

Amazon Redshift is a fully managed, petabyte-scale data warehousing service provided by
Amazon Web Services (AWS). It is designed for large-scale data analysis and provides a high-
performance, columnar storage system that is optimized for analytical queries. Amazon Redshift is
particularly well-suited for organizations that need to efficiently process and analyze vast amounts of
data to make data-driven decisions.

Amazon Redshift is a powerful data warehousing solution that makes it easier for
organizations to process and analyze large volumes of data. Its managed nature, scalability, and high-
performance query capabilities make it a popular choice for businesses looking to gain insights and
intelligence from their data.

The key features and aspects of Amazon Redshift:

• Data Warehousing: Amazon Redshift is designed for data warehousing and analytics. It
allows you to store and query large datasets, making it an ideal solution for business
intelligence, reporting, and data analysis.
• Columnar Storage: Redshift uses a columnar storage model, which is highly efficient for
analytical queries. Data is stored in columns rather than rows, allowing for better
compression and improved query performance, as only the necessary columns are read during
queries.
• Massive Scalability: Redshift can easily scale from a few hundred gigabytes to several
petabytes of data, allowing you to grow your data warehouse as your needs expand.
• High Performance: Amazon Redshift is designed for speed. It uses parallel query execution,
data compression, and automatic optimizations to deliver fast query response times, even on
large datasets.
• Data Compression: Redshift automatically compresses data to save storage space, reducing
storage costs and improving query performance.
• Integration: Redshift integrates seamlessly with various data sources and tools, making it
easy to import data from S3, DynamoDB, and other data stores. It also supports data loading
from various data integration tools and supports standard SQL for querying.


• Concurrency: Redshift allows multiple users to run queries concurrently, ensuring that users
don't have to wait in line for their queries to be processed.
• Security: Amazon Redshift provides security features, including encryption of data in transit
and at rest, VPC integration, and fine-grained access control using AWS Identity and Access
Management (IAM) and Redshift's own access control mechanisms.
• Snapshot and Backup: Redshift offers automated and manual snapshots for backup and
recovery. Snapshots are used to create point-in-time backups of your data warehouse.
• Materialized Views: Redshift supports materialized views, which can be precomputed and
used to accelerate query performance for complex analytical queries.
• Workload Management: You can use Redshift's workload management features to allocate
resources for different workloads and query groups to prioritize and manage concurrent query
workloads.
• Query Monitoring: Redshift provides tools for monitoring queries, enabling you to track
query performance, identify bottlenecks, and optimize queries.
• Redshift Spectrum: This feature allows you to query data stored in Amazon S3 directly from
your Redshift cluster, providing an integrated and cost-effective way to analyze both
structured and unstructured data.
• Geospatial Data Support: Redshift supports geospatial data types and functions, making it
suitable for location-based and mapping applications.
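
As one way to run the standard SQL mentioned above without managing a JDBC connection, the sketch below uses the Redshift Data API via boto3; the cluster, database, user, and table names are placeholders.

    import boto3

    rsd = boto3.client("redshift-data", region_name="us-east-1")

    # Submit an analytical query; the Data API runs it asynchronously.
    stmt = rsd.execute_statement(
        ClusterIdentifier="demo-cluster",  # placeholder
        Database="analytics",              # placeholder
        DbUser="analyst",                  # placeholder
        Sql="SELECT region, SUM(amount) FROM sales GROUP BY region;",
    )

    # The statement ID is used later to poll status and fetch results.
    print(stmt["Id"])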

8.4 Amazon Aurora

Amazon Aurora is a fully managed, high-performance relational database service provided by
Amazon Web Services (AWS). It is designed to offer the performance and reliability of commercial-
grade databases at a fraction of the cost. Aurora is compatible with MySQL and PostgreSQL, making
it an excellent choice for organizations looking for a powerful and cost-effective database solution.

Amazon Aurora is an excellent choice for organizations looking for a highly available,
scalable, and high-performance relational database service. Whether you need to support a single
application or a global, high-traffic website, Aurora provides a reliable and cost-effective solution for
your database needs.


The key features and aspects of Amazon Aurora:

• Compatibility: Amazon Aurora is compatible with MySQL and PostgreSQL, which means
you can use existing MySQL or PostgreSQL tools, libraries, and applications with Aurora
without making significant modifications.
• High Performance: Aurora offers high performance with low-latency read and write
operations. It is optimized for both OLTP (Online Transaction Processing) and OLAP (Online
Analytical Processing) workloads, making it suitable for a wide range of applications.
• Cloud-Native: Aurora is a cloud-native database designed specifically for AWS, taking
advantage of AWS infrastructure and services to provide high availability, scalability, and
data durability.
• Global Databases: Aurora supports global databases that allow you to have read replicas in
multiple AWS regions. This enables you to create low-latency, read-scalable databases for
global applications.
• Replication and Failover: Aurora automatically replicates data across multiple Availability
Zones (AZs) for high availability. In the event of an AZ failure, Aurora performs an
automatic failover to ensure minimal downtime.
• Performance and Scalability: Aurora automatically divides your database volume into
10GB segments distributed across many disks. This parallelizes I/O operations, improving
overall database performance. You can easily scale your Aurora database by adding read
replicas to distribute read workloads.
• Security: Aurora supports encryption at rest and in transit, ensuring that data is secure. You
can also use AWS Identity and Access Management (IAM) to control access to your
databases.
• Automated Backups and Snapshots: Aurora continuously backs up your data and provides
automated and manual snapshots. This allows you to restore your database to a specific point
in time.
• Database Cloning: You can create a clone of your Aurora database for testing and
development without affecting the production environment.
• Performance Insights: Aurora provides a Performance Insights feature that allows you to
monitor database performance in real-time and identify bottlenecks.
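
A rough sketch of provisioning Aurora with boto3: a cluster is created first, then a writer instance is added to it. Identifiers and credentials are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create the Aurora MySQL cluster (storage layer, endpoints).
    rds.create_db_cluster(
        DBClusterIdentifier="demo-aurora",    # placeholder
        Engine="aurora-mysql",
        MasterUsername="admin",               # placeholder
        MasterUserPassword="replace-me-123",  # placeholder; use Secrets Manager
    )

    # Add a writer instance to the cluster.
    rds.create_db_instance(
        DBInstanceIdentifier="demo-aurora-writer",
        DBClusterIdentifier="demo-aurora",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )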


CLOUD ARCHITECTURE

9.1 AWS Well-Architected Framework

The AWS Well-Architected Framework provides guidance on building and improving your
architecture in line with best practices. It's based on five key pillars, each with its own set of design
principles:

1. Operational Excellence: Perform Operations as Code: Automate operations to minimize
   manual intervention and reduce human error. Annotate Documentation: Keep comprehensive
   and up-to-date documentation for your systems, processes, and procedures. Make Frequent,
   Small, Reversible Changes: Implement changes incrementally and in smaller batches to
   reduce the risk and facilitate quick reversals if needed.
2. Security: Implement a Strong Identity Foundation: Ensure that only authorized and
authenticated entities can access resources by implementing strong authentication and
authorization mechanisms. Apply Security at All Layers: Implement security measures at
every layer of the architecture, including network, application, and data layers.
3. Reliability:
   • Automatically Recover from Failure: Design your system to automatically recover from
     failures to minimize downtime and maintain a high level of availability.
   • Scale Horizontally to Increase Aggregate Reliability: Distribute workloads across
     multiple smaller resources to increase resilience and reliability.
   • Stop Guessing Capacity: Use auto-scaling and load testing to determine and adjust
     capacity dynamically, rather than guessing peak loads.
4. Performance Efficiency:
   • Democratize Advanced Technologies: Use cloud-native technologies and services to
     improve performance and reduce costs, making them accessible to all teams.
   • Experiment More Often: Experiment with different architectures, configurations, and
     technologies to optimize performance continuously.
   • Use Serverless Architectures: Leverage serverless computing to improve performance and
     cost efficiency by only paying for actual usage.
5. Cost Optimization:
   • Adopt a Consumption Model: Pay only for the resources you consume, and optimize
     resource usage to minimize costs.


   • Use Managed Services to Reduce TCO: Utilize managed services provided by AWS to
     offload administrative tasks and reduce the total cost of ownership.
   • Optimize Over Time: Continuously monitor and optimize your architecture to find cost-
     saving opportunities and improve cost efficiency.

9.2 Operational Excellence

Operational Excellence is one of the five pillars of the AWS Well-Architected Framework. It focuses
on running and monitoring systems to deliver business value and continually improving processes
and procedures.

• Perform Operations as Code: Automate operations to the greatest extent possible using
code, scripts, and automation tools. Use AWS CloudFormation or AWS CDK (Cloud
Development Kit) to define and manage infrastructure as code (IaC) for reproducible and
consistent deployments.
• Annotate Documentation: Maintain detailed and up-to-date documentation for your
systems, architecture, processes, and procedures.Use tools like AWS Systems Manager, AWS
Config, and AWS X-Ray to generate insights and documentation for system components and
behavior.
• Make Frequent, Small, Reversible Changes: Implement a culture of making small,
reversible changes to systems and applications.Use techniques such as blue-green
deployments, canary releases, and feature flags to reduce the risk and impact of changes.
• Refine Operations Procedures Frequently: Continuously refine and improve operational
procedures based on lessons learned and feedback from incidents or system changes. Conduct
regular reviews and post-mortems to identify areas for improvement and optimize processes.
• Anticipate Failure: Design and architect for failure by considering potential failure points and implementing redundancy and fault tolerance. Use AWS services like Amazon CloudWatch, AWS Auto Scaling, and Elastic Load Balancing for automated scaling and monitoring to ensure high availability.
• Learn from Operational Events: Establish mechanisms to capture and learn from operational events, including incidents, outages, and near misses. Implement incident response and resolution processes to learn from failures and enhance system resilience.
• Use Game Days for Resilience Testing: Conduct regular "Game Days" or simulated failure scenarios to test system resilience and response capabilities. Identify weaknesses, optimize processes, and train staff to handle real-world incidents effectively.
• Optimize Human Work: Automate repetitive and manual tasks to improve efficiency and
reduce the risk of human error. Use AWS Lambda, AWS Step Functions, and other serverless
technologies to automate workflows and operations.
• Measure and Iterate Over Time: Define key performance indicators (KPIs) and use metrics
and logs to measure operational performance. Continuously analyze metrics to identify
opportunities for improvement and iterate on operational processes.

By incorporating these practices into your operations on AWS, you can optimize for efficiency,
reliability, and agility, ultimately achieving operational excellence in your cloud environment.

9.3 Security

Security is a critical aspect of any architecture, and in the context of AWS (Amazon Web Services),
it's one of the key pillars of the AWS Well-Architected Framework.

• Identity and Access Management (IAM): Implement least-privilege access principles to ensure that users and systems have only the minimum permissions required to perform their tasks. Utilize IAM to control access to AWS services and resources, managing users, groups, and roles securely (see the sketch at the end of this section).
• Data Encryption: Use encryption at rest and in transit to protect sensitive data. AWS Key
Management Service (KMS) can be utilized to manage encryption keys. Utilize HTTPS for
secure communication and SSL/TLS for data in transit.
• Network Security: Implement Virtual Private Cloud (VPC) to isolate and control your
network and resources. Utilize security groups and network access control lists (NACLs) to
control inbound and outbound traffic to instances and resources.
• Compliance and Governance: Adhere to regulatory requirements and industry standards relevant to your business and data, ensuring compliance and good governance. Utilize AWS services such as AWS Config and AWS CloudTrail for monitoring and auditing your AWS environment.
• Security Monitoring and Logging: Enable AWS CloudTrail to record AWS API calls for auditing and security analysis. Utilize Amazon CloudWatch for monitoring AWS resources and setting up alarms for suspicious activities.
• Incident Response and Disaster Recovery: Establish an incident response plan and
regularly test it to ensure effective handling of security incidents. Implement a robust disaster
recovery plan to recover data and operations in case of a security incident or failure.

• Security Automation and Orchestration: Automate security best practices using AWS
Lambda, AWS CloudFormation, AWS Systems Manager, and AWS Config to enforce
security configurations consistently. Use AWS Security Hub to centrally manage and
prioritize security findings from various AWS services.
• Secure Development Practices: Follow secure coding practices and conduct regular security
code reviews to identify and fix vulnerabilities. Use AWS services like AWS WAF (Web
Application Firewall) and AWS Shield to protect against DDoS attacks and secure web
applications.
• Security Training and Awareness: Train employees and teams on security best practices, both in general and specific to AWS services. Foster a security-aware culture to ensure everyone understands their role in maintaining security.
• Threat Detection and Prevention: Use AWS services like Amazon GuardDuty for threat
detection and Amazon Macie for data discovery and protection.
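To make the first two practices above concrete, the following boto3 sketch creates a least-privilege IAM policy and turns on default SSE-KMS encryption for a bucket. The bucket name, policy name, and KMS key alias are hypothetical examples, not a prescribed configuration.

    import json
    import boto3

    iam = boto3.client("iam")
    s3 = boto3.client("s3")

    # Least privilege: a policy that allows reading one bucket and nothing else.
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
        }],
    }
    iam.create_policy(PolicyName="ReadReportsOnly",
                      PolicyDocument=json.dumps(policy_doc))

    # Encryption at rest: make SSE-KMS the default for every new object.
    s3.put_bucket_encryption(
        Bucket="example-reports-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-key",  # hypothetical KMS key alias
                }
            }]
        },
    )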

9.4 Reliability Pillar in AWS

Reliability is one of the five pillars of the AWS Well-Architected Framework. It focuses on designing
and maintaining systems to deliver consistent and predictable performance, even under varying
workloads and conditions.

• Automatically Recover from Failure: Design systems to automatically recover from failures, ensuring minimal downtime and disruption to users. Use mechanisms such as automated scaling, load balancing, and self-healing architectures to maintain service availability (see the sketch at the end of this section).
• Scale Horizontally to Increase Aggregate Reliability: Distribute workloads across multiple
smaller resources to increase reliability and fault tolerance. Implement auto-scaling to
dynamically adjust resources based on demand, providing consistent performance during
peak loads.
• Stop Guessing Capacity: Avoid over-provisioning or under-provisioning resources by using auto scaling and load testing to determine the right capacity. Optimize resource allocation based on usage patterns to ensure efficient utilization and cost-effectiveness.
• Manage Change in Automation: Automate change management processes to reduce human
error and ensure consistent and controlled deployments. Use tools like AWS CloudFormation
and AWS CodeDeploy to automate and manage application and infrastructure changes.

• Test Recovery Procedures: Regularly test and validate recovery procedures to ensure they
are effective and can be executed quickly and accurately in case of failures. Conduct disaster
recovery drills and simulate failure scenarios to evaluate the reliability of the recovery
processes.
• Mitigate Disruptions: Use multi-region architectures and deploy applications across
multiple Availability Zones (AZs) to mitigate disruptions caused by outages in a specific
region or AZ. Implement health checks and circuit breakers to isolate and mitigate the impact
of failing components or services.
• Monitoring and Insights: Implement comprehensive monitoring using Amazon CloudWatch, AWS CloudTrail, and AWS Config to gain insights into system behavior and performance.
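As a small example of automatic recovery, the boto3 sketch below creates the standard EC2 auto-recovery alarm: if the underlying host fails two consecutive system status checks, the "ec2:recover" action migrates the instance to healthy hardware. The instance ID and region are hypothetical placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="auto-recover-web-1",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
        Statistic="Maximum",
        Period=60,                 # evaluate the status check every minute
        EvaluationPeriods=2,       # require two consecutive failures
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # built-in recover action
    )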

9.5 Performance Efficiency

The Performance Efficiency pillar is one of the five pillars of the AWS Well-Architected Framework.
It focuses on optimizing performance and resource utilization to deliver efficient outcomes for
various workloads.

• Select Right Resources Based on Requirements: Choose appropriate AWS services and
resources based on the specific requirements of your application, workload, and use case.
Utilize the AWS Trusted Advisor tool to receive personalized recommendations for resource
optimization.
• Use Serverless Architectures: Leverage serverless computing models like AWS Lambda to execute code in response to events, reducing operational overhead and costs. Utilize serverless services for functions, APIs, and backend services without the need to manage servers (a minimal handler sketch appears at the end of this section).
• Experiment More Often: Experiment with different AWS services, configurations, and
architectures to continuously optimize performance and cost. Use AWS CloudFormation and
AWS CodePipeline for automated deployments and experimentation.
• Mechanical Sympathy: Understand the underlying hardware and infrastructure of AWS
services to optimize their use and performance. Align your architecture with the underlying
AWS infrastructure to achieve better efficiency and performance.
• Right-size Resources: Analyze and right-size your AWS resources, adjusting their
specifications to match actual usage patterns and requirements. Use AWS tools like AWS
Trusted Advisor and AWS Cost Explorer to analyze and optimize resource allocation.

• Monitor Performance: Implement monitoring and metrics using Amazon CloudWatch to track the performance of your applications and infrastructure. Set up alarms and alerts to be notified of performance degradation and potential issues.
• Use Caching: Implement caching mechanisms using Amazon ElastiCache or Amazon
CloudFront to improve application performance by reducing response times and load on
backend systems. Leverage in-memory caching to accelerate frequently accessed data.
• Optimize for Cost: Optimize performance with cost in mind by selecting cost-effective
instance types, storage options, and AWS services. Monitor cost and usage regularly using
AWS Cost Explorer and AWS Budgets to identify opportunities for optimization.
• Scalability: Design and implement scalable architectures using AWS Auto Scaling to
automatically adjust resources based on demand, improving performance during traffic
spikes. Use AWS services like Amazon EC2 Auto Scaling and AWS Lambda to scale
applications horizontally and vertically.
• Design for Growth: Design architectures that can easily accommodate future growth by
considering scalability, elasticity, and future resource requirements. Plan for scaling at
different levels, such as application layer, database layer, and overall infrastructure.

By applying these best practices, you can optimize the performance and efficiency of your AWS
workloads, ultimately improving user experience, reducing costs, and maximizing the value of your
cloud infrastructure.
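As a minimal illustration of the serverless model mentioned above, the sketch below shows the shape of a Python AWS Lambda handler. The event field and message are hypothetical; real handlers receive event payloads from whichever service invokes them.

    import json

    def lambda_handler(event, context):
        # Runs only when invoked, so you pay per request instead of per server-hour.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }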

9.6 Cost Optimization

Cost Optimization is a crucial aspect of the AWS Well-Architected Framework, focusing on efficiently managing and optimizing the costs associated with AWS services. Effectively managing costs allows organizations to use AWS resources in a cost-effective manner while still meeting business and operational goals.

• Adopt a Consumption Model: Pay only for the resources you consume and utilize AWS
services with a pay-as-you-go model. Leverage serverless computing and managed services
to scale resources based on actual usage, minimizing costs during idle periods.
• Measure and Attribute Expenditure: Utilize AWS Cost Explorer and AWS Cost and Usage Reports to monitor and analyze spending, allowing you to attribute costs to specific teams, projects, or resources. Implement tagging strategies to allocate costs and track spending across different dimensions, helping in budgeting and cost management (see the Cost Explorer sketch at the end of this section).
• Use Cost-Effective Resources: Choose the appropriate AWS instance types, storage options,
and services to match your workload requirements while optimizing costs. Consider Reserved
Instances (RIs) or Savings Plans to commit to a specific usage in exchange for discounted
pricing over a term.
• Optimize Over Time: Continuously monitor and analyze your AWS usage and costs to
identify opportunities for optimization. Regularly review AWS Trusted Advisor
recommendations and AWS Cost Explorer to optimize usage and reduce expenses.
• Match Supply with Demand: Use auto scaling and elasticity features to align resources with actual demand, scaling up or down based on workloads. Implement scheduling or automation to start and stop non-production instances during off-hours, reducing costs.
• Optimize Data Transfer: Minimize data transfer costs by optimizing data flow between
AWS regions, availability zones, and edge locations. Leverage AWS Direct Connect or AWS
Global Accelerator to optimize and reduce data transfer costs.
• Leverage Managed Services: Use AWS managed services to offload administrative tasks,
maintenance, and monitoring, reducing operational overhead and costs. Utilize managed
services like Amazon RDS, Amazon S3, and AWS Lambda to eliminate the need for
managing underlying infrastructure.
• Experiment and Iterate on Architectures: Experiment with different architectural patterns
and services to find the most cost-effective solutions for your applications and workloads.
Analyze usage patterns and costs to iteratively optimize and refine your architecture for cost
efficiency.
• Optimize Licensing: Leverage AWS License Manager to manage software licenses and
optimize license costs across your AWS environment. Consider using AWS-provided licenses
or BYOL (Bring Your Own License) options for cost savings.
• Stay Informed about Pricing Changes and Discounts: Stay updated on AWS pricing
changes, new offerings, and discounts to take advantage of cost-saving opportunities.

By implementing these best practices and continuously monitoring and optimizing your AWS usage,
you can effectively manage costs and ensure a cost-efficient use of AWS services while maximizing
the value of your investments.
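To illustrate measuring and attributing expenditure, here is a boto3 sketch that queries Cost Explorer for monthly spend grouped by a cost-allocation tag. The tag key "project" and the date range are hypothetical examples.

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "project"}],  # assumes a "project" cost-allocation tag
    )

    # Print the spend attributed to each tag value for the period.
    for group in response["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])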

9.7 Reliability and High Availability

Reliability and high availability are fundamental concepts in designing and managing systems,
especially in cloud computing environments like AWS.

Reliability: Reliability refers to the ability of a system or service to consistently perform its intended
functions under specific conditions for a defined period. It involves minimizing the possibility of
failure, and when failures do occur, quickly recovering to a fully operational state.

In AWS, achieving reliability involves designing systems that can handle failures and errors
gracefully, utilizing redundancy, failover mechanisms, and automated recovery processes. Key AWS
services and practices that enhance reliability include using multiple Availability Zones (AZs),
deploying backups, implementing automated scaling, and regularly testing for resilience.

High Availability (HA): High availability is a specific aspect of reliability that focuses on ensuring
that a system remains operational and accessible for a high percentage of time. Typically, this means
designing systems with redundant components and configurations to minimize downtime.

In AWS, achieving high availability often involves distributing workloads across multiple
geographically separated Availability Zones (AZs) within a region. If one AZ experiences a failure,
the workload can continue to run in another AZ, minimizing service interruptions. Services like AWS
Elastic Load Balancer (ELB), Amazon Route 53, and AWS Auto Scaling contribute to achieving high
availability.

Key AWS Features and Best Practices for Reliability and High Availability:

• Multi-AZ Deployments: Utilize multiple Availability Zones (AZs) to ensure high availability and fault tolerance. AWS provides data centers in different AZs within a region to minimize the risk of a single point of failure.
• Auto Scaling: Implement auto scaling to automatically adjust resources based on demand,
ensuring consistent performance during traffic spikes and scaling down during low-traffic
periods.
• Load Balancing: Utilize AWS Elastic Load Balancer (ELB) to distribute traffic evenly
across multiple instances, enhancing fault tolerance and availability.
• Amazon RDS Multi-AZ Deployments: Use Amazon RDS Multi-AZ deployments to automatically replicate databases across AZs, ensuring high availability and automatic failover (see the sketch at the end of this section).
• AWS Route 53 DNS Failover: Leverage Amazon Route 53 for DNS routing and DNS-based
failover to route traffic to healthy endpoints in case of failures.
• Backups and Replication: Implement regular backups and data replication across regions to
protect against data loss and ensure data availability in the event of a disaster.
• Redundancy and Failover Architecture: Design systems with redundant components and
failover mechanisms to ensure continued operation in case of component failures.
• Monitoring and Alarming: Use AWS CloudWatch and set up alarms to monitor system
performance and respond to anomalies, ensuring prompt detection and resolution of issues.
• Disaster Recovery Planning: Develop and test disaster recovery plans to restore critical
systems and services in the event of a large-scale failure or disaster.
• Well-Architected Framework: Follow the AWS Well-Architected Framework, specifically
focusing on the Reliability pillar, to design and implement reliable and highly available
architectures.

By employing these AWS features and best practices, you can enhance the reliability and availability
of your applications and services, providing a seamless experience to users even during unexpected
events or failures.
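As a concrete example of the Multi-AZ pattern, the boto3 sketch below provisions an RDS instance with MultiAZ=True, which keeps a synchronous standby replica in a second Availability Zone and fails over to it automatically. All identifiers and credentials are hypothetical placeholders; real credentials belong in AWS Secrets Manager.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",        # hypothetical identifier
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",   # placeholder; use Secrets Manager in practice
        AllocatedStorage=20,                     # size in GiB
        MultiAZ=True,                            # synchronous standby in a second AZ
    )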

9.8 AWS Trusted Advisor

AWS Trusted Advisor is a service provided by Amazon Web Services (AWS) that offers real-
time guidance to help you optimize your AWS infrastructure, improve performance, increase
security, and reduce costs. It provides recommendations based on best practices and AWS expertise,
helping you to follow AWS Well-Architected Framework principles effectively.

Key features and benefits of AWS Trusted Advisor include:

• Cost Optimization Recommendations: Provides cost-saving recommendations by identifying idle or underutilized resources, suggesting rightsizing opportunities, and optimizing Reserved Instance (RI) usage.
• Performance Recommendations: Offers guidance to improve system performance by
identifying issues such as low-utilization Amazon EC2 instances, leveraging Auto Scaling,
and optimizing load balancer configurations.
• Security Recommendations: Helps enhance security by identifying security groups that
allow unrestricted access, suggesting password policies, and recommending the use of AWS
Identity and Access Management (IAM) best practices.

• Fault Tolerance Recommendations: Provides recommendations to improve system fault tolerance by identifying resources without redundancy, suggesting the use of Elastic Load Balancing, and recommending Multi-AZ deployments.
• Service Limits Checks: Monitors service usage against service limits and provides alerts if
you are approaching or exceeding these limits.
• Best Practices Checks: Evaluates your AWS environment against AWS best practices and
offers actionable recommendations to help you optimize your architecture.
• Integration with AWS Management Console: Accessible through the AWS Management
Console, making it easy to review recommendations, take action, and track your optimization
progress.
• Personalized Recommendations and Dashboard: Provides a personalized dashboard with
insights and recommendations tailored to your AWS environment, making it easier to
prioritize and implement improvements.

AWS Trusted Advisor is available to all AWS customers, but AWS Support customers (Business and
Enterprise levels) have access to a more extensive set of checks and recommendations. It offers a
valuable tool for organizations to optimize their AWS usage, save costs, improve performance,
enhance security, and adhere to AWS best practices.
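For programmatic access, Trusted Advisor checks can be listed through the AWS Support API, as in the boto3 sketch below. As noted above, the full set of checks requires a Business or Enterprise support plan, and the Support endpoint lives in us-east-1.

    import boto3

    support = boto3.client("support", region_name="us-east-1")

    # Enumerate the available checks, then fetch each check's latest result.
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    for check in checks:
        result = support.describe_trusted_advisor_check_result(checkId=check["id"])
        print(check["name"], "->", result["result"]["status"])  # ok / warning / error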

AUTO SCALING AND MONITORING

10.1 Elastic Load Balancing

Elastic Load Balancing (ELB) is a service provided by Amazon Web Services (AWS) that
distributes incoming application or network traffic across multiple targets, such as Amazon EC2
instances, containers, or IP addresses, to ensure high availability, fault tolerance, and efficient
resource utilization. ELB plays a critical role in building scalable and reliable applications in AWS. It
essentially acts as a traffic cop, directing requests to the most suitable and healthy resources.

There are three types of Elastic Load Balancers in AWS:

• Application Load Balancer (ALB): ALB operates at the application layer (Layer 7) of the OSI model and is designed for routing HTTP/HTTPS traffic. It can route requests to different services based on URL paths, hostnames, and even content-based rules. ALB is well-suited for modern, microservices-based applications (see the sketch at the end of this section).

• Network Load Balancer (NLB): NLB operates at the transport layer (Layer 4) of the OSI
model and is designed for routing TCP and UDP traffic. It is used when you need to load
balance non-HTTP traffic, such as gaming applications, streaming protocols, or other
network-based services. NLB is known for high throughput and low latency.
• Classic Load Balancer: Classic Load Balancer provides basic load balancing for
applications that don't require the advanced features of ALB or the specific requirements of
NLB. It can distribute HTTP/HTTPS, TCP, and SSL traffic across multiple instances.

Key features and benefits of Elastic Load Balancing:

• High Availability: ELB automatically distributes traffic across multiple Availability Zones
within a region, ensuring that your application remains available even if one or more
instances fail.
• Health Checks: ELB continuously monitors the health of the registered instances and routes
traffic only to the healthy ones. If an instance becomes unhealthy, it is automatically removed
from the load balancer rotation.
• Auto Scaling: ELB works seamlessly with Auto Scaling groups. As you scale your
application up or down, ELB can automatically register or deregister instances.
• Security: ELB can be used to offload SSL/TLS encryption and decryption, providing an
extra layer of security for your application.
• Content-Based Routing: With ALB, you can route traffic based on the content of the
request, allowing for more complex routing scenarios.
• Centralized Logging: ELB can log access logs, which are stored in Amazon S3, for analysis
and monitoring purposes.
• Static IP Address: Network Load Balancers provide a static IP address for the lifetime of the
load balancer, which can be useful for applications that require a fixed entry point.
• Cross-Zone Load Balancing: By default, ELB evenly distributes traffic across all healthy
instances in all available Availability Zones, ensuring balanced resource utilization.
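The following boto3 sketch shows the basic wiring of an Application Load Balancer: the load balancer itself, a target group with a health check, and a listener that forwards traffic. The subnet and VPC IDs are hypothetical placeholders; an ALB requires subnets in at least two Availability Zones.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Internet-facing ALB spanning two AZs (via their subnets).
    lb = elbv2.create_load_balancer(
        Name="web-alb",
        Type="application",
        Scheme="internet-facing",
        Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],  # hypothetical subnet IDs
    )["LoadBalancers"][0]

    # Target group with a health check, so only healthy targets receive traffic.
    tg = elbv2.create_target_group(
        Name="web-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0ccc3333",          # hypothetical VPC ID
        HealthCheckPath="/health",
    )["TargetGroups"][0]

    # Listener that forwards incoming HTTP traffic to the target group.
    elbv2.create_listener(
        LoadBalancerArn=lb["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )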

10.2 Amazon CloudWatch

Amazon CloudWatch is a monitoring and observability service provided by Amazon Web Services (AWS) that enables you to collect and monitor various metrics and log files, and to set alarms for your AWS resources, applications, and services in real time. It helps you gain insights into the performance, health, and operational status of your AWS infrastructure and applications.

Amazon CloudWatch is a crucial component for ensuring the reliability, performance, and
security of your AWS resources and applications. It plays a central role in monitoring and
observability within AWS environments, helping you proactively manage your infrastructure and
applications while ensuring that they meet your operational requirements.

Here's a more detailed explanation of Amazon CloudWatch's features and capabilities:

• Metrics: CloudWatch allows you to collect and store various types of data called "metrics."
These metrics could be system-level metrics like CPU utilization, network traffic, or
application-specific metrics like the number of requests processed. You can choose from a
wide range of AWS services and resources to collect metrics from, such as EC2 instances,
RDS databases, Lambda functions, and more.
• Dashboards: You can create custom dashboards to visualize your metrics in a single,
consolidated view. Dashboards help you monitor the performance of your AWS resources and
applications in real-time, making it easier to identify and respond to issues quickly.
• Alarms: CloudWatch allows you to set alarms based on predefined thresholds or custom
expressions on your metrics. When an alarm threshold is breached, CloudWatch can
automatically trigger notifications (e.g., via email, SMS, or SNS) or perform automated
actions, such as scaling an Auto Scaling group.
• Logs: CloudWatch Logs enables you to centralize, store, and monitor log data from various
AWS resources, making it easier to troubleshoot issues, identify trends, and analyze historical
data. Log data can be generated by EC2 instances, AWS Lambda, and other AWS services.
• Log Insights: With CloudWatch Logs Insights, you can interactively search, analyze, and
visualize log data in real-time. It provides a powerful query language for filtering and
extracting specific information from your log streams.
• Retention and Data Storage: You can configure the retention period for metrics and log data. Data older than the defined retention period is automatically removed or archived to Amazon S3 for long-term storage.
• Custom Metrics: CloudWatch allows you to publish custom metrics for applications and services using its API or the AWS SDKs. This feature is especially useful for monitoring custom or third-party applications running in AWS (see the sketch at the end of this section).

• Cross-Account and Cross-Region Access: You can access CloudWatch data from multiple
AWS accounts and regions, which is particularly useful for organizations with distributed
infrastructure.
• Integration: CloudWatch integrates seamlessly with other AWS services, such as AWS
Lambda, Auto Scaling, and CloudFormation, to automate actions based on metric data or
alarm triggers.
• Resource Optimization: By analyzing CloudWatch metrics and logs, you can optimize
resource utilization, improve performance, and save costs.
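As an example of publishing a custom metric, the sketch below sends an application-level data point to CloudWatch. The namespace, metric name, and dimension are hypothetical; once published, the metric can be graphed on dashboards or used to drive alarms like any built-in metric.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_data(
        Namespace="MyApp",  # hypothetical application namespace
        MetricData=[{
            "MetricName": "OrdersProcessed",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 1,
            "Unit": "Count",
        }],
    )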

10.3 Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling is a service provided by Amazon Web Services (AWS) that helps
you automatically scale your Amazon Elastic Compute Cloud (Amazon EC2) instances based on the
dynamic demand of your applications. With Auto Scaling, you can ensure that your applications are
highly available, responsive, and cost-effective by automatically adjusting the number of EC2
instances to match your application's workload.

Amazon EC2 Auto Scaling enables you to build resilient and cost-effective applications that
automatically adjust capacity based on demand. It is particularly useful for handling variations in
traffic, reducing over-provisioning, and maintaining a responsive and cost-efficient environment,
which is crucial for businesses that operate in dynamic and unpredictable environments.

Here are the key features and components of Amazon EC2 Auto Scaling:

• Auto Scaling Groups: Auto Scaling groups are the core component of Amazon EC2 Auto
Scaling. They define the group of EC2 instances that need to be managed together. You
specify the desired number of instances, minimum and maximum instance counts, and launch
configuration or launch template when creating an Auto Scaling group. Auto Scaling groups
can span multiple Availability Zones for fault tolerance.
• Launch Configuration and Launch Template: These are templates for the configuration of
your EC2 instances, including the Amazon Machine Image (AMI), instance type, security
groups, key pairs, and user data. Auto Scaling groups use these templates to launch instances.
Launch templates are a more recent and flexible option, offering additional features.
• Scaling Policies: Scaling policies define the rules for automatically increasing or decreasing the number of instances in an Auto Scaling group based on specific conditions. You can create simple or step scaling policies to adjust capacity in response to various metrics, such as CPU utilization, network traffic, or custom CloudWatch metrics.
• Target Tracking Scaling: This type of scaling policy allows you to set a specific target value for a metric, and Auto Scaling will adjust the group size to maintain that target. For example, you can set a target of 70% CPU utilization, and Auto Scaling will add or remove instances to keep CPU utilization near 70% (see the sketch at the end of this section).
• Scheduled Scaling: You can schedule changes in your Auto Scaling group's capacity, which
is particularly useful for handling predictable changes in demand, such as increased traffic
during specific hours or days.
• Dynamic Scaling: Auto Scaling reacts to changes in the workload automatically, ensuring
that you have enough instances to handle the incoming requests and that you're not over-
provisioning, which can be costly.
• Health Checks: Auto Scaling continuously monitors the health of instances in an Auto
Scaling group. If an instance becomes unhealthy, it will be automatically terminated and
replaced with a healthy one.
• Cooldown Periods: Cooldown periods prevent Auto Scaling from launching or terminating
instances too quickly in response to changes. This helps stabilize your environment during
fluctuations in demand.
• Integration with Elastic Load Balancing: Auto Scaling integrates seamlessly with Elastic
Load Balancing (ELB), allowing it to distribute traffic evenly among instances and ensuring
high availability and fault tolerance.
• CloudWatch Alarms: You can use Amazon CloudWatch alarms to trigger Auto Scaling
actions based on custom metrics and threshold values.
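Bringing these pieces together, the boto3 sketch below creates an Auto Scaling group from a launch template and attaches a target tracking policy that holds average CPU near 70%, as described above. The group name, launch template name, and subnet IDs are hypothetical placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Group spans two AZs via their subnets for fault tolerance.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # hypothetical subnets
    )

    # Target tracking: keep average CPU utilization of the group near 70%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-70",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 70.0,
        },
    )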

Appendix A
INDUSTRIAL INTERNSHIP EVALUATION FORM
For the Students of B.Tech. (IT), Sasi Institute of Technology & Engineering, Tadepalligudem, West Godavari District, Andhra Pradesh

Date:
Name of the Intern : K.Vijay

Reg. No. : 20K61A1232

Branch : Information Technology

Internship Offered : From 2023 To 2024

Evaluate this student intern on the following parameters by checking the appropriate attributes.

Evaluation Parameters (give your feedback with a tick mark √ against: Excellent / Very Good / Good / Satisfactory / Poor):

• Attendance (Punctuality)
• Productivity (Volume, Promptness)
• Quality of Work (Accuracy, Completeness, Neatness)
• Initiative (Self-Starter, Resourceful)

• Attitude (Enthusiasm, Desire to Learn)
• Interpersonal Relations (Cooperative, Courteous, Friendly)
• Ability to Learn (Comprehension of New Concepts)
• Use of Academic Training (Applies Education to Practical Usage)
• Communication Skills (Written and Oral Expression)
• Judgement (Decision Making)


Please summarize. Your comments are especially helpful.

Areas where student excels:

Areas where student needs to improve:

Areas where student gained new skills, insights, values, confidence, etc.

Was student’s academic preparation sufficient for this internship?

Additional comments or suggestions for the student:


Overall Evaluation of the Intern's Performance (Points Awarded): ________
(Evaluation Scale shown below)

Evaluation Scale:
Attributes: Excellent | Very Good | Good | Satisfactory | Poor
Points:
Name of Officer In-charge (Guide/Supervisor):

Designation:

Signature of Officer In-charge (Guide/Supervisor):


Appendix B

POs and PSOs Relevance with Internship Work

PO1. Engineering Knowledge: Apply knowledge of mathematics, science, engineering fundamentals and an engineering specialization to the solution of complex engineering problems.
Relevance: Applied basic knowledge of engineering to understand entrepreneurship.

PO2. Problem Analysis: Identify, formulate, research literature and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences and engineering sciences.
Relevance: Performed research in various ways to analyze problems and find a solution.

PO3. Design/Development of Solutions: Design solutions for complex engineering problems and design system components or processes that meet specified needs with appropriate consideration for public health and safety, and cultural, societal and environmental considerations.
Relevance: Able to understand the market strategies and problems in society.

PO4. Conduct Investigations of Complex Problems: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of information to provide valid conclusions.
Relevance: Investigation of various problems of farmers.

PO5. Modern Tool Usage: Create, select and apply appropriate techniques, resources and modern engineering and IT tools, including prediction and modelling, to complex engineering activities with an understanding of the limitations.
Relevance: Used many modern tools for the development process.

PO6. The Engineer and Society: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice.
Relevance: Can be applied to various real-world problems.

PO7. Environment and Sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of and need for sustainable development.
Relevance: -----------

PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and the norms of engineering practice.
Relevance: Able to identify standard norms.

PO9. Individual and Team Work: Function effectively as an individual, and as a member or leader in diverse teams and in multidisciplinary settings.
Relevance: Individual/team work that solves problems through technology.

PO10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
Relevance: Prepared and documented the summer internship report on the Technology Entrepreneurship Program.

PO11. Project Management and Finance: Demonstrate knowledge and understanding of engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects in multidisciplinary environments.
Relevance: A one-year training process conducted by the Indian School of Business, with heavy costing.

PO12. Life-long Learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
Relevance: An endless learning procedure, because an entrepreneur should learn every day from everything.

PSO1. Application Development
Relevance: An application that helps farmers.

PSO2. Successful Career and Entrepreneurship
