
A232409
CLOUD COMPUTING

Dr. Medikonda Asha Kiran,
B. Tech., MBA, M. Tech., IAENG, AMIE, MISTE, MIEEE, ACM, Ph.D.
Assistant Professor,
Dept. of Information Technology,
Anurag University, Hyderabad-500088.
Course Objectives
 Impart the concepts of virtualization and its benefits
 Discuss various Virtualization Technologies
 Demonstrate the use of storage virtualization
 Analyze various cloud architectures
 Acquire the knowledge of disaster recovery and security in the cloud
Course Outcomes
At the end of this course, students will be able to:
 Appreciate Virtualization Concepts
 Analyze various Virtualization Technologies
 Compare cloud storage mechanisms
 Draw cloud architecture
 Apply security mechanisms for cloud computing
UNIT I
Introduction to Virtualization: Objectives of virtualization, history of virtualization, benefits of virtualized technology, the virtual service desk, what can be virtualized, related forms of computing, cloud computing, software as a service – SaaS, grid computing, utility computing, virtualization processes. [TB:1, CH:1]
UNIT II
 Virtualization Technologies: Storage virtualization, Virtualization density, Para-virtualization, OS virtualization, Virtualization software, Data storage virtualization, Intel virtualization technology, Thinstall virtualization suite, .NET framework virtualization, Windows virtualization on Fedora, Storage virtualization technologies, Virtualization level, Security monitoring and virtualization, Oracle virtualization. [TB:1, CH:3]
UNIT III
 Virtualization and Storage Management: The heart of cloud computing – virtualization, defining virtualization, why virtualize, what can be virtualized, where does virtualization happen, how does virtualization happen, on the road to storage virtualization, improving availability using virtualization, improving performance through virtualization, improving capacity through virtualization, business value for virtualization. [TB:1, CH:6]
UNIT IV
 Overview of Cloud Computing: Essentials, Need and History of Cloud Computing, Benefits and Limitations.
 Cloud Computing Architecture: Introduction, Grid Architecture, Advantages and Challenges, Similarities and Differences between Grid and Cloud Computing, Characteristics of Cloud Computing, Cloud Service Models. [TB:2, CH:1,3,4.1]
UNIT V
 Models of Cloud Computing: Cloud Computing Deployment Models, Cloud Data Center Core Elements, Replication Technologies, Backup, and Disaster Recovery.
 Security Issues of Cloud Computing: Introduction, Security Concerns, Information Security Objectives, Design Principles, and Security Services. [TB:2, CH:4.4,5,10]
TEXT BOOKS
 Ivanka Menken, Gerard Blokdijk, Cloud Computing Virtualization Specialist Complete Certification Kit – Study Guide Book, 2009.
 Shailendra Singh, Cloud Computing, Oxford University Press, 2018.
REFERENCE BOOKS
 Anthony T. Velte, Toby J. Velte, Robert Elsenpeter, Cloud Computing: A Practical Approach, Pearson Education, 2009.
 Tom Clark, Storage Virtualization: Technologies for Simplifying Data Storage and Management, Addison-Wesley, 2005.
 Brian J.S. Chee, Curtis Franklin Jr., Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center, 2010.
INTRODUCTION & BASIC CONCEPTS
Introduction to Virtualization:
 Objectives of virtualization
 History of virtualization
 Benefits of virtualized technology
 The virtual service desk
 What can be virtualized
 Related forms of computing
 Cloud computing
 Software as a service – SaaS
 Grid computing
 Utility computing
 Virtualization processes
INTRODUCTION TO VIRTUALIZATION
 Virtualization is a technology that allows the
creation of multiple simulated environments
or dedicated resources from a single,
physical hardware system. This technique
enables one physical machine to run multiple
operating systems or applications
concurrently, each within its own isolated
environment, known as a virtual machine
(VM).
BEFORE VIRTUALIZATION
Key Concepts:
 Hypervisor: The software layer that enables virtualization. It abstracts the hardware resources and allocates them to each virtual machine.
 Virtual Machine (VM): A software emulation of a physical computer. VMs run an operating system and applications just like a physical computer, but they share the physical hardware with other VMs.
 Host Machine: The physical hardware that runs the hypervisor and hosts the virtual machines.
 Guest Machine: The virtual machines running on the host machine.
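To make these concepts concrete, the following minimal sketch (an illustration, not from the textbook) uses the libvirt Python bindings to connect to a host machine's hypervisor and list its guest machines. It assumes the libvirt-python package and a local QEMU/KVM hypervisor; "qemu:///system" is the conventional connection URI.

```python
# Minimal sketch: ask the hypervisor on the host machine for its guests.
# Assumes the libvirt-python package and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the host's hypervisor

# Each libvirt "domain" is a guest machine (a VM) managed by the hypervisor.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = (state == libvirt.VIR_DOMAIN_RUNNING)
    print(f"guest={dom.name()} running={running}")

conn.close()
```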
Objectives of Virtualization
1. Resource Optimization:
Maximize Utilization: Virtualization allows multiple virtual
machines (VMs) to run on a single physical server, ensuring
that hardware resources such as CPU, memory, and storage
are fully utilized. This reduces the inefficiencies associated
with underutilized physical servers.
Consolidation: By consolidating multiple workloads onto fewer
physical servers, organizations can reduce the number of
servers required, leading to lower hardware costs and energy
consumption.
2. Cost Reduction:
Lower Capital Expenditure (CapEx): Fewer physical servers
are needed to run the same number of workloads, which
reduces the need for purchasing additional hardware.
Reduced Operating Expenses (OpEx): Virtualization decreases
energy consumption, cooling costs, and space requirements
in data centers. It also simplifies management, which can
reduce administrative overhead.
3. Scalability and Flexibility:
On-Demand Resource Allocation: Virtualization enables the
dynamic allocation of resources to VMs based on current
demand. This flexibility allows for rapid scaling of resources up
or down without significant reconfiguration or downtime.

 Quick Deployment: New virtual machines can be created and deployed quickly, enabling faster provisioning of IT services and reducing time-to-market for new applications.
4. Isolation and Security:
Isolation of Workloads: Each VM operates in its own isolated
environment, meaning that issues in one VM do not affect
others. This isolation enhances security by containing potential
breaches within a single VM.

 Enhanced Security: Virtualization allows for the creation of
secure environments where sensitive data and applications can
be isolated from other processes, reducing the risk of
unauthorized access or data leakage.
5. Disaster Recovery and Business Continuity:
 Easier Backup and Recovery: Virtual machines can be easily
backed up and restored, enabling faster recovery in case of
hardware failure or other disasters.
Improved Business Continuity: Virtualization facilitates the
replication of VMs to different locations, ensuring that
business operations can continue in the event of a disaster.

6. Testing and Development:
Simplified Testing: Developers can quickly create multiple
virtual environments for testing without the need for
additional physical hardware. This speeds up the
development process and allows for more thorough testing.
Safe Testing Environments: Virtualization provides a sandbox
environment where new applications or updates can be
tested in isolation before being deployed to production,
reducing the risk of disrupting live services.
7. Mobility and Portability:

 Live Migration: Virtualization allows for the live migration of
VMs from one physical host to another with minimal
downtime. This facilitates maintenance and load balancing
without interrupting services.

 Platform Independence: VMs can be moved across different
types of hardware or cloud environments, providing greater
flexibility and reducing vendor lock-in.
History of Virtualization

1. Early Concepts (1960s)

 Mainframe Era: The concept of virtualization began with
IBM in the 1960s, during the era of mainframes. IBM
developed the CP/CMS (Control Program/Cambridge Monitor
System), which was one of the first systems to allow
multiple virtual machines (VMs) to run on a single hardware
platform.

 CP-40 and CP-67: The first true virtual machine systems,
CP-40 and CP-67, were developed by IBM in the mid-1960s.
These systems enabled multiple instances of operating
systems to run on the same hardware, a precursor to
modern virtualization.
2. Development of Virtual Machines (1970s)

 IBM VM/370: Released in 1972, VM/370 was a
significant milestone in virtualization history. It
allowed multiple operating systems to run
concurrently on an IBM mainframe. VM/370
became the foundation for modern virtual
machine technology.

 Time-Sharing and Partitioning: In the 1970s,
virtualization was primarily used for time-sharing
and partitioning resources among different users,
a critical need for large-scale computing systems.
3. Decline and Dormancy (1980s-1990s)

 Shift to Personal Computing: As personal
computers became more prevalent, the need for
virtualization on mainframes declined. Computing
resources became cheaper and more accessible,
reducing the need for shared mainframe
resources.

 Dormancy: Virtualization technology saw
relatively little innovation during this period, as
the focus shifted toward other areas of
computing.
4. Revival and Modern Virtualization (Late
1990s-2000s)

 VMware: Founded in 1998, VMware played a pivotal role in
the revival of virtualization. VMware Workstation, released
in 1999, allowed x86 computers to run multiple operating
systems, making virtualization accessible on personal
computers and servers.
Server Virtualization: The early 2000s saw a surge in
server virtualization. VMware ESX Server (2001) and
Microsoft’s Virtual Server (2004) became popular solutions
for consolidating server workloads, reducing hardware
costs, and improving efficiency.
Open Source and Xen: The open-source community
contributed significantly with the development of Xen in
2003. Xen provided a lightweight, high-performance
hypervisor that became the basis for many virtualization
platforms, including Citrix XenServer and Amazon EC2.
5. Cloud Computing and Virtualization (2010s-
Present)
Cloud Revolution: Virtualization became a foundational
technology for cloud computing. Virtual machines allowed
cloud providers to offer scalable, on-demand computing
resources. Amazon Web Services (AWS), Google Cloud, and
Microsoft Azure all rely heavily on virtualization.
Containerization: The rise of containers, popularized by
Docker in 2013, introduced a new form of lightweight
virtualization. Containers allow applications to run in isolated
environments without the overhead of traditional virtual
machines, leading to greater efficiency and portability.
Hyper-convergence and Beyond: In recent years,
virtualization has extended beyond just compute to include
storage and networking, forming the backbone of hyper-
converged infrastructure. Technologies like Kubernetes for
orchestrating containers and advancements in virtual desktop
infrastructure (VDI) have further evolved the virtualization landscape.
6. Future Trends
 Edge Computing: Virtualization is expanding to the
edge, enabling devices closer to the data source to
process information more efficiently.
Security and Isolation: Advances in security and
isolation techniques are leading to more robust
virtualization environments, addressing concerns such
as VM escape and data breaches.
Serverless Architectures: The shift toward serverless
computing models continues to influence the evolution
of virtualization, focusing on even finer-grained resource
utilization.
Benefits of Virtualized Technology

1. Resource Efficiency

 Better Utilization of Hardware: Virtualization allows
multiple virtual machines (VMs) to run on a single physical
server, maximizing the use of available resources such as
CPU, memory, and storage. This reduces the need for multiple
physical servers, leading to lower hardware costs.

 Energy Savings: By consolidating workloads onto fewer
servers, organizations can reduce power consumption and
cooling requirements, leading to significant energy savings
and a smaller carbon footprint.
2. Cost Savings
Reduced Capital Expenditure (CapEx): Fewer physical
servers are needed, which reduces the initial costs associated
with purchasing and maintaining hardware.
Lower Operational Expenditure (OpEx): Virtualization
simplifies management, reduces the need for extensive
physical maintenance, and often decreases the time and
effort required to deploy and manage IT resources, leading to
lower operational costs.
3. Scalability and Flexibility
Easier Scaling: Virtualized environments allow for quick and
easy scaling of resources. Organizations can rapidly provision
additional VMs or containers as needed without the delays
associated with acquiring and setting up new hardware.
Flexibility in Resource Allocation: Virtualization enables
dynamic allocation of resources, such as CPU, memory, and
storage, based on the needs of individual VMs or workloads.
This flexibility ensures that resources are used more efficiently.
4. Improved Disaster Recovery and Business
Continuity
Simplified Backup and Recovery: Virtual machines can be
easily backed up and restored, reducing downtime in the
event of hardware failure or data corruption.
High Availability: Virtualization technologies often include
features like live migration, which allows VMs to be moved
between physical hosts without downtime, enhancing
business continuity.

5. Enhanced Security and Isolation
Isolation of Environments: Virtual machines run in isolated
environments, meaning that the failure or compromise of one
VM does not affect others. This isolation enhances security,
especially in multi-tenant environments.
Security Features: Many virtualization platforms include
built-in security features, such as firewalls, encryption, and
intrusion detection, providing an additional layer of protection
for virtualized environments.
6. Simplified Management and Automation
Centralized Management: Virtualization allows for centralized
management of all virtualized resources through a single
interface, simplifying the administration of IT environments.
Automation Capabilities: Many virtualization platforms support
automation, enabling tasks such as provisioning, scaling, and
monitoring to be automated, reducing the workload on IT staff
and minimizing human error.
7. Support for Legacy Applications
Running Legacy Software: Virtualization allows legacy
applications to run on newer hardware without modification,
preserving investment in older software and ensuring
continued functionality.
8. Facilitation of DevOps and Agile Practices
Rapid Provisioning of Development Environments:
Virtualization enables the quick creation of isolated
development, testing, and production environments,
facilitating faster software development and deployment
cycles.
Support for Continuous Integration/Continuous Deployment (CI/CD): Virtualized environments provide the consistent, disposable build and test environments that CI/CD pipelines depend on.
9. Increased Mobility

 Live Migration: VMs can be moved between physical hosts
without downtime, allowing for maintenance or load
balancing without disrupting services.
Support for Remote Work: Virtual Desktop Infrastructure (VDI)
and virtualized applications enable users to access their work
environments from anywhere, supporting remote work and
improving workforce mobility.

10. Environmental Impact

 Reduction in E-Waste: By extending the lifecycle of physical
servers and reducing the overall number of machines needed,
virtualization helps decrease electronic waste.
The Types of Virtualization
The Virtual Service Desk
A Virtual Service Desk is a remote or cloud-based support
system that provides IT services and assistance to users,
typically within an organization. It operates as a central point
of contact for resolving technical issues, managing service
requests, and providing IT-related support. Unlike traditional
service desks, which may require physical presence or on-site
support staff, a virtual service desk leverages digital tools and
platforms to deliver these services remotely.
Key Features of a Virtual Service Desk:
1. Remote Accessibility:
Users can access the virtual service desk from anywhere,
typically through a web portal, mobile app, or via email.
This flexibility allows organizations to support remote
workers, distributed teams, and multiple office locations.
2. 24/7 Availability:
Many virtual service desks offer round-the-clock support,
ensuring that users can get help whenever they need it,
regardless of time zones or working hours. This is
especially important for global organizations.
3. Multichannel Support: Users can contact the
virtual service desk through various channels, including
phone, email, chat, and social media. This multichannel
approach improves accessibility and convenience for
users.
4. Automated Ticketing System: Service
requests and issues are logged into an automated
ticketing system, which tracks the progress of each
request from submission to resolution. This system helps ensure accountability and timely resolution of issues.
5. Self-Service Options: Virtual service desks often include
self-service portals where users can find solutions to common
problems through knowledge bases, FAQs, and automated
chatbots. This reduces the burden on support staff and allows
users to resolve issues independently.

6. Integration with ITSM Tools: Virtual service desks are typically integrated with IT Service Management (ITSM) tools
and platforms, enabling efficient management of IT services,
assets, incidents, and changes within the organization.

7. Real-Time Monitoring and Analytics: Real-time monitoring and analytics tools help track performance
metrics, user satisfaction, and the efficiency of support
operations. This data is used to improve service quality and
identify areas for improvement.
8. Scalability: Virtual service desks can easily scale to
accommodate the needs of growing organizations. As
the number of users or the complexity of support
requests increases, the virtual service desk can be
expanded or enhanced with additional resources and
tools.

9. Global Support: Virtual service desks are well-suited for organizations with a global presence, offering
consistent support across different regions and
languages.
Benefits of a Virtual Service Desk:
 Cost Efficiency: By reducing the need for physical infrastructure and on-site support staff, organizations can lower operational costs.
 Improved User Experience: Users benefit from faster response times, 24/7 availability, and access to self-service options, leading to higher satisfaction.
 Increased Flexibility: The ability to access support from anywhere supports modern work environments, including remote and hybrid work models.
 Enhanced Productivity: Automated processes, efficient ticket management, and self-service options help resolve issues quickly, minimizing downtime.
 Data-Driven Insights: Analytics and reporting tools provide valuable insights into service desk performance, user satisfaction, and areas for improvement.
Use Cases for a Virtual Service Desk:

 IT Support: Assisting users with technical issues related to hardware, software, network connectivity, and other IT services.
 HR Support: Providing assistance with HR-related queries, such as benefits, payroll, and onboarding.
 Customer Support: Managing customer inquiries, complaints, and service requests in customer-facing businesses.
 Facilities Management: Handling maintenance requests, office supplies, and other facilities-related services.
What can be Virtualized?
1. Computing Resources
Virtual Machines (VMs): Entire operating systems can be
virtualized and run on a hypervisor, allowing multiple OS
instances on a single physical machine. This includes both
server operating systems (like Windows Server, Linux) and
desktop operating systems (like Windows 10, macOS).
Containers: Containers package an application and its
dependencies into a single unit that can run consistently
across different computing environments. Docker and
Kubernetes are popular technologies in containerization.
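As a hedged illustration of the container idea, the sketch below uses the Docker SDK for Python (an assumed tool choice; it requires the docker package and a running Docker daemon) to start an isolated container from a small public image.

```python
# Minimal sketch: run a command inside an isolated container.
# Assumes "pip install docker" and a running Docker daemon.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# The image bundles the application and its dependencies; the container
# shares the host OS kernel but runs in its own isolated user space.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode().strip())
```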
2. Storage
Virtualized Storage: Storage virtualization abstracts
physical storage across multiple devices, making it appear as
a single storage pool. This includes technologies like Storage
Area Networks (SANs), Network Attached Storage (NAS), and
software-defined storage (SDS).
Software-Defined Storage (SDS): SDS decouples storage
hardware from the software that manages it, allowing for
more flexible and scalable storage management.
3. Networks
Virtual Networks: Network virtualization allows for the
creation of virtual networks that operate independently of the
physical network infrastructure. This includes virtual LANs
(VLANs), virtual private networks (VPNs), and software-
defined networking (SDN).
Network Functions Virtualization (NFV): NFV virtualizes
network services such as firewalls, load balancers, and
routers, allowing them to run on standard server hardware
rather than specialized devices.
4. Desktops and Workspaces
Virtual Desktop Infrastructure (VDI): VDI allows desktop
environments to be hosted on a centralized server and
accessed remotely by users. This enables centralized
management of desktops and supports remote work.
Desktop as a Service (DaaS): Similar to VDI, DaaS
provides virtual desktops as a cloud service, eliminating the
need for on-premises infrastructure.
5. Applications
Application Virtualization: This technology allows
applications to run on a device without being installed on its
operating system. The application is delivered from a
centralized server and runs in a virtual environment on the
user’s device. Examples include Microsoft App-V and VMware
ThinApp.
Software as a Service (SaaS): While not traditional
virtualization, SaaS delivers applications over the internet as
a service, abstracting the underlying infrastructure and
allowing users to access the software via a web browser.
6. Servers
Server Virtualization: This allows multiple virtual servers to
run on a single physical server, optimizing resource use and
reducing hardware costs. Examples include VMware ESXi,
Microsoft Hyper-V, and KVM.
7. Operating Systems
OS Virtualization: OS-level virtualization allows multiple
isolated user-space instances (containers) to run on a single
operating system kernel. Linux Containers (LXC) and Solaris
Zones are examples of OS virtualization.

8. Memory
Memory Virtualization: This abstracts physical memory into a
pool of resources that can be allocated dynamically to virtual
machines or applications as needed. Techniques like memory
paging and swapping are commonly used in memory
virtualization.

9. Data Centers
Virtual Data Centers: Entire data centers can be virtualized,
creating a software-defined data center (SDDC). This includes
virtualizing compute, storage, networking, and security, often
managed through a centralized software platform.
10. Graphics and GPUs
GPU Virtualization: GPU resources can be shared among
multiple virtual machines or containers, allowing for efficient
use of graphics processing power in applications like 3D
rendering, machine learning, and gaming. NVIDIA vGPU is an
example of GPU virtualization technology.
11. Workspaces
Virtual Workspaces: Virtual workspaces allow for the
virtualization of a complete digital environment, including
desktops, applications, and user settings, which can be
accessed from any device.
12. Networks and Security Services
Virtual Firewalls and Security Appliances: Firewalls,
intrusion detection systems (IDS), and other security
appliances can be virtualized to run as software applications
on standard hardware.
Virtual Private Networks (VPNs): VPNs virtualize network
connections, creating secure, encrypted tunnels over public
networks for remote access to a private network.
13. Test and Development Environments
Virtual Labs: Virtual labs allow developers and
testers to create isolated environments for testing
software, running simulations, or training without
the need for physical hardware.

14. Cloud Services
Infrastructure as a Service (IaaS): IaaS
providers offer virtualized computing resources
over the internet, including virtual machines,
storage, and networks, allowing customers to build
and manage their own IT infrastructure in the
cloud.
Related forms of Computing
1. Cloud Computing
Overview: Cloud computing delivers computing services—
such as servers, storage, databases, networking, software,
and more—over the internet (“the cloud”). Virtualization is a
core technology behind cloud computing, enabling the
abstraction of physical resources and the creation of scalable,
on-demand services.
2. Edge Computing
Overview: Edge computing involves processing data closer
to where it is generated (at the “edge” of the network) rather
than in a centralized data center. Virtualization helps create
lightweight, distributed environments at the edge, allowing
for faster processing and lower latency.
3. Containerization
Overview: Containerization packages an application and its
dependencies into a container that can run consistently
across different environments. Containers are lightweight
compared to virtual machines, sharing the host OS kernel
while maintaining isolated user spaces.
4. Serverless Computing
Overview: Serverless computing allows developers to build
and run applications without managing the underlying
infrastructure. The cloud provider automatically provisions,
scales, and manages the infrastructure needed to run the
code, often using containers or VMs behind the scenes.

5. Software-Defined Everything (SDx)
Overview: Software-defined everything refers to the trend of
replacing hardware-based functions with software-based
solutions, leading to more flexible, scalable, and manageable
IT environments.

6. Hyper-Converged Infrastructure (HCI)
Overview: HCI integrates compute, storage, and networking
into a single system, managed by software. Virtualization is a
key component, allowing resources to be pooled and
managed more efficiently, often within a software-defined
data center.
7. Grid and Distributed Computing
Overview: Grid and distributed computing involve using
multiple computers, often geographically dispersed, to work
on a single task. Virtualization can help manage and allocate
resources across the grid, making it easier to distribute
workloads and achieve higher efficiency.

8. Virtual Desktop Infrastructure (VDI)
Overview: VDI delivers desktop environments from a
centralized server to end users over a network. Virtualization
enables the creation and management of these desktop
environments, supporting remote work and reducing the need
for physical desktops.

9. Green Computing
Overview: Green computing focuses on environmentally
sustainable computing practices, often by optimizing energy
efficiency and reducing waste. Virtualization contributes to
green computing by consolidating workloads onto fewer
physical servers, reducing energy consumption and physical
hardware needs.
10. Big Data and Analytics
Overview: Big data involves the processing and analysis of
vast amounts of data. Virtualization supports big data
initiatives by enabling scalable, flexible infrastructure that can
handle large datasets and complex processing tasks.

11. Artificial Intelligence and Machine Learning (AI/ML)
Overview: AI and ML require significant computational
resources for training and inference tasks. Virtualization
enables the efficient use of GPU resources, facilitates
distributed training, and supports scalable AI/ML workloads.

12. IoT (Internet of Things)
Overview: IoT involves connecting physical devices to the
internet to collect and exchange data. Virtualization helps
manage IoT workloads by creating isolated environments for
processing data close to where it is generated (edge
computing) and enabling scalable back-end infrastructure in
the cloud.
Definition of Cloud

The term "cloud" in the context of computing


refers to a network of remote servers hosted on
the Internet to store, manage, and process data,
rather than a local server or a personal
computer. Cloud computing provides on-
demand availability of computing resources
such as storage, servers, databases,
networking, software, and more, often over the
internet, enabling flexible and efficient
computing solutions.
Benefits of the Cloud
•Cost Efficiency: Reduces the capital expense of buying
hardware and software and setting up and running on-site
data centers.
•Scalability: Easily scales resources up or down to handle
increases or decreases in demand.
•Performance: Provides high performance with large-
scale computing power and storage capabilities.
•Accessibility: Ensures access to services from anywhere
with an internet connection, facilitating remote work and
collaboration.
•Disaster Recovery and Backup: Offers reliable data
backup and disaster recovery solutions.
•Security: Advanced security measures protect data,
applications, and infrastructure from potential threats.
Challenges of the Cloud
•Security and Privacy: Ensuring data security and privacy
remains a significant concern, particularly in a multi-tenant
environment.
•Compliance: Adhering to regulatory and legal
requirements can be complex, especially for organizations
handling sensitive information.
•Downtime: Dependence on internet connectivity and
potential service outages can disrupt business operations.
•Vendor Lock-In: Transitioning between cloud providers or
back to an on-premises infrastructure can be challenging
and costly.
Evolution of Cloud Computing

1950s-1960s: Mainframe Computing

•Mainframe Computing: The early days of computing were dominated by large mainframe
computers. Multiple users accessed these
powerful machines via "dumb terminals," which
had no processing power of their own.

•Time-Sharing: The concept of time-sharing allowed multiple users to share computing
resources, laying the groundwork for future cloud
concepts.
1970s: Virtualization
 Virtual Machines: IBM introduced the
concept of virtualization, allowing multiple
operating systems to run on a single physical
machine. This technology increased
hardware utilization and set the stage for the
development of cloud computing.
1980s: Client-Server Architecture
 Client-Server Model: The shift from
mainframe computing to client-server
architecture enabled more distributed
computing environments. Personal
computers (clients) connected to centralized
servers for data and applications.
1990s: The Rise of the Internet and ASPs
 World Wide Web: The expansion of the internet in the 1990s provided a global network that could be leveraged for remote computing.
 Application Service Providers (ASPs): ASPs emerged, offering software applications over the internet. This was an early form of what we now call Software as a Service (SaaS).
Early 2000s: Birth of Modern Cloud Computing
 Amazon Web Services (AWS): In 2006, Amazon launched AWS, providing on-demand computing resources and storage services. This marked the beginning of cloud computing as we know it today.
 Elastic Compute Cloud (EC2): AWS introduced EC2, allowing users to rent virtual servers and scale their capacity as needed.
2010s: Expansion and Diversification
•Microsoft Azure and Google Cloud Platform:
Microsoft and Google entered the cloud market,
offering a variety of cloud services including IaaS,
PaaS, and SaaS.
•Hybrid Cloud: The concept of hybrid cloud
emerged, combining public and private cloud
resources for greater flexibility.
•Containerization: Technologies like Docker and
Kubernetes gained popularity, providing a more
efficient way to deploy and manage applications in
the cloud.
2020s: Advanced Cloud Technologies
•Serverless Computing: Serverless
architectures, such as AWS Lambda, allow
developers to run code without provisioning or
managing servers.
•Edge Computing: Cloud services extend to the
edge of the network, bringing computing power
closer to data sources and improving latency and
performance.
•AI and Machine Learning: Cloud providers
offer advanced AI and machine learning services,
enabling organizations to leverage these
technologies without investing in specialized
hardware.
Key Milestones in Cloud Computing
1960s: J.C.R. Licklider envisioned an "intergalactic computer
network" to enable global access to data and programs.
1970s: IBM's VM operating system introduced virtualization
technology.
1999: Salesforce.com launched, pioneering the SaaS model.
2002: Amazon launched AWS, initially providing cloud
storage services.
2006: AWS launched EC2, offering scalable virtual servers.
2008: Google introduced Google App Engine, a PaaS offering
for building and hosting web applications.
2010: Microsoft launched Azure, expanding the cloud
services market.
2014: Docker popularized container technology, improving
application deployment and scalability.
2018: Serverless computing gained traction, with AWS
Lambda and other services offering scalable, event-driven
compute services.
2020s: Edge computing and AI services became integral
parts of cloud offerings.
The Future of Cloud Computing

•Quantum Computing: Integration of quantum computing capabilities into the cloud.
•Enhanced AI and Machine Learning: More
advanced and accessible AI/ML services.
•Increased Edge Computing: Further
decentralization of computing resources to improve
performance and reduce latency.
•Greater Security and Privacy: Enhanced security
measures and compliance tools to address growing
concerns about data protection.
•Sustainability: Focus on green computing practices
to reduce the environmental impact of cloud data
centers.
Key Characteristics of Cloud Computing
1. On-Demand Self-Service
2. Broad Network Access
3. Resource Pooling
4. Rapid Elasticity
5. Measured Service
6. Multi-Tenancy
7. Reliability and Availability
8. Security
9. Economies of Scale
Software as a Service (SaaS)
Software as a Service (SaaS) is a cloud computing model
that delivers software applications over the internet on a
subscription basis. Instead of purchasing, installing, and
maintaining software on individual computers or local servers,
users can access the software through a web browser, with
the service provider managing the infrastructure, software
updates, and security.
Key Characteristics of SaaS:
Web-Based Access:
SaaS applications are typically accessed via a web
browser, making them available on any device with an
internet connection. This eliminates the need for local
installations.
Subscription Model:
SaaS is usually offered on a subscription basis, with
customers paying monthly or yearly fees. This model
includes access to the software, updates, and support.
Centralized Management:
The SaaS provider centrally manages the application,
including updates, patches, and security, which simplifies IT management for customers.
Scalability:
SaaS solutions can scale to accommodate the needs of
individual users or large enterprises, often with the ability to
add or remove users as needed.

Multi-Tenancy:
SaaS applications typically use a multi-tenant architecture, where multiple customers share the same application instance while their data remains isolated and secure (see the sketch after this list).

Automatic Updates:
Users benefit from automatic updates and new features,
which are managed by the SaaS provider, ensuring that the
software is always up-to-date without user intervention.
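The toy sketch below illustrates the multi-tenant isolation idea: one shared application instance in which every data access is scoped by a tenant ID, so customers never see each other's records. The in-memory store and tenant names are invented for illustration.

```python
# Toy sketch of multi-tenancy: one shared instance, tenant-scoped data.
# The in-memory "database" and tenant names are invented.
RECORDS = [
    {"tenant": "acme",   "doc": "Q3 forecast"},
    {"tenant": "globex", "doc": "Hiring plan"},
    {"tenant": "acme",   "doc": "Board deck"},
]

def list_docs(tenant_id):
    # The tenant filter is applied on every access path, never optional.
    return [r["doc"] for r in RECORDS if r["tenant"] == tenant_id]

print(list_docs("acme"))    # ['Q3 forecast', 'Board deck']
print(list_docs("globex"))  # ['Hiring plan']
```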
Common Examples of SaaS:
Productivity Tools:
Google Workspace (formerly G Suite): Includes Gmail,
Google Docs, Google Sheets, and Google Drive.
Microsoft 365: Includes Word, Excel, PowerPoint, and
Outlook, along with cloud storage via OneDrive.
Customer Relationship Management (CRM):
Salesforce: A leading CRM platform that helps businesses
manage customer interactions, sales, and marketing.

 HubSpot: Offers tools for marketing, sales, and customer service, focusing on inbound marketing and customer engagement.
Enterprise Resource Planning (ERP):
•SAP S/4HANA Cloud: Provides integrated business applications
covering finance, supply chain, and more.
•Oracle ERP Cloud: Comprehensive suite of cloud-based ERP
applications for managing business processes.
Communication and Collaboration:
 Slack: A collaboration hub that connects people, tools, and information, facilitating team communication.
 Zoom: Video conferencing software that offers virtual meetings, webinars, and chat.
E-commerce Platforms:
 Shopify: A platform for setting up and managing online stores, handling everything from product listings to payments.
 BigCommerce: An e-commerce platform that provides tools for creating and scaling online businesses.
Grid Computing
Grid computing is a distributed computing model that
harnesses the combined processing power of multiple
interconnected computers to work on a single, complex task.
These computers, often referred to as nodes, can be
geographically dispersed, yet they work together as a virtual
supercomputer. Grid computing is used to solve large-scale
computational problems that are beyond the capability of a
single machine.

Key Characteristics of Grid Computing:
Distributed Resources:
The grid consists of multiple computing resources—such
as CPUs, storage, and memory—spread across different
locations. These resources are pooled together to work on
tasks, making grid computing highly scalable.
Parallel Processing:
Grid computing enables parallel processing, where a large
task is divided into smaller sub-tasks. These sub-tasks are
processed simultaneously across multiple nodes, significantly reducing overall computation time (see the sketch after this list).
Resource Sharing:
Grid computing allows multiple organizations or departments
to share computing resources, optimizing the use of available
hardware and reducing costs.

Heterogeneous Systems:
The nodes in a grid can be heterogeneous, meaning they may
have different operating systems, hardware architectures,
and configurations. Grid computing middleware manages
these differences to ensure seamless operation.

Scalability:
Grid computing can scale to accommodate large numbers of
nodes, allowing it to handle increasingly complex and
resource-intensive tasks.

Fault Tolerance:
Grid computing systems are designed to be fault-tolerant. If a
node fails, the task can be redistributed to other nodes,
ensuring that the overall computation continues.
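The toy sketch below illustrates the parallel-processing idea from the list above on a single machine: local worker processes stand in for grid nodes, each handling one sub-task of a larger computation. A real grid would ship these sub-tasks to networked, possibly heterogeneous nodes via middleware.

```python
# Toy illustration of grid-style divide and conquer on one machine:
# split a large task into sub-tasks and process them in parallel.
from multiprocessing import Pool

def sub_task(bounds):
    # Each "node" computes a partial sum over its own slice of the work.
    lo, hi = bounds
    return sum(x * x for x in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    chunks = [(i, i + n // 4) for i in range(0, n, n // 4)]  # four sub-tasks
    with Pool(processes=4) as pool:
        partials = pool.map(sub_task, chunks)  # sub-tasks run concurrently
    print("total:", sum(partials))
```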
Types of Grid Computing:
Computational Grids:
Focus on providing large-scale computing power by distributing
computational tasks across many nodes. These are commonly
used for scientific research, simulations, and complex
mathematical calculations.
Data Grids:
Designed for managing and processing large datasets distributed
across multiple locations. Data grids are often used in fields like
bioinformatics, climate research, and physics.
Collaboration Grids:
Enable collaboration among geographically dispersed teams by
providing shared resources and communication tools. These
grids support collaborative projects, often in academic or
research environments.
Utility Grids:
Provide computing resources on-demand, similar to how utilities
like electricity are supplied. Users pay for the resources they
consume, making utility grids a cost-effective solution for
variable workloads.
Advantages of Grid Computing:
Enhanced Performance:
By combining the processing power of multiple nodes, grid
computing can tackle complex tasks much faster than a single
machine, offering near-supercomputer performance.
Cost Efficiency:
Organizations can maximize the utilization of their existing
computing resources, reducing the need for expensive,
dedicated hardware.
Scalability:
Grid computing can easily scale to accommodate additional
nodes, making it suitable for both small-scale and large-scale
applications.
Resource Utilization:
Idle resources, such as underutilized servers or desktop
computers, can be harnessed to contribute to the grid, improving
overall resource efficiency.
Flexibility and Collaboration:
Grid computing allows multiple organizations to collaborate by
sharing resources, enabling joint research and development
projects.
Challenges of Grid Computing:

 Complexity of Management: Managing a grid computing
environment can be complex, especially when dealing with
heterogeneous systems, varying performance levels, and
different administrative domains.
Security Concerns: Since grid computing involves sharing
resources across multiple organizations, ensuring the security
and privacy of data is a significant challenge.
Latency and Bandwidth: The performance of a grid can be
affected by network latency and bandwidth limitations,
particularly when nodes are geographically dispersed.
Interoperability Issues: Integrating different hardware and
software platforms into a cohesive grid can be challenging,
requiring robust middleware solutions to manage
compatibility.
Resource Availability: The availability of resources in a grid
can be unpredictable, as nodes may go offline or become
overloaded, affecting the overall performance and reliability
of the grid.
Examples of Grid Computing Projects:
1. SETI@home: A project that uses grid computing to analyze radio signals for signs of extraterrestrial intelligence. It relies on volunteer computing, where users donate their idle computer power to the grid.
2. World Community Grid: An initiative by IBM that uses grid computing to tackle global challenges in health, poverty, and sustainability. Volunteers contribute their computing power to support various research projects.
3. LHC Computing Grid (LCG): A global grid computing project that processes the data generated by the Large Hadron Collider (LHC) experiments at CERN.
4. TeraGrid: A former U.S. national grid computing project that provided researchers with access to advanced computing resources across multiple supercomputing centers.
Utility computing
Utility computing is a service provisioning model where
computing resources—such as processing power, storage,
and applications—are provided to customers on a pay-as-you-
go basis, similar to traditional utilities like electricity or water.
Instead of owning and maintaining physical hardware, users
can access and use these resources from a service provider
as needed, paying only for what they consume.

Key Characteristics of Utility Computing:


1. On-Demand Resource Provisioning
2. Pay-As-You-Go Pricing
3. Scalability
4. Centralized Management
5. Shared Resources
6. Accessibility
Examples of Utility Computing Providers:
1. Amazon Web Services (AWS): AWS offers various utility computing services, including Elastic Compute Cloud (EC2) for on-demand virtual servers, and Simple Storage Service (S3) for scalable storage.
2. Microsoft Azure: Microsoft Azure provides a range of utility computing services, including virtual machines, storage, and networking, allowing businesses to scale their resources as needed.
3. Google Cloud Platform (GCP): GCP offers utility computing services such as Compute Engine for scalable computing power and Cloud Storage for on-demand data storage.
4. IBM Cloud: IBM Cloud provides utility computing services with a focus on enterprise solutions, offering scalable compute, storage, and AI services.
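As a hedged sketch of the pay-as-you-go model, the code below uses AWS's boto3 library to launch and then terminate an on-demand virtual server. The AMI ID is a placeholder and configured AWS credentials are assumed; charges accrue only while the instance exists.

```python
# Minimal sketch of on-demand, metered provisioning with boto3
# (pip install boto3). The AMI ID below is a placeholder.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",          # small on-demand instance
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)

# Pay-as-you-go: stop paying by releasing the resource when done.
instances[0].terminate()
```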
Utility Computing vs. Cloud Computing:
•Utility Computing: Focuses on providing computing resources as a metered service, where users pay based on their usage. It's often considered a precursor to cloud computing, laying the groundwork for modern cloud service models.
•Cloud Computing: Encompasses a broader range of services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing builds on the principles of utility computing but offers more flexibility, scalability, and service options.
Virtualization Processes
1. Hardware Virtualization:
•Hypervisor Installation: The hypervisor, also known as a virtual
machine monitor (VMM), is installed on a physical host. It is
responsible for creating and managing virtual machines (VMs). There
are two types of hypervisors:
• Type 1 (Bare-Metal): Runs directly on the host's hardware
(e.g., VMware ESXi, Microsoft Hyper-V).
• Type 2 (Hosted): Runs on top of a conventional operating
system (e.g., VMware Workstation, Oracle VirtualBox).
•Resource Allocation: The hypervisor allocates physical resources
(CPU, memory, storage, etc.) to each VM, which operates as an
independent machine.
•VM Creation: VMs are created with specific configurations (e.g.,
allocated CPU cores, RAM, storage). Each VM runs its own operating
system and applications.
•VM Management: The hypervisor manages the operation,
distribution, and performance of VMs. It also handles tasks such as
starting, stopping, pausing, and migrating VMs between hosts.
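A hedged sketch of the VM creation and management steps using libvirt on a KVM host (one possible tool; the slide does not prescribe any). The domain XML is deliberately skeletal to show the allocated vCPUs and RAM; a bootable guest would also need disk and network devices.

```python
# Minimal sketch: define and start a guest through the hypervisor.
# The domain XML is skeletal; real guests also need disk/network devices.
import libvirt

domain_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>   <!-- allocated RAM -->
  <vcpu>2</vcpu>                     <!-- allocated CPU cores -->
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(domain_xml)  # register the VM's configuration
dom.create()                      # start (boot) the guest
print("guest started:", dom.name())
conn.close()
```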
2. Operating System Virtualization:
•Containerization: Instead of virtualizing the entire hardware, operating system virtualization (or containerization) virtualizes the OS layer. Containers share the same OS kernel but run isolated user-space instances. Popular containerization platforms include Docker and Kubernetes.
•Image Creation: Containers are created from images that include the application and its dependencies. These images can be versioned and reused.
•Container Deployment: Containers are deployed and managed using container orchestration tools like Kubernetes, which automates the deployment, scaling, and management of containerized applications.
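For illustration, the sketch below uses the official Kubernetes Python client (an assumed choice; pip install kubernetes) to ask an orchestrator which containerized workloads it is currently managing. A reachable cluster and a kubeconfig file are assumed.

```python
# Minimal sketch: query a container orchestrator for its workloads.
# Assumes "pip install kubernetes" and a kubeconfig for a live cluster.
from kubernetes import client, config

config.load_kube_config()   # read credentials from ~/.kube/config
v1 = client.CoreV1Api()

# List every pod (group of containers) the orchestrator manages.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```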
3. Storage Virtualization:
•Storage Pooling: Physical storage resources (e.g., hard drives, SSDs) are pooled together and managed as a single resource. Virtual storage volumes are created from this pooled resource.
•Logical Volume Management (LVM): LVM is used to create logical volumes, which are abstracted from the physical storage. This allows for flexible resizing and allocation of storage resources (see the sketch after this list).
•SAN/NAS Virtualization: Storage Area Networks (SAN) and Network-Attached Storage (NAS) can be virtualized to provide a unified storage environment that is scalable and easily manageable.
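The sketch below illustrates storage pooling and flexible resizing with Linux LVM, driven from Python. The device names and sizes are placeholders and the commands require root privileges; it is an illustration of the idea, not a production script.

```python
# Hedged sketch: pool physical disks and carve out resizable volumes
# with Linux LVM. Device names and sizes are placeholders; needs root.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", "/dev/sdb", "/dev/sdc"])                   # register physical volumes
run(["vgcreate", "data_pool", "/dev/sdb", "/dev/sdc"])      # pool them into one group
run(["lvcreate", "-L", "100G", "-n", "vol1", "data_pool"])  # carve out a logical volume
run(["lvextend", "-L", "+50G", "/dev/data_pool/vol1"])      # resize flexibly later
```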
4. Network Virtualization:
•Virtual Network Creation: Virtual networks are created using software-defined networking (SDN) principles. This involves abstracting the physical network into virtual networks, allowing for centralized management and configuration.
•Virtual Switches and Routers: Virtual switches and routers replace or augment physical network devices, enabling VMs to communicate over a virtual network.
•Network Segmentation and Security: Virtual networks can be segmented into virtual LANs (VLANs) to isolate traffic and enhance security. Virtual firewalls and other network security measures are applied to control traffic between virtual networks (see the sketch below).
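A hedged sketch of the segmentation idea: creating a VLAN sub-interface with the standard Linux ip tool, invoked from Python. The interface name, VLAN ID, and subnet are illustrative placeholders, and the commands require root privileges.

```python
# Hedged sketch: tag a VLAN segment on an existing interface.
# Interface name, VLAN ID, and subnet are placeholders; needs root.
import subprocess

def run(cmd):
    subprocess.run(cmd.split(), check=True)

run("ip link add link eth0 name eth0.10 type vlan id 10")  # VLAN 10 on eth0
run("ip addr add 192.168.10.1/24 dev eth0.10")             # give the segment a subnet
run("ip link set dev eth0.10 up")                          # bring the segment online
```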
5. Application Virtualization:
•Application Packaging: Applications are packaged with their dependencies into a virtual environment. This ensures that the application can run on any compatible system without conflicts.
•Application Streaming: The virtualized application is delivered to the end-user's device on demand, often over a network, and executed locally without installation.
•Centralized Management: Virtualized applications are managed centrally, allowing for easy updates, deployment, and control over application access.
6. Desktop Virtualization:
•Virtual Desktop Infrastructure (VDI): Users access virtual desktops hosted on a central server. The desktop environment, including the OS and applications, is virtualized and delivered to the user over a network.
•Remote Desktop Services (RDS): RDS allows users to connect to a centralized server that hosts multiple user sessions, providing access to a shared desktop or individual applications.
•Thin Clients: Thin clients are lightweight devices that access virtual desktops or applications hosted on a server, relying on server-side processing rather than local resources.
8. Service Virtualization:
•Service Mocking: In a development environment, service
virtualization allows developers to create virtual versions of
services or APIs that are not yet available or are difficult to
access. This helps in testing and development without relying
on live services.
•Virtual Service Deployment: Virtual services can be
deployed and managed in a cloud or on-premises
environment, enabling flexible service delivery.
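A minimal sketch of service mocking using Flask (an assumed tool choice): a stand-in HTTP service that returns canned responses so development and testing can proceed before the real service exists. The endpoint and payload are invented for illustration.

```python
# Minimal sketch: a mock HTTP service standing in for an unavailable API.
# Assumes "pip install flask"; the endpoint and payload are invented.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/orders/<order_id>")
def mock_order(order_id):
    # Always return a predictable, canned response for tests.
    return jsonify({"id": order_id, "status": "SHIPPED", "total": 42.50})

if __name__ == "__main__":
    app.run(port=8080)
```

Test suites can then point at http://localhost:8080 instead of the live dependency.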

9. Management and Orchestration:
•Centralized Management Tools: Tools like VMware
vCenter, Microsoft System Center, and OpenStack provide
centralized management of virtualized environments,
including monitoring, resource allocation, and automation.
•Orchestration: Automation and orchestration tools manage
the lifecycle of virtual resources, from creation to
decommissioning. This includes automated scaling, load
balancing, and failover management.
10. Backup and Recovery:
•Snapshot and Cloning: Virtual machines and environments can be backed up using snapshots and clones, allowing for quick recovery in case of failure (see the sketch after this list).
•Disaster Recovery: Virtualization simplifies disaster recovery by enabling the replication and migration of virtual environments to different locations, ensuring business continuity.
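A hedged sketch of snapshot-based backup using libvirt on a KVM host; the guest name is a placeholder and the snapshot XML is the minimal accepted form.

```python
# Hedged sketch: snapshot a guest so it can be restored after a failure.
# "demo-vm" is a placeholder guest name.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")

snapshot_xml = """
<domainsnapshot>
  <name>before-maintenance</name>
  <description>Rollback point prior to an upgrade</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)  # capture the current state
print("created snapshot:", snap.getName())

# Recovery later is a single call:
# dom.revertToSnapshot(snap)
conn.close()
```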
