Cloud Computing: Module 1
Cloud computing (overview diagram):
•Deployment models: Public, Private, Community, and Hybrid cloud
•Defining attributes: Massive infrastructure; Utility computing (pay-per-usage); Accessible via the Internet; Elasticity
•Resources: Compute & storage servers; Networks; Services; Applications
•Infrastructure: Distributed infrastructure; Resource virtualization; Autonomous systems
Historical Developments in Cloud Computing
•Early Concepts: Mainframes, time-sharing
•Virtualization: The foundation of cloud computing
•Advent of Cloud Services: AWS launch, emergence of public clouds
•Current Trends: AI integration, edge computing
What is PaaS?
•Definition: PaaS (Platform as a Service) is a cloud computing model that provides a
platform allowing customers to develop, run, and manage applications without dealing
with the underlying infrastructure.
•Key Concept: Focuses on providing a development environment that includes tools,
libraries, and frameworks for building and deploying applications.
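As a rough illustration (not from the original slides), the sketch below shows the kind of artifact a developer hands to a PaaS such as Google App Engine or Azure App Service: just application code. The platform supplies the runtime, web server, scaling, and OS patching. Flask and the port number are illustrative assumptions only.

```python
# Minimal sketch of a PaaS-hosted application: the developer writes only the
# application logic; the platform provides and manages everything beneath it.
# Flask is an assumption used for illustration -- any supported framework works.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Business logic only -- no servers, storage, or networking to manage here.
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Run locally for development; on a PaaS, the platform's own web server
    # imports `app` and runs it, scaling instances up and down as needed.
    app.run(host="0.0.0.0", port=8080)
```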
Application Examples (SaaS)
•Salesforce CRM: Customer management and sales automation.
•Google Workspace: Collaboration tools including email, documents, and storage.
•Microsoft Office 365: Office applications and cloud services for productivity.
•Slack: Messaging and collaboration platform for teams.
•Dropbox: Cloud storage and file synchronization service.
Challenges and Considerations
•Data Security: Ensuring the protection of sensitive information stored in the cloud.
•Compliance: Meeting regulatory and industry-specific requirements.
•Customization Limitations: Limited ability to customize software compared to on-
premises solutions.
•Internet Dependency: Reliance on internet connectivity for accessing applications.
•Vendor Lock-In: Potential challenges in switching providers or migrating data.
1. Mainframes
•Definition: Mainframes are powerful, large-scale computers that were first introduced in the
1950s. They were capable of handling massive amounts of data and processing numerous
tasks simultaneously, which was a significant leap from the earlier, more basic computers.
•Role in Early Computing: Mainframes were the backbone of the early computing era,
primarily used by large organizations like banks, government agencies, and research
institutions. They were expensive, centralized systems, meaning users had to physically
connect to the machine to use its computing power. Mainframes could run multiple
applications and handle numerous users at once, but the interaction was typically batch
processing, where tasks were queued and executed one after another.
•Relation to Cloud Computing: The concept of mainframes introduced the idea of
centralized computing power, which is similar to the infrastructure of modern cloud data
centers. Like mainframes, cloud servers can process large amounts of data and serve
multiple users, but they do this over the internet instead of requiring direct, physical access.
2. Time-Sharing
•Definition: Time-sharing was a revolutionary concept introduced in the 1960s to make computing
resources more accessible. It allowed multiple users to interact with a computer system
simultaneously by dividing the system's processing power into time slices.
•How It Worked: Before time-sharing, computers were used for batch processing, where users
submitted their programs and waited for the results. Time-sharing allowed users to interact with the
computer in real-time by allocating a small slice of CPU time to each user in rotation. This created
the illusion that multiple users were using the computer at the same time, even though the system
was rapidly switching between tasks (see the sketch after this section).
•Relation to Cloud Computing: The idea of time-sharing directly influenced the development of
cloud computing. In cloud environments, resources are allocated to multiple users (or tenants) on-
demand, much like how time-sharing systems allocated CPU time. Cloud providers utilize
virtualization, which is an evolution of time-sharing, to allow multiple virtual machines (VMs) to run
on a single physical server, maximizing resource efficiency.
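To make the time-slice idea concrete, here is a small, purely illustrative Python sketch of round-robin scheduling; the user names and work units are invented for the example.

```python
# Toy round-robin scheduler: each user's job receives a fixed slice of CPU
# time in rotation, creating the illusion that all users run simultaneously.
from collections import deque

def round_robin(jobs, time_slice=2):
    """jobs maps each user to the units of work they still need."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        user, remaining = queue.popleft()
        run = min(time_slice, remaining)
        timeline.append((user, run))        # this user holds the CPU for `run` units
        remaining -= run
        if remaining > 0:
            queue.append((user, remaining)) # back of the queue until finished
    return timeline

if __name__ == "__main__":
    # Three users share one CPU; the resulting schedule interleaves their work.
    print(round_robin({"alice": 5, "bob": 3, "carol": 4}))
    # [('alice', 2), ('bob', 2), ('carol', 2), ('alice', 2), ('bob', 1), ('carol', 2), ('alice', 1)]
```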
Evolution from Mainframes and Time-Sharing to Cloud Computing
•From Centralized to Distributed: While mainframes were centralized, cloud computing is
distributed. Instead of a single, powerful computer, cloud services run on clusters of
interconnected servers across various locations. This setup provides scalability, redundancy,
and fault tolerance.
•Virtualization: Virtualization is an advanced form of time-sharing. In cloud computing,
virtualization allows multiple operating systems and applications to run on a single physical
machine by dividing its resources, similar to how time-sharing allocated CPU time.
•On-Demand Access: Unlike early mainframes, where access was limited, cloud computing
provides on-demand access to resources. Users can scale up or down based on their needs
and pay only for what they use. This flexibility was unimaginable during the era of mainframes
and time-sharing systems.
Public Cloud Overview
1. Definition:
•A public cloud is a computing model in which cloud service providers (CSPs) make computing
resources, such as servers, storage, and applications, available to the public over the internet.
•Resources are owned, managed, and operated by third-party companies, including big players
like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
2. Key Features:
•Scalability: Easily scale resources up or down depending on business needs without having to
invest in physical hardware.
•Cost-Efficiency: Pay-as-you-go pricing model allows organizations to only pay for what they use.
•Accessibility: Resources can be accessed from anywhere with an internet connection.
•Multi-Tenancy: Multiple users share the same physical infrastructure, which helps lower costs.
•Self-Service Provisioning: Users can provision and manage resources themselves through a web-based portal or API, simplifying setup and management (see the sketch below).
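As an illustration of self-service provisioning through a provider's API rather than the portal, the sketch below launches a virtual machine on AWS EC2 with boto3. The AMI ID, region, and instance type are placeholder assumptions, and valid AWS credentials would be required.

```python
# Illustrative only: programmatic self-service provisioning of a VM on a
# public cloud (AWS EC2 via boto3). All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",           # small, pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-public-cloud-vm"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")
```

The same instance can later be stopped or terminated through the same API, which is what makes pay-as-you-go billing practical.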
3. Benefits:
•Lower Upfront Costs: No need for significant investments in physical infrastructure.
•Flexibility and Agility: Businesses can quickly deploy, manage, and scale applications.
•High Availability: Public clouds offer redundancy and disaster recovery, ensuring minimal
downtime.
•Global Reach: Services can be deployed in data centers worldwide, reducing latency and
improving performance.
4. Common Services Offered:
•Infrastructure as a Service (IaaS): Virtual machines, storage, and networking (e.g., AWS EC2,
Azure Virtual Machines)
•Platform as a Service (PaaS): Development and deployment platforms for building and
running applications (e.g., Google App Engine, Azure App Service)
•Software as a Service (SaaS): Ready-to-use software applications hosted by CSPs (e.g.,
Office 365, Salesforce)
5. Use Cases:
•Web Hosting: Host websites and web applications.
•Backup and Recovery: Store backup data securely off-site.
•Big Data and Analytics: Process large datasets without investing in expensive hardware.
•Application Development: Develop, test, and deploy software without the need for physical
servers.
6. Security Considerations:
•Data Security: Encrypt data in transit and at rest to protect sensitive information (see the sketch after this list).
•Compliance: Ensure cloud services comply with industry regulations (e.g., GDPR, HIPAA).
•Identity and Access Management (IAM): Implement robust authentication and authorization
mechanisms to control access to cloud resources.
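As one concrete way to apply the first point above, the sketch below uploads an object to cloud storage with server-side encryption enabled (AWS S3 via boto3). The bucket and object names are placeholders, and the HTTPS transport used by the SDK covers encryption in transit.

```python
# Illustrative only: encrypting data at rest in public-cloud object storage.
import boto3

s3 = boto3.client("s3")   # SDK calls travel over HTTPS (encryption in transit)

s3.put_object(
    Bucket="example-bucket",              # placeholder bucket name
    Key="reports/confidential.csv",       # placeholder object key
    Body=b"sensitive,data\n",
    ServerSideEncryption="aws:kms",       # encrypt at rest with a KMS-managed key
)
```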
7. Challenges:
•Data Privacy: Organizations must be aware of where their data is stored and how it is
protected.
•Vendor Lock-In: Moving services from one provider to another can be complex and
costly.
•Performance Issues: Heavy dependence on internet connectivity can lead to latency
or bandwidth issues.
8. Major Public Cloud Providers:
•Amazon Web Services (AWS)
•Microsoft Azure
•Google Cloud Platform (GCP)
•IBM Cloud
•Oracle Cloud Infrastructure (OCI)
9. Future Trends:
•Edge Computing: Bringing computing closer to the data source to reduce latency.
•Hybrid Cloud Solutions: Combining public clouds with private clouds or on-premises infrastructure for greater flexibility.
•Artificial Intelligence (AI) and Machine Learning (ML) Services: More advanced AI and
ML tools available as cloud-based services.
Private Cloud Overview
1. Definition:
•A private cloud is a cloud computing environment that is exclusively dedicated to a single
organization. Unlike public clouds, where resources are shared among multiple users, a private
cloud provides dedicated infrastructure, ensuring greater control over data, security, and
compliance. It can be hosted on-premises (within the organization’s data center) or by a third-
party service provider.
2. Key Features:
•Exclusive Access: Resources are dedicated to a single organization, providing better
performance, security, and control.
•Customization: Allows for customization to meet specific business and IT requirements, unlike
standardized public cloud solutions.
•Scalability: Offers scalability, though typically limited to the capacity of the organization's
infrastructure.
•Enhanced Security: Since resources are not shared, there is a reduced risk of data breaches or
unauthorized access.
3. Benefits:
•Increased Security and Privacy: Sensitive data can be stored and processed within the
organization’s private network, reducing the risk of data leakage.
•Better Compliance: Organizations can implement their own security protocols and
compliance measures, making it easier to meet industry-specific regulations (e.g., HIPAA, PCI-
DSS).
•Custom Control: Full control over the cloud infrastructure, including hardware, software, and
security configurations.
•Resource Efficiency: Optimize resource usage within the organization, as resources can be
provisioned and de-provisioned as needed.
4. Common Use Cases:
•Highly Regulated Industries: Healthcare, finance, and government sectors that require
stringent data security and compliance.
•Data-Intensive Applications: Businesses dealing with large volumes of sensitive data that
need fast and secure access.
•Development and Testing: Organizations can create isolated environments for
development, testing, and production workloads without risking data exposure.
•Legacy Systems: Hosting legacy applications that may not be compatible with public
cloud infrastructure.
5. Deployment Models:
•On-Premises Private Cloud: Infrastructure is hosted and managed within the organization’s
own data centers. Provides full control but may require significant capital investment and
maintenance.
•Hosted Private Cloud: A third-party provider hosts the private cloud, but the infrastructure is
still dedicated to a single organization. Offers the benefits of a private cloud without the
need for on-site hardware.
•Managed Private Cloud: A third-party provider manages the private cloud on behalf of
the organization, including infrastructure, software, and services.
6. Security Considerations:
•Data Encryption: Use encryption for data at rest and in transit to protect sensitive
information.
•Access Control: Implement strict access controls and multi-factor authentication (MFA)
to prevent unauthorized access.
•Regular Audits and Monitoring: Continuous monitoring and regular security audits to
identify and address vulnerabilities.
7. Challenges:
•High Costs: Requires significant investment in hardware, software, and IT personnel,
making it more expensive than public cloud solutions.
•Maintenance and Management: Ongoing maintenance and management of
infrastructure can be resource-intensive.
•Scalability Limitations: Scaling may require additional capital investment in hardware,
unlike the easy scalability of public clouds.
8. Comparison with Public Cloud:
•Cost: Public cloud is generally more cost-effective due to shared resources, while private
cloud involves higher costs for dedicated infrastructure.
•Control: Private cloud provides greater control over infrastructure, security, and
compliance.
•Scalability: Public cloud offers better scalability, while private cloud is limited by physical
resources.
9. Popular Private Cloud Platforms:
•VMware vSphere
•OpenStack (see the provisioning sketch below)
•Microsoft Azure Stack
•Red Hat OpenShift
•Oracle Cloud Infrastructure (OCI) Private Cloud
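As a sketch of how provisioning looks on a private cloud, the snippet below uses the openstacksdk Python library against an OpenStack deployment (one of the platforms listed above). The cloud profile, image, flavor, and network names are placeholder assumptions, and credentials are expected in a clouds.yaml file.

```python
# Illustrative only: creating a VM on an OpenStack-based private cloud.
import openstack

conn = openstack.connect(cloud="my-private-cloud")    # placeholder clouds.yaml profile

image = conn.compute.find_image("ubuntu-22.04")       # placeholder image name
flavor = conn.compute.find_flavor("m1.small")         # placeholder flavor name
network = conn.network.find_network("internal-net")   # placeholder network name

server = conn.compute.create_server(
    name="demo-private-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Server {server.name} is {server.status}")
```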
10. Future Trends:
•Hybrid Cloud Solutions: Combining private and public cloud environments to provide greater flexibility and scalability.
•Edge Computing Integration: Bringing processing closer to data sources, which can be
integrated with private cloud setups for faster processing and lower latency.
Virtualization: The Foundation of Cloud Computing
Virtualization is a core technology that underpins cloud computing, enabling the efficient use
of physical resources by creating virtual versions of hardware, software, and networks. It allows
cloud providers to offer scalable, flexible, and cost-effective services.
1. What is Virtualization?
•Definition: Virtualization is the process of creating a virtual version of a physical resource, such
as a server, storage device, network, or even an operating system. Instead of having a single
physical machine dedicated to a specific task, virtualization allows that physical machine to be
split into multiple virtual machines (VMs), each functioning as if it were a separate, independent
system.
•How It Works: Through a software layer known as a hypervisor, virtualization abstracts the
underlying hardware and allows multiple virtual environments to run on a single physical
server. Each VM can have its own operating system, applications, and resources (CPU,
memory, storage) without interfering with others.
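As a minimal sketch of how software talks to a hypervisor, the snippet below uses the libvirt Python bindings, assuming a local QEMU/KVM host; the connection URI is an assumption, and the code only inspects the hypervisor and its virtual machines.

```python
# Illustrative only: querying a hypervisor and its VMs through libvirt.
import libvirt

conn = libvirt.open("qemu:///system")   # assumed local QEMU/KVM hypervisor

print("Hypervisor type:", conn.getType())
info = conn.getInfo()                    # [model, memory MB, CPUs, ...]
print("Host memory (MB):", info[1], "| host CPUs:", info[2])

for dom in conn.listAllDomains():
    # Each domain is an isolated virtual machine sharing this physical host.
    state = "running" if dom.isActive() else "shut off"
    print(f"VM {dom.name()}: {state}")

conn.close()
```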
2. Types of Virtualization
1. Server Virtualization
•Description: This is the most common form of virtualization, where multiple virtual servers run on a single physical server. Each virtual server operates independently, as if it were a standalone machine.
•Benefits: Improves resource utilization, reduces hardware costs, and simplifies server management.
2. Network Virtualization
•Description: In network virtualization, the physical networking hardware (like routers and switches) is abstracted into virtual networks. Virtual networks can be created, managed, and reconfigured without needing to change physical infrastructure.
•Benefits: Enhances flexibility, allows for network segmentation, and improves security and scalability.
3. Storage Virtualization
•Description: Storage virtualization aggregates multiple physical storage devices into a single,
virtual storage pool. This pool can then be allocated dynamically to various users or
applications.
•Benefits: Simplifies storage management, increases storage utilization, and provides better
data backup and recovery options.
4. Desktop Virtualization
•Description: Desktop virtualization allows users to run a virtual desktop environment on a
remote server. Users can access their virtual desktops from any device, anywhere, without
needing a specific physical machine.
•Benefits: Provides flexibility, reduces hardware costs, and simplifies IT management.