CC Unit 1
Cloud computing is the delivery of computing services over the internet ("the cloud"). These services include
storage, databases, servers, networking, software, analytics, and more. Rather than owning and maintaining
physical data centers or servers, organizations and individuals can access and use these resources on-demand
from a cloud provider like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform
(GCP).
1. On-Demand Self-Service
o Users can provision computing resources as needed without requiring human interaction with
each service provider.
2. Broad Network Access
o Services are available over the network and accessed through standard mechanisms like laptops, mobile phones, or desktops.
3. Resource Pooling
o Providers use a multi-tenant model to serve multiple customers with physical and virtual
resources dynamically assigned and reassigned based on demand.
o Resources like memory, storage, and processing power are pooled to serve multiple users.
4. Rapid Elasticity
o Capabilities can be rapidly and elastically provisioned to quickly scale up or down depending
on demand.
5. Measured Service
o Cloud systems automatically control and optimize resource usage through metering.
o This allows for transparent billing — users only pay for what they use (pay-as-you-go).
6. Scalability
o Resources can be scaled up or down easily to match workload demand.
7. Security
o Includes encryption, identity management, and access controls to protect data and services.
8. Cost Efficiency
o Pay-as-you-go pricing avoids large upfront investment in hardware.
9. Reliability
o Cloud providers offer data backup, disaster recovery, and failover mechanisms to ensure uptime.
Issues in Cloud Computing
1. Security Issues
• Misconfigured Cloud Storage: Mistakes in setting access controls can expose confidential data.
• Data Loss: Accidental deletion, hardware failure, or cyberattacks can lead to loss of data.
2. Performance Issues
• Resource Contention: In multi-tenant environments, performance can degrade due to other users
sharing the same resources.
• Scalability Delays: Sometimes auto-scaling doesn't respond fast enough to sudden demand spikes.
3. Data-Related Issues
• Data Privacy: Difficulty ensuring compliance with laws like GDPR, HIPAA, etc.
• Data Location: Uncertainty about where data is stored (can impact legal jurisdiction).
• Data Portability: Moving data between cloud providers can be complex and risky.
• Vendor Lock-In: Dependence on one provider’s platform makes migration difficult and expensive.
4. Energy-Related Issues
• High Energy Consumption: Massive data centers consume a lot of electricity, raising environmental
concerns.
• Carbon Footprint: Not all cloud providers use renewable energy, contributing to global emissions.
• Cooling Requirements: Maintaining optimal temperatures in data centers uses additional power.
5. Reliability and Availability Issues
• Single Point of Failure: Despite redundancy, some systems might still have components that fail and bring the service down.
• Disaster Recovery Delays: If recovery systems are not configured properly, restoring services can be
slow or incomplete.
• Software Bugs: Cloud services are still prone to bugs that can cause crashes or errors.
• Network Failures: Connectivity issues between user and cloud provider can halt access.
Here are the challenges in cloud computing architecture (names only):
1. Scalability
2. Interoperability
3. Portability
4. Security and Privacy
5. Availability
6. Multi-tenancy
7. Data Management
8. Load Balancing
9. Resource Allocation
Cloud computing also enables:
• Global Collaboration with shared platforms for development, storage, and communication
• Secure and Reliable Services with built-in data protection, fault tolerance, and compliance
The following flow illustrates the components of parallel computing and how they interact throughout the process of solving computing problems in parallel. Here is a step-by-step explanation:
1. Computing Problems
• It all starts with identifying a problem that can be solved using parallel processing techniques.
2. Parallel Algorithms and Data Structures
• The problem is analyzed to design parallel algorithms and select suitable data structures that allow concurrent execution.
• Dependency analysis is performed to determine which parts of the computation can be executed simultaneously.
3. Mapping
• This step involves mapping the parallel parts of the algorithm onto multiple processors.
4. Programming (High-Level Languages)
• Once the algorithm and its division are defined, high-level programming languages are used to implement them.
• This step includes writing parallel code using programming models like OpenMP, MPI, CUDA, etc.
5. Binding (Compile, Load)
• The written code is compiled and linked (binding) to create an executable program that can be loaded and run on the system.
6. Performance Evaluation
• The program’s performance is measured in terms of speedup, efficiency, scalability, and resource
utilization.
• This step helps in identifying bottlenecks and improving the parallel solution.
Supporting layers:
• Operating System: Manages hardware resources and provides services for execution.
• Application Software: The final product that the user interacts with, built using parallel programming principles.
This flow captures the entire lifecycle of solving a problem using parallel computing, from identifying a suitable problem through designing and implementing a parallel solution, executing it, and evaluating its performance, all supported by a layered system architecture. A small code sketch of these steps follows.
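As a minimal illustration of steps 4 to 6, the sketch below runs the same CPU-bound task first serially and then in parallel with Python's multiprocessing module, and computes speedup (S = T1/Tp) and efficiency (E = S/p). The task, the job sizes, and the worker count of 4 are arbitrary assumptions for the example.

# Measure speedup and efficiency of a parallel map over a CPU-bound task.
import time
from multiprocessing import Pool

def busy_work(n):
    # CPU-bound placeholder task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 16

    t0 = time.perf_counter()
    serial = [busy_work(n) for n in jobs]      # T1: serial execution time
    t1 = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=4) as pool:            # Tp: parallel time with p = 4
        parallel = pool.map(busy_work, jobs)
    tp = time.perf_counter() - t0

    speedup = t1 / tp                          # S = T1 / Tp
    efficiency = speedup / 4                   # E = S / p
    print(f"speedup={speedup:.2f}, efficiency={efficiency:.2f}")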
Evolution of Cloud Computing
1. Grid Computing
• Goal: Allow users to access high computing power, big storage, and services—like using electricity or
water.
• How it worked:
o Unlike a fixed cluster, a grid was flexible and could include different types of computers from
around the world.
2. Utility Computing
• Users can use computing services like storage, applications, and processing power on a pay-as-you-go
model.
• It introduced improvements such as better operating systems, user control, and usage-based billing.
• This concept grew beyond business and became popular in academics too.
• SaaS means software that is hosted on remote servers (in the cloud) and accessed through a browser.
• Example: Gmail, Yahoo Mail, Hotmail (all are email services you use online).
a. Business Services
• Meant for companies.
• Used for things like supply chain, customer management, and business tools.
b. Customer-Oriented Services
• Meant for individual users, for services such as web-based email (e.g., Gmail).
Key Features of Distributed Systems:
1. Resource Sharing
2. Scalability
3. Concurrency
4. Fault Tolerance
o If one machine fails, others keep working without losing data or crashing.
5. Transparency
o Users feel like they are using a single system, even though it's distributed.
▪ Types include access, location, migration, replication, and failure transparency.
6. Openness
7. Security
8. Heterogeneity
Distributed Computing
Distributed Computing is a type of computing where a single task is divided into smaller parts, and each part
is processed by different computers connected over a network. These computers work together to complete the
task faster and more efficiently. They communicate and share data with each other through a network, usually
the internet or a local connection. The goal is to use multiple systems to solve a problem that is too big for a
single computer to handle.
This system increases speed, reliability, and scalability. If one computer fails, others can take over the work,
making it fault-tolerant. Distributed computing is used in many fields like weather forecasting, online banking,
scientific research, and cloud services. Examples include systems like Google Search, Amazon Web Services,
and SETI@home. It helps in better resource utilization by combining the power of multiple machines.
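As a toy illustration, the sketch below divides one large task (summing a list of numbers) into parts and hands each part to a separate worker; in a real distributed system the parts would be shipped to different machines over a network, while here threads merely stand in for nodes. The data size and worker count are invented.

# Split one big task into chunks and process each chunk on a separate worker.
from concurrent.futures import ThreadPoolExecutor

def node_sum(chunk):
    # Work done by one "node"
    return sum(chunk)

data = list(range(1_000_000))
n_nodes = 4
size = len(data) // n_nodes
chunks = [data[i * size:(i + 1) * size] for i in range(n_nodes)]

with ThreadPoolExecutor(max_workers=n_nodes) as pool:
    partials = list(pool.map(node_sum, chunks))   # scatter the work

total = sum(partials)                             # gather partial results
assert total == sum(data)
print("distributed total:", total)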
ARCHITECTURE OF DISTRIBUTED COMPUTING:
1. Data-centered architecture:
• This architecture focuses on data as the most important part of the software system.
• It allows different parts of the system to share and access the same data.
• The main goal is to keep data accurate and consistent in distributed and parallel systems.
• A common example or model used here is called the repository architectural style, which stores and
manages data centrally.
2. Data-flow architecture:
• This architecture is about controlling how data moves through the system.
• Unlike the data-centered style, here the focus is on the flow of data between parts of the system.
• The design defines how data is passed from one component to another in a specific order.
• The system is often designed as multiple stages, with each stage performing a part of the overall task
and working together.
3. Virtual machine architecture:
• This style creates an environment that looks like a real computer but is actually a software simulation.
• It allows software and applications to run the same way on different hardware systems.
• Virtual machines help software move easily from one type of hardware to another.
4. Call and return architecture:
• This style describes systems made of components that work together by calling each other’s functions or methods.
• The work is divided into parts that execute in order and interact with each other through calls.
5. Independent components architecture:
• This style treats components as independent units that run their own processes.
o Event systems: Components are mostly separate but can announce or broadcast events to each other to share information or trigger actions.
1. Application Layer (Top Layer)
• Function: Delivers software applications as services to end users.
• Examples: Web services, multimedia streaming, email services, business applications like Salesforce,
Google Docs, etc.
• Who uses it?: End users and businesses interacting directly with software-as-a-service (SaaS).
2. Platforms Layer (Middle Layer)
• Function: Provides a software development framework and tools to build, test, and deploy
applications.
• Examples: Platforms like Google App Engine, Microsoft Azure App Services, Heroku.
3. Infrastructure Layer
• Function: Provides core computing resources such as storage, networking, and virtual machines.
• Who uses it?: System administrators and developers who need Infrastructure-as-a-Service (IaaS).
4. Datacenter Layer (Bottom Layer)
• Function: Physical hardware that includes CPU, memory, disk storage, and network bandwidth.
• Examples: The actual data centers run by Amazon, Google, Microsoft, etc.
• Who uses it?: Cloud providers; not typically accessed directly by users.
Layer | Function | Examples
Application Layer | Delivers software to end users | Google Docs, Dropbox, Zoom
Platforms Layer | Environment for developing applications | Azure App Services, Google App Engine
Infrastructure Layer | Provides virtualized computing resources | Amazon EC2, DigitalOcean Droplets
Datacenter Layer | Physical hardware (CPU, memory, disk, bandwidth) | Data centers run by Amazon, Google, Microsoft
Deployment Models
1. Public Cloud
A public cloud is a cloud computing model where services such as storage, servers, and applications are
delivered over the internet by third-party providers. These resources are shared among multiple users, also
known as tenants.
Key Features:
• Owned and managed by cloud providers like Amazon Web Services (AWS), Microsoft Azure, and
Google Cloud.
• Scalable and cost-effective, as users pay only for what they use.
Examples:
• Amazon Web Services (AWS)
• Microsoft Azure
• Google Cloud Platform (GCP)
Use Cases:
Public cloud is ideal for businesses seeking flexibility, reduced IT costs, and quick deployment without
investing in physical infrastructure.
Security Measures:
1. Data Encryption
4. Monitoring and Logging
o Continuous logging and monitoring help detect threats and unusual activities.
5. Compliance Standards
Challenges:
2. Private Cloud
1. A private cloud is a type of cloud setup that is created within a company’s own network (like on-site
data centers) and is usually managed by the company itself.
2. It gives some of the same features as public cloud—like using more resources when needed, easy
setup of services, and access based on services—but it's used only by one organization.
3. A private cloud is best when a company needs more control, strong security, and reliability, and
wants to give limited access to certain users only.
Key concerns in building and running a private cloud include:
2. Governance and management – Controlling and managing the whole cloud system properly.
3. Multi-tenancy – Allowing different departments or teams to use the same cloud but with separate
access.
4. Consistent deployment – Making sure software and services are set up the same way every time.
5. Chargeback and pricing – Tracking usage and costs for each team or department.
6. Security and access control – Protecting data and making sure only authorized users can access it.
Benefits of Private Cloud:
1. Better Security – Data is stored in a private environment, reducing the risk of external threats.
2. More Control – Organizations have full control over their infrastructure and data.
4. Improved Performance – Resources are dedicated, leading to faster and more reliable performance.
• This type of private cloud shares computing resources (like servers) across different departments of
the same organization.
• Charges are based on usage, so each business unit pays for what they use.
• It provides services that can be set up quickly when needed (dynamic provisioning).
• It uses a system called Service-Oriented Architecture (SOA) to easily use services in new or existing
accounts.
• This cloud allows work to move in and out of the cloud easily when needed.
• It is easy to manage and offers good performance and flexibility (with reliable SLAs and scalability).
Advantages of Private Cloud (names only):
2. Better Control
3. Customization
4. Compliance
5. Performance
6. Data Isolation
Disadvantages of Private Cloud (names only):
1. High Cost
2. Maintenance Responsibility
3. Limited Scalability
4. Complex Setup
5. Requires Skilled Staff
6. Hardware Dependency
3. Hybrid Cloud
A hybrid cloud is a cloud computing model that combines both private and public cloud services, allowing
data and applications to move between the two environments. This setup enables organizations to store sensitive
or confidential data on the private cloud—where security and control are higher—while using the public cloud
to handle less critical tasks or sudden spikes in workload. This way, businesses can maintain data privacy
without sacrificing the benefits of scalability and flexibility.
The hybrid cloud offers cost-effectiveness, enhanced performance, and greater agility. It allows companies to
optimize their IT infrastructure by balancing workloads between public and private clouds according to their
needs. This model is especially useful for large organizations that must follow strict regulatory requirements
while also needing the ability to quickly scale resources. Overall, hybrid cloud provides a smart and balanced
approach to modern computing challenges.
Service Models
IaaS (Infrastructure as a Service) is a cloud computing model where users can access essential infrastructure
resources like servers, storage, networking, and virtual machines over the internet. Instead of buying and
maintaining physical hardware, users rent these resources as needed, which reduces costs and minimizes
maintenance efforts.
In this model, the cloud provider manages the physical infrastructure, while users are responsible for managing
the operating systems, software, and applications. It is suitable for both small and large businesses because the
infrastructure can easily be scaled up or down based on demand. IaaS is commonly used for website hosting,
software development and testing, and disaster recovery.
Some popular IaaS providers include Amazon Web Services (AWS EC2), Microsoft Azure, and Google Cloud
Platform (GCP). This model provides flexibility, cost-effectiveness, and full control over your data and
applications without requiring heavy investment in physical infrastructure.
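As a hedged sketch of what renting infrastructure looks like in practice, the snippet below launches a single virtual machine on AWS EC2 with the boto3 SDK. The AMI ID, instance type, and region are placeholders, and valid AWS credentials are assumed; this illustrates the idea rather than a recommended setup.

# Launch one EC2 virtual machine (IaaS) via the AWS boto3 SDK.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",           # small general-purpose VM size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)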
PaaS is a cloud computing service model that provides developers with a complete platform to build, test, and
deploy applications without worrying about managing the underlying infrastructure like servers, storage, or
networking.
In PaaS, the cloud provider manages everything from the hardware, operating system, middleware, runtime
environment, and databases, so developers can focus fully on coding and developing their applications.
Key Features of PaaS:
• Development tools: Includes code editors, compilers, debuggers, and testing tools.
• Collaboration: Teams can easily work together on the same platform from anywhere.
Benefits of PaaS:
• Cost-effective: Pay only for what you use, no need to buy hardware.
• Access anywhere: Develop and manage apps from anywhere with internet access.
Examples of PaaS:
• Heroku
• Google App Engine
• Microsoft Azure App Services
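To make the PaaS idea concrete, here is a minimal sketch of the kind of application a developer hands to such a platform: a tiny Flask web service (Flask must be installed). The route and message are invented; on a Heroku-style platform one would add a Procfile such as web: gunicorn app:app and push the code, and the platform supplies the runtime, routing, and scaling.

# Minimal web app of the kind deployed to a PaaS.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Local test run; a PaaS would run this behind its own web server.
    app.run(port=5000)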
SaaS is a cloud computing model where software applications are delivered over the internet on a subscription
basis. Users can access these applications via a web browser without needing to install or maintain anything on
their local devices.
Examples of SaaS:
• Dropbox
• Salesforce
MAJOR PLAYERS IN CLOUD COMPUTING (CC):
1. Amazon Web Services (AWS)
o The largest and most widely used cloud platform, known for services such as EC2 (compute) and S3 (storage)
2. Microsoft Azure
o Provides a wide range of cloud services including AI, machine learning, and analytics
o Popular for Azure Virtual Machines, Azure App Service, and Azure SQL Database
3. Google Cloud Platform (GCP)
o Known for big data, machine learning, and container orchestration (Kubernetes)
4. IBM Cloud
5. Oracle Cloud
o Offers cloud infrastructure and platform services tailored for business applications
6. Alibaba Cloud
o Provides IaaS, PaaS, and SaaS with strong focus on e-commerce and retail sectors
Here are some of the key issues related to cloud computing, categorized for better understanding:
1. Security and Privacy
• Data loss: If proper backups are not maintained, data may be lost due to failures or cyberattacks.
• Compliance risks: Cloud users must comply with regulations like GDPR, HIPAA, etc.
3. Vendor Lock-in
• It becomes difficult to migrate from one provider to another due to compatibility and dependency
issues.
4. Cost Management
5. Limited Control
• Users have less control over infrastructure and configurations since everything is managed by the
cloud provider.
6. Data Location
• Data may be stored in servers located in foreign countries, raising concerns about:
o Legal jurisdiction
7. Integration Challenges
• Integrating cloud services with on-premise systems or legacy applications can be complex and costly.
8. Insider Threats
• Employees or insiders at cloud service providers may misuse access to customer data.
9. Network Issues
• Performance heavily depends on internet speed and latency, which can vary by region.
How Cloud Computing Provides Scalability
What is Scalability?
Scalability means the ability to increase or decrease IT resources (like storage, processing power, or
instances) based on demand.
1. Elastic Resources
o Cloud platforms (like AWS, Azure, GCP) offer auto-scaling, where virtual machines or
containers are added or removed automatically based on load.
2. Pay-as-you-go Model
o You pay only for what you use. So, if traffic increases, more resources can be added instantly
without investing in physical hardware.
3. Global Infrastructure
o Cloud providers have data centers worldwide, allowing applications to scale across regions
and serve users faster.
4. Load Balancing
o Incoming traffic is automatically distributed across multiple servers, so no single machine becomes a bottleneck.
Example:
An e-commerce site experiences a spike in traffic during a sale. Cloud platforms detect the spike and
automatically scale up the servers. Once the sale is over, they scale down, saving costs.
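A minimal sketch of the threshold-based logic described above, with invented thresholds, server limits, and a stand-in get_average_load() metric; real platforms expose this as managed auto-scaling configuration rather than user code.

# Toy threshold-based auto-scaler: add servers on high load, remove on low.
import random

def get_average_load():
    # Stand-in for a real metric such as average CPU utilization (%)
    return random.uniform(0, 100)

servers = 2
MIN_SERVERS, MAX_SERVERS = 2, 10

for tick in range(5):                          # one pass per monitoring interval
    load = get_average_load()
    if load > 70 and servers < MAX_SERVERS:
        servers += 1                           # scale up on high load
    elif load < 30 and servers > MIN_SERVERS:
        servers -= 1                           # scale down to save cost
    print(f"tick={tick} load={load:.0f}% servers={servers}")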
How Cloud Computing Provides Fault Tolerance
What is Fault Tolerance?
Fault tolerance is the system's ability to keep running smoothly even if part of the system fails.
1. Redundancy
o Critical systems and data are replicated across multiple servers and regions. If one fails,
another takes over instantly.
2. Automated Backups
o Regular snapshots and backups ensure that lost data can be restored quickly.
3. Multi-Zone Deployment
o Cloud applications can be deployed in multiple availability zones or regions. If one zone fails (e.g., due to natural disaster), others continue running.
Example:
If a web server fails in AWS, the system detects the failure and launches a new instance automatically—
users may not even notice a glitch.
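A toy sketch of redundancy-based failover, with invented replica names and simulated failures: each request tries the replicas in order and moves on when one is down.

# Try redundant replicas in order; fail over when one is unavailable.
import random

REPLICAS = ["server-a", "server-b", "server-c"]   # redundant copies

def call(server):
    if random.random() < 0.5:                     # simulate a random outage
        raise ConnectionError(f"{server} is down")
    return f"response from {server}"

def fault_tolerant_request():
    for server in REPLICAS:                       # failover: try the next replica
        try:
            return call(server)
        except ConnectionError:
            continue                              # this replica failed, move on
    raise RuntimeError("all replicas failed")     # rare: every copy was down

print(fault_tolerant_request())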
Summary:
Cloud platforms provide scalability through elastic, pay-as-you-go resources and fault tolerance through redundancy, automated backups, and multi-zone deployment.
Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) is an
open-source cloud computing platform that enables users to build private or hybrid clouds using their existing
IT infrastructure. It follows the Infrastructure as a Service (IaaS) model and is designed to be highly compatible
with Amazon Web Services (AWS), supporting APIs like EC2 and S3. This compatibility allows organizations
to move workloads between their private cloud and AWS with ease. Eucalyptus provides features such as virtual
machine provisioning, scalable storage, and network management. Originally developed at the University of
California, Santa Barbara, it was later commercialized by Eucalyptus Systems and eventually acquired by
Hewlett-Packard (HP) to enhance its cloud offerings. Eucalyptus is particularly valued for enabling enterprises
to maintain control over their data while enjoying the flexibility of cloud computing.
COMPONENTS OF EUCALYPTUS IN CC
1. Cloud Controller (CLC) – The top-level component that manages the whole cloud and exposes EC2-compatible APIs.
2. Walrus – S3-compatible storage service for buckets and objects.
3. Cluster Controller (CC) – Manages a cluster of nodes and schedules virtual machines onto them.
4. Storage Controller (SC) – Provides block storage volumes within a cluster.
5. Node Controller (NC) – Runs on each physical host and controls the life cycle of the VMs running there.
Benefits of Eucalyptus
1. AWS Compatibility
o Supports AWS APIs (like EC2, S3), enabling hybrid cloud and easy integration.
2. Open-Source Platform
o Free to use and highly customizable according to organizational needs.
3. Private & Hybrid Cloud Support
o Allows organizations to build secure private clouds and connect with public clouds.
4. Cost-Effective
o Reduces infrastructure costs by utilizing existing hardware and avoiding vendor lock-in.
5. Scalability
o Easily scalable to meet growing demands of computing resources.
6. Data Control & Security
o Enables on-premises data storage and processing, ensuring data privacy and regulatory
compliance.
7. Modular Architecture
o Components like Walrus, CLC, SC, etc., can be managed and configured independently.
8. Efficient Resource Management
o Provides tools to manage virtual machines, storage, and network resources effectively.
9. Enterprise Ready
o Suitable for large organizations needing control, flexibility, and performance.
Nimbus is an open-source cloud computing toolkit that enables users to turn clusters into an Infrastructure-as-
a-Service (IaaS) cloud. It is primarily designed for scientific and academic environments, allowing researchers
to deploy and manage virtual machines on distributed systems.
Developed by:
Nimbus was developed at the University of Chicago and is widely used in research and academic institutions to
create science-focused cloud environments.
OpenStack
OpenStack is an open-source cloud platform for building and managing public and private IaaS clouds.
Developed by:
NASA Ames Research Center in the United States (together with Rackspace).
CloudSim
CloudSim is a framework for modeling and simulating cloud computing environments and services. It is
widely used by researchers and developers to test algorithms, resource provisioning, and scheduling
techniques in cloud scenarios without using real cloud infrastructure, which can be costly and complex.
CloudSim was developed by the CLOUDS Lab at the University of Melbourne, Australia. A toy illustration of CloudSim-style modeling appears after the feature list below.
Key Features:
o Enables simulation of virtual machine (VM) provisioning and resource allocation policies.
o Users can implement and test their own scheduling, allocation, and load balancing policies.
4. Energy-Aware Simulation
5. Scalability Testing
7. Cost Modeling
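CloudSim itself is a Java framework; as a rough, language-neutral illustration of what such a simulator models, here is a toy Python sketch that places VMs onto hosts with a first-fit policy and computes a simple pay-per-use cost. All names, sizes, and prices are invented.

# Toy analogue of CloudSim-style modelling: first-fit VM placement plus cost.
hosts = [{"id": h, "cores": 8, "free": 8} for h in range(2)]
vms = [{"id": v, "cores": c} for v, c in enumerate([2, 4, 2, 4, 4])]

placement = {}
for vm in vms:                                    # first-fit allocation policy
    for host in hosts:
        if host["free"] >= vm["cores"]:
            host["free"] -= vm["cores"]
            placement[vm["id"]] = host["id"]
            break

COST_PER_CORE_HOUR = 0.05                         # invented price
hours = 10
for vm in vms:
    if vm["id"] in placement:
        cost = vm["cores"] * hours * COST_PER_CORE_HOUR
        print(f"VM {vm['id']} -> host {placement[vm['id']]}, cost ${cost:.2f}")
    else:
        print(f"VM {vm['id']} could not be placed (insufficient capacity)")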