Cloud Computing Important Topics Oraf (Series 1) - 1

MODULE 1

1.] NIST Cloud Computing Reference Architecture


NIST Cloud Computing Reference Architecture is a comprehensive framework that
defines the roles, responsibilities, and interactions among different entities involved in cloud
computing. The architecture does not dictate how to implement cloud computing solutions
but provides a high-level overview of what cloud services must deliver.

Five Essential Characteristics of Cloud Computing (mnemonic: BR-MOR):

1. Broad Network Access:


Cloud services are available over the network and accessed through standard
mechanisms, promoting accessibility from various client platforms such as mobile
phones, laptops, and PDAs. This broad access provides users with the economic
advantage of avoiding the setup of expensive in-house data centers. Cloud
facilities are ubiquitously available wherever there is a network connection.
2. Rapid Elasticity:
Cloud computing provides rapid provisioning of resources to scale up or down
based on demand. This elasticity creates an impression of infinite resources
available to the user and ensures that resources can be allocated or deallocated
swiftly to avoid wastage.
3. Measured Service:
Cloud systems automatically control and optimize resource use by leveraging a
metering capability. Resource usage is monitored, controlled, and reported,
providing transparency for both the provider and consumer of the service. This
feature ensures that users only pay for what they actually consume.
4. On-demand Self-Service:
Users can unilaterally provision computing capabilities such as server time and
network storage as needed automatically, without requiring human interaction with
each service provider. The services are delivered promptly, and users can manage
their own computing resources.
5. Resource Pooling:
The provider’s computing resources are pooled to serve multiple consumers using
a multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand. This model provides
flexibility, scalability, and the capability to serve many users simultaneously and
reliably.
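The "measured service" characteristic above can be sketched in a few lines: usage of each resource is metered, and the bill reflects only what was actually consumed. This is a toy illustration, not a real billing API; the resource names and rates are invented.

```python
from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    """Toy sketch of measured service: consumption is metered per
    resource and billed per unit. Rates and resource names are
    hypothetical."""
    rates: dict                      # e.g. {"vm_hours": 0.05, "gb_stored": 0.02}
    usage: dict = field(default_factory=dict)

    def record(self, resource: str, amount: float) -> None:
        # Monitoring: accumulate consumption per resource.
        self.usage[resource] = self.usage.get(resource, 0.0) + amount

    def bill(self) -> float:
        # Reporting: the consumer pays only for what was consumed.
        return sum(self.rates[r] * used for r, used in self.usage.items())

meter = UsageMeter(rates={"vm_hours": 0.05, "gb_stored": 0.02})
meter.record("vm_hours", 100)   # 100 VM-hours
meter.record("gb_stored", 50)   # 50 GB stored for the period
print(round(meter.bill(), 2))   # 0.05*100 + 0.02*50 = 6.0
```

The same metering data also gives both provider and consumer the transparency the characteristic calls for, since each side can inspect the recorded usage.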

Actors in the NIST Cloud Computing Model:


1. Cloud Consumer:
The cloud consumer is the principal stakeholder for the cloud computing service,
representing a person or organization that maintains a business relationship with
and uses services from a cloud provider.
2. Cloud Provider:
The cloud provider is responsible for making a service available to interested
parties. The provider acquires and manages the computing infrastructure required
for providing the services, and their primary interaction is with the cloud
consumers.
3. Cloud Broker:
A cloud broker manages the use, performance, and delivery of cloud services and
negotiates relationships between cloud providers and cloud consumers. Brokers
can aggregate, integrate, and customize services, providing added value.
4. Cloud Auditor:
The cloud auditor is responsible for conducting independent assessments of cloud
services and ensuring they comply with pre-agreed policies and regulations
concerning performance, security, and other aspects. This role is critical in
maintaining trust in cloud services.
5. Cloud Carrier:
The cloud carrier acts as the intermediary that provides connectivity and transport
of cloud services between the provider and consumer. Carriers play a crucial role
in ensuring that data is delivered securely and efficiently across the network.

Service Orchestration:
Service orchestration in NIST Cloud Computing refers to the arrangement,
coordination, and management of computing resources to deliver cloud services to
consumers. The orchestration layer typically comprises:
1. Service Layer: The top layer where cloud providers interface with service
consumers, enabling access to various services such as SaaS, PaaS, and IaaS.
2. Resource Abstraction and Control Layer: This middle layer handles the
abstraction of physical resources and manages access control, resource
allocation, and monitoring.
3. Physical Resource Layer: The lowest layer contains all physical computing
resources, including hardware such as processors, storage devices, and network
infrastructure.
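The three orchestration layers above can be sketched as a request flowing from the service layer, through resource abstraction and control, down to physical resources. All class and method names here are invented for illustration; real cloud stacks are far more elaborate.

```python
# Hypothetical sketch of the NIST service-orchestration layers.

class PhysicalResourceLayer:
    """Lowest layer: the actual hardware capacity."""
    def __init__(self, total_cpus: int):
        self.free_cpus = total_cpus

class ResourceAbstractionLayer:
    """Middle layer: access control, allocation, monitoring."""
    def __init__(self, physical: PhysicalResourceLayer):
        self.physical = physical

    def allocate(self, cpus: int) -> bool:
        if cpus <= self.physical.free_cpus:   # admission control
            self.physical.free_cpus -= cpus
            return True
        return False

class ServiceLayer:
    """Top layer: the SaaS/PaaS/IaaS interface consumers see."""
    def __init__(self, abstraction: ResourceAbstractionLayer):
        self.abstraction = abstraction

    def provision_vm(self, cpus: int) -> str:
        ok = self.abstraction.allocate(cpus)
        return "vm-created" if ok else "insufficient-capacity"

stack = ServiceLayer(ResourceAbstractionLayer(PhysicalResourceLayer(total_cpus=8)))
print(stack.provision_vm(4))   # vm-created
print(stack.provision_vm(16))  # insufficient-capacity
```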

Deployment Models in NIST Cloud Computing:


NIST describes four primary deployment models:

1. Public Cloud:
Provides open access to cloud services over the internet, primarily aimed at
individual users or organizations seeking scalability and cost-efficiency. However,
it offers less control over data and security.
2. Private Cloud:
Restricted access cloud infrastructure meant for a single organization. It is either
managed internally or by a third-party provider and offers greater security and
privacy at the expense of reduced cost efficiency.
3. Community Cloud:
Shared cloud infrastructure among several organizations with common concerns,
such as security, compliance, or jurisdictional considerations. It can be managed
internally or by a third-party and offers a blend of the benefits of both public and
private clouds.
4. Hybrid Cloud:
A combination of two or more cloud deployment models (private, public, or
community), allowing data and applications to be shared between them. This
model offers more flexibility, optimizing resource use while ensuring critical
workloads are handled securely in a private cloud.
Cloud Service Models:
NIST defines three primary cloud service models, each providing different levels of control,
flexibility, and management:

1. Infrastructure as a Service (IaaS):


Provides virtualized hardware resources such as virtual machines, storage, and
networking. Consumers can deploy and run arbitrary software, including operating
systems and applications, giving them more control over the infrastructure but
requiring more management responsibilities. Examples include Amazon EC2,
Google Compute Engine, and Microsoft Azure Virtual Machines.
IaaS allows for the remote usage of virtual processors, memory, and storage
resources, which can be used to build any computing setup like virtual machines
or networks.
2. Platform as a Service (PaaS):
Offers a platform allowing customers to develop, run, and manage applications
without dealing with the underlying infrastructure. PaaS is ideal for developers
who want to focus on building and deploying software rather than managing
servers, storage, or networking.
PaaS includes the necessary infrastructure (IaaS) but adds layers like operating
systems, middleware, and runtime environments. It often includes tools for
application development and deployment, such as Google App Engine, Microsoft
Azure PaaS, and Heroku.
3. Software as a Service (SaaS):
Delivers software applications over the internet, eliminating the need for local
installation, maintenance, or management. Users can access software via a web
browser, making it easy to use and manage.
SaaS applications are hosted by a third-party provider and made available to
customers over the internet. Common examples include Google Workspace,
Microsoft Office 365, and Salesforce. SaaS enables easy scalability and reduces
the burden of software maintenance and updates on the end-user.
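One way to capture the difference between the three service models is the split of management responsibility between consumer and provider. The sketch below is illustrative only; the layer names are simplified, and real responsibility matrices are more detailed.

```python
# Simplified stack, top to bottom. The consumer manages everything
# down to (and including) the named boundary layer; the provider
# manages the rest. Layer names are an assumption for illustration.

LAYERS = ["application", "runtime", "operating_system",
          "virtualization", "hardware"]

CONSUMER_BOUNDARY = {"IaaS": "operating_system",  # consumer installs OS + up
                     "PaaS": "application",       # consumer deploys app only
                     "SaaS": None}                # provider manages everything

def consumer_managed(model: str) -> list:
    boundary = CONSUMER_BOUNDARY[model]
    if boundary is None:
        return []
    return LAYERS[:LAYERS.index(boundary) + 1]

print(consumer_managed("IaaS"))  # ['application', 'runtime', 'operating_system']
print(consumer_managed("PaaS"))  # ['application']
print(consumer_managed("SaaS"))  # []
```

Reading the output top to bottom mirrors the prose above: IaaS gives the most control (and the most management burden), SaaS the least.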

Interactions Between the Actors:


The NIST model emphasizes the interactions between different actors (e.g., consumers,
providers, brokers, auditors, and carriers) to ensure effective cloud service delivery:

1. Direct Interaction:
A cloud consumer may directly request services from a cloud provider. The
provider then delivers the requested services to the consumer through a secure
and reliable network.
The cloud provider is responsible for managing and provisioning the necessary
resources to meet the consumer’s needs.
2. Broker-Mediated Interaction:
Instead of directly contacting a cloud provider, a consumer can engage a cloud
broker. The broker integrates services from various providers, tailoring them to
meet the consumer’s requirements. The broker also manages the relationship and
ensures that the consumer’s needs are met efficiently.
This interaction provides flexibility and reduces the complexity of managing
multiple cloud services.
3. Auditor Interaction:
A cloud auditor interacts with the cloud provider, broker, and consumer to
independently assess the cloud services. The auditor ensures that the services
comply with agreed-upon policies, performance metrics, and security standards.
This interaction is critical for maintaining trust in the cloud service ecosystem and
ensuring compliance with legal and regulatory requirements.

2.] Cluster, Cloud, Grid - Difference


1. Definition:
   Cluster Computing: A set of connected computers working together as a single system, typically within a single location.
   Grid Computing: A decentralized network of computers working together to perform large-scale tasks across multiple locations.
   Cloud Computing: A model that provides on-demand network access to a shared pool of configurable computing resources, delivered as a service over the internet.

2. Control:
   Cluster Computing: Centralized task management via a cluster head.
   Grid Computing: Decentralized control with no single point of failure.
   Cloud Computing: Decentralized management with service orchestration handled by cloud providers.

3. Resource Type:
   Cluster Computing: Homogeneous nodes with similar hardware and software configurations.
   Grid Computing: Heterogeneous nodes with diverse configurations and capabilities.
   Cloud Computing: Virtualized resources, such as virtual machines, storage, and networks.

4. Geographical Distribution:
   Cluster Computing: Typically confined to a single geographical location or data center.
   Grid Computing: Distributed across multiple locations, possibly worldwide.
   Cloud Computing: Globally distributed, accessible from anywhere with internet access.

5. Scalability:
   Cluster Computing: Limited scalability, as adding more nodes requires centralized management.
   Grid Computing: High scalability due to the decentralized nature and loose coupling of nodes.
   Cloud Computing: Highly scalable and elastic, allowing dynamic allocation of resources based on demand.

6. Failure Tolerance:
   Cluster Computing: Moderate; dependent on the cluster head, which can be a single point of failure.
   Grid Computing: High; with no central control point, fault tolerance is better.
   Cloud Computing: High; redundancy and failover mechanisms are built into the cloud infrastructure.

7. Task Management:
   Cluster Computing: Centralized, with tasks distributed and managed by the cluster head.
   Grid Computing: Decentralized, with tasks managed by individual nodes in the grid.
   Cloud Computing: On-demand task management, with resources allocated dynamically by the cloud service.

8. Usage:
   Cluster Computing: Suitable for high-performance computing tasks that require low latency and high inter-node communication.
   Grid Computing: Ideal for large-scale, distributed tasks that can tolerate some latency.
   Cloud Computing: Best for scalable applications.

9. Dependency:
   Cluster Computing: High dependency on the central node for task distribution and resource management.
   Grid Computing: Low dependency on any single node, enhancing robustness.
   Cloud Computing: High dependency on internet connectivity and the cloud provider's infrastructure.

10. Examples:
    Cluster Computing: High-performance computing clusters in scientific research or financial modeling.
    Grid Computing: Large-scale scientific experiments like SETI@home or CERN's data analysis.
    Cloud Computing: Cloud services like Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure.

MODULE 2
1.] Full, Para, Hybrid - Virtualizations
Full Virtualization:
Full virtualization, often referred to as native virtualization, is a virtualization technique where
the hypervisor completely simulates the underlying hardware. This allows virtual machines
(VMs) to run on a hypervisor as if they were running directly on physical hardware. The
guest operating systems operate under the illusion that they are running on actual physical
resources, which means they remain completely unaware that they are running in a
virtualized environment.

In full virtualization, the hypervisor handles all the communication between the guest OS and
the physical hardware, ensuring that the guest OS is isolated from the physical resource
layers. This isolation provides flexibility since almost all available operating systems, such as
Windows, Linux, and others, can function as guest OS on the hypervisor without any
modification. Full virtualization is beneficial as it allows the execution of unmodified versions
of operating systems, thereby supporting a wide range of legacy software and applications.

Examples of full virtualization solutions include VMware's ESXi Server and Microsoft Virtual
Server. These products are well-known for their ability to virtualize x86 architectures, which
require binary translation to manage sensitive CPU instructions. In this method, the majority
of CPU instructions are executed directly on the hardware, but those that cannot be safely
executed are translated by the hypervisor, which does introduce some overhead. However,
this method ensures that operating systems that cannot be modified (such as Windows) can
run efficiently in a virtualized environment.
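The binary-translation idea described above can be sketched simply: most guest instructions pass straight through to the CPU, while the few sensitive ones are rewritten into safe emulated equivalents by the hypervisor. The instruction names and the translation table below are invented for illustration; real translators work on machine code, not mnemonics.

```python
# Toy sketch of binary translation in full virtualization.
# "cli" (disable interrupts) and "hlt" (halt CPU) stand in for
# sensitive instructions a guest must not execute directly.

SENSITIVE = {"cli": "vm_mask_interrupts", "hlt": "vm_yield_cpu"}

def translate_block(instructions):
    """Return the instruction stream the CPU will actually execute."""
    out = []
    for ins in instructions:
        if ins in SENSITIVE:
            out.append(SENSITIVE[ins])   # unsafe op -> emulated routine (overhead)
        else:
            out.append(ins)              # direct execution, no overhead
    return out

print(translate_block(["mov", "add", "cli", "mov", "hlt"]))
# ['mov', 'add', 'vm_mask_interrupts', 'mov', 'vm_yield_cpu']
```

Because only the sensitive subset is translated, the common case runs at near-native speed, which is why this technique is practical despite its overhead.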

Para-Virtualization:
Para-virtualization, or OS-assisted virtualization, differs from full virtualization in that the
guest operating systems are aware of the virtualization. In this technique, part of the
virtualization management responsibilities is transferred from the hypervisor to the guest OS.
To participate in this, guest operating systems must be specially modified—a process known
as porting. This modification is necessary because standard operating systems are not
equipped to handle the requirements of para-virtualization.

The key advantage of para-virtualization is its ability to reduce the overhead that typically
occurs with full virtualization by minimizing the need for binary translation. Instead of the
hypervisor handling all interactions with the hardware, as in full virtualization, the modified
guest OS can interact more directly with the hypervisor, leading to better performance.
However, the downside is that it requires the operating system's source code to be modified,
which is not feasible for proprietary systems like Windows.

Xen is one of the best-known examples of a para-virtualization hypervisor. Originally
developed as an open-source project at the University of Cambridge, Xen has evolved into a
popular solution for both server and desktop virtualization. However, because para-
virtualization requires changes to the guest OS, it is not suitable for all environments,
particularly where the operating system cannot be modified.

Hybrid Virtualization (Hardware-Assisted Virtualization):


Hybrid virtualization, also known as hardware-assisted virtualization, represents a middle
ground between full and para-virtualization. This technique relies on specific hardware
features built into the CPU to support virtualization. Both Intel (with its Intel VT technology)
and AMD (with AMD-V technology) have developed processors that include virtualization
support directly in the CPU. These processors allow some privileged CPU instructions from
the guest OS to be executed directly by the CPU, without needing the hypervisor to
intervene. This reduces the performance overhead associated with binary translation in full
virtualization and the requirement for OS modification in para-virtualization.

Hardware-assisted virtualization makes the hypervisor implementation simpler and more
maintainable, as the need for extensive software-based emulation is reduced. The
hypervisor can now handle non-sensitive instructions through direct execution, and sensitive
instructions are efficiently managed by the CPU's virtualization extensions. This results in
better overall performance while maintaining compatibility with a wide range of operating
systems.

This approach is highly portable, as it allows unmodified guest operating systems to run on
the hypervisor. It combines the benefits of full virtualization's compatibility with the efficiency
of para-virtualization. However, the success of hardware-assisted virtualization depends on
the availability of suitable hardware that supports these advanced features. Without the
appropriate hardware, this method cannot be used, which can limit its applicability in
environments with older or less capable systems.
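On Linux, whether a CPU offers these hardware extensions can be checked from the flags in /proc/cpuinfo: "vmx" indicates Intel VT-x and "svm" indicates AMD-V. The parser below is separated from the file read so it can be exercised on sample text; the sample flag line is made up.

```python
def hw_virt_flags(cpuinfo_text: str) -> set:
    """Return which hardware-virtualization flags appear in the text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}   # Intel VT-x / AMD-V

sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(hw_virt_flags(sample))  # {'vmx'}

# On a real Linux host one might run:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_flags(f.read()) or "no hardware virtualization support")
```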

2.] Case Study - VMware/Xen


Xen:
Xen is an open-source virtualization platform that primarily implements para-virtualization but
has also evolved to support full virtualization using hardware-assisted techniques.
Developed initially by researchers at the University of Cambridge, Xen has become one of
the most widely used platforms for both desktop and server virtualization. The platform is
designed to provide high-performance execution of guest operating systems by minimizing
the overhead typically associated with virtualization.

In Xen's architecture, the hypervisor, which runs at the highest privilege level (Ring 0),
controls access to the underlying hardware. The guest operating systems run within
domains, which are virtual machine instances that are managed by the Xen hypervisor. This
architecture leverages the x86 privilege model, where different security levels (rings) control
access to system resources. The hypervisor's role is to manage these domains and ensure
that the guest OS cannot access hardware directly, which prevents potential conflicts and
security issues.

Para-virtualization in Xen requires the guest operating systems to be modified to work with
the hypervisor. This modification involves altering the OS codebase to handle privileged
instructions that would normally require direct hardware access. As a result, not all operating
systems can be used in a Xen environment unless they are open-source or the necessary
changes can be made. For instance, open-source operating systems like Linux are fully
supported by Xen, while proprietary systems like Windows require hardware-assisted
virtualization to run without modification.

Xen's strength lies in its ability to support a wide range of use cases, from server
consolidation to cloud computing platforms. It is particularly well-suited for environments
where performance is critical, and the flexibility of modifying the OS is available. However, its
reliance on OS modifications can be a limitation in environments where such changes are
not possible, making it less versatile than full virtualization platforms like VMware.

VMware:
VMware is one of the leading providers of virtualization technology, known for its full
virtualization approach. Unlike Xen, which initially focused on para-virtualization, VMware's
technology is based on fully abstracting the underlying hardware and presenting it to the
guest operating systems as if they were running on physical hardware. This method allows
unmodified operating systems to run in a virtualized environment without any changes to
their code.

VMware supports both Type I and Type II hypervisors, depending on the environment. Type I
hypervisors, such as VMware ESXi, are installed directly on the hardware (bare-metal),
providing a robust and high-performance virtualization environment ideal for server
virtualization. Type II hypervisors, like VMware Workstation, are installed on top of an
existing operating system, making them more suitable for desktop virtualization.

The core of VMware's full virtualization technology lies in its use of binary translation for
handling sensitive CPU instructions. In a typical x86 architecture, some instructions behave
differently depending on the privilege level (ring) in which they are executed. Since the guest
OS typically operates in Ring 1 in a virtualized environment, while the hypervisor runs in Ring 0,
VMware's hypervisor uses binary translation to intercept and translate these sensitive
instructions into a form that can be safely executed. This process ensures that the guest OS
can run without modification, a significant advantage for operating systems like Windows,
where the source code is not available for alteration.

VMware also provides full virtualization of I/O devices, such as network controllers, USB
controllers, and storage devices. This capability allows the guest OS to interact with these
devices as if they were directly attached to the physical machine, ensuring compatibility and
performance. While the use of binary translation introduces some overhead, VMware
minimizes its impact by applying this translation only to a subset of instructions, allowing
most operations to be executed directly on the hardware.

The main advantage of VMware's approach is its compatibility with a wide range of operating
systems and hardware, making it a versatile solution for both enterprise and individual users.
The ability to run unmodified operating systems is particularly crucial for environments where
modifying the OS is not feasible. However, the overhead associated with binary translation,
while minimized, can still impact performance compared to native execution. Despite this,
VMware remains one of the most popular and widely adopted virtualization platforms,
particularly in enterprise environments where its robust feature set and compatibility are
highly valued.
3.] Hypervisor Architecture
A hypervisor, also known as a Virtual Machine Monitor (VMM), is a critical component in
virtualization that allows multiple virtual machines (VMs) to run on a single physical machine.
The hypervisor abstracts the physical hardware resources and provides virtualized
environments for each guest operating system (OS), enabling them to run independently.

Key Components of Hypervisor Architecture:


1. Dispatcher:
The dispatcher is responsible for rerouting instructions from the virtual machines to the
appropriate modules within the hypervisor. This ensures that each VM's operations are
managed correctly and efficiently.
2. Allocator:
The allocator manages the allocation of physical resources such as CPU, memory, and
storage to each VM. Whenever a VM attempts to execute an instruction that modifies
its allocated resources, the allocator determines how much of the physical resources
will be provided.
3. Interpreter:
The interpreter handles privileged instructions from the guest OS. These are critical
system-level instructions that usually require direct hardware access. In a virtualized
environment, the interpreter ensures these instructions are executed safely, maintaining
isolation between the VMs.
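The three modules above can be sketched as a single routing decision: the dispatcher sends resource-changing operations to the allocator, privileged operations to the interpreter, and lets everything else execute directly. The instruction categories and names below are invented for illustration.

```python
# Hypothetical sketch of the dispatcher/allocator/interpreter split.

class Hypervisor:
    RESOURCE_OPS = {"grow_memory", "attach_disk"}          # change a VM's share
    PRIVILEGED_OPS = {"disable_interrupts", "write_page_table"}

    def allocator(self, op):
        # Decides how much physical resource the VM actually gets.
        return f"allocator handled {op}"

    def interpreter(self, op):
        # Emulates the privileged op safely, preserving VM isolation.
        return f"interpreter emulated {op}"

    def dispatcher(self, op):
        # Reroutes each VM instruction to the appropriate module.
        if op in self.RESOURCE_OPS:
            return self.allocator(op)
        if op in self.PRIVILEGED_OPS:
            return self.interpreter(op)
        return f"executed {op} directly"

hv = Hypervisor()
print(hv.dispatcher("grow_memory"))         # allocator handled grow_memory
print(hv.dispatcher("disable_interrupts"))  # interpreter emulated disable_interrupts
print(hv.dispatcher("add"))                 # executed add directly
```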

Types of Hypervisors:
Type 1 Hypervisors (Bare Metal):
These hypervisors are installed directly on the physical hardware, without a host
operating system. They offer better performance as they have direct access to
hardware resources. Examples include VMware ESXi and Microsoft Hyper-V.
Type 2 Hypervisors (Hosted):
These hypervisors run on top of an existing operating system. The host OS provides an
additional layer between the hypervisor and the hardware, which can make installation
and configuration easier, but may result in some performance overhead. Examples
include VMware Workstation and VirtualBox.

Hypervisor’s Role in Virtualization:


The hypervisor is crucial for managing the execution of VMs, ensuring that they are isolated
from one another, and providing each VM with the necessary resources from the physical
hardware. This architecture enables efficient resource utilization and allows multiple
operating systems to run on a single hardware platform.

MODULE 3
1.] Eucalyptus
Eucalyptus stands for Elastic Utility Computing Architecture for Linking Your
Programs To Useful Systems. It is an open-source Infrastructure-as-a-Service (IaaS)
platform that enables the creation of private and hybrid cloud environments. Eucalyptus is
designed to provide cloud computing capabilities by being installed over existing distributed
computing resources. It originated as a research project at the University of California, Santa
Barbara, and in 2009, Eucalyptus Systems was formed to support its commercialization.

Key Features and Capabilities:


1. Open-Source and Linux-Based:
Eucalyptus is a Linux-based platform, which means it leverages the stability and
security of the Linux operating system. Being open-source, it allows users to modify,
adapt, and enhance the platform to meet specific needs. This flexibility makes
Eucalyptus a popular choice for organizations looking to build private clouds using
existing IT infrastructure.
2. Hybrid Cloud Support:
One of the standout features of Eucalyptus is its ability to integrate with Amazon Web
Services (AWS). In 2012, Eucalyptus Systems entered into an agreement with Amazon,
allowing Eucalyptus to be compatible with AWS. This integration permits seamless
transfer of instances between a Eucalyptus private cloud and AWS public cloud,
enabling the creation of hybrid cloud environments. This capability allows organizations
to extend their on-premises infrastructure to the cloud, providing greater flexibility and
scalability.
3. Storage Compatibility:
Eucalyptus offers a storage cloud API that emulates Amazon’s Simple Storage Service
(S3). This compatibility ensures that applications and workloads designed for AWS can
easily be migrated or extended to a Eucalyptus-powered private cloud, without
significant changes to the codebase or architecture.

Goals of Eucalyptus:
Eucalyptus aims to enhance the understanding and adoption of cloud computing by
providing a robust platform for the development, testing, and deployment of cloud
applications. Some specific goals include:

Homogenizing Local IT Environment: Eucalyptus seeks to harmonize local IT
environments with public clouds, allowing organizations to create a seamless hybrid
cloud setup.
Providing a Development Platform: It serves as a platform for developing,
debugging, and previewing applications before they are deployed in public clouds.
Supporting the Open Source Community: Eucalyptus is designed to provide a basic
software development platform that can be freely used and improved by the open-
source community.

Key Benefits:
1. Organizational Agility:
Eucalyptus allows organizations to reduce delays in resource provisioning, speeding up
time-to-market with self-service resource allocation. This agility is crucial for businesses
that need to respond quickly to changing market conditions.
2. Operational Efficiency:
Eucalyptus integrates well with the existing AWS ecosystem and management tools,
enabling efficient operation of private clouds with familiar tools and interfaces.
3. Infrastructure Flexibility:
Organizations can build private clouds using their existing IT infrastructure, reducing
the need for additional investments in new hardware.
4. Dynamic Scalability:
Eucalyptus supports elastic scaling, allowing resources to be scaled up or down based
on demand. This feature is particularly useful in environments with variable workloads.
5. Regulatory Compliance:
Eucalyptus provides precise control over cloud resources and performance on an
organization's own hardware, which is essential for maintaining compliance with
government and industry regulations.

Components of Eucalyptus:
1. Cloud Controller (CLC):
The CLC is the administrative interface for managing the cloud. It handles user API
requests, authentication, accounting, reporting, and overall cloud management. Only
one CLC exists per cloud.
2. Walrus:
Walrus provides persistent storage to all virtual machines within the Eucalyptus cloud. It
is similar to Amazon’s S3, and there are no restrictions on the types of data that can be
stored.
3. Cluster Controller (CC):
The CC manages a specific cluster within the Eucalyptus cloud, communicating with
the Storage Controller and Node Controller. It acts as the front end for the cluster.
4. Storage Controller (SC):
The SC manages Eucalyptus block volumes and snapshots, similar to AWS's Elastic
Block Store (EBS). It interacts with the Cluster Controller and Node Controller to
manage storage resources.
5. Node Controller (NC):
The NC hosts the virtual machine instances and manages the virtual network
endpoints. It is responsible for the execution of virtual machines on the physical
hardware.
6. VMware Broker:
An optional component that provides an AWS-compatible interface for VMware
environments, running on the Cluster Controller.
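The component list above can be condensed into data, pairing each component with its role and, where the text names one, its AWS analogue. This table is just a restatement of the list for quick reference; the short role strings are paraphrases.

```python
# Eucalyptus components with their roles and AWS analogues
# (analogues only where the description above names one).

EUCALYPTUS_COMPONENTS = {
    "CLC":    {"role": "cloud-wide administration, API, accounting", "aws_analogue": None},
    "Walrus": {"role": "persistent object storage",                  "aws_analogue": "S3"},
    "CC":     {"role": "front end for one cluster",                  "aws_analogue": None},
    "SC":     {"role": "block volumes and snapshots",                "aws_analogue": "EBS"},
    "NC":     {"role": "hosts VM instances on physical nodes",       "aws_analogue": None},
}

aws_compatible = [n for n, c in EUCALYPTUS_COMPONENTS.items() if c["aws_analogue"]]
print(aws_compatible)  # ['Walrus', 'SC']
```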

Conclusion:
Eucalyptus is a powerful and flexible IaaS platform that enables the creation of private and
hybrid clouds, offering a range of benefits from organizational agility to regulatory
compliance. Its compatibility with AWS and its comprehensive set of components make it a
versatile solution for enterprises looking to harness the power of cloud computing on their
own terms.
2.] OpenStack/Apache CloudStack
OpenStack
OpenStack is a free and open-source Infrastructure-as-a-Service (IaaS) solution designed
to manage and control large pools of compute, storage, and networking resources
throughout a data center. It was initiated in July 2010 as a joint project between Rackspace,
a U.S.-based IaaS cloud service provider, and NASA, which contributed parts of its Nebula
Cloud Platform technology to the project.

The main goal of OpenStack is to provide a ubiquitous open-source cloud computing
platform that can be used to set up both public and private clouds. The project has since
grown significantly, with contributions from over 200 companies, including major technology
firms such as AT&T, AMD, Dell, Cisco, HP, IBM, Oracle, and Red Hat.

OpenStack is now governed by the OpenStack Foundation, a non-profit organization
founded in 2012 to promote and oversee the development and adoption of OpenStack. All
the code associated with OpenStack is released under the Apache 2.0 license, making it
freely available for use and modification by anyone.

OpenStack’s modular architecture allows users to deploy different services independently,
depending on their needs. Some of the core services include Nova for computing, Swift for
object storage, Neutron for networking, and Horizon for the dashboard interface. This
flexibility makes OpenStack a powerful tool for building scalable and efficient cloud
environments.

Apache CloudStack
Apache CloudStack is another open-source IaaS cloud computing platform, originally
developed by Cloud.com, a software company based in California. In 2011, CloudStack was
acquired by Citrix Systems, which later handed over the project to the Apache Software
Foundation. Following this transition, CloudStack made its first stable release as part of the
Apache project.

CloudStack provides a comprehensive solution for deploying and managing large networks
of virtual machines, as a highly scalable IaaS platform. It supports a wide range of
hypervisors, including VMware, KVM, and XenServer, making it versatile for various cloud
environments.

One of the unique features of CloudStack is its support for Amazon Web Services (AWS)
APIs, allowing for the creation of hybrid clouds that can integrate with existing AWS
infrastructures. This compatibility enables organizations to extend their cloud capabilities
while leveraging familiar AWS tools and services.

The project is actively maintained and has a strong community backing, with the software
being distributed under the Apache License 2.0. This ensures that CloudStack remains a
reliable and flexible choice for building both public and private cloud environments.
