Cloud Computing QB
Unit-1 INTRODUCTION
Part-A
1) Define Cloud Computing. [CO1, k1]
Cloud Computing is defined as storing and accessing of data and computing services over the internet. It
doesn’t store any data on your personal computer. It is the on-demand availability of computer services like
servers, data storage, networking, databases, etc. The main purpose of cloud computing is to give access to
data centres to many users. Users can also access data from a remote server.
Examples of Cloud Computing Services: AWS, Azure
There are many characteristics of Cloud Computing; here are a few of them:
1. On-demand self-services
2. Broad network access
3. Rapid elasticity
4. Resource pooling
5. Measured service
Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU, memory, and storage
resources to adapt to the changing demands of an organization. Cloud Elasticity can be automatic, with no
need to perform capacity planning in advance, or it can be a manual process in which the organization is
notified that it is running low on resources and can then decide to add or reduce capacity when needed.
Monitoring tools offered by the cloud provider dynamically adjust the resources allocated to an organization
without impacting existing cloud-based operations.
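As a rough illustration of manual versus automatic elasticity, the sketch below uses the AWS Auto Scaling API through boto3; the group name "web-asg", the capacity values, and the policy name are assumptions for this example, not values from the question.

```python
# Minimal sketch of elastic capacity adjustment, assuming an AWS Auto Scaling
# group named "web-asg" already exists and boto3 credentials are configured.
import boto3

autoscaling = boto3.client("autoscaling")

# Manual elasticity: an operator raises the desired capacity after being
# notified that the group is running low on resources.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=4,
    HonorCooldown=True,
)

# Automatic elasticity: a simple scaling policy adds one instance each time
# the associated monitoring alarm fires.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-by-one",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)
```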
5) What is on-demand provisioning? [CO1, k1]
On-demand computing (ODC) is a delivery model in which computing resources are made available to the
user as needed. The resources may be maintained within the user's enterprise or made available by a cloud
service provider.
Cloud computing offers many benefits — so much so that more and more businesses are migrating their
infrastructures and data to cloud services and platforms. Some startups rely entirely on cloud computing for
benefits like:
1. Scalability
2. Cost
3. Speed
4. Productivity
5. Performance
6. Security
7. Disaster recovery
8) Enlist any two advantages of Distributed Systems. [CO1, k2]
➢ The ability of fault tolerance
A distributed system can tolerate system or software faults efficiently. It helps when a problem arises in one
area by allowing a continuous workflow. The distributed system uses multiple devices with the same
capabilities and programmed backup procedures.
➢ Autonomy
As we know, data is shared in a distributed system, and because of this, each site or system can retain a
degree of control over the data stored locally.
1) Illustrate the evolution of Distributed Computing to grid and Cloud Computing [CO1, k2]
Cloud computing is all about renting computing services. The idea first emerged in the 1950s. In making
cloud computing what it is today, five technologies played a vital role: distributed systems and their
peripherals, virtualization, Web 2.0, service orientation, and utility computing.
• Distributed Systems:
It is a composition of multiple independent systems but all of them are depicted as a single
entity to the users. The purpose of distributed systems is to share resources and also use them
effectively and efficiently.
• Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing
machines. These are responsible for handling large data such as massive input-output operations.
Even today these are used for bulk processing tasks such as online transactions etc.
• Cluster computing:
In the 1980s, cluster computing came as an alternative to mainframe computing. Each machine in
the cluster was connected to each other by a network with high bandwidth. These were way
cheaper than those mainframe systems. These were equally capable of high computations.
• Grid computing:
In the 1990s, the concept of grid computing was introduced. It means that different systems were
placed at entirely different geographical locations and these all were connected via the internet.
These systems belonged to different organizations and thus the grid consisted of heterogeneous
nodes.
• Virtualization:
It was introduced nearly 40 years back. It refers to the process of creating a virtual layer over the
hardware which allows the user to run multiple instances simultaneously on the hardware.
• Web 2.0:
It is the interface through which the cloud computing services interact with the clients. It is
because of Web 2.0 that we have interactive and dynamic web pages. It also increases flexibility
among web pages. Popular examples of web 2.0 include Google Maps, Facebook, Twitter, etc.
• Service orientation:
It acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable
applications. Two important concepts were introduced in this computing model. These were
Quality of Service (QoS) which also includes the SLA (Service Level Agreement) and Software
as a Service (SaaS).
• Utility computing:
It is a computing model that defines service provisioning techniques for services such as
compute services along with other major services such as storage, infrastructure, etc which are
provisioned on a pay-per-use basis.
1. Resources Pooling
Resource pooling is one of the essential features of cloud computing. Resource pooling means that a cloud
service provider can share resources among multiple clients, providing each with a different set of services
according to their needs.
2. On-Demand Self-Service
It is one of the important and essential features of cloud computing. This enables the client to continuously
monitor server uptime, capabilities and allocated network storage.
3. Easy Maintenance
This is one of the best cloud features. Servers are easily maintained, and downtime is minimal or sometimes
zero. Cloud computing powered resources often undergo several updates to optimize their capabilities and
potential.
4. Scalability and Rapid Elasticity
A key feature and advantage of cloud computing is its rapid scalability. This cloud feature enables
cost-effective handling of workloads that require a large number of servers but only for a short period. Many
customers have workloads that can be run very cost-effectively due to the rapid scalability of cloud
computing.
5. Economical
This cloud feature helps in reducing the IT expenditure of the organizations. In cloud computing, clients need
to pay the administration for the space used by them.
6. Measuring and Reporting Service
Reporting Services is one of the many cloud features that make it the best choice for organizations. The
measurement and reporting service is helpful for both cloud providers and their customers.
7. Security
Data security is one of the best features of cloud computing. Cloud services make a copy of the stored data to
prevent any kind of data loss. If one server loses data by any chance, the copied version is restored from the
other server.
8. Automation
Automation is an essential feature of cloud computing. The ability of cloud computing to automatically
install, configure and maintain a cloud service is known as automation in cloud computing. In simple words,
it is the process of making the most of the technology and minimizing the manual effort.
9. Resilience
Resilience in cloud computing means the ability of a service to quickly recover from any disruption. The
resilience of a cloud is measured by how fast its servers, databases and network systems restart and recover
from any loss or damage. Availability is another key feature of cloud computing. Since cloud services can be
accessed remotely, there are no geographic restrictions or limits on the use of cloud resources.
A big part of the cloud's characteristics is its ubiquity. The client can access cloud data or transfer data to the
cloud from any location with a device and internet connection. These capabilities are available everywhere in
the organization and are achieved with the help of internet.
Benefits of Cloud Services
Cloud services have many benefits, so let's take a closer look at some of the most important ones.
Flexibility
Cloud computing lets users access files using web-enabled devices such as smartphones and laptops. The
ability to simultaneously share documents and other files over the Internet can facilitate collaboration
between employees.
Users of cloud systems can work from any location as long as they have an Internet connection. Most of the
major cloud services offer mobile applications, so there are no restrictions on what type of device you're
using.
Cost savings
Using web-based services eliminates the need for large expenditures on implementing and maintaining the
hardware. Cloud services work on a pay-as-you-go subscription model.
Automatic updates
With cloud computing, your servers are off-premises and are the responsibility of the service provider.
Providers update systems automatically, including security updates.
Disaster recovery
Cloud-based backup and recovery ensure that your data is secure. Implementing robust disaster recovery was
once a problem for small businesses, but cloud solutions now provide these organizations with cost-effective
solutions and the expertise they need.
IaaS
IaaS is a cloud computing model in which computing resources are hosted in a public cloud, private cloud, or
hybrid cloud. Businesses can use the IaaS model to shift some or all of their use of on-premises or collocated
data centre infrastructure to the cloud, where the infrastructure is owned and managed by a cloud provider.
These cost-effective infrastructure elements can include compute, network, and storage hardware as well as
other components and software.
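As a hedged sketch of how IaaS compute is provisioned and released on a pay-per-use basis, the snippet below launches and then terminates a single virtual server with boto3; the AMI ID and key-pair name are placeholders, not values from this document.

```python
# Illustrative sketch of provisioning IaaS compute with boto3.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # hypothetical key pair
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Release the infrastructure when it is no longer needed (pay-per-use).
ec2.terminate_instances(InstanceIds=[instance_id])
```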
IaaS Examples
• Test and development: With IaaS, DevOps teams can set up and take down test and development
environments quickly and at low cost, so they can get new applications to market faster.
• Traditional applications: IaaS supports both cloud-native applications and traditional enterprise
applications, including enterprise resource planning (ERP) and business analytics applications.
• Website hosting and apps: Many businesses run their websites on IaaS to optimize costs. IaaS also
supports web and mobile apps, which can be quickly deployed and scaled.
• Storage, backup, and recovery: Storing and backing up data on-premises, as well as planning for
and recovering from disasters, requires a great deal of time and expertise. Moving infrastructure to the
cloud helps businesses reduce costs and frees them up to focus on other tasks.
• High performance computing: With its pay-as-you-go model, IaaS makes high performance
computing (HPC) and other data-intensive, project-oriented tasks more affordable.
PaaS
The term platform as a service (PaaS) refers to a cloud computing model where a third party delivers
hardware and software tools to users over the internet.
SaaS
SaaS is a cloud-based software delivery model in which the cloud provider develops and maintains cloud
application software, provides automatic software updates, and makes software available to its customers
over the internet on a pay-as-you-go basis.
SISD - Single Instruction Single Data
• It is a uni-processor machine capable of executing a single instruction which operates on a single data
stream.
• Machine instructions are processed sequentially.
• All instructions and data to be processed have to be stored in primary memory.
• Performance is limited by the rate at which the computer can transfer information internally.
SIMD - Single Instruction Multiple Data
• It is a multiprocessor machine capable of executing the same instruction on all the CPUs but
operating on different data streams.
• This model is well suited for scientific computing which involves lots of vector and matrix operations.
MISD - Multiple Instructions Single Data
Parallel Computing:
In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously.
Memory in parallel systems can either be shared or distributed. Parallel computing provides concurrency
and saves time and money.
Distributed Computing:
In distributed computing we have multiple autonomous computers which appear to the user as a single
system. In distributed systems there is no shared memory and computers communicate with each other
through message passing. In distributed computing a single task is divided among different computers.
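A minimal sketch of the distinction using only the Python standard library: the multiprocessing pool runs tasks in parallel on one machine with several processors, while a distributed system would instead pass messages between autonomous computers over a network (this example is illustrative, not taken from the text).

```python
# Parallel computing on one machine: several worker processes share the task.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # 4 workers on the same machine
        results = pool.map(square, range(10))  # task divided among processors
    print(results)

    # In distributed computing the same task would be split across autonomous
    # computers that exchange intermediate results by message passing
    # (e.g., sockets or a message queue) rather than through shared memory.
```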
Parallel Processing
• The terms parallel computing and distributed computing are used interchangeably.
• It implies a tightly coupled system.
• It is characterised by homogeneity of components (uniform structure).
• Multiple processors share the same physical memory.
• Save time and money: More resources at a task will shorten its time for completion, with potential
cost savings.
• Provide concurrency: Single computing resources can only do one task at a time.
• Serial computing limits: Transmission speeds depend directly upon hardware.
Cloud computing refers to a technology where software and hardware services are delivered over the internet
via a network of various remote servers. The servers store, manage, and process data to enable users to upgrade
or expand their current infrastructure.
Challenges
1. Data Security and Privacy
The biggest concern with cloud computing is data security and privacy. As organizations adopt the cloud on a
global scale, the risks have become graver than ever, with lots of consumer and business data available for
hackers to breach.
2. Compliance Risks
Compliance rules are getting more stringent due to increased cyberattacks and data privacy issues.
Regulations like HIPAA, GDPR, etc., require organizations to comply with applicable state or federal
rules and to maintain data security and privacy for their business and customers.
4. Cloud Migration
Cloud migration means moving your data, services, applications, systems, and other information or assets
from on-premises (servers or desktops) to the cloud. This process enables computing capabilities to take place
on the cloud infrastructure instead of on-premise devices.
5. Incompatibility
While moving your workload to the cloud from on-premises, incompatibility issues may arise between the
cloud services and on-premises infrastructure.
Unit-2
For example, multiple business processes in an organization require the user authentication functionality.
Instead of rewriting the authentication code for all business processes, you can create a single authentication
service and reuse it for all applications.
SOA is an architectural style for building software applications that use services available in a network such
as the web. The applications built using SOA are mostly web-based applications that use the web architecture
defined by the World Wide Web Consortium (W3C). These web applications are often distributed over
networks and aim to make services interoperable, extensible, and effective.
3) “Although virtualization is widely accepted today; it does have its limits”. Comment on the statement.
[CO2,K5]
Yes, because not every application or server is going to work within an environment of virtualization. That
means an individual or corporation may require a hybrid system to function properly. This still saves time
and money in the long run, but since not every vendor supports virtualization and some may stop supporting
it after initially starting it, there is always a level of uncertainty when fully implementing this type of system
• It is also called a “hypervisor”; it is one of many hardware virtualization techniques that allow
multiple operating systems, termed guests, to run concurrently on a host computer.
• The hypervisor presents to the guest operating systems a virtual operating platform and manages the
execution of the guest operating systems.
The key challenges of cloud computing are security, integration, adaptation, agility, and QoS aspects like
performance, latency, and availability. These challenges can be addressed with an SOA-based architecture
using the concepts of service arbitrage and service aggregation. Because of SOA, cloud computing gains
many advantages, such as:
➢ Service Reusability
As we know, large amounts of compute, storage and networking resources are needed to build a
cluster, grid or cloud solution. These resources need to be aggregated at one place to offer a single system
image. Therefore, the concept of virtualization comes into the picture, where resources can be aggregated
together to fulfil requests for resource provisioning with rapid speed as a single system image.
Virtualization is technology that you can use to create virtual representations of servers, storage, networks,
and other physical machines. Virtual software mimics the functions of physical hardware to run multiple
virtual machines simultaneously on a single physical machine. Businesses use virtualization to use their
hardware resources efficiently and get greater returns from their investment. It also powers cloud computing
services that help organizations manage infrastructure more efficiently.
REST stands for Representational State Transfer and API stands for Application Program Interface. REST
is a software architectural style that defines the set of rules to be used for creating web services. Web
services which follow the REST architectural style are known as RESTful web services.
A Restful system consists of a:
• client who requests for the resources.
• server who has the resources.
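A small illustrative sketch of a RESTful client interacting with server-side resources; the URL and JSON fields are hypothetical, not a real service.

```python
# Minimal sketch of a REST client requesting and creating resources over HTTP.
import requests

base = "https://api.example.com/books"   # hypothetical RESTful endpoint

# Client requests an existing resource from the server (GET).
resp = requests.get(f"{base}/42")
print(resp.status_code, resp.json())

# Client creates a new resource on the server (POST).
new_book = {"title": "Cloud Computing", "author": "Unknown"}
resp = requests.post(base, json=new_book)
print(resp.status_code)
```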
Part-B
Hardware virtualization can be implemented through full virtualization and host-based virtualization. The
hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization
operations.
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like CPU,
memory, disk and network interfaces. The hypervisor software sits directly between the physical hardware
and its OS. This virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor
provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can
assume a micro-kernel architecture like the Microsoft Hyper-V. Or it can assume a monolithic hypervisor
architecture like the VMware ESX for server virtualization.
Depending on implementation technologies, hardware virtualization can be classified into two categories: full
virtualization and host-based virtualization. Full virtualization does not need to modify the host OS. It relies
on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions.
The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system,
both a host OS and a guest OS are used. A virtualization software layer is built between the host OS and guest
OS. These two classes of VM architecture are introduced next.
This approach was implemented by VMware and many other software companies. As shown in Figure 3.6,
VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and
identifies the privileged, control- and behaviour-sensitive instructions. When these instructions are identified,
they are trapped into the VMM, which emulates the behaviour of these instructions. The method used in this
emulation is called binary translation. Therefore, full virtualization combines binary translation and direct
execution. The guest OS is completely decoupled from the underlying hardware. Consequently, the guest OS
is unaware that it is being virtualized.
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS is still
responsible for managing the hardware. The guest OSes are installed and run on top of the virtualization layer.
Dedicated applications may run on the VMs. Certainly, some other applications can also run with the host OS
directly.
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and the OS.
According to the x86 ring definition, the virtualization layer should also be installed at Ring 0. Different
instructions at Ring 0 may cause some problems. In Figure 3.8, we show that para-virtualization replaces
nonvirtualizable instructions with hypercalls that communicate directly with the hypervisor or VMM.
However, when the guest OS kernel is modified for virtualization, it can no longer run on the hardware directly.
KVM is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory management
and scheduling activities are carried out by the existing Linux kernel. The KVM does the rest, which makes it
simpler than the hypervisor that controls the entire machine. KVM is a hardware-assisted para-virtualization
tool, which improves performance and supports unmodified guest OSes such as Windows, Linux, Solaris, and
other UNIX variants.
Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive instructions at
runtime, para-virtualization handles these instructions at compile time. The guest OS kernel is modified to
replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such
a para-virtualization architecture.
Kernel-based Virtual Machine (KVM) is a software feature that you can install on physical Linux machines
to create virtual machines. A virtual machine is a software application that acts as an independent computer
within another physical computer.
High performance
KVM is engineered to manage high-demanding applications seamlessly. All guest operating systems inherit
the high performance of the host operating system—Linux. The KVM hypervisor also allows virtualization to
be performed as close as possible to the server hardware, which further reduces process latency.
Security
Virtual machines running on KVM enjoy security features native to the Linux operating system, including
Security-Enhanced Linux (SELinux). This ensures that all virtual environments strictly adhere to their
respective security boundaries to strengthen data privacy and governance.
Stability
KVM has been widely used in business applications for more than a decade. It enjoys excellent support from
a thriving open-source community. The source code that powers KVM is mature and provides a stable
foundation for enterprise applications.
Cost efficiency
KVM is free and open source, which means businesses do not have to pay additional licensing fees to host
virtual machines.
Flexibility
KVM provides businesses many options during installations, as it works with various hardware setups. Server
administrators can efficiently allocate additional CPU, storage, or memory to a virtual machine with KVM.
KVM also supports thin provisioning, which only provides the resources to the virtual machine when
needed.
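As a hedged illustration of working with KVM guests programmatically, the sketch below lists the virtual machines on a host using the libvirt Python bindings; it assumes a local KVM/QEMU hypervisor and the libvirt-python package, neither of which is mentioned in the text.

```python
# Sketch: list KVM guests and their state via the libvirt Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")     # connect to the local hypervisor
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(dom.name(), state)
conn.close()
```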
Virtualization helps us use one server to perform the computing tasks of all three servers and run all the
applications, each conforming to its own operating environment, efficiently. In simple words, all the physical
servers with different operating systems and applications run on one physical server, i.e., one server runs
three virtual machines simultaneously.
Virtual Machine
Virtualization is achieved by virtual machines. A virtual machine mimics the functions of a physical machine
using virtual hardware devices. The virtual machine (software) replaces the functionality of a physical server
that would traditionally be used.
Virtualization Types
Virtualization types are as follows:
1. Application Virtualization
2. Network Virtualisation
3. Storage Virtualisation
4. Desktop Virtualisation
5. Server Virtualisation
Virtualization is performed with the help of a hypervisor or VMM. Hypervisors are of two types - Type 1 and
Type 2
Type 1 Hypervisor runs directly on the host with simple programming. This doesn’t require an individual
operating system to compute. It is called bare metal or native. It lies between the OS and hardware.
Type 2 Hypervisor functions on top of the OS. It is called a hosted hypervisor. This implies that there will be
no direct point of access to the hardware. The VMs running in this type are managed by the Virtual Machine
Monitor (VMM).
It is located at the microprocessor and microcontroller levels. It includes the basic sets of instructions. In the
physical system, we write some code logic and emulate it as software. The host machine has a CPU and other
devices. To communicate with the machine, instructions are required. The native-level instruction is
converted to a higher level.
To convert a lower level to a higher level, certain processing power and computing cycles are required. The
layer finds the similarities between the host OS and guest OS (suitable for x86 architecture) and uses these
similarities to reduce the computation cycles.
Operating System Level
It keeps track of the libraries and data structures used to avoid redundancy. It also maintains the prerequisites
of the host machine.
Library Level
It is the programming API used by the application level for the execution of some code. At this level, the API
can be called the ABI (Application Binary Interface). It is automated to convert programmatic logic to the
binary format, which cuts down the emulation process.
Application Level
The code is written and executed at this level only after the successful completion of all other four levels.
The Internet is the worldwide connectivity of hundreds of thousands of computers of various types that
belong to multiple networks. On the World Wide Web, a web service is a standardized method for
propagating messages between client and server applications. A web service is a software module that is
intended to carry out a specific set of functions. Web services in cloud computing can be found and invoked
over the network. The web service would be able to deliver functionality to the client that invoked the web
service.
XML and HTTP form the most fundamental web services platform. The following components are used by all
typical web services:
SOAP stands for “Simple Object Access Protocol.” It is a transport-independent messaging protocol. SOAP
is built on sending XML data in the form of SOAP Messages. A document known as an XML document is
attached to each message. Only the structure of the XML document, not the content, follows a pattern. The
best thing about Web services and SOAP is that everything is sent through HTTP, the standard web
protocol.
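A brief sketch of a SOAP message sent over HTTP from Python; the endpoint, namespace, and GetPrice operation are invented for illustration only and are not part of any real service described here.

```python
# Sketch: post an XML SOAP envelope to a (hypothetical) web service over HTTP.
import requests

soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/prices">
      <ItemName>Mobile Phone</ItemName>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/priceservice",            # hypothetical endpoint
    data=soap_envelope,
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(response.status_code, response.text)
```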
UDDI is a standard for specifying, publishing and discovering a service provider’s online services. It
provides a specification that aids in the hosting of data via web services.
If a web service can’t be found, it can’t be used. The client invoking the web service should be aware of the
location of the web service. Second, the client application must understand what the web service does in
order to invoke the correct web service.
Implementation:
These requests are made using remote procedure calls. Calls to methods hosted by the relevant
web service are known as Remote Procedure Calls (RPC). Example: Flipkart offers a web service that
displays prices for items offered on Flipkart.com. The front end or presentation layer can be written in .Net
or Java, and either programming language can communicate with the web service.
The data that is exchanged between the client and the server, which is XML, is the most important part of a
web service design. XML (Extensible markup language) is a simple intermediate language that is understood
by various programming languages. It is a counterpart to HTML. As a result, when programs communicate
with one another, they do so using XML. This creates a common platform for applications written in different
programming languages to communicate with one another.
6) What are the types of clusters and explain about virtual clusters and Resource Management? [CO2,
K1]
When a traditional VM is initialized, the administrator needs to manually write configuration information or
specify the configuration sources. When more VMs join a network, an inefficient configuration always causes problems
with overloading or underutilization. Amazon’s Elastic Compute Cloud (EC2) is a good example of a web
service that provides elastic computing power in a cloud. EC2 permits customers to create VMs and to manage
user accounts over the time of their use. Most virtualization platforms, including XenServer and VMware ESX
Server, support a bridging mode which allows all domains to appear on the network as individual hosts. By
using this mode, VMs can communicate with one another freely through the virtual network interface card and
configure the network automatically.
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The
VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks.
Figure 3.18 illustrates the concepts of virtual clusters and physical clusters. Each virtual cluster is formed with
physical machines or a VM hosted by multiple physical clusters. The virtual cluster boundaries are shown as
distinct boundaries.
1.1 Fast Deployment and Effective Scheduling
The system should have the capability of fast deployment. Here, deployment means two things: to construct
and distribute software stacks (OS, libraries, applications) to a physical node inside clusters as fast as possible,
and to quickly switch runtime environments from one user’s virtual cluster to another user’s virtual cluster.
The template VM can be distributed to several physical hosts in the cluster to customize the VMs. In addition,
existing software packages reduce the time for customization as well as switching virtual environments.
In a cluster built with mixed nodes of host and guest systems, the normal method of operation is to run
everything on the physical machine. When a VM fails, its role could be replaced by another VM on a different
node, as long as they both run with the same guest OS. In other words, a physical node can fail over to a VM
on another host.
3. Migration of Memory, Files, and Network Resources
Since clusters have a high initial cost of ownership, including space, power conditioning, and cooling
equipment, leasing or sharing access to a common cluster is an attractive solution when demands vary over
time.
This is one of the most important aspects of VM migration. Moving the memory instance of a VM from one
physical host to another can be approached in any number of ways. But traditionally, the concepts behind the
techniques tend to share common implementation paradigms. The techniques employed for this purpose depend
upon the characteristics of application/workloads supported by the guest OS.
To support VM migration, a system must provide each VM with a consistent, location-independent view of the
file system that is available on all hosts. A simple way to achieve this is to provide each VM with its own virtual
disk which the file system is mapped to and transport the contents of this virtual disk along with the other states
of the VM. However, due to the current trend of high-capacity disks, migration of the contents of an entire disk
over a network is not a viable solution. Another way is to have a global file system across all machines where
a VM could be located. This way removes the need to copy files from one machine to another because all files
are network-accessible.
A migrating VM should maintain all open network connections without relying on forwarding mechanisms on
the original host or on support from mobility or redirection mechanisms. To enable remote systems to locate
and communicate with a VM, each VM must be assigned a virtual IP address known to other entities. This
address can be distinct from the IP address of the host machine where the VM is currently located. Each VM
can also have its own distinct virtual MAC address. The VMM maintains a mapping of the virtual IP and MAC
addresses to their corresponding VMs. In general, a migrating VM includes all the protocol states and carries
its IP address with it.
In Section 3.2.1, we studied Xen as a VMM or hypervisor, which allows multiple commodity OSes to share
x86 hardware in a safe and orderly fashion. The following example explains how to perform live migration of
a VM between two Xen-enabled host machines. Domain 0 (or Dom0) performs tasks to create, terminate, or
migrate to another host. Xen uses a send/recv model to transfer states across VMs.
Xen supports live migration. It is a useful feature and natural extension to virtualization platforms that allows
for the transfer of a VM from one physical machine to another with little or no downtime of the services hosted
by the VM. Live migration transfers the working state and memory of a VM across a network while it is
running.
The Purdue VIOLIN Project applies live VM migration to reconfigure a virtual cluster environment. Its purpose is to achieve
better resource utilization in executing multiple cluster jobs on multiple cluster domains. The project leverages the maturity of
VM migration and environment adaptation technology. The approach is to enable mutually isolated virtual environments for
executing parallel applications on top of a shared physical infrastructure consisting of multiple domains. Figure 3.25 illustrates
the idea with five concurrent virtual environments, labeled as VIOLIN 1–5, sharing two physical clusters.
Unit -3
Part-A
1) Bring out the difference between Private Cloud and Public Cloud. [CO3, K2]
Public Cloud – Examples: Amazon Web Services (AWS), Google App Engine, etc.
Private Cloud – Examples: Microsoft KVM, HP, Red Hat, VMware, etc.
Community cloud is a cloud infrastructure that allows systems and services to be accessible by a group of
several organizations to share the information. It is owned, managed, and operated by one or more organizations
in the community, a third party, or a combination of them.
The Research Compute Cloud (RC2) is a private cloud, built by IBM, that interconnects the computing and IT
resources at eight IBM Research Centers scattered throughout the United States, Europe, and Asia.
6) What are the basics requirements for cloud architecture design? [CO3, K1]
➢ The basic requirements for cloud architecture design are given as follows
➢ The cloud architecture design must provide automated delivery of cloud services along with automated
management
➢ It must support latest web standards like Web 2.0 or higher and REST or RESTful APIs.
➢ It must support very large-scale HPC infrastructure with both physical and virtual machines.
7) What are the different layers in layered cloud architecture design? [CO3,K1]
Cloud computing entails the delivery of a variety of computing services over the internet. This can be
achieved in a number of ways that we’ll outline here. There are three layers of cloud computing services to
cover:
A CSP (cloud service provider) is a third-party company that provides scalable computing resources that
businesses can access on demand over a network.
➢ Service Deployment
➢ Service Orchestration
➢ Cloud service Management
➢ Security
• Security
• Control
• Reliability
• Compatibility
• Locked-in Features
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading
scalability, data availability, security, and performance. Customers of all sizes and industries can use
Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites,
mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data
analytics.
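A minimal sketch of storing and retrieving an object in Amazon S3 with boto3; the bucket and file names are placeholders chosen for this example.

```python
# Sketch: back up a file to S3 object storage and restore it later.
import boto3

s3 = boto3.client("s3")

# Store an object (e.g., a backup file) in a bucket.
s3.upload_file("backup.tar.gz", "my-example-bucket", "backups/backup.tar.gz")

# Retrieve it later from any location with internet access.
s3.download_file("my-example-bucket", "backups/backup.tar.gz", "restored.tar.gz")
```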
Part-B
1) List the cloud deployment models and give a detailed note about them. [CO3,K3]
• Minimal Investment
• Infrastructure Management is not required
• No maintenance
• Dynamic Scalability
• Less secure
• Low customization
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a
one-on-one environment for a single user (customer).
Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing gives
the best of both worlds. With a hybrid solution, you may host the app in a safe environment while taking
advantage of the public cloud’s cost savings. Organizations can move data and applications between different
clouds using a combination of two or more cloud deployment methods, depending on their needs.
Community Cloud
It allows systems and services to be accessible by a group of organizations. It is a distributed system that is
created by integrating the services of different clouds to address the specific needs of a community, industry,
or business. The infrastructure of the community could be shared among organizations which have shared
concerns or tasks. It is generally managed by a third party or by a combination of one or more organizations
in the community.
Multi-Cloud
We’re talking about employing multiple cloud providers at the same time under this paradigm, as the name
implies. It’s similar to the hybrid cloud deployment approach, which combines public and private cloud
resources. Instead of merging private and public clouds, multi-cloud uses many public clouds.
• Reduced Latency: To reduce latency and improve user experience, you can choose cloud
regions and zones that are close to your clients.
• High availability of service: It’s quite rare that two distinct clouds would have an incident at
the same moment. So, the multi-cloud deployment improves the high availability of your services.
Disadvantages of the Multi-Cloud Model
• Complex
• Security issue
On the basis of the services cloud offers, we can count on the following cloud delivery models:
3) Describe the service and deployment models of cloud computing environment with illustration.
[CO3,K2]
It works as your virtual computing environment with a choice of deployment model depending on how much
data you want to store and who has access to the infrastructure.
Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the cloud are perfect for
organizations with growing and fluctuating demands. It also makes a great choice for companies with low-
security concerns.
Benefits of Public Cloud
o Minimal Investment
o No Hardware Setup
o No Infrastructure Management
Private Cloud
Now that you understand what the public cloud could offer you, of course, you are keen to know what a private
cloud can do. Companies that look for cost efficiency and greater control over data & resources will find the
private cloud a more suitable choice.
o Data Privacy
o Security
o Supports Legacy Systems
Limitations of Private Cloud
o Higher Cost
o Fixed Scalability
o High Maintenance
Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just one difference - it allows
access to only a specific set of users who share common objectives and use cases. This type of deployment
model of cloud computing is managed and hosted internally or by a third-party vendor. However, you can also
choose a combination of all three.
o Smaller Investment - A community cloud is much cheaper than the private & public cloud and provides
great performance
o Setup Benefits - The protocols and configuration of a community cloud must align with industry
standards, allowing customers to work much more efficiently.
Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud architectures. While each model
in the hybrid cloud functions differently, it is all part of the same architecture. Further, as part of this deployment
of the cloud computing model, the internal or external providers can offer resources.
o Cost-Effectiveness
o Security
o Flexibility
o Complexity
o Specific Use Case
4) Explain in brief NIST cloud computing reference architecture? [CO3, K1]
The NIST Cloud Computing Reference Architecture and Taxonomy was designed to accurately communicate
the components and offerings of cloud computing. The guiding principles used to create the reference
architecture were:
1. Develop a vendor-neutral architecture that is consistent with the NIST definition
2. Develop a solution that does not stifle innovation by defining a prescribed technical solution
The NIST cloud computing reference architecture defines five major actors. Each actor is an entity (a person
or an organization) that participates in a transaction or process and/or performs tasks in cloud computing.
• Cloud consumer
• Cloud provider
• Cloud auditor
• Cloud carrier
• Cloud services broker (CSB)
• Service aggregation: A CSB combines and integrates multiple services into one or more new
services
• Service arbitrage: Service arbitrage is similar to service aggregation except that the services
being aggregated are not fixed.
5) Explain in detail about Cloud storage along with pros and cons? [CO3,K1]
Cloud storage is a cloud computing concept where data is stored on the internet by a cloud computing
provider who manages and administers data storage as a service. It is less expensive and more scalable to
store data on the cloud instead of on physical devices like hard drives. It gives users the ability to share and
access files remotely without access to their local storage systems.
• It is a service model where the data is transmitted and stored on a third-party managed remote
system.
• It is usually priced at a per-consumption, monthly rate.
1. Cost Saving
4. Regulatory Compliance
5. Ransomware/Malware Protection
6. Usability or Accessibility
7. Flexibility
8. Automation
9. Scalable
10. Reliability
6) Explain in detail about Virtualization for data centre automation. (OR) What do you mean by data centre
automation using Virtualization? [CO3, K1]
Data centres have grown rapidly in recent years, and all major IT companies are pouring their
resources into building new data centres. In addition, Google, Yahoo!, Amazon, Microsoft, HP,
Apple, and IBM are all in the game.
1. Server Consolidation in Data Centers
In data centres, a large number of heterogeneous workloads can run on servers at various times. These
heterogeneous workloads can be roughly divided into two categories: chatty workloads and noninteractive
workloads.
2. Virtual Storage Management
The term “storage virtualization” was widely used before the renaissance of system virtualization. Yet the
term has a different meaning in a system virtualization environment. Previously, storage virtualization was
largely used to describe the aggregation and repartitioning of disks at very coarse time scales for use by
physical machines.
3. Cloud OS for Virtualized Data Centers
Data centers must be virtualized to serve as cloud providers. Table 3.6 summarizes
four virtual infrastructure (VI) managers and OSes. These VI managers and OSes are specially tailored
for virtualizing data centers which often own a large number of servers in clusters. Nimbus, Eucalyptus
and Open Nebula are all open-source software available to the general public. Only vSphere 4 is a
proprietary OS for cloud resource virtualization and management over data centres.
Unit-4
Part-A
1) List any four host security threats in public IaaS [CO4, K2]
The most common host security threats in the public IaaS cloud are
➢ Stealing keys (such as SSH private keys) that are used to access and manage hosts
➢ Attacking unpatched and vulnerable services by listening on standard ports like NetBIOS, SSH
Transport Layer Security, or TLS, is a widely adopted security protocol designed to facilitate privacy and
data security for communications over the Internet. A primary use case of TLS is encrypting the
communication between web applications and servers, such as web browsers loading a website.
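As a small sketch of TLS in practice, the snippet below opens an encrypted connection with Python's standard ssl module and inspects the negotiated protocol; the host name is only an example.

```python
# Sketch: establish a TLS-protected connection and inspect what was negotiated.
import socket
import ssl

context = ssl.create_default_context()   # verifies server certificates by default

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print("Negotiated protocol:", tls.version())          # e.g., 'TLSv1.3'
        print("Peer certificate subject:", tls.getpeercert()["subject"])
```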
3) Discuss on the application and use of identity and access management. [CO4, K3]
Identity and access management (IAM) ensures that the right people and job roles in your organization
(identities) can access the tools they need to do their jobs. Identity management and access systems enable
your organization to manage employee apps without logging into each app as an administrator.
4) What are the various challenges in building the trust environment? [CO4, K1]
Trust and security are salient issues for organizations that can potentially benefit from migration of their
business to the cloud
➢ Lack of trust between service providers and cloud users can prevent cloud computing from being
generally accepted as a solution for a demand service
➢ It can generate Lack of transparency, difficulty in communication and confidentiality between cloud
service provider and cloud users
5) Differentiate between Authentication and Authorization [CO4, K2]
Authentication
• Determines whether users are who they claim to be
• Challenges the user to validate credentials (for example, through passwords, answers to security
questions, or facial recognition)
• Generally transmits info through an ID Token
• Generally governed by the OpenID Connect (OIDC) protocol
Authorization
• Determines what users can and cannot access
• Verifies whether access is allowed through policies and rules
• Generally transmits info through an Access Token
• Generally governed by the OAuth 2.0 framework
➢ Misconfiguration
➢ Unauthorized Access
➢ Data Loss
➢ Malware Injections
Privacy is nothing but the rights and obligations of individuals and organizations with respect to the
collection, retention, and disclosure of personal information. Privacy is an important aspect of security.
8) What are the Extended Cloud Computing Services? [CO4, K1]
1. Hardware as a Service (HaaS).
2. Network as a Service (NaaS).
3. Location as a Service (LaaS),
4. Security as a Service (“SaaS”).
5. Data as a Service (DaaS).
6. Communication as a Service (CaaS)
PART B
1. Explain in detail about cloud resource provisioning methods. [CO4, K1]
Cloud provisioning means allocating a cloud service provider’s resources to a customer. It is a key feature of
cloud computing. It refers to how a client gets cloud services and resources from a provider. The cloud
services that customers can subscribe to include infrastructure-as-a-service (IaaS), software-as-a-service
(SaaS), and platform-as-a-service (PaaS) in public or private environments.
Advance Cloud Provisioning
In this delivery model, customers sign formal contracts with the cloud service provider. The provider then
prepares and delivers the agreed-upon resources or services. The customers are charged a flat fee or billed
every month.
Dynamic Cloud Provisioning
Also referred to as “on-demand cloud provisioning,” this model provides customers with resources at runtime.
In this delivery model, cloud resources are deployed to match customers’ fluctuating demands. Deployments can
scale up to accommodate spikes in usage and down when demands decrease.
User Cloud Provisioning
In this delivery model, customers add a cloud device themselves. Also known as “cloud self-service,” clients
buy resources from the cloud service provider through a web interface or portal.
Cloud provisioning has several benefits that are not available with traditional provisioning approaches, such
as:
• Scalability
• Speed
• Cost savings
Data security uses tools and technologies that enhance visibility of a company's data and how it is being
used. These tools can protect data through processes like data masking, encryption, and redaction of
sensitive information. The process also helps organizations streamline their auditing procedures and
comply with increasingly stringent data protection regulations.
➢ Data Erasure
➢ Data Masking
➢ Data Resiliency
➢ Insider Threats
➢ Malware
➢ Ransomware
Virtualized security is now effectively necessary to keep up with the complex security demands of a
virtualized network, plus it’s more flexible and efficient than traditional physical security. Here are some of
its specific benefits:
• Cost-effectiveness
• Flexibility
• Operational efficiency
• Regulatory compliance
Virtualized security can take the functions of traditional security hardware appliances (such as firewalls and
antivirus protection) and deliver them as software instead.
The increased complexity of virtualized security can be a challenge for IT, which in turn leads to increased
risk.
How physical security differs from virtualized security
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The traditional
approach depends on devices deployed at strategic points across a network and is often focused on protecting
the network perimeter (as with a traditional firewall).
There are many features and types of virtualized security, encompassing network security, application
security, and cloud security.
• Segmentation
• Micro-segmentation
• Isolation
Monitoring and auditing: Monitoring, auditing, and reporting compliance by users regarding
access to resources within the organization based on the defined policies.
IAM processes support the following operational activities:
Provisioning: Provisioning can be thought of as a combination of the duties of the
human resources and IT departments, where users are given access to data repositories or systems,
applications, and databases based on a unique user identity. Deprovisioning works in the opposite
manner, resulting in the deletion or deactivation of an identity or of privileges assigned to the user
identity.
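A hedged sketch of provisioning and deprovisioning an identity, using the AWS IAM API via boto3 as one concrete example (the source does not name a specific tool); the user name and policy ARN are illustrative.

```python
# Sketch: provision and later deprovision a user identity with AWS IAM.
import boto3

iam = boto3.client("iam")

# Provisioning: create an identity and grant it access based on its role.
iam.create_user(UserName="alice")
iam.attach_user_policy(
    UserName="alice",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Deprovisioning: revoke the privileges and delete the identity.
iam.detach_user_policy(
    UserName="alice",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
iam.delete_user(UserName="alice")
```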
Credential and attribute management: These processes are designed to manage the life cycle
of credentials and user attributes—create, issue, manage, revoke—to minimize the business risk associated
with identity impersonation and inappropriate account use. Credentials are usually bound to an individual and
are verified during the authentication process. The processes include provisioning of attributes, static (e.g.,
standard text password) and dynamic (e.g., one-time password) credentials that comply with a password
standard (e.g., passwords resistant to dictionary attacks), handling password expiration, encryption
management of credentials during transit and at rest, and access policies of user attributes (privacy and
handling of attributes for various regulatory reasons).
• Authorization management
• Compliance management
Cloud Identity Administration: Cloud identity administrative functions should focus on life
cycle management of user identities in the cloud—provisioning, deprovisioning, identity
federation, SSO, password or credentials management, profile management, and administrative
management. Organizations that are not capable of supporting federation should explore cloud-
based identity management services. This new breed of services usually synchronizes an
organization’s internal directories with its directory (usually multitenant) and acts as a proxy IdP
for the organization.
Federated Identity (SSO): Organizations planning to implement identity federation that enables
SSO for users can take one of the following two paths (architectures):
• Implement an enterprise IdP within an organization perimeter.
• Integrate with a trusted cloud-based identity management service provider.
Both architectures have pros and cons.
Enterprise identity provider: In this architecture, cloud services will delegate authentication to
an organization’s IdP. In this delegated authentication architecture, the organization federates
identities within a trusted circle of CSP domains. A circle of trust can be created with all the
domains that are authorized to delegate authentication to the IdP. In this deployment architecture,
where the organization will provide and support an IdP, greater control can be exercised over user
identities, attributes, credentials, and policies for authenticating and authorizing users to a cloud
service.
IdP deployment architecture.
Cloud computing models of the future will likely combine the use of SaaS (and other
XaaS's as appropriate), utility computing, and Web 2.0 collaboration technologies to leverage the
Internet to satisfy their customers' needs. New business models being developed as a result of the
move to cloud computing are creating not only new technologies and business operational
processes but also new security requirements and challenges.
Fig: Evolution of Cloud Services
SaaS is the dominant cloud service model, and it is the area where security practices are most critically
required.
Security issues that should be discussed with a cloud-computing vendor:
1. Privileged user access—Inquire about who has specialized access to data, and about the
hiring and management of such administrators.
2. Regulatory compliance—Make sure that the vendor is willing to undergo external audits
and/or security certifications.
3. Data location—Does the provider allow for any control over the location of data?
4. Data segregation—Make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery—Find out what will happen to data in the case of a disaster. Do they offer complete
restoration? If so, how long would that take?
6. Investigative support—Does the vendor have the ability to investigate any inappropriate or
illegal activity?
7. Long-term viability—What will happen to data if the company goes out of business? How
will data be returned, and in what format?
The security practices for the SaaS environment are as follows:
Unit-5
2. What are the key features in Google App Engine application environment? [CO5, K1]
• dynamic web serving, with full support for common web technologies
• persistent storage with queries, sorting and transactions
• automatic scaling and load balancing
• APIs for authenticating users and sending email using Google Accounts
3. What are the advantages of Google App Engine? [CO5, K1]
• Scalability
• Lower total cost of ownership
• Rich set of APIs
• Fully featured software development kit (SDK) for local development
• Ease of deployment
• Web administration console and diagnostic utilities
Fig: GAE application life cycle (Build, Test, Deploy, Update, Manage)
5. What are the services provided by Google App Engine? [CO5, K1]
Wide range of services available
• User service
• Blobstore
• Task Queues
• Mail Service
6. Describe the services available in User services? [CO5, K2]
• It provides a simple API for authentication and authorization. It detects if a user is signed in to the App.
• It detects if a user is an admin
7. What are the three authentication options in User service? [CO5, K1]
• Google Account
• Google Apps domain users
• OpenID (experimental)
8. What are the services available in User services? [CO5, k1]
• It provides a simple API for authentication and authorization. It detects if a user is signed in to the App.
• It detects if a user is an admin
9. What are the three authentication options in User service? [CO5, k1]
➢ Physiological biometrics
➢ Password
➢ Token Authentication
❖ Google Docs
GAE ARCHITECTURE
TECHNOLOGIES USED BY GOOGLE ARE
🞂 When the user wants to get the data, he/she will first send an authorized data request to Google Apps.
🞂 It forwards the request to the tunnel server.
🞂 The tunnel servers validate the request identity.
🞂 If the identity is valid, the tunnel protocol allows the SDC to set up a connection, authenticate,
and encrypt the data that flows across the Internet.
🞂 SDC also validates whether a user is authorized to access a specified resource.
🞂 Application runtime environment offers a platform for web programming and execution.
🞂 It supports two development languages: Python and Java.
🞂 Software Development Kit (SDK) is used for local application development.
🞂 The SDK allows users to execute test runs of local applications and upload application code.
🞂 Administration console is used for easy management of user application development cycles.
🞂 GAE web service infrastructure provides special guarantees for flexible use and management of storage and
network resources by GAE.
🞂 Google offers essentially free GAE services to all Gmail account owners.
🞂 We can register for a GAE account or use your Gmail account name to sign up for the service.
🞂 The service is free within a quota.
🞂 If you exceed the quota, extra amount will be charged.
🞂 Allows the user to deploy user-built applications on top of the cloud infrastructure.
🞂 They are built using the programming languages and software tools supported by the provider (e.g.,
Java, Python)
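As an illustration of the Python runtime and the kind of user-built application the SDK lets you test locally and then upload, a minimal handler in the classic webapp2 framework might look roughly like this (a sketch, not an official template; the route and message are made up):

```python
# Minimal "hello world" handler for the classic App Engine Python runtime.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Dynamic web serving: respond to an HTTP GET request.
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("Hello, App Engine!")

# Map the root URL to the handler; GAE scales instances of this app on demand.
app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
```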
GAE APPLICATIONS
Well-known GAE applications
Overall, MapReduce breaks the data flow into two phases, map phase and reduce phase
Mapreduce Workflow
Application writer specifies
❖ A pair of functions called Mapper and Reducer and a set of input files and submits the job
❖ Input phase generates a number of FileSplits from input files (one per Map task)
❖ The Map phase executes a user function to transform input key-pairs into a new set of key-
pairs
❖ The framework Sorts & Shuffles the key-pairs to output nodes
❖ The Reduce phase combines all key-pairs with the same key into new keypairs
❖ The output phase writes the resulting pairs to files as “parts”
Characteristics of MapReduce
MapReduce is characterized by:
❖ Its simplified programming model which allows the user to quickly write and test
distributed systems
❖ Its efficient and automatic distribution of data and workload across machines
❖ Its flat scalability curve. Specifically, after a MapReduce program is written and functioning
on 10 nodes, very little, if any, work is required to make that same program run on 1,000
nodes
The core concept of MapReduce in Hadoop is that input may be split into logical chunks, and each
chunk may be initially processed independently, by a map task. The results of these individual
processing chunks can be physically partitioned into distinct sets, which are then sorted. Each sorted
chunk is passed to a reduce task.
A map task may run on any compute node in the cluster, and multiple map tasks may
be running in parallel across the cluster. The map task is responsible for transforming the input
records into key/value pairs. The output of all of the maps will be partitioned, and each partition
will be sorted. There will be one partition for each reduce task. Each partition’s sorted keys and
the values associated with the keys are then processed by the reduce task. There may be multiple
reduce tasks running in parallel on the cluster.
The application developer needs to provide only four items to the Hadoop
framework: the class that will read the input records and transform them into one key/value
pair per record, a map method, a reduce method, and a class that will transform the key/value
pairs that the reduce method outputs into output records.
An example MapReduce application is a specialized web crawler. This crawler
receives as input large sets of media URLs whose content is to be fetched and
processed. The media items are large, and fetching them has a significant cost in time and
resources. A simpler and more common example is word count, which counts how many times each word occurs in a set of input files:
Input File:
Welcome to Hadoop Class
Hadoop is good
Hadoop is bad
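As a hedged sketch, the word-count job for the input above can be written as a Hadoop Streaming mapper and reducer in Python; the file names are illustrative, and Hadoop delivers the mapper output to the reducer already sorted by key.

    # mapper.py -- emits (word, 1) for every word in the input lines
    import sys
    for line in sys.stdin:
        for word in line.strip().split():
            print('%s\t%d' % (word, 1))

    # reducer.py -- sums the counts for each word (input arrives sorted by key)
    import sys
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.strip().rsplit('\t', 1)
        if word != current:
            if current is not None:
                print('%s\t%d' % (current, total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print('%s\t%d' % (current, total))

The job would be submitted with the Hadoop Streaming jar (roughly: hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input <in> -output <out>); for the three input lines above the reduce phase emits, for example, Hadoop 3, is 2, and 1 for each remaining word.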
Hadoop is an Apache open source framework written in Java that allows distributed
processing of large datasets across clusters of computers using simple programming models.
Hadoop is designed to scale up from single server to thousands of machines, each offering
local computation and storage.
Hadoop runs applications using the MapReduce algorithm, where the data is processed in
parallel on different CPU nodes.
Users of Hadoop:
❖ Hadoop is running search on some of the Internet's largest sites:
o Amazon Web Services: Elastic MapReduce
o AOL: Variety of uses, e.g., behavioral analysis & targeting
o eBay: Search optimization (532-node cluster)
o Facebook: Reporting/analytics, machine learning (1,100 machines)
o LinkedIn: People You May Know (2x50 machines)
o Twitter: Store + process tweets, log files, other data
o Yahoo: >36,000 nodes; biggest cluster is 4,000 nodes
Hadoop Architecture
❖ Hadoop has a master–slave architecture for both storage and processing
❖ The Hadoop framework includes the following four modules:
❖ Hadoop Common: These are Java libraries that provide file system and OS-level abstractions
and contain the necessary Java files and scripts required to start Hadoop.
❖ Hadoop YARN: This is a framework for job scheduling and cluster resource management.
❖ Hadoop Distributed File System (HDFS): A distributed file system that provides high-
throughput access to application data.
❖ Hadoop MapReduce: This is a system for parallel processing of large data sets.
HDFS
To store a file in this architecture, HDFS splits the file into fixed-size blocks (e.g., 64 MB) and stores them on workers (DataNodes).
The mapping of blocks to DataNodes is determined by the NameNode.
The NameNode (master) also manages the file system’s metadata and namespace.
Namespace is the area maintaining the metadata, and metadata refers to all the information
stored by a file system that is needed for the overall management of all files.
The NameNode stores in its metadata all information regarding the location of input
splits/blocks in all DataNodes.
Each DataNode, usually one per node in a cluster, manages the storage attached to the
node.
Each DataNode is responsible for storing and retrieving its file blocks.
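As a small worked example (the 200 MB file size is an assumption, not from the notes), the number of fixed-size blocks a file occupies can be computed as follows:

    import math

    BLOCK_SIZE_MB = 64          # HDFS block size used in the example above
    file_size_mb = 200          # assumed example file

    blocks = math.ceil(file_size_mb / BLOCK_SIZE_MB)
    print(blocks)               # 4 blocks: 64 + 64 + 64 + 8 MB
    # The NameNode records, for each of these blocks, which DataNodes hold it.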
HDFS- Features
Distributed file systems have special requirements
Performance
Scalability
Concurrency Control
Fault Tolerance
Security Requirements
HDFS-Write Operation
Writing to a file:
To write a file in HDFS, a user sends a “create” request to the NameNode to create a new
file in the file system namespace.
If the file does not exist, the NameNode notifies the user and allows them to start writing
data to the file by calling the write function.
The first block of the file is written to an internal queue termed the data queue.
A data streamer monitors its writing into a DataNode.
Each file block needs to be replicated by a predefined factor.
The data streamer first sends a request to the NameNode to get a list of suitable DataNodes
to store replicas of the first block.
The streamer then stores the block in the first allocated DataNode.
Afterward, the block is forwarded to the second DataNode by the first DataNode.
The process continues until all allocated DataNodes receive a replica of the first block from
the previous DataNode.
Once this replication process is finalized, the same process starts for the second block.
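The write path above can be summarised in a short, purely illustrative Python simulation (this is not the real HDFS client API; the class names and the replication factor of 3 are assumptions):

    from collections import deque

    REPLICATION_FACTOR = 3

    class DataNode:
        def __init__(self, name):
            self.name, self.blocks = name, []
        def store(self, block):
            self.blocks.append(block)

    def write_block(block, pick_datanodes):
        # 1. Ask the NameNode for DataNodes that should hold the replicas.
        pipeline = pick_datanodes(REPLICATION_FACTOR)
        # 2. The streamer writes to the first DataNode; each DataNode then
        #    forwards the block to the next one until all replicas exist.
        for datanode in pipeline:
            datanode.store(block)

    datanodes = [DataNode('dn%d' % i) for i in range(1, 5)]
    data_queue = deque(['block-1', 'block-2'])     # blocks of the file being written
    while data_queue:                              # one block at a time, in order
        write_block(data_queue.popleft(), lambda k: datanodes[:k])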
OPENSTACK
Compute (Nova)
OpenStack Compute is also known as OpenStack Nova.
Nova is the primary compute engine of OpenStack, used for deploying and managing
virtual machines.
OpenStack Compute manages pools of compute resources and works with
virtualization technologies.
Nova can be deployed using hypervisor technologies such as KVM, VMware, LXC,
XenServer, etc.
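A hedged sketch of booting a virtual machine through Nova using the openstacksdk Python library; the cloud name and the image, flavor and network IDs are placeholders that would come from your own deployment:

    import openstack

    # Credentials are read from clouds.yaml for the named cloud (assumed name)
    conn = openstack.connect(cloud='mycloud')

    server = conn.compute.create_server(
        name='demo-vm',
        image_id='IMAGE_ID',        # placeholder
        flavor_id='FLAVOR_ID',      # placeholder (CPU/RAM/disk size)
        networks=[{'uuid': 'NETWORK_ID'}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)            # ACTIVE once the VM is running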
Dashboard (Horizon)
OpenStack Horizon is a web-based graphical interface that cloud administrators and users
can access to manage OpenStack compute, storage and networking services.
To service providers it provides services such as monitoring, billing, and other
management tools.
Networking (Neutron)
Neutron provides networking capability, such as managing networks and IP addresses,
for OpenStack.
OpenStack Networking allows users to create their own networks and connect devices
and servers to one or more networks.
Neutron also offers an extension framework, which supports deploying and managing
other network services such as virtual private networks (VPN), firewalls, load balancing, and
intrusion detection systems (IDS).
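Similarly, a hedged sketch of creating a tenant network and subnet through Neutron with the same openstacksdk library; the names and CIDR are illustrative:

    import openstack

    conn = openstack.connect(cloud='mycloud')        # assumed cloud name

    network = conn.network.create_network(name='demo-net')
    subnet = conn.network.create_subnet(
        name='demo-subnet',
        network_id=network.id,
        ip_version=4,
        cidr='192.168.10.0/24',                      # illustrative address range
    )
    print(network.id, subnet.cidr)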
Telemetry (Ceilometer)
It provides customer billing, resource tracking, and alarming capabilities across all OpenStack core components.
Orchestration (Heat)
Heat is a service to orchestrate (coordinate) multiple composite cloud applications
using templates.
Workflow (Mistral)
Mistral is a service that manages workflows.
A user typically writes a workflow using the workflow language and uploads the workflow definition.
The user can then start the workflow manually.
Database (Trove)
Trove is Database as a Service for OpenStack.
It allows users to quickly and easily utilize the features of a database without the
burden of handling complex administrative tasks.
Data Processing (Sahara)
Sahara is a service that provisions Hadoop (data processing) clusters on OpenStack.
Users will specify several parameters like the Hadoop version number, the cluster
topology type, node flavor details (defining disk space, CPU and RAM settings), and
others.
Messaging (Zaqar)
Zaqar is a multi-tenant cloud messaging service for Web developers.
DNS (Designate)
Designate is a multi-tenant API for managing DNS.
Search (Searchlight)
Searchlight provides advanced and consistent search capabilities across various
OpenStack cloud services.
Alarming (Aodh)
This alarming service enables the ability to trigger actions based on defined rules
against event data collected by Ceilometer.
INTER-CLOUD
• The Inter-Cloud environment provides benefits such as diverse geographical locations, better
application resilience, and avoidance of vendor lock-in to the cloud client.
• Benefits for the cloud provider are expand-on-demand capability and better service level agreements
(SLAs) to the cloud client.
Types of Inter-Cloud
✓ Federation Clouds
✓ Multi-Cloud
Federation Clouds
A Federation cloud is an Inter-Cloud where a set of cloud providers willingly
interconnect their cloud infrastructures in order to share resources among each other.
The cloud providers in the federation voluntarily collaborate to exchange resources.
This type of Inter-Cloud is suitable for collaboration of governmental clouds.
Multi-Cloud
In a Multi-Cloud, a client or service uses multiple independent clouds.
A multi-cloud environment has no voluntary interconnection and sharing of the cloud
service providers’ infrastructures.
Managing resource provisioning and scheduling is the responsibility of the client or their
representatives.
This approach is used to utilize resources from both governmental clouds and private
cloud portfolios.
The types of Multi-Cloud are Services and Libraries.
Cloud Federation
Cloud federation provides a federated cloud ecosystem by connecting multiple cloud computing
providers using a common standard.
Federation is the combination of disparate things so that they can act as one.
Cloud federation refers to the unionization of software, infrastructure and platform
services from disparate networks that can be accessed by a client.
The federation of cloud resources is facilitated through network gateways that
connect public or external clouds, private or internal clouds (owned by a single entity)
and/or community clouds (owned by several co-operating entities), creating a hybrid
cloud computing environment.
It is important to note that federated cloud computing services still rely on the
existence of physical data centers.
• The federation of cloud resources allows a client to choose the best cloud service
providers.
• Federation across different cloud resource pools allows applications to run in the
most appropriate infrastructure environment.
• This provides customers with the ability to access cloud services without
the need for reconfiguration when using resources from different service
providers.