
Cloud Computing (3170717)

Hand Book
Year: 2023-2024

CE Department

Subject Coordinator / Faculty Member:


Prof. Puja Chaturvedi (CE)
INDEX
Unit 1: Introduction

Unit 2: Software As A Service

Unit 3: Abstraction And Virtualization

Unit 4: Cloud Infrastructure And Cloud Resource Management

Unit 5: Security

Unit 6: Cloud Middleware

Unit 7: Cloud Based Case Studies


Unit 1: Introduction
1.1 What is Cloud?
The term Cloud refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.
What is Cloud Computing?
Cloud Computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.

What we can do
● Developing new applications and services
● Storage, back up, and recovery of data
● Hosting blogs and websites
● Delivery of software on demand
● Analysis of data
● Streaming video and audio

1.2 Before the cloud computing


Small as well as large IT companies traditionally follow conventional methods to provide IT infrastructure. That means every IT company needs a server room, which is a basic requirement. In that server room there should be a database server, mail server, networking equipment, firewalls, routers, modems, switches, QPS capacity (Queries Per Second, i.e., how many queries or how much load the server can handle), configurable systems, high network speed, and maintenance engineers. Traditional IT infrastructure is constructed around physical hardware. Starting with a group of desktop computers, these are connected wirelessly or via cable to a network switch and often a server of some sort.
This server usually holds the company's data and applications and often provides centralized management of the users and other devices on the network. Typically, the server is installed on the business premises. As such, any employee working in that office will be able to access the stored information via their computer. Establishing such IT infrastructure requires spending a lot of money. To overcome all these problems and to reduce IT infrastructure cost, cloud computing came into existence.

1.3 Characteristics Of Cloud Computing


On-Demand Self-Service: Cloud computing allows users to use web services and resources on demand. One can log on to a website at any time and use them.
Broad Network Access: Since cloud computing is completely web based, it can be accessed from anywhere and at any time.
Resource Pooling: Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, database, and basic infrastructure.
Rapid Elasticity: It is very easy to scale resources vertically or horizontally at any time. Scaling of resources means the ability of resources to deal with increasing or decreasing demand. The resources being used by customers at any given point in time are automatically monitored.
Measured Service: In this service the cloud provider controls and monitors all aspects of the cloud service. Resource optimization, billing, and capacity planning depend on it (a metering sketch follows at the end of this section).
Agility: The cloud works in a distributed computing environment. It shares resources among users and works very fast.
High Availability and Reliability: The availability of servers is high and more reliable because the chances of infrastructure failure are minimal.
Multi-Sharing: With the help of cloud computing, multiple users and applications can work more efficiently with cost reductions by sharing common infrastructure.
Device and Location Independence: Cloud computing enables users to access systems using a web browser regardless of their location or what device they use, e.g., PC, mobile phone, etc. As the infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
Maintenance: Maintenance of cloud computing applications is easier, since they do not need to be installed on each user's computer and can be accessed from different places. So it also reduces cost.
Low Cost: By using cloud computing, cost is reduced because IT companies need not set up their own infrastructure; instead they pay according to their usage of resources.
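
To make the measured-service and pay-per-use ideas above concrete, here is a minimal metering sketch in Python. The resource names, quantities, and unit prices are illustrative assumptions, not any real provider's price list.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    """One metered sample: resource name, quantity consumed, and unit price."""
    resource: str      # e.g. "vm-hours", "storage-gb" (hypothetical resource names)
    quantity: float    # amount consumed in the billing period
    unit_price: float  # assumed price per unit

def compute_bill(records):
    """Pay-per-use: the bill is simply the sum of quantity * unit price."""
    return sum(r.quantity * r.unit_price for r in records)

if __name__ == "__main__":
    usage = [
        UsageRecord("vm-hours", 720, 0.05),    # one small VM running for a month
        UsageRecord("storage-gb", 100, 0.02),  # 100 GB of storage
        UsageRecord("egress-gb", 40, 0.09),    # 40 GB of outbound traffic
    ]
    print(f"Monthly charge: ${compute_bill(usage):.2f}")  # prints 41.60

The same per-record structure is what lets a provider show a transparent, itemized bill: each resource is metered separately and the total follows directly from usage.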

1.4 Advantages Of Cloud Computing

● Back-up and restore data : Once data is stored in the cloud, it is easier to back up and
restore that data using the cloud.
● Improved collaboration : Cloud applications improve collaboration by allowing groups of people
to quickly and easily share information in the cloud via shared storage.
● Excellent accessibility : The cloud allows us to quickly and easily access stored information anywhere,
anytime in the whole world, using an internet connection. An internet cloud infrastructure
increases organization productivity and efficiency by ensuring that our data is always accessible.
● Low maintenance cost : Cloud computing reduces both hardware and software maintenance costs
for organizations.
● Mobility : Cloud computing allows us to easily access all cloud data via mobile devices.
● Pay-per-use model : Cloud computing offers Application Programming Interfaces (APIs) to
users for accessing services on the cloud, and users pay charges according to their usage of the service.
● Unlimited storage capacity : Cloud offers us a huge amount of storage capacity for storing our
important data such as documents, images, audio, video, etc. in one place.
● Data security : Data security is one of the biggest advantages of cloud computing. Cloud offers
many advanced features related to security and ensures that data is securely stored and handled.

1.5 Disadvantages Of Cloud Computing


● Internet Connectivity: In cloud computing, all data (images, audio, video, etc.) is stored in the
cloud, and we access this data through the cloud using an internet connection. If you do not
have good internet connectivity, you cannot access this data, and there is no other way to
access data from the cloud.
● Vendor lock-in : Vendor lock-in is the biggest disadvantage of cloud computing. Organizations
may face problems when transferring their services from one vendor to another. As different
vendors provide different platforms, that can cause difficulty moving from one cloud to another.
● Limited Control : The cloud infrastructure is completely owned, managed, and monitored by the
service provider, so cloud users have less control over the function and execution of services
within the cloud infrastructure.
● Security : You should be aware that you will be sending all your organization's sensitive
information to a third party, i.e., a cloud computing service provider. While sending data to the
cloud, there is a chance that your organization's information is compromised by hackers.

1.6 Cloud Components

● Clients/Consumers : These are typically the desktops sitting on users' desks, but they may also be laptops, mobile phones, or tablets to enhance mobility. Clients are responsible for the interaction that drives the management of data on cloud servers.
● Datacentre : An array of servers that houses the subscribed applications. Progress in the IT industry has brought the concept of virtualizing servers, where software is installed on multiple instances of virtual servers. This approach streamlines the management of dozens of virtual servers running on multiple physical servers.
● Distributed Servers : Servers that are housed at different locations, so the physical servers need not all be in the same place. Even though the distributed servers and the physical servers appear to be in different locations, they perform as if they were close to each other.
● Cloud Provider : The person or organization that makes a service available to interested parties based on market demand (e.g., AWS, Microsoft Azure, Google Cloud).
● Subscription : Defines a consumer's interest in consuming a service.
● Cloud Broker : An organization that creates and maintains relationships with multiple cloud service providers, selecting the best provider for each customer and monitoring the services.
● SLA (Service Level Agreement) : A contract between provider and consumer that specifies the consumer's requirements and the provider's commitment to fulfilling them, covering privacy, security, and backup and recovery procedures.

1.7 Features Of Cloud

● Resource Pooling : The cloud provider pools computing resources to serve multiple customers with the help of a multi-tenant model.
● On-Demand Self-Service : The user can continuously monitor server uptime, capabilities, and allotted network storage. With this feature, the user can also monitor computing capabilities.
● Easy Maintenance : The servers are easily maintained and downtime is very low; in some cases there is no downtime. When cloud computing comes up with an update, the updates are more compatible with the devices.
● Large Network Access : The user can access data in the cloud, or upload data to the cloud, from anywhere with just a device and an internet connection.
● Availability : The capabilities of the cloud can be modified as per use and can be extended greatly. It allows the user to buy extra cloud storage, if needed, for a very small amount.
● Automatic System : Cloud computing automatically analyzes the data needed and supports a metering capability at some level of service. This provides transparency for the host as well as the customer.
● Economical : It is a one-time investment: the host buys the storage, and a small part of it can be provided to many companies, saving them from monthly or yearly costs.
● Security : The cloud creates a snapshot of the stored data so that the data is not lost even if one of the servers gets damaged.
● Pay As You Go : The user has to pay only for the services or the space they have utilized. There are no hidden or extra charges to be paid. The service is economical and, most of the time, some space is allotted for free.
● Measured Service : Supports charge-per-use capabilities.
● Latest Version Available : The provider delivers the latest version of the software as long as you are connected.

1.8 Advanced Features Of Cloud


● Virtualization Support: The multi-tenancy aspect of clouds requires multiple customers with
disparate requirements to be served by a single hardware infrastructure. Virtualized resources
(CPUs, memory, etc.) can be sized and resized with certain flexibility. These features make
hardware virtualization the ideal technology to create a virtual infrastructure that partitions a data
center among multiple tenants.

● Self-Service, On-Demand Resource Provisioning: This feature enables users to directly obtain
services from clouds, such as tailoring software, configurations, and security policies, without
interacting with a human system administrator, so users can easily interact with the system.

● Multiple Backend Hypervisors: A uniform management layer is provided regardless of the
virtualization technology used. This characteristic is more visible in open-source VI (virtual
infrastructure) managers, which usually provide pluggable drivers to interact with multiple hypervisors.

● Storage Virtualization: Virtualizing storage means abstracting logical storage from physical
storage. By consolidating all available storage devices in a data center, it allows creating virtual
disks independent of device and location.

● Virtual Networking: Virtual networks allow creating an isolated network on top of a physical
infrastructure, independently of physical topology and locations. Support for creating and
configuring virtual networks to group VMs placed throughout a data center is provided by most VI
managers.

● High Availability and Data Recovery: The high availability (HA) feature of VI managers aims at
minimizing application downtime and preventing business disruption. A few VI managers
accomplish this by providing a failover mechanism, which detects failure of both physical and
virtual servers and restarts VMs on healthy physical servers. This style of HA protects from
host failures, but not VM failures.
1.9 Challenges Of Cloud

● Security and Privacy: Security and privacy of information is the biggest challenge to cloud
computing. Security and privacy issues can be addressed by employing encryption, security
hardware, and security applications.

● Portability: Applications should be easily migrated from one cloud provider to another; there
must not be vendor lock-in. However, this is not yet possible because each cloud
provider uses different standard languages for its platform.

● Interoperability: An application on one platform should be able to incorporate services
from other platforms. This is made possible via web services, but developing such web services is
very complex.

● Computing Performance: Data-intensive applications on the cloud require high network bandwidth,
which results in high cost. Low bandwidth does not meet the desired computing performance of
cloud applications.

● Reliability and Availability: It is necessary for cloud systems to be reliable and robust because
most businesses are now becoming dependent on services provided by third parties.

● Data Lock-In and Standardization:A major concern of cloud computing users is about having
their data locked-in by a certain provider. Users may want to move data and applications out from
a provider that does not meet their requirements.

● Isolation Failure:This risk involves the failure of an isolation mechanism that separates storage,
memory, and routing between the different tenants.

● Insecure or Incomplete Data Deletion: It is possible that data requested for deletion does not
actually get deleted. This happens for either of the following reasons: extra copies of the data are
stored but are not available at the time of deletion, or the disk to be destroyed also stores data of other tenants.

● Internet Connection: Cloud computing requires a constant internet connection and does not work
well over a poor connection.
1.10 Cloud Computing Architecture

Cloud computing technology is used by both small and large organizations to store the information in the
cloud and access it from anywhere at any time using the internet connection.
Cloud computing architecture is divided into the following two parts -
▪ Front End : The front end is used by the client. It contains the client-side interfaces and applications that are
required to access cloud computing platforms. The front end includes web browsers (Chrome,
Firefox, Internet Explorer, etc.).
▪ Back End : The back end is used by the service provider. It manages all the resources that are required
to provide cloud computing services. It includes huge amounts of data storage, security mechanisms,
virtual machines, deployment models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
1. Client Infrastructure:Client Infrastructure is a Front end component. It provides GUI (Graphical User
Interface) to interact with the cloud.
2. Application:The application may be any software or platform that a client wants to access.
3. Service: A cloud service manages which type of service you access according to the client's
requirement.
Software as a Service (SaaS) – Also known as cloud application services. Mostly, SaaS
applications run directly through the web browser, meaning we do not need to download and install
these applications. Examples: Google Apps, Salesforce, Dropbox.
Platform as a Service (PaaS) – Also known as cloud platform services. It is quite similar to
SaaS, but the difference is that PaaS provides a platform for software creation, whereas with SaaS we
access software over the internet without the need for any platform. Examples: Windows Azure,
Force.com.
Infrastructure as a Service (IaaS) – Also known as cloud infrastructure services. The provider manages
the underlying infrastructure, while the customer remains responsible for applications, data, middleware, and
runtime environments. Example: Amazon Web Services (AWS) EC2.
4. Runtime Cloud: Runtime Cloud provides the execution and runtime environment to the virtual
machines.
5. Storage: Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.
6. Infrastructure: It provides services at the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud computing model.
7. Management: Management is used to manage components such as the application, service, runtime cloud,
storage, infrastructure, and other security issues in the back end, and to establish coordination between them.
8. Security: Security is an in-built back-end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet: The Internet is the medium through which the front end and back end interact and
communicate with each other.

1.11 Layers And Services Of Cloud

1) SaaS (Software as a Service)


● SaaS or software as a service is a software distribution model in which applications are hosted by a
vendor or service provider and made available to customers over a network (internet).
● Through the internet this service is available to users anywhere in the world. Instead of purchasing
the software, they subscribe to it, usually on a monthly basis, via the internet.
● It is compatible with all internet enabled devices.
● Many important tasks like accounting, sales, invoicing and planning all can be performed using
SaaS.
● Example: Salesforce.com
2) PaaS (Platform as a Service)
● It provides a platform and environment that allow developers to build applications and services.
This service is hosted in the cloud and accessed by users via the internet, and the services are
constantly updated with new features added. It provides a platform to support application
development.
● It includes software support and management services, storage, networking, deploying, testing,
collaborating, hosting, and maintaining applications.
● Google App Engine is an example of PaaS.

● Types of PaaS
● Stand-alone development environments :The stand-alone PaaS works as an independent entity
for a specific function. It does not include licensing or technical dependencies on specific SaaS
applications
● Application delivery-only environments :The application delivery PaaS includes on-demand
scaling and application security.
● Open platform as a service: offers an open source software that helps a PaaS provider to run
applications.
● Add-on development facilities:The add-on PaaS allows customization of the existing SaaS
platform.

3) IaaS (Infrastructure as a Service)


● It provides access to computing resources in a virtualized environment, "the cloud", over the internet.
● It provides computing infrastructure such as virtual server space, network connections, bandwidth,
load balancers, and IP addresses.
● The pool of hardware resources is drawn from multiple servers and networks, usually distributed
across numerous data centers. This provides redundancy and reliability.
● Amazon Web Services (AWS) is an example of IaaS.
● It is a complete package for computing. For small-scale businesses looking to cut costs on
IT infrastructure, a lot of money is spent annually on maintenance and on buying new components
like hard drives, network connections, external storage devices, etc., which a business owner could
save for other expenses by using IaaS (see the provisioning sketch below).
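
As a hedged illustration of IaaS self-service provisioning, the sketch below uses the AWS SDK for Python (boto3) to launch and later terminate an EC2 instance. The AMI ID and region shown are placeholders, not real values, and account credentials are assumed to be configured in the environment.

# A minimal sketch of IaaS self-service provisioning with boto3 (AWS SDK for Python).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder AMI (machine image) ID
    InstanceType="t2.micro",          # small instance size, billed per use
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# Later, the same API can release the resource, which also stops pay-per-use billing.
ec2.terminate_instances(InstanceIds=[instance_id])

The point of the sketch is that the entire "server room" workflow collapses into a couple of API calls made through a self-service interface.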
4) XaaS (Anything as a Service)
XaaS covers the extensive variety of services and applications emerging for users to access on demand over the Internet.
There are many other examples of XaaS, such as: Network as a Service, Storage as a Service, Database
as a Service, Information as a Service, Integration as a Service, Security as a Service, Disaster Recovery
as a Service (DRaaS), and Communications as a Service.

1.12 Deployment Models

Public Cloud: Available to the general public and owned by an organization selling cloud services. The cloud
provider is responsible for the creation and ongoing maintenance of the public cloud and its IT resources.
Private Cloud: Operated solely for a single organization. A private cloud enables an organization to use
cloud computing technology as a means of centralizing access to IT resources by different parts, locations,
or departments of the organization.
Community Cloud: Shared by several entities that have a common purpose. Its access is limited to a
specific community of cloud consumers. The community cloud may be jointly owned by the community
members or by a third-party cloud provider that provides a public cloud with limited access.
Hybrid Cloud: A combination of two or more private, community, or public clouds. A hybrid cloud is a
cloud environment comprising two or more different cloud deployment models. For example, a cloud consumer may
choose to deploy cloud services processing sensitive data to a private cloud and other, less sensitive cloud
services to a public cloud.

1.13 Virtualization
● Virtualization is a technique, which allows the sharing of a single physical instance of an
application or resource among multiple organizations or tenants (customers). It does this by
assigning a logical name to a physical resource and providing a pointer to that physical resource
when demanded.
● The multitenant architecture offers virtual isolation among the tenants. Hence, organizations
can use and customize their applications as though they each had their own instances running.
● Virtualization refers to the abstraction of resources across many aspects of computing; one physical machine
supports many virtual machines that run in parallel.
● It is the abstraction layer that decouples the physical hardware from the operating system to deliver
greater IT resource utilization and flexibility.
● It allows multiple virtual machines with heterogeneous OSs to run in isolation, side by side.

● What is the concept behind Virtualization?


The creation of a virtual machine over existing operating systems and hardware is known as hardware
virtualization. A virtual machine provides an environment that is logically separated from the
underlying hardware. The machine on which the virtual machine is created is known as the
Host Machine, and the virtual machine itself is referred to as the Guest Machine.
Virtualization mainly means running multiple operating systems on a single machine while sharing
all the hardware resources. It helps provide a pool of IT resources that can be shared in order to
gain business benefits.

Benefits of Virtualization
Cost Savings : The ability to run multiple virtual machines on one piece of physical infrastructure
drastically reduces the footprint and the associated cost. Moreover, as this consolidation is done at the
core, we do not need to maintain as many servers. We also have a reduction in electricity consumption and
the overall maintenance cost.
Agility and Speed : Spinning up a virtual machine is a straightforward and quick approach. It’s a lot
simpler than provisioning entirely new infrastructure. For instance, if we need a development/test region
for a team, it’s much faster to provision a new VM for the system administrators. Besides, with an
automated process in place, this task is swift and similar to other routine tasks.
1. More flexible and efficient allocation of resources.
2. Enhance development productivity hence improved performance
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay per use of the IT infrastructure on demand.
7. Enables running multiple operating systems.
8. Data center and energy efficiency savings: As a company reduces the size of its hardware and
server footprint, it lowers its energy consumption.
9. Operational expenditure savings: Once the servers are virtualized, IT staff can greatly reduce the
ongoing administration and manual management work.
10. A virtual machine is completely isolated from the host machine and other virtual machines.

Cons of Virtualization
1. Not all hardware or software can be virtualized

Types Of Virtualization
1. Hardware
2. Network
3. Storage
4. Desktop
5. Data
6. Memory
7. Application
Unit 2: Software As A Service

2.1 The Evolution Of SaaS

Cloud computing made the process simple: there is no longer any need even to install the
software on the computer.

Today, the exponential growth of SaaS and continued improvements to functionality make it a valid option
even for enterprise-level businesses. It is also much cheaper and easier to use; SaaS customers frequently
cite cost savings as one of its primary benefits. You can find SaaS products for almost any business
application you can think of.

The SaaS paradigm is on the fast track due to its innate powers and potential. Executives, entrepreneurs,
and end-users are enthusiastic about the tactical as well as strategic success of the emerging and evolving SaaS
paradigm. A number of positive and progressive developments have started to grip this model. Newer resources
and activities are consistently being revised to be delivered as IT as a Service (ITaaS), the most recent
and efficient delivery method in the decisive IT landscape. With the meteoric and mesmerizing rise of
service-orientation principles, every single IT resource, activity, and infrastructure is being viewed and
visualized as a service, which sets the tone for the grand unfolding of the long-envisioned service era. This is
accentuated by the pervasive Internet.

Integration as a service (IaaS) is the budding and distinctive capability of clouds in fulfilling business
integration requirements. Increasingly, business applications are deployed in clouds to reap the business
and technical benefits. On the other hand, there are still innumerable applications and data sources
stationed and sustained locally, primarily for security reasons. The question here is how to create seamless
connectivity between those hosted and on-premise applications to empower them to work together.
IaaS overcomes these challenges by smartly utilizing the time-tested business-to-business (B2B)
integration technology as the value-added bridge between SaaS solutions and in-house business
applications.

1. The Web is the largest digital information superhighway


2. The Web is the largest repository of all kinds of resources such as web pages, applications comprising
enterprise components, business services
3. The Web is turning out to be the open, cost-effective and generic business execution
platform (e-commerce, business, auctions, etc. happen on the web for global users), comprising a wide
variety of containers, adaptors, drivers, connectors, etc.
4. The Web is the global-scale communication infrastructure (video conferencing, IP TV, etc.)
5. The Web is the next-generation discovery, connectivity, and integration middleware
2.2 Challenges In SaaS

Integration with other cloud and on-premise applications is a time-consuming and tedious task while
onboarding SaaS software. Some of the challenges for SaaS integration include cloud integration, IT
infrastructure, security, and many more. Therefore, the crucial question is how to reduce costs and ease
integration efforts while onboarding new SaaS software.

1. Hybrid IT Infrastructure: More and more companies are aiming for a hybrid IT infrastructure that
combines on-premise software with SaaS applications. However, integrating SaaS with your existing IT
infrastructure can become the biggest hurdle. Though public cloud services bring a lot of benefits, failure
to integrate SaaS tools with existing IT tools and software can negate its benefits. In order to facilitate this
cloud integration, SaaS providers and your IT staff need to work closely together.

2. Access Control: Another challenge that businesses face when transitioning into the cloud is access
control. The access control and monitoring settings that are applicable to traditional software are not
always carried forward successfully to SaaS applications. Admins should have complete control over which user
can access what, especially during the transition phase.

3. Cost of Integration: Another major factor for SaaS integration is cost. The integration of existing
software with SaaS requires a high level of expertise. Businesses may need to hire highly skilled
technicians and cloud consulting companies for complicated endeavors. Getting it right may seem
expensive, but getting it wrong can cause real headaches. The best strategy is to count the cost and use of
methods & tools that are reliable and vetted. Integration-as-a-service (IaaS) is one such model that has
received wider adoption and popularity in recent years due to its low-cost approach in solving the
integration conundrum.

4. Time Constraints: Most companies opting for SaaS are generally in a hurry to get the application up
and running. Moving from on-premises to the cloud is time-consuming and can lead to real productivity
issues if not appropriately managed. Integrating SaaS with your traditional applications can drag on, as a
result of which your work may get delayed. This is another challenge that lies ahead in SaaS integration.
Businesses need to plan carefully for any SaaS integration and factor in any contingencies and other delays.

5. Inadequate Integration: If the integration is not up to the mark, many problems can arise, wreaking
havoc on an organization: users upload files and make changes in different systems, invoices are sent to
the wrong customers, data is leaked, and automatic information gathering turns out not to be so automatic.
Lower productivity, lost revenue, and low employee morale can be negative consequences of a poorly
executed integration. The best practice for a successful integration strategy is carefully examining your
SaaS vendors and not relying on just one approach or methodology, but remaining flexible enough to adopt
the right solution.

6. Integration Conundrum: Organizations without a method of synchronizing data between multiple lines
of business are at a serious disadvantage in terms of maintaining accurate data, forecasting, and
automating key business processes. Real-time data and functionality sharing is an essential part of cloud
integration.
7. APIs are Insufficient: Many SaaS providers have responded to the integration challenge by developing
application programming interfaces (APIs). Unfortunately, accessing and managing data via an API
requires a significant amount of coding as well as maintenance due to frequent API modifications and
updates.

8. Data Transmission Security: For any relocated application to provide the promised value for businesses
and users, the minimum requirement is interoperability between SaaS applications and on-premise
enterprise packages. As SaaS applications were not initially designed with the interoperability
requirement in mind, the integration process has become a tougher assignment. There are other
obstructions and barriers that come in the way of routing messages between on-demand applications and
on-premise resources.

9. The Impacts of Cloud: On the infrastructural front, in the recent past, clouds have arrived onto the
scene powerfully and have extended the horizon and boundary of business applications, events, and
data. That is, business applications, development platforms, etc. are getting moved to elastic, online,
on-demand cloud infrastructures. Precisely speaking, increasingly for business, technical, financial, and
green reasons, applications and services are being readied and relocated to highly scalable and available
clouds.

2.3 SaaS integration service

SaaS integration, or SaaS application integration, involves connecting a SaaS application with another
cloud-based app or an on-premise software via application programming interfaces (APIs). Once
connected, the app can request and share data freely with the other app or on-premise system.

It performs data-model transformation, handles connectivity, performs message routing, converts
communication protocols, and potentially manages the composition of multiple requests.
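
The sketch below illustrates this idea with a minimal Python integration flow: pull records from a SaaS REST API, transform the data model, and push the result into an on-premise system. The URLs, endpoints, and field names are hypothetical examples, not a real vendor's API.

# A minimal SaaS-to-on-premise integration flow (hypothetical endpoints and fields).
import requests

SAAS_API = "https://api.example-saas.com/v1/customers"   # hypothetical SaaS endpoint
ON_PREM_API = "http://erp.internal.local/api/accounts"   # hypothetical on-premise endpoint

def transform(record):
    """Data-model transformation: map SaaS field names onto the on-premise schema."""
    return {
        "account_name": record["companyName"],
        "contact_email": record["email"],
        "external_ref": record["id"],
    }

def sync():
    # 1. Connectivity: pull records from the SaaS application over HTTPS.
    saas_records = requests.get(SAAS_API, timeout=30).json()

    # 2. Transformation + routing: convert each record and push it on-premise.
    for record in saas_records:
        requests.post(ON_PREM_API, json=transform(record), timeout=30)

if __name__ == "__main__":
    sync()

Real integration platforms add error handling, protocol conversion, and message queuing around exactly this transform-and-route core.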

The constraining attributes of SaaS applications:

● Dynamic nature: SaaS applications are constantly changing.

● Massive amounts of information need to move between SaaS and on-premise systems daily, and data
quality and integrity must be maintained.

● Limited access: Access to cloud resources is more limited than for local applications. Accessing local
applications is simpler and faster, and embedding integration points in local as well as custom applications
is easier.

● Once applications move to the cloud, custom applications must be designed to support integration
because that low level of access is no longer available.

● Dynamic resources: Cloud resources are virtualized and service-oriented.

● Performance: Clouds support application scalability and resource elasticity.


● Lack of control and security: Companies give up a certain amount of data control to a third-party
vendor. They are also trusting the SaaS or cloud vendor to secure the data, often without a proper
integration solution that can govern data flows and protect sensitive information.
● Incomplete solutions: For many enterprises, many SaaS vendors do not offer a complete integration solution
to their customers, even though cloud providers expose an API to exchange information.
● Improper connections: Data comes in many forms and sizes and may have to adhere to numerous
compliance mandates; these connections must be able to accommodate secure data interactions across
various protocols and between on-premise and cloud systems.

2.4 SaaS Integration Products And Platforms

1) Jitterbit

● Jitterbit cloud integration enables organizations to replicate, cleanse, and synchronize their
cloud-based data seamlessly and securely with their on-premise enterprise applications and
systems.
● Besides user-friendly interfaces and wizard tools, Jitterbit supports not only XML but also
focuses on web services. Jitterbit focuses on data integration in the context of point-to-point
application integration, ETL, and SOA.
● Jitterbit supports SOA, event-driven architectures, and additional data integration methods,
and can easily scale to fit any cloud integration initiative.
● It is a fully graphical integration solution that provides users a versatile platform and a suite
of productivity tools to sharply reduce integration efforts.
● It can be used standalone or with existing infrastructure, enabling users to create
projects or consume and modify existing ones offered by the open-source community or a
service provider.
● Jitterbit consists of two parts:
Integration environment: a point-and-click graphical user interface that enables users to quickly
configure, test, deploy, and manage integration projects on Jitterbit.
Integration server: a powerful and scalable runtime engine that processes all
integration operations, fully configurable and manageable from the Jitterbit application.

2) Boomi Software
● It is an integration service that is completely on demand and connects any combination of
SaaS, PaaS, cloud, and on-premise applications without the burden of installing and
maintaining software packages or appliances.
● Boomi offers a pure SaaS integration solution that enables users to quickly develop and deploy
connections.

3) Bungee Connect
● Bungee Connect is a web application development and hosting platform. Developers use it to
build desktop-like web applications that leverage multiple web services and databases.
● It provides development, testing, deployment, and hosting in a single, on-demand platform.
● Bungee Connect reduces the effort to integrate multiple web services into a single
application. Applications built with Bungee Connect run at native speeds on each platform;
an application built in Java with Bungee Connect will run natively on all targeted
platforms.
● Bungee Connect includes the following features:
● Interaction delivered entirely via browser with no download or plug-in for developers or
end users
● Delivery of highly interactive user experience without compromising accessibility and
security
● Automated integration of web services (SOAP/REST) and databases (MySQL/
PostgreSQL)
● Built-in team collaboration, testing, scalability, reliability, and security
● Deep instrumentation of end-user application utilization for analytics
● Utility pricing model based on end-user application

4) OpSource Connect
● It unifies different SaaS applications as well as legacy applications running behind a
corporate firewall.
● Features:
● Service bus
● Service connector
● Connect certified integrator program
● Connect service exchange
● Web service enablement program
5) SnapLogic
● SnapLogic is a platform to integrate applications and data, allowing you to quickly connect
apps and data sources. The company is also branching out into connecting and integrating
data from IoT devices.

● SnapLogic offers a solution that provides flexibility for today's data integration challenges:

1. Changing data sources: SaaS and on-premise applications, Web APIs, and RSS feeds

2. Changing deployment options: On-premise, hosted, private and public cloud platforms

3. Changing delivery needs: Databases, files, and data services

● Advantages: Includes many built-in integrations and easy tracking of feeds into a system

● Disadvantages: Can take time to understand how the platform works; error handling is not
built in

2.5 Virtual Machine Provisioning And Migration Services

Virtual machine provisioning enables cloud providers to make efficient use of available
resources and earn a good profit from them. A cloud provider provisions its resources either statically or
dynamically. In static virtual machine provisioning, the current demand of the user is not considered.

• Historically, whenever there was a need to install a new server for a certain workload to provide a particular
service for a client, a lot of effort was exerted by the IT administrator, and much time was spent to install
and provision a new server:

1) Check the inventory for a new machine,


2) get one,
3) format, install OS required,
4) and install services;

● Now, with the emergence of virtualization technology and the cloud computing IaaS model:

● It is now just a matter of minutes to achieve the same task. All you need is to provision a virtual server
through a self-service interface, in a few small steps, with the required specifications, either by:
1) provisioning the machine in a public cloud like Amazon Elastic Compute Cloud (EC2), or
2) using a virtualization management software package or a private cloud management solution installed
at your data center in order to provision the virtual machine inside the organization, within the private
cloud setup.
Analogy for Migration Services:

• Previously, whenever there was a need for performing a server‘s upgrade or performing maintenance
tasks, you would exert a lot of time and effort, because it is an expensive operation to maintain or upgrade
a main server that has lots of applications and users.

• Now, with the advance of revolutionary virtualization technology and the migration services associated
with hypervisors' capabilities, these tasks (maintenance, upgrades, patches, etc.) are very easy and take
almost no time to accomplish.

• Provisioning a new virtual machine is a matter of minutes, saving lots of time and effort, and migration of a
virtual machine is a matter of milliseconds.

Virtual Machine Provisioning and Manageability

• Virtual Machine Lifecycle Management (VMLM) is a set of processes designed to help administrators
oversee the implementation, delivery, operation, and maintenance of virtual machines (VMs) over the
course of their existence.

1) IT service request

2) VM provision processing

● Select a server from a pool of available servers along with the appropriate OS template you need to
provision the virtual machine.

● Load the appropriate software you selected in the previous step (operating system, device drivers,
middleware, and the needed applications).

● Customize and configure the machine (e.g., IP address, gateway) to configure the associated
network and storage resources.

● Finally, the virtual server is ready to start with its newly loaded software.

● Server provisioning is defining a server's configuration based on the organization's requirements and its
hardware and software components (processor, RAM, storage, networking, operating system,
applications, etc.).

A virtual machine can be provisioned by:

1. Manually installing operating system

2. Cloning of existing VM

3. VM template

4. Importing physical server from another hosting platform

A problem with virtual machine provisioning: VMs can be provisioned so rapidly that documenting and
managing the VM lifecycle becomes difficult.
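
As a rough illustration of the provisioning methods listed above, the following toy Python model (not a real hypervisor API) shows template-based provisioning and cloning; the template contents, host names, and IP addresses are made up.

# A toy model of VM provisioning: pick a host, apply an OS template, configure
# networking, power on; or clone an existing VM and re-customize it.
from dataclasses import dataclass, field
import copy

@dataclass
class VMTemplate:
    os: str
    cpus: int
    ram_gb: int
    software: list = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    host: str
    os: str
    cpus: int
    ram_gb: int
    ip_address: str
    software: list
    state: str = "stopped"

def provision_from_template(name, host, template, ip_address):
    """Create and start a VM from a template (the provisioning steps above)."""
    vm = VirtualMachine(name, host, template.os, template.cpus,
                        template.ram_gb, ip_address, list(template.software))
    vm.state = "running"
    return vm

def clone_vm(existing, name, ip_address):
    """Provisioning by cloning: copy an existing VM, then re-customize it."""
    vm = copy.deepcopy(existing)
    vm.name, vm.ip_address, vm.state = name, ip_address, "stopped"
    return vm

web_template = VMTemplate(os="Ubuntu 22.04", cpus=2, ram_gb=4, software=["nginx"])
vm1 = provision_from_template("web-01", "host-a", web_template, "10.0.0.11")
vm2 = clone_vm(vm1, "web-02", "10.0.0.12")
print(vm1, vm2, sep="\n")

Because creating such records is nearly instantaneous, it is easy to see how, without lifecycle tracking, the number of VMs can outgrow the documentation of them.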

2.6 Virtual Machine Migration

Virtual machine migration is the process of moving a virtual machine from one host server or storage location to
another. In the process, all key machine components and resources are completely virtualized.

Migration Time: Migration time refers to the total amount of time required to transfer a virtual machine from the
source node to the destination node without affecting its availability.

Migration is used for load balancing and physical machine fault tolerance. It can also be used to reduce power
consumption in cloud data centers.
Virtual machine migration Techniques

1) Hot (Live) Migration - The virtual machine keeps running while migrating and does not lose its state.
● Also called hot or real-time migration
● Movement is done while the power is on
● Unnoticed by the user
● Facilitates proactive maintenance upon failure
● The VM's storage should be on shared storage
● A CPU compatibility check is required
● Used for load balancing
● Example: the Xen hypervisor

Live migration timeline


Stage 1: Reservation. A request is issued to migrate an OS from host A to host B (a precondition is that
the necessary resources exist on B and in a VM container of that size).

Stage 2: Iterative Pre-Copy. During the first iteration, all pages are transferred from A to B. Subsequent
iterations copy only those pages dirtied during the previous transfer phase.

Stage 3: Stop-and-Copy. The running OS instance at A is suspended, and its network traffic is redirected to
B. As described in reference 21, the CPU state and any remaining inconsistent memory pages are then
transferred. At the end of this stage, there is a consistent suspended copy of the VM at both A and B. The
copy at A is still considered primary and is resumed in case of failure.

Stage 4: Commitment. Host B indicates to A that it has successfully received a consistent OS image.
Host A acknowledges this message as a commitment of the migration transaction. Host A may now
discard the original VM, and host B becomes the primary host.

Stage 5: Activation. The migrated VM on B is now activated. Post-migration code runs to reattach the
device drivers to the new machine and advertise the moved IP addresses.

Assumption: This approach to failure management ensures that at least one host has a consistent VM
image at all times during migration. It depends on the assumption that the original host remains stable
until the migration commits, and that the VM may be suspended and resumed on that host with no risk of
failure.
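
The following toy Python simulation (not real hypervisor code) illustrates the iterative pre-copy idea behind Stages 2 and 3: copy all memory pages first, then re-copy only the pages dirtied during the previous round, and finish with a short stop-and-copy. The page counts and dirty rate are assumptions chosen only to show the shrinking rounds.

# Toy simulation of iterative pre-copy live migration.
TOTAL_PAGES = 10_000
DIRTY_RATE = 0.05          # assumed fraction of copied pages dirtied during a copy round
STOP_COPY_THRESHOLD = 100  # switch to stop-and-copy when few dirty pages remain

def live_migrate():
    pages_to_copy = TOTAL_PAGES          # first iteration: transfer everything
    round_no = 1
    while pages_to_copy > STOP_COPY_THRESHOLD:
        print(f"Pre-copy round {round_no}: copying {pages_to_copy} pages while the VM keeps running")
        # Pages dirtied during this round are proportional to how long the copy took,
        # i.e. to the number of pages just transferred (a simplifying assumption).
        pages_to_copy = int(pages_to_copy * DIRTY_RATE)
        round_no += 1
    # Stop-and-copy: pause the VM briefly and transfer the last dirty pages plus CPU state.
    print(f"Stop-and-copy: VM suspended, transferring the final {pages_to_copy} pages and CPU state")
    print("Commitment: destination confirms the image; activation: VM resumes on the new host")

if __name__ == "__main__":
    live_migrate()

Each round transfers fewer pages than the one before, which is why the final suspension (and therefore the user-visible downtime) can be kept very short.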
2) Cold (Non-Live) Migration: The state of the VM is lost and the user may notice the service
interruption.

● It occurs when the VM is shut down.
● Also called cold or regular migration
● Movement is done while the power is off
● The virtual machines are not required to be on shared storage
● A CPU compatibility check is not required
● Simple to implement

Steps in cold migration

Step 1 : The configuration files, log files, as well as the disks of the virtual machine, are moved from the
source host to the destination host’s associated storage area.
Step 2: The virtual machine is registered with the new host.

Step 3: After the migration is completed, the old version of the virtual machine is deleted from the source
host
Unit 3: Abstraction And Virtualization

● It is the abstraction layer that decouples the physical hardware from the operating system to deliver
greater IT resource utilization and flexibility
● It allows multiple virtual machines with heterogeneous OS to run in isolation side by side
● Virtualization is an abstraction technique in which the finer details of the hardware layout are hidden from
the upper layers of computing, such as an operating system or application
● It provides a sense of the existence of computing resources in a way that may not be physically real

3.1 Components Of Virtualization

1) Physical Server/Hardware/Infrastructure: The group of real hardware resources such as
CPU, memory, disk, and network. These resources are actually shared by the virtualization layer and
the systems running on it. It is owned by the cloud service provider and housed in the data center.
2) Virtualization Layer: Also known as the VMM (Virtual Machine Monitor) or hypervisor, this is the
software that runs the virtual machines and serves as a bridge between the virtual machines and the
physical hardware. It carries out several tasks, such as passing instructions from the virtual machines to the
hardware,
● routing network traffic,
● isolating one virtual machine from another, and
● creating and deleting virtual machines.
3) Virtual Machine: An artificial computer system created by the host (hypervisor) as per the desired
specification, based on the availability of the actual physical hardware resources. On a single host
one can run one or more VMs. Some common operations on a VM are:

● Create, edit, and delete a VM
● Clone a VM
● Snapshot a VM
● Power a VM off and on
● Migrate a VM

4) Guest Operating System : Whereas the host operating system is software installed on a computer
to interact with the hardware, the guest operating system is software installed onto and running on
the virtual machine. The guest OS can be different from the host OS in virtualization and is either
part of a partitioned system or part of a virtual machine. It mainly provides another OS for
applications. While the guest OS shares resources with the host OS, the two operate independently
of one another. These various operating systems can run at the same time, but the host operating
system must be started initially.

5) Applications: All the applications that one can run on the guest operating system, such as Excel, Word, etc.

3.2) Advantages/Needs/Goals Of Virtualization

● Reduced upfront hardware and continuing operating costs


● Minimized or eliminated downtime
● Increased IT productivity and responsiveness
● Greater business continuity and disaster recovery response
● Simplified data center management
● Faster provisioning of applications and resources
● Improved security
● Cost saving
● Resource optimization

3.3) Challenges of Virtualization

● The host can be a single point of failure: if the host goes down for any reason, one is likely to
lose access to the VMs hosted on it.

● Not everything can be virtualized: some hardware-dependent applications require specific hardware
to be present, for example USB flashing software or a Bluetooth dongle.

● Skilled staff are required for managing a virtualized environment: installation of guest OSs, provisioning,
upgrading, and security control.
3.4) Implementation Levels Of Virtualization
● It is not sufficient today to use just a single software in computing. Today the professionals look to test
their software and program across various platforms. However, there are challenges here because of
varied constraints. This gives rise to the concept of virtualization. Virtualization lets the users create
several platform instances, which could be various applications and operating systems.
● It is not simple to set up virtualization. Your computer runs on an operating system that is configured
on particular hardware. It is not feasible or easy to run a different operating system on the
same hardware.
● To do this, you need a hypervisor. The hypervisor is a bridge between the hardware and the virtual operating
system, which allows smooth functioning.
● Talking of the Implementation levels of virtualization in cloud computing, there are a total of five levels
that are commonly used. Let us now look closely at each of these levels of virtualization
implementation in cloud computing.

3.4.1) Instruction Set Architecture Level


The definitions of the storage resources and the instructions that manipulate data are documented
in the ISA.
ISA virtualization can work through ISA emulation. This is used to run many legacy codes that
were written for a different configuration of hardware. These codes run on any virtual machine
using the ISA. With this, a binary code that originally needed some additional layers to run is now
capable of running on the x86 machines. It can also be tweaked to run on the x64 machine. With
ISA, it is possible to make the virtual machine hardware agnostic.

For the basic emulation, an interpreter is needed, which interprets the source code and then
converts it into a hardware format that can be read. This then allows processing.
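
As a toy illustration of ISA emulation by interpretation, the Python sketch below decodes and executes a made-up three-instruction ISA; the instruction names and register set are invented for the example.

# A toy interpreter for a made-up 3-instruction ISA: each guest instruction is
# decoded and carried out by host code, which is the essence of ISA emulation.
def run(program):
    registers = {"R0": 0, "R1": 0}
    pc = 0                                    # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                      # LOAD reg, constant
            registers[args[0]] = args[1]
        elif op == "ADD":                     # ADD dest, src  (dest = dest + src)
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":                   # PRINT reg
            print(args[0], "=", registers[args[0]])
        pc += 1
    return registers

run([
    ("LOAD", "R0", 5),
    ("LOAD", "R1", 7),
    ("ADD", "R0", "R1"),
    ("PRINT", "R0"),       # prints R0 = 12
])

A real emulator does the same decode-and-execute loop for a genuine instruction set, which is why code written for one hardware architecture can be made to run on another.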
3.4.2) Hardware Abstraction Level
True to its name, the hardware abstraction level (HAL) performs virtualization at the level of the hardware. It
makes use of a hypervisor. At this level, the virtual machine is formed, and it manages the hardware using
the process of virtualization. It allows the virtualization of each of the hardware components, which could
be the input-output devices, the memory, the processor, etc.
3.4.3) Operating System Level
At the level of the operating system, the virtualization model creates an abstraction layer between the
operating system and the applications. This is an isolated container on the operating system and the
physical server, which makes use of the software and hardware. Each of these containers then functions
as a server.

When there are several users, and no one wants to share the hardware, then this is where the
virtualization level is used. Every user will get his virtual environment using a virtual hardware
resource that is dedicated. In this way, there is no question of any conflict.

3.4.4) Library Level Virtualization


Running a full operating system is cumbersome, and in many cases applications use the APIs exported by
user-level libraries rather than direct system calls. These APIs are well documented, which is why the library
virtualization level is preferred in these scenarios. API hooks make it possible, as they control the
communication link from the application to the system.
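
The Python sketch below is only an analogy of library-level API hooking: the application keeps calling the same API, but a user-level hook intercepts the call and changes what the application sees. Real library-level virtualization (for example WINE-style API translation) works on native shared libraries rather than Python functions.

# A simplified analogy of API hooking at the library level.
import os

_original_getcwd = os.getcwd          # keep a reference to the real library call

def hooked_getcwd():
    """Pretend the application is running under a virtual root directory."""
    return "/virtual-root" + _original_getcwd()

os.getcwd = hooked_getcwd             # install the hook at the user/library level

# The "application" below is unchanged; it just uses the normal API.
print("Application sees cwd as:", os.getcwd())

os.getcwd = _original_getcwd          # remove the hook
print("Real cwd is:", os.getcwd())

The key point is that the application is never modified: interception at the library boundary is what gives it a virtualized view of the system.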

3.4.5) Application Level Virtualization


The application-level virtualization is used when there is a desire to virtualize only one
application and is the last of the implementation levels of virtualization in cloud computing. One
does not need to virtualize the entire environment of the platform.

This is generally used when running virtual machines that use high-level languages. The
application sits above the virtualization layer, which in turn runs on the host as an ordinary application
program. It lets high-level language programs, compiled for the virtual machine's application-level
instruction set, run seamlessly.

3.5) Types Of Virtualization


1. Hardware

● It is the most common type of virtualization as it provides advantages of hardware


utilization and application uptime. The basic idea of the technology is to combine many
small physical servers into one large physical server, so that the processor can be used more
effectively and efficiently.

● The operating system that runs on the physical server gets converted into a well-defined
OS that runs on the virtual machine. The hypervisor controls the processor, memory,
and other components, allowing different OSs to run on the same machine without the
need for source-code changes.

● Hardware virtualization is further subdivided into the following types:

1. Full Virtualization – In it, the complete simulation of the actual hardware takes place to
allow software to run an unmodified guest OS.

2. Para Virtualization – In this type of virtualization, the hardware is not fully simulated; instead, the guest
software runs in a modified OS as a separate, isolated system that cooperates with the hypervisor.

3. Partial Virtualization – In this type of hardware virtualization, the software may need
modification to run.

2. Network

● It refers to the management and monitoring of a computer network as a single managerial


entity from a single software-based administrator’s console. It is intended to allow network
optimization of data transfer rates, scalability, reliability, flexibility, and security.

● Network virtualization is specifically useful for networks experiencing a huge, rapid, and
unpredictable increase of usage and improved network productivity and efficiency.

● The ability to run multiple virtual networks, each with a separate control and data plane.
They co-exist together on top of one physical network and can be managed by individual
parties that are potentially confidential to each other.

● Network virtualization provides the ability to create and provision virtual networks—logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security—within days or even weeks.

● Two categories:

1. Internal: Provide network-like functionality to a single system.

2. External: Combine many networks, or parts of networks into a virtual unit.

3. Storage

● In this type of virtualization, multiple network storage resources are present as a single
storage device for easier and more efficient management of these resources.

● It provides various advantages: improved storage management in a
heterogeneous IT environment, easy updates, better availability, reduced downtime,
better storage utilization, and use for backup and recovery.

● there are two types of storage virtualization:

1. Block- It works before the file system exists. It replaces controllers and takes over
at the disk level.
2. File- The server that uses the storage must have software installed on it in order to
enable file-level usage.

4. Desktop

● It provides work convenience and security. As the desktop can be accessed remotely, you are able to
work from any location and on any PC. It provides a lot of flexibility for employees to
work from home or on the go. It also protects confidential data from being lost or stolen by
keeping it safe on central servers.

● It allows the user's OS to be stored remotely on a server in the data center.

● Users who want specific operating systems other than Windows Server will need to have a
virtual desktop.

● Main benefits of desktop virtualization are user mobility, portability, easy management of
software installation, updates and patches.

5. Data

● It lets you easily manipulate data without needing technical details such as how the data is formatted or where it is physically located. It decreases data errors and workload.

● This is the kind of virtualization in which data is collected from various sources and managed in a single place, without exposing technical details such as how the data was collected, stored and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders and users through various cloud services.

● It can be used to perform various kinds of tasks, such as data integration, business integration, service-oriented architecture data services, and searching organizational data.

6. Memory

● It introduces a way to decouple memory from the server to provide a shared, distributed or
networked function. It enhances performance by providing greater memory capacity
without any addition to the main memory. That’s why a portion of the disk drive serves as
an extension of the main memory.

● There are two types of implementation:

1. Application-level integration – Applications running on connected computers directly connect to the memory pool through an API or the file system.
2. Operating System Level Integration – The operating system first connects to the memory pool, and makes that pooled memory available to applications.
7. Application

● Application virtualization helps a user to have remote access to an application from a


server. The server stores all personal information and other characteristics of the
application but can still run on a local workstation through the internet. Example of this
would be a user who needs to run two different versions of the same software.
Technologies that use application virtualization are hosted applications and packaged
applications.

3.6) Load balancing

● Load balancing is the process of distributing workloads across multiple servers. It prevents any single server from getting overloaded and possibly breaking down, improves service availability, and helps prevent downtime. A load balancer routes incoming traffic to multiple servers, which in turn share the workload (a minimal routing sketch follows the list of approaches below).
● Without load balancers, newly spun-up virtual servers would not be able to receive the incoming traffic in a coordinated fashion, if at all. Some virtual servers might even be left handling zero traffic while others become overloaded.
● Load balancing can be divided into three approaches:
1. Centralized approach: a single node is responsible for managing the distribution within
the whole system.
2. Distributed approach: each node independently builds its own load vector by collecting
the load information of other nodes. Decisions are made locally using local load vectors.
This approach is more suitable for widely distributed systems such as cloud computing.

3. Mixed approach: A combination between the two approaches to take advantage of each
approach
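
As a purely illustrative example (the server names, the request model and the "least connections" policy below are assumptions, not part of any particular product), a centralized load balancer can be sketched in a few lines of Python: it keeps a load counter per back-end server and always forwards the next request to the least-loaded one.

    # Minimal sketch of a centralized "least connections" load balancer (illustrative only).
    class LoadBalancer:
        def __init__(self, servers):
            self.load = {server: 0 for server in servers}   # active requests per backend

        def route(self, request):
            server = min(self.load, key=self.load.get)      # pick the least-loaded backend
            self.load[server] += 1                          # a real balancer would now forward the request
            return server

        def finished(self, server):
            self.load[server] -= 1                          # called when a backend completes a request

    lb = LoadBalancer(["vm-1", "vm-2", "vm-3"])
    for i in range(6):
        print(i, "->", lb.route("request-%d" % i))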

METRICS OF LOAD BALANCING

● Scalability is the ability of an algorithm to perform load balancing for a system with any finite
number of nodes. This metric should be improved.

● Resource Utilization is used to check the utilization of resources. It should be optimized for efficient load balancing.

● Performance is used to check the efficiency of the system. This has to be improved at a
reasonable cost, e.g., reduce task response time while keeping acceptable delays.

● Response Time is the amount of time taken to respond by a particular load balancing algorithm in
a distributed system. This parameter should be minimized.
● Overhead Associated determines the amount of overhead involved while implementing a load-balancing algorithm. It is composed of overhead due to movement of tasks, inter-processor communication and inter-process communication. This should be minimized so that a load balancing technique can work efficiently.

● Throughput is the number of tasks whose execution has been completed per unit time. It should be high (a small computation sketch follows this list).

● Fault tolerance is the ability of an algorithm to perform uniform load balancing even in the case of node failure.

● Migration time is the time needed to migrate a job or resource from one node to another. It should be minimized.
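
As a small worked example (the task records below are invented, not measured data), throughput and average response time can be computed directly from a log of completed tasks:

    # Hypothetical completion log: (task_id, submit_time, finish_time) in seconds
    completed = [("t1", 0.0, 2.0), ("t2", 0.5, 2.5), ("t3", 1.0, 4.0)]

    window = max(f for _, _, f in completed) - min(s for _, s, _ in completed)
    throughput = len(completed) / window                                  # tasks completed per second
    avg_response = sum(f - s for _, s, f in completed) / len(completed)   # mean response time

    print("throughput = %.2f tasks/s, average response time = %.2f s" % (throughput, avg_response))

With these numbers the sketch prints a throughput of 0.75 tasks/s and an average response time of about 2.33 s.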

3.7) Hypervisor

● A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the
resources on various pieces of hardware and provides partitioning, isolation or abstraction

● This technique allows multiple guest operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes also called a virtual machine manager (VMM).

● A hypervisor allows a single host computer to support multiple virtual machines (VMs) by sharing
resources including memory and processing.

● Hypervisors provide greater IT versatility because the guest VMs are independent of the host hardware, which is one of their major benefits. This implies that VMs can be quickly moved between servers. Hypervisors also help reduce the space, energy and maintenance requirements of the servers.
Benefits of hypervisors

● Speed: The hypervisors allow virtual machines to be built instantly unlike bare-metal servers. This
makes provisioning resources for complex workloads much simpler.

● Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical
machine often allow for more effective use of a single physical server.

● Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software no longer relies on particular hardware devices or drivers. Bare-metal hypervisors enable operating systems and their related applications to operate on a variety of hardware types.

● Portability: Multiple operating systems can run on the same physical server thanks to hypervisors
(host machine). The hypervisor's virtual machines are portable because they are separate from the
physical computer.

3.7.1)Reference model of Hypervisor

● DISPATCHER:The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.

● ALLOCATOR: The allocator is responsible for deciding the system resources to be provided to the
virtual machine instance.It means whenever a virtual machine tries to execute an instruction that
results in changing the machine resources associated with the virtual machine, the allocator is
invoked by the dispatcher.
● INTERPRETER: The interpreter module consists of interpreter routines.These are executed,
whenever a virtual machine executes a privileged instruction.
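
A toy sketch of this reference model is given below; it is purely conceptual (the instruction names, the VM representation and the memory figures are invented) and only shows the dispatcher rerouting a privileged instruction either to the allocator or to an interpreter routine.

    # Conceptual sketch of the dispatcher / allocator / interpreter reference model.
    class Hypervisor:
        def __init__(self, total_memory_mb):
            self.free_memory = total_memory_mb

        def dispatch(self, vm, instruction):
            # entry point of the monitor: reroute the instruction to the proper module
            if instruction["op"] == "alloc_memory":
                return self.allocate(vm, instruction["mb"])
            return self.interpret(vm, instruction)

        def allocate(self, vm, mb):
            # allocator: decide which system resources the VM instance receives
            if mb <= self.free_memory:
                self.free_memory -= mb
                vm["memory"] += mb
                return "granted"
            return "denied"

        def interpret(self, vm, instruction):
            # interpreter routine: emulate a privileged instruction on behalf of the VM
            return "emulated %s for %s" % (instruction["op"], vm["name"])

    hv = Hypervisor(total_memory_mb=4096)
    vm = {"name": "vm-1", "memory": 0}
    print(hv.dispatch(vm, {"op": "alloc_memory", "mb": 1024}))   # handled by the allocator
    print(hv.dispatch(vm, {"op": "disable_interrupts"}))         # handled by the interpreter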

3.7.2)Types Of Hypervisor

Type 1: Bare Metal Hypervisor

● A type 1 hypervisor functions as a lightweight operating system that runs directly on the host's hardware. Because such hypervisors are isolated from the attack-prone host operating system, they are extremely stable.

● They are usually faster and more powerful than hosted hypervisors. The majority of enterprise
businesses opt for bare-metal hypervisors for their data center computing requirements.

● VMware ESXi, Citrix XenServer and Microsoft Hyper-V hypervisor , Xen

● Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.

● Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to perform their operation, to manage the different VMs and to control the host hardware resources.

Xen Hypervisor
● It is an open-source type 1 hypervisor that allows multiple virtual machines to run on a single host machine.
● Characteristics and features of Xen
1. Wide adoption and distribution
2. Open source and flexible
3. Support multiple guest operating systems
4. High scalability and performances
5. Small size
6. Provide security

Xen Architecture:

Physical hardware: the bottom-most layer, consisting of the actual hardware devices such as the CPU, RAM and storage enclosed in the bare-metal server.

Xen hypervisor: runs directly on the hardware and is responsible for managing the CPU, memory and other hardware components.

Domain 0-The guest OS, which has control ability, is called Domain 0, and the others are called Domain
U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any file system
drivers being available. Domain 0 is designed to access hardware directly and manage devices.
Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest
domains (the Domain U domains).

Guest OS (Domain U): a created virtual machine running its own OS and applications.

2) Type 2: Hosted Hypervisor

● The type 2 hypervisor is a software layer or framework that runs on a traditional operating
system. It operates by separating the guest and host operating systems. The host operating system
schedules VM services, which are then executed on the hardware.

● Individual users who wish to operate multiple operating systems on a personal computer should use a type 2 hypervisor. This type of hypervisor also includes virtual machines with it.
● Such hypervisors do not run directly on the underlying hardware; rather, they run as an application on a host system (physical machine), i.e., as software installed on an operating system. The hypervisor asks the operating system to make hardware calls on its behalf.

● Example :VMware Player or Parallels Desktop.

● The type-2 hypervisor is very useful for engineers, security analysts(for checking malware, or
malicious source code and newly developed applications).

● Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. They usually come with additional useful features for guest machines, and such tools enhance coordination between the host machine and the guest machine.

● Cons: Here there is no direct access to the physical hardware resources, so these hypervisors lag behind type-1 hypervisors in performance. There are also potential security risks: if an attacker gains access to the host operating system, the guest operating systems can be compromised as well.

Comparison between the two hypervisor types


3) KVM Hypervisor

● KVM is a unique and popular open-source hypervisor built into Linux distributions that allows
creation of VMs on the Linux OS. It has characteristics of both Type 1 and Type 2 hypervisors.
● Since KVM is a Linux kernel module, it converts the host OS itself into a bare-metal Type 1 hypervisor. At the same time, it is part of a running Linux system whose code interacts with other applications and can compete with other kernel modules for resources, which gives the installation some characteristics of a Type 2 hypervisor.

● KVM offers all the hardware, compute, and storage support of Type 1 hypervisors, including live
migration of VMs, scalability, scheduling, and low latency. VMs created with KVM are
empirically known to be secure.
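
On a Linux host, one quick (though not exhaustive) way to check whether KVM can be used is to look for the /dev/kvm device node, which the kvm kernel module exposes when hardware-assisted virtualization is available; the small sketch below performs only that check.

    import os

    # /dev/kvm is created by the kvm kernel module when hardware virtualization
    # (Intel VT-x / AMD-V) is enabled and the module is loaded.
    if os.path.exists("/dev/kvm"):
        print("KVM is available: VMs can use hardware-assisted virtualization")
    else:
        print("KVM not available: check BIOS settings and that the kvm module is loaded")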

3.8) Hardware Level Virtualization

1) Full Virtualization

● In full virtualization, the underlying hardware is entirely simulated. The guest software does not need to be modified to run its applications. The simulated hardware architecture gives the guest programs an environment that is very similar to a real server operating system.

2) Para-Virtualization

● In para-virtualization, the hardware is not simulated; instead, the guest program runs in its own isolated domain. An API that requires changes to the guest operating system is used. Specific commands, called hypercalls, are sent from the guest operating system to the hypervisor; these hypercalls are used, for example, to manage memory.
3. Emulation Virtualization

● In this type of virtualization, the virtual machine simulates the hardware, thus becoming independent of it. The guest operating system is not required to perform any modifications.

3.9) Virtualization Of CPU ,Memory and I/O Devices


Any computer system requires a CPU, memory and I/O devices, and running virtual machines is no different. The CPU, memory and I/O devices are therefore virtualized so that they appear as real hardware to the guest OS.

3.9.1) Memory Virtualization


● In a virtual execution environment, virtual memory virtualization involves sharing the physical system memory in RAM and dynamically allocating it to the physical memory of the VMs. VMs are provided with a contiguous address space, which may not be contiguous in the real physical memory.

● The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of the VMs, but the guest OS cannot directly access the actual machine memory. Each VM maintains its own page tables that map virtual page numbers to physical page numbers as assigned by the hypervisor.

● Guest virtual address (GVA) – the virtual address used by the guest OS / virtual machine.

● Guest physical address (GPA) – the physical address as seen by the guest OS.

● Host physical address (HPA) – the actual physical address of the memory, which is not virtual.

● Memory virtualization is the process of mapping a GVA to a GPA and then to an HPA in order to fetch the actual data (a toy two-stage translation sketch is given at the end of this subsection).

● Two approaches used

1) Software – used by the hypervisor to manage the translation, known as the SHADOW PAGE TABLE approach.

● Shadow page tables are used by the hypervisor to keep track of the state in which the guest "thinks" its page tables should be. The guest cannot be allowed access to the hardware page tables because then it would essentially have control of the machine. So, the hypervisor keeps the "real" mappings (guest virtual -> host physical) in the hardware when the relevant guest is executing, and keeps a representation of the page tables that the guest thinks it is using "in the shadows".

● Since each page table of the guest OS has a separate corresponding page table in the VMM, the VMM page table is called the shadow page table. The physical memory addresses are then translated to machine addresses using another set of page tables defined by the hypervisor.
2) Hardware- NESTED PAGE TABLE (used by AMD ) and EXTENDED PAGE TABLE
(Intel)

● Nested page tables add another layer of indirection to virtual memory. It provides
hardware assistance to the two-stage address translation in a virtual execution environment
by using a technology called nested paging

● Nested paging implements some memory management in hardware, which can greatly
accelerate hardware virtualization

● Nested paging eliminates the overhead caused by VM exits and page table accesses. In
essence, with nested page tables the guest can handle paging without intervention from the
hypervisor. Nested paging thus significantly improves virtualization performance.

● In the software approach, when the guest OS changes a virtual-memory-to-physical-memory mapping, the VMM must update the shadow page tables to enable a direct lookup; nested paging removes this overhead, along with that of VM exits and page table accesses.

● Virtual memory virtualization is similar to the virtual memory support provided by


modern operating systems.
● In a traditional execution environment, the operating system maintains mappings of virtual memory to machine memory using page tables; this is a one-stage mapping from virtual memory to machine memory. A memory management unit (MMU) and a translation lookaside buffer (TLB) are used to optimize virtual memory performance.

● However, in a virtual execution environment, virtual memory virtualization involves


sharing the physical system memory in RAM and dynamically allocating it to the physical
memory of the VMs.

● Benefits to use memory virtualization:

1. Higher memory utilization by sharing contents and consolidating more virtual machines
on a physical host.

2. Ensuring some memory space exists before halting services until memory frees up.

3. Access to more memory than the chassis can physically allow.

4. Advanced server virtualization functions, like live migrations.
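
The toy sketch below (the page numbers and table contents are invented for illustration) shows the two-stage GVA -> GPA -> HPA translation described above, and how a shadow-style table caches the combined GVA -> HPA mapping so that a lookup needs only one step.

    # Toy two-stage address translation: guest virtual -> guest physical -> host physical.
    guest_page_table = {0: 5, 1: 7}    # GVA page -> GPA page (maintained by the guest OS)
    host_page_table  = {5: 42, 7: 99}  # GPA page -> HPA page (maintained by the hypervisor)

    def translate(gva_page):
        gpa_page = guest_page_table[gva_page]   # first stage: the guest's view
        hpa_page = host_page_table[gpa_page]    # second stage: the hypervisor's view
        return hpa_page

    # A shadow-style table caches the combined mapping so hardware can do a single lookup.
    shadow_page_table = {gva: translate(gva) for gva in guest_page_table}

    print(translate(0))        # 42
    print(shadow_page_table)   # {0: 42, 1: 99}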

3.9.2) Input / Output Virtualization


● I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware.

● How it works:

1. The guest OS looks for I/O devices.
2. The hypervisor reports all devices and the available disk space.
3. The guest OS loads the device drivers.

I/O MMU : virtualizes I/O the same way an MMU virtualizes memory. It maps device memory
addresses to real physical addresses. It can keep different guests' DMA out of each other's way.

Device pass through allows both the device and the guest OS to be unaware that any address translation
may be going on

Device isolation lets a device assigned to a VM directly access its memory without interfering with other
guests.
Interrupt remapping is necessary so that the right interrupt goes to the right VM.

3.9.2.1) BENEFITS
1. Abstracting resources provides more flexibility through faster provisioning and increased
utilization of the underlying physical infrastructure.

2. An IT administrator is able to spin up a large number of VMs on an individual server, which


reduces the need for new hardware. Thousands of VMs could be deployed in a larger server
cluster.

3. independently adding or removing servers from the cluster and running multiple operating
systems (OSes) on a host machine.

4. Attach a single cable interconnect to support networking and storage I/O

5. Reduce the cost of data center cooling, heating and power

6. Scale for rapid redeployment as I/O profiles change

3.9.2.2) Three ways to implement I/O virtualization:


1. Full device emulation: the guest OS is completely isolated by the virtual machine from the virtualization layer and hardware. The hypervisor presents the physical I/O devices as "real" devices to the virtual machine and captures all the hardware instructions issued to these virtual devices so that they can be executed on the actual physical I/O devices.

● Device emulation for I/O virtualization implemented inside the middle layer that maps
real I/O devices into the virtual devices for the guest device driver to use. (Full device
emulation)

● All the functions of a device or bus infrastructure, such as device enumeration,


identification, interrupts, and DMA, are replicated in software.

● This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped by the VMM, which interacts with the I/O devices.
2. Para-virtualization:guest OS is not completely isolated but it is partially isolated by the virtual
machine from the virtualization layer and hardware. VMware and Xen are some examples of
paravirtualization.

● It is also known as the split driver model, consisting of a frontend driver and a backend driver that interact with each other via a block of shared memory (a toy sketch follows this list).

● The frontend driver manages the I/O requests of the guest OSes and the backend driver is
responsible for managing the real I/O devices and multiplexing the I/O data of different
VMs.

● Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with a higher CPU overhead.

3. Direct I/O: Direct I/O virtualization lets the VM access the physical I/O devices directly. It is generally used for networking in VMs.
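
The sketch below is a deliberately simplified model of the split-driver idea (the shared ring, VM names and the "physical disk" are all stand-ins invented for illustration): a per-VM frontend places I/O requests into a shared queue, and a single backend drains the queue and multiplexes the requests onto the real device.

    from collections import deque

    shared_ring = deque()      # stands in for the shared-memory ring between the drivers
    physical_disk = []         # stands in for the real I/O device

    class FrontendDriver:
        def __init__(self, vm_name):
            self.vm_name = vm_name

        def write(self, data):
            # guest-side driver: queue the request instead of touching hardware
            shared_ring.append((self.vm_name, data))

    class BackendDriver:
        def process_all(self):
            # host-side driver: multiplex queued requests onto the physical device
            while shared_ring:
                vm_name, data = shared_ring.popleft()
                physical_disk.append("%s:%s" % (vm_name, data))

    FrontendDriver("vm-1").write("block-A")
    FrontendDriver("vm-2").write("block-B")
    BackendDriver().process_all()
    print(physical_disk)       # ['vm-1:block-A', 'vm-2:block-B']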
3.9.2.3) Difference between Full Virtualization and Para Virtualization

3.9.3) CPU Virtualization


CPU virtualization is a cloud-computing technique in which a single physical CPU is made to act as multiple CPUs, so that several virtual machines can run on it as if they were separate machines working together.

Types Of CPU Virtualization

1. Software-based: the application code is executed directly on the processor, while privileged code is first translated, and the translated code is then executed on the processor. With this approach, guest programs that contain privileged code can still run smoothly and fast.

2. Hardware-Assisted CPU Virtualization: here, the guest code uses a different mode of execution known as guest mode, and the guest code mainly runs in guest mode. The best part of hardware-assisted CPU virtualization is that no translation is required, because the hardware provides the assistance.
3. Virtualization and Processor-Specific Behavior: although the software virtualizes the CPU, the virtual machine still detects the specific model of the processor on which the system runs. Processor models differ in the CPU features they offer.

4. Performance Implications of CPU Virtualization: CPU virtualization adds an amount of overhead that depends on the workload and the type of virtualization used. Applications that depend mainly on CPU power spend most of their time executing instructions, so any extra work the virtualization layer performs on those instructions shows up directly as reduced performance for such applications.

Benefits

Using CPU virtualization, overall performance and efficiency are improved to a great extent because several virtual machines work on a single CPU, sharing its resources as though multiple processors were being used at the same time. This saves cost and money.

As CPU Virtualization uses virtual machines to work on separate operating systems on a single sharing
system, security is also maintained by it. The machines are also kept separate from each other.

The hardware requirement is lower and fewer physical machines are used, so the cost is low and time is saved. CPU virtualization also offers fast deployment options so that services reach the client without any hassle.

3.10) Virtual Cluster And Resource Management


● The dictionary meaning of cluster is an aggregation or group of similar things. A cluster allows you to effectively manage resources and carry out various operations on them.

● One can group virtual resources into their respective clusters and manage them effectively. Virtual clusters allow aggregating virtual resources for effective operation and management.

● Virtual clusters are built with VMs installed on distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks.
PROPERTIES OF VIRTUAL CLUSTER

● The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with
different OSes can be deployed on the same physical node.

● A VM runs with a guest OS, which is often different from the host OS, that manages the resources
in the physical machine, where the VM is implemented.

● The purpose of using VMs is to consolidate multiple functionalities on the same server. This will
greatly enhance server utilization and application flexibility

● The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way
an overlay network varies in size in a peer-to-peer (P2P) network.

● VMs can be colonized (replicated) in multiple servers for the purpose of promoting distributed
parallelism, fault tolerance, and disaster recovery.

● The failure of any physical nodes may disable some VMs installed on the failing nodes. But the
failure of VMs will not pull down the host system.

● There can be multiple clusters in a virtual environment.

● The size of each cluster can vary.

● Various vendors provide several features that work only in virtual clusters and not on independent hypervisors or virtual machines. Cluster high availability is one such feature that works only on clusters.

3.11) Virtualization For Data Center Automation


● What is DATA CENTER: a data center is a physical facility that organizations use to house
their critical applications and data. A data center's design is based on a network of computing and
storage resources that enable the delivery of shared applications and data.

● DATA CENTER AUTOMATION: means that huge volumes of hardware, software, and
database resources in these data centers can be allocated dynamically to millions of Internet users
simultaneously, with guaranteed QoS and cost-effectiveness.

● Google, Yahoo!, Amazon, Microsoft, HP, Apple, and IBM are all in the game. All these companies have invested billions of dollars in data-center construction and automation.

● Virtualization development highlights high availability (HA), backup services, workload


balancing, and further increases in client bases.

● Server Consolidation in Data Centers: Server consolidation is the process of migrating network services and applications from multiple computers to a single computer, which can include moving from multiple physical computers to multiple virtual computers on one host computer.

● Organizations consolidate computers for several reasons, such as minimizing power consumption, simplifying administration duties, or reducing overall cost.
● In data centers, a large number of heterogeneous workloads can run on servers at various times. These workloads are of two types:

Chatty workloads may burst at some point and return to a silent state at another point. Example: a web video service.

Noninteractive workloads do not require people's effort to make progress after they are submitted. Example: high-performance computing.

Why we need data center automation

● At various stages, the resource requirements of these workloads are dramatically different. To guarantee that a workload will always be able to cope with all demand levels, the workload is statically allocated enough resources so that peak demand is satisfied.

● As a result, most servers in data centers are underutilized, and a large amount of the hardware, space, power, and management cost of these servers is wasted.

● Server consolidation is an approach to improve the low utility ratio of hardware resources by
reducing the number of physical servers.

Advantages of the server virtualization

● Consolidation enhances hardware utilization. Many underutilized servers are consolidated into
fewer servers to enhance resource utilization. Consolidation also facilitates backup services and
disaster recovery.

● Enables more provisioning and deployment of resources. In a virtual environment, the images of
the guest OSes and their applications are readily cloned and reused.

● The total cost of ownership is reduced. Server virtualization causes deferred purchases of new
servers, a smaller data-center footprint, lower maintenance costs, and lower power, cooling, and
cabling requirements.

● improves availability and business continuity. The crash of a guest OS has no effect on the host
OS or any other guest OS. It becomes easier to transfer a VM from one server to another, because
virtual servers are unaware of the underlying hardware.
Unit 4: Cloud Infrastructure And Cloud Resource Management

4.1 Architectural Design Of Compute And Storage Clouds


● Major design goals of cloud computing platforms are scalability, virtualization, efficiency and reliability. Cloud management receives the user request, finds the correct resources, and then calls the provisioning service, which invokes resources in the cloud. The platform needs to support both physical and virtual machines, and it should be built to serve many users simultaneously, so multitasking is necessary in the cloud infrastructure.

● Basic performance metrics are system throughput and efficiency, multitasking scalability, system availability, security index, and cost effectiveness.

4.2 Layered Cloud Architecture Development

● It is developed in three layers: infrastructure, platform, and application. These layers are implemented on virtualization and standardization of the hardware and software resources provisioned in the cloud.

● The services of public, private and hybrid clouds are conveyed to users through networking support over the Internet. The infrastructure layer is deployed first, to support IaaS-type services.

● This IaaS layer is the foundation on which the platform layer of the cloud is built to support PaaS, providing virtualized compute, storage and network resources.

● The platform layer is for general-purpose and repeated use of a collection of software resources. The application layer is formed with the collection of all the software modules needed for SaaS applications.

4.3 Design Challenges


Challenge 1—Service Availability and Data Lock-in Problem
Challenge 2—Data Privacy and Security Concerns
Challenge 3—Unpredictable Performance and Bottlenecks
Challenge 4—Distributed Storage and Widespread Software Bugs
Challenge 5—Cloud Scalability, Interoperability, and Standardization
Challenge 6—Software Licensing and Reputation Sharing

4.4 Cloud Interoperability And Portability


● Interoperability is the ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.
● Cloud interoperability is the capacity or extent to which one cloud service can connect with another by exchanging data according to an agreed method so as to obtain predictable results.
● In other words, it is the ability of one cloud service to interact with other cloud services by exchanging information according to a prescribed method and obtaining predictable results.

● Cloud data portability : It is the capability of moving information from one cloud service to
another and so on without expecting to re-enter the data.
● Cloud application portability: It is the capability of moving an application from one cloud
service to another or between a client’s environment and a cloud service. The application may
require recompiling or relinking for the target cloud service, but it should not be necessary to make
significant changes to the application code.
4.4.1) Scenarios where Portability is needed

a) The customer switches cloud provider.

b) The customer uses cloud services from multiple providers.

c) The customer links one cloud service to another service of a different cloud.

4.4.2)Challenges faced in Cloud Portability and Interoperability :

● If we move an application to another cloud, then, naturally, the data is also moved, and for some businesses data is very crucial. Unfortunately, most cloud service providers charge a small amount of money to get the data into the cloud.
● The degree of mobility of data can also act as an obstacle. When moving data from one cloud to another, the capability of moving the workload from one host to another should also be assessed.
● As data is highly important to a business, the safety of customers' data should be ensured. Varying software stacks and multiple APIs across several dimensions make portability more challenging.

4.5 Inter Cloud Resource Management


● Inter-Cloud computing has been formally defined as: a cloud model that, for the purpose of guaranteeing service quality, such as the performance and availability of each service, allows on-demand reassignment of resources and transfer of workload through an interworking of cloud systems of different cloud providers, based on coordination of each consumer's requirements for service quality with each provider's SLA and the use of standard interfaces.

● An Inter-Cloud allows for the dynamic coordination and distribution of load among a set of cloud
data centers
4.5.1) Type Of Inter Cloud

1) A Federation is achieved when a set of cloud providers voluntarily interconnect their


infrastructures to allow sharing of resources among each other . As identified, this type of
Inter-Cloud is mostly viable for governmental clouds or private cloud portfolios.

a) Centralized – in every instance of this group of architectures, there is a central entity that
either performs or facilitates resource allocation. Usually, this central entity acts as a
repository where available cloud resources are registered but may also have other
responsibilities like acting as a marketplace for resources.

b) Peer-to-Peer – in the architectures from this group, clouds communicate and negotiate
directly with each other without mediators.

2) The term Multi-Cloud denotes the usage of multiple, independent clouds by a client or a service.
Unlike a Federation, a Multi-Cloud environment does not imply volunteer interconnection and
sharing of providers’ infrastructures.

a) Services – application provisioning is carried out by a service that can be hosted either
externally or in-house by the cloud clients. Most such services include broker components
in themselves. Typically, application developers specify an SLA or a set of provisioning
rules, and the service performs the deployment and execution in the background, in a way
respecting these predefined attributes
b) Libraries – often, custom application brokers that directly take care of provisioning and
scheduling application components across clouds are needed. Typically, such approaches
make use of Inter-Cloud libraries that facilitate the usage of multiple clouds in a uniform
way.

4.5.2) Six Layer Stack

● Intercloud, or 'cloud of clouds', refers to a theoretical model for cloud computing services that combines many different individual clouds into one seamless mass in terms of on-demand operations.

● The intercloud would simply make sure that a cloud could use resources beyond its own reach, taking advantage of pre-existing contracts with other cloud providers.

● A single cloud cannot always fulfill the requests or provide the required services. When this happens, two or more clouds have to communicate with each other, or an intermediary comes into play and federates the resources of two or more clouds.

● Six layers of cloud services Software as a Service(SaaS) ,Platform as a Service(PaaS)


,Infrastructure as a Service(IaaS) , Hardware / Virtualization Cloud Services(HaaS) ,Network
Cloud Services (NaaS) ,Location Cloud Services(LaaS)
● The top layer offers SaaS which provides cloud applications.
● PaaS sits on top of IaaS infrastructure.
● The bottom three layers are more related to physical requirements.
● The bottommost layer provides Hardware as a Service (HaaS).
● NaaS is used for interconnecting all the hardware components.
● Location as a Service (LaaS), provides security to all the physical hardware and network
resources.
● The cloud infrastructure layer can be further subdivided as
○ Data as a Service (DaaS)
○ Communication as a Service (CaaS)
○ Infrastructure as a Service(IaaS)
● Cloud players are divided into three classes:
○ Cloud service providers and IT administrators
○ Software developers or vendors
○ End users or business users.

4.6 Resource Provisioning And Resource Provisioning Method

● Providers provision compute resources for cloud services by signing SLAs with the end users. The SLA must commit sufficient resources, such as CPU, memory and bandwidth, that the user can use for a preset period.

● Under-provisioning of resources will lead to broken SLAs and penalties, while over-provisioning of resources will lead to resource underutilization and a decrease in revenue for the provider.
● There are three types of resource provisioning methods:

1) Demand-Driven Method: provides static resources and has been used in grid computing for many years. This method adds or removes computing instances based on the current utilization level of the allocated resources. In general, when a resource has surpassed a threshold for a certain amount of time, the scheme increases that resource based on demand. This method is easy to implement, but the scheme does not work out right if the workload changes abruptly (a minimal threshold-based sketch follows this list).
2) Event Driven Method:This scheme adds or removes machine instances based on a
specific time event. The scheme works better for seasonal or predicted events .During these
events, the number of users grows before the event period and then decreases during the
event period.
3) Popularity Driven Method:In this method, the Internet searches for popularity of certain
applications and creates the instances by popularity demand. The scheme anticipates
increased traffic with popularity.
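
A minimal sketch of the demand-driven scheme is shown below; the threshold values and utilization figures are invented, and a real auto-scaler would also consider cooldown periods and SLA constraints.

    # Hypothetical demand-driven (threshold-based) provisioning sketch.
    UPPER, LOWER = 0.80, 0.30            # example utilization thresholds

    def rescale(instances, avg_utilization):
        if avg_utilization > UPPER:
            return instances + 1         # add a machine instance
        if avg_utilization < LOWER and instances > 1:
            return instances - 1         # remove a machine instance
        return instances

    instances = 2
    for util in [0.85, 0.90, 0.60, 0.20]:   # observed utilization over successive periods
        instances = rescale(instances, util)
        print("utilization=%.2f -> instances=%d" % (util, instances))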

4.7 Market Oriented Resource Management


● Market-oriented resource management is necessary to regulate the supply and demand of cloud resources at market equilibrium, provide feedback in terms of economic incentives for both cloud consumers and providers, and promote QoS-based resource allocation mechanisms that differentiate service requests based on their utility.

There are basically four main entities involved (a simplified admission-control and dispatch sketch is given after the list):


● Users/Brokers: Users or brokers acting on their behalf submit service requests from anywhere in
the world to the Data Center and Cloud to be processed.
● SLA Resource Allocator: The SLA Resource Allocator acts as the interface between the Data
Center/Cloud service provider and external users/brokers. It requires the interaction of the
following mechanisms to support SLA-oriented resource management
● Service Request Examiner and Admission Control: When a service request is first submitted,
the Service Request Examiner and Admission Control mechanism interprets the submitted request
for QoS requirements before determining whether to accept or reject the request. Thus, it ensures
that there is no overloading of resources whereby many service requests cannot be fulfilled
successfully due to limited resources available. It also needs the latest status information regarding
resource availability (from VM Monitor mechanism) and workload processing (from Service
Request Monitor mechanism) in order to make resource allocation decisions effectively. Then, it
assigns requests to VMs and determines resource entitlements for allocated VMs.
● VMs: Multiple VMs can be started and stopped dynamically on a single physical machine to meet
accepted service requests, hence providing maximum flexibility to configure various partitions of
resources on the same physical machine to different specific requirements of service requests.
● In addition, multiple VMs can concurrently run applications based on different operating system
environments on a single physical machine since every VM is completely isolated from one
another on the same physical machine.
● Physical Machines: The Data Center comprises multiple computing servers that provide resources
to meet service demands.
● Pricing: The Pricing mechanism decides how service requests are charged. For instance, requests
can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing) or
availability of resources (supply/demand). Pricing serves as a basis for managing the supply and
demand of computing resources within the Data Center and facilitates in prioritizing resource
allocations effectively.
● Accounting: The Accounting mechanism maintains the actual usage of resources by requests so
that the final cost can be computed and charged to the users. In addition, the maintained historical
usage information can be utilized by the Service Request Examiner and Admission Control
mechanism to improve resource allocation decisions.
● Dispatcher: The Dispatcher mechanism starts the execution of accepted service requests on
allocated VMs.
● Service Request Monitor: The Service Request Monitor mechanism keeps track of the execution
progress of service requests.
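
The following sketch (the VM capacities and request sizes are invented) shows how the admission-control check and the dispatcher could cooperate: a request is accepted and dispatched only if some VM has enough spare capacity; otherwise it is rejected so that already-accepted SLAs are not put at risk.

    # Simplified admission control and dispatch; all numbers are illustrative only.
    vms = {"vm-1": 4, "vm-2": 2}          # spare CPU cores per VM

    def admit_and_dispatch(request_cores):
        # admission control: find a VM that can satisfy the request's resource needs
        for vm, spare in vms.items():
            if spare >= request_cores:
                vms[vm] -= request_cores  # record the resource entitlement for the request
                return "accepted on " + vm
        return "rejected (would overload available resources)"

    print(admit_and_dispatch(3))   # accepted on vm-1
    print(admit_and_dispatch(2))   # accepted on vm-2
    print(admit_and_dispatch(2))   # rejected (would overload available resources)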

4.8 Global Cloud Exchange And Market

● The market directory allows participants to locate providers or consumers with the right offers.
● Auctioneers periodically clear bids and asks received from market participants.
● The banking system ensures that financial transactions pertaining to agreements between participants are carried out.
● Brokers perform the same function in such a market as they do in real-world markets: they mediate
between consumers and providers by buying capacity from the provider and sub-leasing these to
the consumers. A broker can accept requests from many users who have a choice of submitting
their requirements to different brokers.
● Consumers, brokers and providers are bound to their requirements and related compensations
through SLAs. An SLA specifies the details of the service to be provided in terms of metrics
agreed upon by all parties, and penalties for meeting and violating the expectations, respectively.
● Pricing can be either fixed or variable depending on the market conditions.
● An admission-control mechanism at a provider’s end selects the auctions to participate in or the
brokers to negotiate with, based on an initial estimate of the utility.
● The negotiation process proceeds until an SLA is formed or the participants decide to break off.
● These mechanisms interface with the resource management systems of the provider in order to
guarantee the allocation being offered or negotiated can be reclaimed, so that SLA violations do
not occur.
● The resource management system also provides functionalities such as advance reservations that
enable guaranteed provisioning of resource capacity.
● Brokers gain their utility through the difference between the price paid by the consumers for
gaining resource shares and that paid to the providers for leasing their resources. Therefore, a
broker has to choose those users whose applications can provide it maximum utility. A broker
interacts with resource providers and other brokers to gain or to trade resource shares. A broker is
equipped with a negotiation module that is informed by the current conditions of the resources and
the current demand to make its decisions.
● Consumers have their own utility functions that cover factors such as deadlines, fidelity of results,
and turnaround time of applications. They are also constrained by the amount of resources that
they can request at any time, usually by a limited budget. Consumers also have their own limited
IT infrastructure that is generally not completely exposed to the Internet. Therefore, a consumer
participates in the utility market through a resource management proxy that selects a set of brokers
based on their offerings. He then forms SLAs with the brokers that bind the latter to provide the
guaranteed resources.
● The enterprise consumer then deploys his own environment on the leased resources or uses the
provider’s interfaces in order to scale his applications.

4.9 Emerging Cloud Management Standard


Standards bodies are working to address management interoperability for cloud systems. Technologies like cloud computing and virtualization have been embraced by enterprise IT managers seeking to better deliver services to their customers, lower IT costs and improve operational efficiencies. Key working groups include:
a) Cloud Management Working Group (CMWG) - Models the management of cloud services and
the operations and attributes of the cloud service lifecycle through its work on the Cloud
Infrastructure Management Interface (CIMI).
b) Cloud Auditing Data Federation Working Group (CADF) - Defines the CADF standard, a full
event model anyone can use to fill in the essential data needed to certify, self-manage and
self-audit application security in cloud environments.
c) Software Entitlement Working Group (SEWG) - Focuses on the interoperability with which
software inventory and product usage are expressed, allowing the industry to better manage
licensed software products and product usage.
d) Open Virtualization Working Group (OVF) - Produces the OVF standard, which provides the
industry with a standard packaging format for software solutions based on virtual systems.

4.9.1) open cloud consortium


● The OCC (formerly the Open Cloud Consortium) is a non-profit venture which provides cloud computing and data commons resources to support scientific, environmental, medical and health care research.
● OCC manages and operates resources including the Open Science Data Cloud (OSDC),
● supports the development of standards for cloud computing and frameworks for interoperating
between clouds
● supports the development of benchmarks for cloud computing; supports open-source software for cloud computing
● manages a testbed for cloud computing called the Open Cloud Testbed; sponsors workshops and other events related to cloud computing.
● It helps in transporting large databases and balancing data management and data analysis.
● It focuses on large data clouds.

4.9.2) Open Virtualization Format


● Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual
appliances or, more generally, software to be run in virtual machines.
● The standard describes an open, secure, portable, efficient and extensible format for the packaging
and distribution of software to be run in virtual machines.

4.9.3) Open Virtualization Format (OVF) Features


● The OVF standard is not tied to any particular hypervisor or instruction set architecture. The unit
of packaging and distribution is a so-called OVF Package which may contain one or more virtual
systems each of which can be deployed to a virtual machine. Some of the important features of
OVF include:
● Support for Content Verification: OVF supports integrity checking and the verification of content
depending on the industry-standard public key infrastructure. It also provides a strategy for
management and software licensing.
● Validation Support: While installing the virtual machine life cycle management process, OVF
supports the validation of every single virtual machine and the complete package. Detailed
user-readable descriptive information is also provided with every package.
● Support for Single and Multiple Virtual Machine (VM) Configurations: OVF supports both
standard single VM packages and complex multi tier package services that include multiple
interdependent VMs.
● Extensibility: OVF is designed to be extensible, and can support new technological advancements.
● Enables Portable Packaging: Because the OVF is platform-independent, it allows for
platform-specific enhancements.
● Vendor and Platform Independence: OVF is independent of a specific host platform, virtualization
platform, or guest operating system.
Unit 5: Security
5.1 SECURITY OVERVIEW
● Secure cloud computing encompasses three core capabilities:
Confidentiality: the ability to keep information secret from people who should not have access to it.
Integrity: the assurance that systems operate as they are intended to function and produce outputs that are not unexpected or misleading.
Availability: maintaining service uptime for cloud infrastructure and cloud-based services, which includes preventing denial-of-service (DoS) attacks.
● Cloud security is the protection of data stored online via cloud computing platforms from theft, leakage and deletion.
● It refers to an array of policies, technologies, procedures, services and solutions designed to support safe functionality when building, deploying and managing cloud-based applications and associated data.
● It covers both physical and logical security issues across all the different service models of software, platform and infrastructure. It also covers how the services are delivered in the public, private, hybrid and community delivery models.
● These six areas are:
(1) security of data at rest
(2) security of data in transit
(3) authentication of users/applications/ processes
(4) robust separation between data belonging to different customers
(5) cloud legal and regulatory issues
(6) incident response.
● The cloud model is important in several ways:
● Ensures proper data integrity and safety since the data that gets transmitted online through servers
are sensitive data.
● Many hacking cases have been observed while data is being transmitted, which is a very common concern for any business, but cloud security technology provides strong safety features for all cloud storage devices and applications.
● While cloud technology provides cloud services at a very effective cost, the security systems are delivered on the same cost-effective platform, benefitting every user.
● Various government regulatory requirements also show why cloud security and choosing the right cloud provider are equally important. Under data privacy regulations, cloud providers must protect every customer's private data when an organization's critical data is outsourced to the cloud, across every service the provider offers.
● The third-party providers also get in touch with the cloud systems that provide necessary security
and data privacy and also encrypt data before reaching directly to the client.

5.2 Advantages of Cloud Computing Security


● Protection against DoS attack: Many companies face the Distributed Denial of Service (DDoS)
attack, a major threat that hampers company data before reaching the desired user. That is why
cloud computing security plays a major role in data protection because it filters data in the cloud
server before reaching cloud applications by erasing the threat of data hacking.
● Data security and data integrity: Cloud servers are real and easy targets of falling into the trap of
data breaches. Without proper care, data will be hampered, and hackers will get their hands on it.
Cloud security servers ensure the best quality security protocols that help in protecting sensitive
information and maintain data integrity.
● Flexibility: Cloud computing security provides flexibility where data traffic is concerned. During high traffic the user gets the flexibility to scale up and avoid a server crash, and when the high flow of traffic ends the user can scale back down, resulting in cost reduction.
● Availability and 24/7 support system: Cloud servers are available and provide the best backup
solution giving a constant 24/7 support system benefitting the users as well as clients.

5.3 Challenges And Risk


● Data Protection: securing data both at rest and in transit. Data needs to be encrypted at all times, with clearly defined roles governing who gets access to the encryption keys (a minimal encryption-at-rest sketch follows this list).
● User Authentication: limiting access to data and monitoring who accesses it. In order to maintain authorization, the company needs to be able to view data access logs and audit trails.
● Disaster and Data Breach (contingency planning): with the cloud serving as a single centralized repository for a company's mission-critical data, the risks of having that data compromised by a data breach or temporarily made unavailable by a natural disaster are real concerns.
● Confidentiality: refers to limiting information access. Sensitive information should be kept secret from individuals who are not authorized to see it.
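
As a small illustration of encrypting data at rest, the sketch below uses the Fernet symmetric cipher from the third-party cryptography package (it must be installed separately); key handling is deliberately oversimplified here, and in practice the key would be kept in a key-management service rather than generated next to the data.

    from cryptography.fernet import Fernet   # requires: pip install cryptography

    key = Fernet.generate_key()        # in practice, fetched from a key-management service
    cipher = Fernet(key)

    record = b"customer: Raj, card ending 1234"   # hypothetical sensitive record
    token = cipher.encrypt(record)                # this ciphertext is what gets stored at rest
    print(token)

    restored = cipher.decrypt(token)              # only holders of the key can recover the data
    assert restored == record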
5.3.1) Common Cloud Security Threat
● Identity, authentication and access management – This includes the failure to use multi-factor
authentication, misconfigured access points, weak passwords, lack of scalable identity
management systems, and a lack of ongoing automated rotation of cryptographic keys, passwords
and certificates.
● Vulnerable public APIs – From authentication and access control to encryption and activity
monitoring, application programming interfaces must be designed to protect against both
accidental and malicious attempts to access sensitive data.
● Account takeover – Attackers may try to eavesdrop on user activities and transactions, manipulate
data, return falsified information and redirect users to illegitimate sites.
● Malicious insiders – A current or former employee or contractor with authorized access to an
organization’s network, systems or data may intentionally misuse the access in a manner that leads
to a data breach or affects the availability of the organization’s information systems.
● Data sharing – Many cloud services are designed to make data sharing easy across organizations,
increasing the attack surface area for hackers who now have more targets available to access
critical data.
● Denial-of-service attacks – The disruption of cloud infrastructure can affect multiple
organizations simultaneously and allow hackers to harm businesses without gaining access to their
cloud services accounts or internal network.

5.3 Software As A Service (SaaS) Security


It refers to securing user privacy and corporate data in subscription-based cloud applications. A SaaS application carries a large amount of sensitive data and can be accessed from almost any device by a mass of users, thus posing a risk to privacy and sensitive information.
Pillars to SaaS Security
● Access Management: vendors must provide a unified framework to manage user authentication
through business rules
● Network Control: Security group controls who can access specific instances across the network
● Perimeter Network Control: controlling the traffic flowing into and out of a data centre
network.
● VM Management: Ensuring your infrastructure is secure requires frequent updates directly to
your virtual machines
● Data Protection: preventing data breach primarily by using various method for data encryption
both at rest and in transit
● Scalability and Reliability: Vertical scaling is limited because the service can only grow to the size of a single server. Horizontal scaling means the ability to connect multiple hardware units so that they work as a single unit.

5.3.1) Software Security Challenges


● Data Security: data-level security and sensitive data are the domain of the enterprise, not the cloud computing provider. The data must be protected wherever it flows.
● Application Security: processes, secure coding guidelines, training, and testing scripts and tools are typically a collaborative effort between the security and product development teams.

● Deployment Security: refers to the act of creating different instances on the hardware and deploying a guest operating system in each of them.

● Risk Management: identification of data and its links to business processes, applications and data stores, and assignment of ownership.

● Risk Assessment: critical to helping the information security organization make informed decisions when balancing the dueling priorities of business utility and protection of assets.

● Security Portfolio Management: lack of project management can lead to projects never being completed, and to unsustainable and unrealistic workloads and expectations, because projects are not prioritized according to strategy, goals and resources.

● Security Awareness: not providing proper awareness and training to the people who may need them can expose the company to a variety of security risks.

● Third-Party Risk Management: poor third-party risk management may result in damage to the provider's reputation, revenue loss and legal action should the provider be found at fault.

● Forensics: used to retrieve and analyze data. Analysts examine the data in order to reconstruct events.

5.4 Secure Software Development Life Cycle


● Investigation – upper management specifying the process, outcomes, and goals of the project, as
well as its budget and other constraints.

● Analysis – A preliminary analysis of existing security policies or programs, along with documented current threats and associated controls, is conducted.

● Logical Design – team members create and develop the blueprint for security, and examine as
well as implement key policies that influence later decisions.

● Physical Design – team members evaluate the technology needed to support the security
blueprint, generate alternative solutions, and agree upon a design.
● Implementation – The solution is acquired, tested, implemented, and tested again. Personnel issues are evaluated, and specific training and education programs are conducted.

● Maintenance – After implementation it must be operated, properly managed, and kept up to date
by means of established procedures.

A cloud security architecture is defined by the security layers, design, and structure of the platform, tools,
software, infrastructure, and best practices that exist within a cloud security solution. A cloud security
architecture provides the written and visual model to define how to configure and secure activities and
operations within the cloud, including such things as identity and access management; methods and
controls to protect applications and data; overall security; processes for instilling security principles into
cloud services development and operations
● Cloud Consumer: A person or organization that maintains a business relationship with, and uses
service from, cloud providers.

● Cloud Provider: A person, organization, or entity responsible for making a service available to
interested parties.

● Cloud Auditor: A party that can conduct independent assessment of cloud services, information
system operations, performance and security of the cloud implementation.

● Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from
cloud providers to cloud consumers.

● Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and
negotiates relationships between cloud providers and cloud consumers.
5.5 Security levels
5.5.1) Application Security
● Application security is the process of developing, adding, and testing security features within
applications to prevent security vulnerabilities against threats such as unauthorized access and
modification.

● Types of application security

○ Authentication: when software developers build procedures into an application to ensure that only authorized users gain access to it. Authentication procedures ensure that a user is who they say they are. This can be accomplished by requiring the user to provide a username and password when logging in to an application.

○ Authorization: once authenticated, the user may be authorized to access and use the application. The system can validate a user by comparing the user's identity with a list of authorized users. Authentication must happen before authorization (a minimal sketch of both follows this list).

○ Encryption: protect sensitive data from being seen or even used by a cybercriminal. where
traffic containing sensitive data travels between the end user and the cloud, that traffic can
be encrypted to keep the data safe.

○ Logging: If there is a security breach in an application, logging can help identify who got
access to the data and how. Application log files provide a time-stamped record of which
aspects of the application were accessed and by whom.

○ Application security testing: A necessary process to ensure that all of these security
controls work properly.
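To make the distinction and the ordering of these controls concrete, below is a minimal Python sketch; all names (USERS, authenticate, authorize) are hypothetical, and a real application would rely on a hardened framework rather than a hand-rolled user store. Authentication verifies a salted password hash, and only then is a role-based authorization check performed.

    # Minimal illustration of authentication followed by authorization.
    # The user store, roles, and function names are hypothetical.
    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Derive a key from the password; never store plaintext passwords.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    # A toy user store: username -> (salt, password hash, role)
    _salt = os.urandom(16)
    USERS = {"alice": (_salt, hash_password("s3cret", _salt), "admin")}

    def authenticate(username: str, password: str) -> bool:
        """Authentication: is the user who they claim to be?"""
        record = USERS.get(username)
        if record is None:
            return False
        salt, stored_hash, _role = record
        return hmac.compare_digest(stored_hash, hash_password(password, salt))

    def authorize(username: str, action: str) -> bool:
        """Authorization: is this (already authenticated) user allowed to do this?"""
        permissions = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}
        _salt, _hash, role = USERS[username]
        return action in permissions.get(role, set())

    if authenticate("alice", "s3cret") and authorize("alice", "delete"):
        print("access granted")  # authentication happens before authorization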

5.5.2) Data Security:


● Data-level security for sensitive data is the domain of the enterprise, not the cloud computing provider. Data-level security is required to protect data wherever it flows; this includes forcing encryption of data and permitting only specific users to access it (a minimal encryption sketch follows below). It can help provide compliance with the Payment Card Industry Data Security Standard (PCI DSS). Key mechanisms for protecting stored data are access control, auditing, authentication, and authorization.
● Challenges of data security include data residency, data privacy, and industry and regulatory compliance.
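As one illustration of forcing encryption for data at rest, the following minimal Python sketch (assuming the third-party cryptography package; the record and the simplified key handling are hypothetical) encrypts a record before it is stored, so the provider only ever holds ciphertext and only key-holding principals can read the data.

    # Minimal sketch of encrypting data before it leaves the enterprise.
    # Requires the third-party "cryptography" package; key handling is
    # simplified for illustration and would normally use a key management service.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice: fetched from a KMS, never hard-coded
    cipher = Fernet(key)

    record = b"card_number=4111-1111-1111-1111"   # sensitive data (e.g. PCI DSS scope)
    token = cipher.encrypt(record)                # only ciphertext is stored in the cloud

    # Only a principal holding the key can recover the plaintext.
    assert cipher.decrypt(token) == record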

5.5.3) Virtual machine Security:


● Virtualization security is the collective measures, procedures, and processes that ensure the protection of a virtualization infrastructure/environment. It addresses the security issues faced by the components of a virtualization environment and the methods through which they can be mitigated or prevented.

● virtualization security may include processes such as:

○ Implementation of security controls and procedures granularly at each virtual machine.


○ Securing virtual machines, virtual network and other virtual appliance with attacks and
vulnerabilities surfaced from the underlying physical device.

○ Ensuring control and authority over each virtual machine.

○ Creation and implementation of security policy across the infrastructure / environment

● Benefits

○ Cost-effectiveness: pricing for cloud-based virtualized security services is often determined by usage, which can mean additional savings for organizations that use resources efficiently.

○ Flexibility: it provides protection across multiple data centers and in multi-cloud and hybrid cloud environments, allowing an organization to take advantage of the full benefits of virtualization while also keeping data secure.

○ Operational efficiency: quicker and easier to deploy than hardware-based security, virtualized security does not require IT teams to set up and configure multiple hardware appliances.

○ Regulatory compliance: virtualized security helps meet compliance requirements in situations where traditional hardware-based security is static and unable to keep up with the demands of a virtualized network.

● Different types of virtualized security:

○ Segmentation: making specific resources available only to specific applications and users, and controlling traffic between different network segments or tiers.

○ Micro-segmentation: applying specific security policies at the workload level to create granular secure zones and limit an attacker's ability to move through the network. Micro-segmentation divides a data center into segments and allows IT teams to define security controls for each segment individually.

○ Isolation: separating independent workloads and applications on the same network; often used to isolate virtual networks from the underlying physical infrastructure, protecting the infrastructure from attack.

5.6 Identity And Access Management (IAM)


● The concept of identity in the cloud can refer to both users and cloud resources. IAM policies are sets of permission policies that can be attached to either users or cloud resources to authorize what they can access and what they can do with it. IAM is the overarching discipline for verifying a user's identity and their level of access to a particular system; both authentication and access control fall within that scope.
● IAM provides a way of controlling information about users on the network and of managing user identities across the entire system by setting up policies, making it an effective way of handling identity security for an enterprise.

● Different identities and policy constructs in IAM are:

IAM Role: an IAM identity that you can create in your account and that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS.

IAM Group: a group of users to which common policies can be attached.

IAM Policy: a document that defines the effect, actions, resources, and optional conditions under which operations may be performed.

Type of IAM policy:-

○ Identity-based policies – Attach managed inline policies to IAM identities (users, groups
to which users belong, or roles). Identity-based policies grant permissions to an identity.

○ Resource-based policies – Attach inline policies to resources. They grant permissions to the principal that is specified in the policy. Principals can be in the same account as the resource or in other accounts.

○ Permissions boundaries – Use a managed policy as the permissions boundary for an IAM
entity . That policy defines the maximum permissions that the identity-based policies can
grant to an entity.

○ Organizations SCPs – a service control policy (SCP) limits the permissions that identity-based policies or resource-based policies grant to entities within the account, but does not itself grant permissions.

○ Access control lists (ACLs) – Use ACLs to control which principals in other accounts can
access the resource to which the ACL is attached. ACLs are similar to resource-based
policies. ACLs are cross-account permissions policies that grant permissions to the
specified principal.

○ Session policies – Session policies limit the permissions that the role or user's
identity-based policies grant to the session. Session policies limit permissions for a created
session, but do not grant permissions.

● IAM follows the PARC model (illustrated in the sketch below):

P - Principals (users, groups, programs)

A - Actions (create, read, update, delete)

R - Resources (OS, network, files, etc.)

C - Conditions (time of day, type of OS)
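The sketch below, assuming an AWS-style IAM service and the boto3 SDK, shows how the PARC elements map onto a concrete identity-based policy document; the policy name, bucket ARN, and IP range are hypothetical.

    # Hypothetical identity-based policy: actions, resources, and an optional
    # condition; the principal is whichever user or role the policy is attached to.
    import json
    import boto3

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],        # Actions
            "Resource": "arn:aws:s3:::example-bucket/*",        # Resources
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}  # Condition
        }]
    }

    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName="example-readwrite-policy",
        PolicyDocument=json.dumps(policy_document),
    )
    print(response["Policy"]["Arn"])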


5.6.1) IAM Challenges in cloud computing

1. Identity Provisioning / Deprovisioning

This concerns providing secure and timely management of on-boarding (provisioning) and off-boarding (de-provisioning) of users in the cloud (a minimal provisioning sketch appears after this list).

When a user has successfully authenticated to the cloud, a portion of the system resources in
terms of CPU cycles, memory, storage and network bandwidth is allocated. Depending on the
capacity identified for the system, these resources are made available on the system even if no
users have been logged on.

Depending on the number of users, the system resources are allocated as and when required, and
scaled down regularly, based on projected capacity requirements. Simultaneously, adequate
measures need to be in place to ensure that as usage of the cloud drops, system resources are
made available for other objectives; else they will remain unused and constitute a dead
investment.

2. Maintaining a single ID across multiple platforms and organizations

It is tough for organizations to keep track of the various logins and IDs that employees maintain throughout their tenure. Centralized federated identity management (FIdM) is the answer to this issue. Here, users of cloud services are authenticated using a company-chosen identity provider (IdP).

By enabling a single sign-on facility, the organization can extend IAM processes and practices to
the cloud and implement a standardized federation model to support single sign-on to cloud
services.

3. Compliance Visibility: Who has access to what

When it comes to cloud services, it is important to know who has access to applications and data, where they are accessing it from, and what they are doing with it. Your IAM should be able to provide centralized compliance reports across access rights, provisioning/deprovisioning, and end-user and administrator activity. There should be central visibility and control across all your systems for auditing purposes.

4. Security when using 3rd party or vendor network

A lot of services and applications used in the cloud come from third-party or vendor networks. You may have secured your own network, but you cannot guarantee that their security is adequate.

Organizations facing these challenges need to establish secure and integrated IAM practices, processes, and procedures in a scalable, effective, and efficient manner.
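A minimal sketch of programmatic provisioning and de-provisioning, assuming an AWS-style IAM and the boto3 SDK; the user name is illustrative, while the ReadOnlyAccess policy ARN is a standard AWS-managed policy.

    # Hedged sketch of on-boarding / off-boarding a user identity with boto3.
    import boto3

    iam = boto3.client("iam")

    def provision(username: str, policy_arn: str) -> None:
        """On-boarding: create the identity and grant it a managed policy."""
        iam.create_user(UserName=username)
        iam.attach_user_policy(UserName=username, PolicyArn=policy_arn)

    def deprovision(username: str, policy_arn: str) -> None:
        """Off-boarding: revoke access first, then remove the identity."""
        iam.detach_user_policy(UserName=username, PolicyArn=policy_arn)
        iam.delete_user(UserName=username)

    provision("new.hire", "arn:aws:iam::aws:policy/ReadOnlyAccess")
    # ... later, when the employee leaves:
    deprovision("new.hire", "arn:aws:iam::aws:policy/ReadOnlyAccess")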
5.6.2) Identity Management Life Cycle

5.6.3) Function Of IAM

1. Privileged user control

2. Access management/single sign-on (SSO)

3. User authentication/federation

4. Identity management and role management

5. Data loss protection/prevention

6. Log management

5.6.4) BENEFITS OF IAM

● Eliminating weak passwords: IAM systems enforce best practices in credential management, and
can practically eliminate the risk that users will use weak or default passwords. They also ensure
users frequently change passwords.

● Mitigating insider threats: IAM can limit the damage caused by malicious insiders by ensuring users only have access to the systems they need and cannot escalate privileges without supervision.

● Multi-factor security: IAM helps enterprises progress from two-factor to three-factor authentication, using capabilities like iris scanning, fingerprint sensors, and face recognition.
● Improved security: use IAM to identify policy violations or remove inappropriate access
privileges, without having to search through multiple distributed systems.

● Common platform for access and identity management information: you can apply the same security policies across all the operating platforms and devices used by the organization.

● Ease of use: IAM simplifies signup, sign-in and user management processes for application
owners, end-users and system administrators.

● Productivity gains: IAM centralizes and automates the identity and access management lifecycle. This can improve processing time for access and identity changes and reduce errors.

● Reduced IT costs: IAM can lower operating costs; federated identity services mean you no longer need local identities for external users, and cloud-based IAM services reduce the need to buy and maintain on-premises infrastructure.

5.7) Machine Imaging


● Machine imaging is a process that is used to provide system portability, and provision and deploy
systems in the cloud through capturing the state of systems using a system image.

● A system image makes a copy or a clone of the entire computer system inside a single file. The
image is made by using a program called system imaging program and can be used later to restore
a system image.

● A machine image is a Compute Engine resource that stores all the configuration, metadata,
permissions, and data from one or more disks required to create a virtual machine (VM) instance.
You can use a machine image in many system maintenance scenarios, such as instance creation,
backup and recovery, and instance cloning

● Machine images can be used to create instances. You can use machine image to make copies of an
instance that contains most of the VM configurations of the source instance. These copies can
then be used for troubleshooting, scaling VM instances, debugging, or system maintenance.

● Machine imaging is mostly run on virtualization platforms; because of this, the images are also called virtual appliances, and running virtual machines are called instances.

● For example, an Amazon Machine Image (AMI) is a system image used in cloud computing. Amazon Web Services uses AMIs to store copies of a virtual machine. An AMI is a file system image that contains an operating system, all device drivers, and any applications and state information that the working virtual machine would have. AMI files are encrypted and compressed for security purposes and stored in Amazon S3 (Simple Storage Service). A minimal sketch of creating and using an image follows this list.

● Because many users share the cloud, the cloud helps you track information about images, such as ownership, history, and so on. You can choose whether an image is private, exclusively for your own use, or shared with other users in your organization.

● If you are an independent software vendor, you can also add your image to the public catalog.
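A minimal sketch of machine imaging in practice, assuming an AWS-style environment and the boto3 SDK; the instance ID, image name, and instance type are hypothetical. An image is captured from a running instance and then used to launch a clone.

    # Hedged sketch: create a machine image (AMI) from a running instance,
    # then launch a new instance from it.
    import boto3

    ec2 = boto3.client("ec2")

    # Capture the instance's disks, configuration, and metadata as an image.
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="web-server-golden-image",
        Description="Snapshot of a configured web server for cloning and recovery",
    )

    # Later, new instances can be provisioned from that image.
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )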
5.8) Autonomic Security
● Autonomic computing refers to a self-managing computing model in which computer systems
reconfigure themselves in response to changing conditions and are self-healing.

● Autonomic Systems : Autonomic systems are based on the human autonomic nervous system,
which is self-managing, monitors changes that affect the body, and maintains internal balances.

● Such a system requires sensory inputs, decision-making capability, and the ability to implement
remedial activities to maintain an equilibrium state of normal operation

● Examples of conditions an autonomic system must handle include malicious attacks, hardware or software faults, excessive CPU utilization, power failures, organizational policies, inadvertent operator errors, interactions with other systems, and software updates.

5.8.1) Characteristics of Autonomic Security

1. Self-awareness: the system "knows itself" and is aware of its state and its behaviors.

2. Self-configuring: the system should be able to configure and reconfigure itself under varying and unpredictable conditions.

3. Self-optimizing: the system should be able to detect sub-optimal behaviors and optimize itself to improve its execution.

4. Self-healing: the system should be able to detect and recover from potential problems and continue to function smoothly.

5. Self-protecting: the system should be capable of detecting and protecting its resources from both internal and external attack and maintaining overall system security and integrity.

● Autonomic self-protection involves detecting a harmful situation and taking actions that will
mitigate the situation.

● These systems will also be designed to predict problems from analysis of sensory inputs and
initiate corrective measures.

● An autonomous system security response is based on network knowledge, capabilities of connected resources, information aggregation, the complexity of the situation, and the impact on affected applications (a minimal monitoring-loop sketch follows).
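A toy sketch of such a monitor-decide-act loop in Python, assuming the third-party psutil package; the threshold, check interval, and remediation action are placeholders for real policy, not an actual autonomic security product.

    # Illustrative (hypothetical) autonomic control loop: monitor, decide, act.
    import time
    import psutil

    CPU_THRESHOLD = 90.0   # percent; "excessive CPU utilization" trigger

    def remediate() -> None:
        # In a real system this might throttle a tenant, restart a service,
        # or isolate a suspicious workload.
        print("threshold exceeded - taking corrective action")

    while True:
        cpu = psutil.cpu_percent(interval=1)   # sensory input
        if cpu > CPU_THRESHOLD:                # decision-making
            remediate()                        # remedial activity
        time.sleep(5)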

5.9) SAN (Storage Area Network)


● A SAN (Storage Area Network) is used for transferring data between servers and storage devices over Fibre Channel and switches. Data is identified by disk block. It allows multiple servers access to a pool of storage in which any server can potentially access any storage unit.
● Components of a Storage Area Network (SAN): node ports, cables, interconnect devices such as hubs, switches, and directors, storage arrays, and SAN management software.

● Storage traffic over Fibre Channel avoids TCP/IP packetization and latency issues, as well as any local area network congestion, ensuring the highest access speed available for media and mission-critical stored data.

● SAN Benefits:

○ Extremely fast data access with low latency.

○ Relieves stress on a local area network.

○ Can be scaled up to the limits of the interconnect.

○ Often the only solution for demanding applications requiring concurrent shared access.

○ Security is also a main advantage of SAN. If users want to secure their data, then SAN is a
good option to use. Users can easily implement various security measures on SAN.

○ Storage devices can be easily added or removed from the network. If users need more
storage, then they simply add the devices.

○ The cost of this storage network is low as compared to others.

○ Another big advantage of using the SAN (Storage Area Network) is better disk utilization.
● Limitations of SAN

○ Its main limitations are cost and administration requirements: having to dedicate and maintain both a separate Ethernet network for metadata file requests and implement a Fibre Channel network can be a considerable investment.

5.10) NAS (Network Attached Storage)


● Network-attached Storage (Commonly known as NAS) is a file storage device which is connected
to the network and enables multiple users to access data from the centralized disk capacity. The
users on a LAN access the shared storage by the ethernet connection.
● It is basically designed for those network systems, which may be processing millions of
operations per minute. It supports the storage device for the organization, which need a reliable
network system. It is more economical than the file servers and more versatile than the external
disks.

● Data is identified by file name as well as byte offset. The file system is managed by the NAS head unit (CPU and memory). For backup and recovery, files are used instead of a block-by-block copying technique.

● Hard drive arrays are contained and managed by this dedicated device, which connects through a network and facilitates access to data using file-centric data access protocols like NFS (Network File System) and SMB (Server Message Block).

● It allows more hard disk storage space to be added to a network that already utilizes servers, without shutting them down for maintenance and upgrades.

● NAS devices do not have to be located in the server room; they can exist anywhere in the LAN.

● Components of Network Attached Storage (NAS): head unit (CPU and memory), Network Interface Card (NIC), and an optimized operating system.

● Summary of NAS Benefits:

○ Relatively inexpensive.

○ Ease of administration.

○ 24/7 and remote data availability.

○ Wide array of systems and sizes to choose from.

○ Drive failure-tolerant storage volumes and automatic backup

○ The architecture of NAS is easy to install and configure.

○ Every user or client in the network can easily access the Network Attached Storage.

○ A main advantage of NAS is that it is more reliable than the simple hard disks.
○ Another big advantage of NAS is that it offers the consolidated storage space within the
own network of an organization.

○ The performance is good in serving the files.

○ The devices of NAS are scalable and can be easily accessed remotely.

○ NAS is managed easily. It takes less time for storing and recovering the data from any
computer over the LAN.

○ It also offers security.

○ It offers an affordable option for both small businesses and homes for private cloud
storage.

● Limitation of NAS

Its main limitations are scale and performance: as more users need access, the server might not be able to keep up and will need to be replaced with a more powerful system. Latency (slow or retried connections) is usually not noticed by users for small files, but can be a major problem in demanding environments such as video production and editing.

5.11) SAN Vs NAS


5.12) Disaster Recovery In Clouds
● Cloud disaster recovery (CDR) is a cloud-based managed service that helps you quickly recover
your organization’s critical systems after a disaster and provides you remote access to your
systems in a secure virtual environment.

● Cloud disaster recovery has changed everything by eliminating the need for traditional
infrastructure and significantly reducing downtime.

● It takes a very different approach than traditional DR. Instead of loading servers with the OS and patching to the last configuration used in production, cloud disaster recovery encapsulates the entire server (operating system, applications, patches, and data) into a single software bundle or virtual server. The virtual server is then backed up or replicated to an offsite data center on a virtual host. Because it is not dependent on particular hardware, the OS, applications, and data can be migrated from one data center to another much faster.

● Types of disaster:

Natural disasters: such as floods or earthquakes, which are rare but not impossible.

Technical disasters: include power failures or loss of network connectivity.

Human disasters: include misconfiguration or even malicious third-party access to a cloud service.

● Why is disaster recovery important?


○ Creating protocols for disaster recovery is vital for the smooth operation of business. In
the event of a disaster, a company with disaster recovery protocols and options can
minimize the disruption to their services and reduce the overall impact on business
performance.

○ Minimal service interruption means a reduced loss of revenue which, in turn, means user
dissatisfaction is also minimized.

○ Having plans for disaster in place also means your company can define its Recovery Time Objective (RTO) and its Recovery Point Objective (RPO). The RTO is the maximum acceptable delay between the interruption and continuation of the service, and the RPO is the maximum acceptable amount of time between data recovery points. For example, an RPO of 15 minutes means backups or replication must capture changes at least every 15 minutes (a toy example follows below).

○ The most successful disaster recovery strategy is one that never has to be implemented.
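A toy worked example (all figures hypothetical) of turning RTO and RPO into checkable numbers:

    # Does a backup schedule meet the RPO, and does a recovery runbook meet the RTO?
    rpo_minutes = 15          # max tolerable data-loss window
    rto_minutes = 60          # max tolerable downtime

    backup_interval_minutes = 10      # how often replication/backups actually run
    measured_recovery_minutes = 45    # how long the last failover drill took

    meets_rpo = backup_interval_minutes <= rpo_minutes
    meets_rto = measured_recovery_minutes <= rto_minutes
    print(f"RPO met: {meets_rpo}, RTO met: {meets_rto}")   # RPO met: True, RTO met: True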


Unit 6: Cloud Middleware

6.1 OPENSTACK
● OpenStack is an IaaS software tool for managing and building cloud computing platforms for public and private clouds.

● It is supported by some of the largest and best-known companies in software hosting and development. A non-profit organization, looking after community building and project development, manages OpenStack.

● It is an open-source cloud platform that controls pools of compute, storage, and networking resources throughout a data center.

● Cloud computing makes horizontal scaling easy, which means functions that benefit from running in parallel can serve more users by spinning up more instances.

● Being open-source software means any user who wants to access the source code can make changes to it quickly and freely.

● Components of OpenStack include Nova (compute), Neutron (networking), Swift (object storage), Cinder (block storage), Glance (VM images), Keystone (identity), and Horizon (dashboard). A minimal client sketch follows.
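The sketch below assumes the official openstacksdk Python package and a clouds.yaml entry named "mycloud"; it authenticates through Keystone and enumerates pooled compute and image resources.

    # Hedged sketch using openstacksdk to talk to the OpenStack APIs.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # authenticates via Keystone

    # Enumerate pooled resources managed by the cloud.
    for server in conn.compute.servers():       # Nova (compute instances)
        print(server.name, server.status)

    for image in conn.image.images():           # Glance (VM images)
        print(image.name)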
6.2 Microsoft Azure
● Microsoft Azure is Microsoft's cloud platform, providing cloud computing services such as computation, storage, security, and many other domains.

● It provides services in the form of Infrastructure as a Service, Platform as a Service, and Software as a Service. It even provides serverless computing, meaning you just deploy your code and all backend activities are managed by Microsoft Azure.

● Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via HTTP.

● Azure has a low operational cost because it runs on Microsoft's own servers, whose only job is to keep the cloud functional and bug-free; this is usually far more reliable than your own on-premises server.

● COMPONENT OF MICROSOFT AZURE

○ Windows Azure Compute: Windows Azure provides a hosting environment for managed code. It provides computation services through roles. Windows Azure supports three types of roles:

Web roles, used for web application programming and supported by IIS7.
Worker roles, used for background processing for web roles.
VM roles, used for migrating applications to Windows Azure easily.

○ Windows Azure Storage: it provides four types of storage services (a queue example appears after this component list):

Queues, for messaging between web roles and worker roles.
Tables, for storing structured data.
BLOBs (Binary Large Objects), to store text, files, or large data.
Windows Azure Drives, which can be uploaded and downloaded via blobs.

○ Windows Azure AppFabric: AppFabric provides infrastructure services for developing, deploying, and managing Windows Azure applications. It provides five services:

Service Bus: provides secure connectivity between distributed and disconnected applications in the cloud.
Access Control: grants access to applications and services based on the user's identity, so authorization decisions are pulled out of the application.
Caching: provides caching for high-speed access, scaling, and high availability of data to applications.
Integration: provides integration between Windows Azure applications and other SaaS applications.
Composite App: provides a hosting environment for web services and workflows.
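As an illustration of the Queue storage service used for messaging between web roles and worker roles, a minimal sketch assuming the azure-storage-queue Python package; the connection string and queue name are placeholders.

    # Hedged sketch of Azure Queue storage.
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<storage-account-connection-string>",
        queue_name="work-items",
    )
    queue.create_queue()

    # A web role enqueues work; a worker role drains it in the background.
    queue.send_message("resize-image:42")
    for msg in queue.receive_messages():
        print("processing", msg.content)
        queue.delete_message(msg)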

● Use cases:
○ Build a web application that runs in the cloud and stores data.
○ Create virtual machines to develop, test, or run software.
○ Develop massively scalable applications with many users.
○ Azure keeps backups of all your valuable data; in disaster situations, you can recover all your data in a single click without your business getting affected.

● ADVANTAGES
○ Microsoft Azure offers high availability, a strong security profile, and good scalability options, and it is a cost-effective solution for an IT budget.
○ Azure allows you to use any framework, language, or tool, and also allows businesses to build a hybrid infrastructure.

Architecture

6.3 CloudSim
● CloudSim is an open-source framework, which is used to simulate cloud computing infrastructure
and services. It is developed by the CLOUDS Lab organization and is written entirely in Java. It is
used for modelling and simulating a cloud computing environment prior to software development
in order to reproduce tests and results.

● Benefits of Simulation over the Actual Deployment:


○ No capital investment involved. With a simulation tool like CloudSim there is no
installation or maintenance cost.
○ Easy to use and Scalable. You can change the requirements such as adding or deleting
resources by changing just a few lines of code.
○ Risks can be evaluated at an earlier stage. In Cloud Computing utilization of real testbeds
limits the experiments to the scale of the testbed and makes the reproduction of results an
extremely difficult undertaking. With simulation, you can test your product against test
cases and resolve issues before actual deployment without any limitations.
○ No need for try-and-error approaches. Instead of relying on theoretical and imprecise
evaluations which can lead to inefficient service performance and revenue generation, you
can test your services in a repeatable and controlled environment free of cost with
CloudSim.
● Architecture

● CloudSim Core Simulation Engine provides interfaces for the management of resources such as
VM, memory and bandwidth of virtualized Datacenters.

● CloudSim layer manages the creation and execution of core entities such as VMs Cloudlets, Hosts
etc. It also handles network-related execution along with the provisioning of resources and their
execution and management.

● User Code is the layer controlled by the user. The developer can write the requirements of the
hardware specifications in this layer according to the scenario. Some of the most common classes
used during simulation are:
● Datacenter: used for modelling the foundational hardware equipment. This class provides methods
to specify the functional requirements of the Datacenter as well as methods to set the allocation
policies of the VMs etc.

● Host: this class executes actions related to management of virtual machines. It also defines policies
for provisioning memory and bandwidth to the virtual machines, as well as allocating CPU cores
to the virtual machines.

● VM: this class represents a virtual machine by providing data members defining a VM's bandwidth, RAM, processing capacity, and storage size.

● Cloudlet: a Cloudlet represents any task that is run on a VM, like a processing task, a memory access task, or a file-updating task. It stores parameters defining the characteristics of a task, such as its length and size, and provides methods similar to the VM class, while also providing methods that define a task's execution time, status, cost, and history.

● Datacenter Broker: an entity acting on behalf of the user/customer. It is responsible for the functioning of VMs, including VM creation, management, destruction, and submission of cloudlets to the VMs.

● CloudSim: this is the class responsible for initializing and starting the simulation environment after
all the necessary cloud entities have been defined and later stopping after all the entities have been
destroyed.

● Feature :

○ Energy-aware computational research
○ Support for modeling and simulation of large-scale cloud computing data centers
○ Support for data center network topologies
○ Support for dynamic insertion of simulation elements, and stopping and resuming of simulation
○ Support for user-defined policies for allocation of hosts to VMs

6.4 EyeOS

● It is free cloud computing operating system software that lets you access all your necessary files, folders, office documents, calendar, contacts, and much more.

● It is mainly written in PHP, XML, and JavaScript.

● Its desktop looks like an ordinary desktop but can be customized with themes, and it supports about 30 languages.

● FEATURE:

○ Desktop: similar to regular operating systems.
○ Office-related tasks: supports MS Office documents, spreadsheets, and presentations.
○ System and file management: uploading/downloading multiple files to the cloud, compressing them in ZIP format, and a dedicated picture viewer for slide shows.
● The goals for eyeOS include:

○ Being able to work from everywhere, regardless of whether or not you are using a
full-featured, modern computer, a mobile gadget, or a completely obsolete PC.
○ Sharing resources easily between different work centers at a company, or working from different places and countries on the same projects.
○ Always enjoying the same applications with the same open formats, and forgetting the
usual compatibility problems between office suites and traditional operating systems.
○ Being able to continue working if you have to leave your local computer or if it just crashes, without losing data or time: just log in to your eyeOS from another place and continue working.

6.5 Aneka

● Aneka is an Application Platform-as-a-Service (Aneka PaaS) for cloud computing. It acts as a framework for building customized applications and deploying them on either public or private clouds.

● One of the key features of Aneka is its support for provisioning resources on different public cloud providers such as Amazon EC2, Windows Azure, and GoGrid.

● It manages distributed applications with the help of the .NET framework. It provides developers with a rich set of APIs for transparently exploiting such resources and expressing the business logic of applications.

● Aneka is a market oriented cloud development and management platform with rapid application
development and workload distribution capabilities.

● It also provides a tool for managing the cloud, allowing administrators to easily start, stop, and deploy instances of the Aneka container on new resources and then reconfigure them dynamically to alter the behavior of the cloud.

● Applications managed by the Aneka container can be dynamically mapped to heterogeneous resources, which can grow or shrink according to the application's needs. This elasticity is achieved by means of the resource provisioning framework, which is composed primarily of services built into the Aneka fabric layer.

● There are three classes of services that characterize the container:

○ Execution Services. They are responsible for scheduling and executing applications. Each
of the programming models supported by Aneka defines specialized implementations of
these services for managing the execution of a unit of work defined in the model.
○ Foundation Services. These are the core management services of the Aneka container.
They are in charge of metering applications, allocating resources for execution, managing
the collection of available nodes, and keeping the services registry updated.
○ Fabric Services: they constitute the lowest level of the services stack of Aneka and provide access to the resources managed by the cloud. An important service in this layer is the Resource Provisioning Service, which enables horizontal scaling in the cloud. Resource provisioning makes Aneka elastic and allows it to grow or shrink dynamically to meet the QoS requirements of applications.

6.6 Google App Engine

● Google App Engine is a Platform as a Service (PaaS) product that provides Web app developers
and enterprises with access to Google's scalable hosting and tier 1 Internet service.

● The App Engine requires that apps be written in Java or Python, store data in Google BigTable and
use the Google query language. Non-compliant applications require modification to use App
Engine.

● Google App Engine provides more infrastructure than other scalable hosting services such as
Amazon Elastic Compute Cloud (EC2). The App Engine also eliminates some system
administration and developmental tasks to make it easier to write scalable applications.

● Google App Engine is free up to a certain amount of resource usage. Users exceeding the per-day
or per-minute usage rates for CPU resources, storage, number of API calls or requests and
concurrent requests can pay for more of these resources.

● FEATURE :

○ Languages and runtimes: GAE allows you to use languages such as Python, Java, PHP, or Go for writing App Engine applications. It also allows you to test and deploy an application locally with the SDK tools (a minimal application sketch appears at the end of this section).

○ Standard features: data search, retrieval, and storage functions such as Cloud SQL, Search, Blobstore, Logs, and Datastore, plus communications functions like URL Fetch and Mail.

○ Preview features: functions that will be made generally available to users in future releases. Such features comprise MapReduce, the Cloud Storage library, and Sockets.

○ Secure Framework : Google offers one of the most secure frameworks worldwide and it
rarely allows any unauthorized access to its servers. Google assures your app’s availability
to the globe as it packs impeccable privacy and security policies.

○ Simple Start: The app engine can easily start as there is no need for additional hardware or
product to be purchased.

○ Simple to Use:GAE integrates every tool you require for developing, testing, launching,
and updating the apps

○ Reliability and Performance:Google has been a household name for years now, so there is
no denying about its performance and reliability.

○ Cost Minimization:There is no need to hire additional engineers for managing your servers.
The saved funds can be used for other business activities.
○ Platform Independence:Migrating your data to other platforms does not require hefty tasks
and there is also no dependency on GAE.

● Advantages of GAE include:

○ Readily available servers with no configuration requirement


○ Power scaling function all the way down to "free" when resource usage is minimal
○ Automated cloud computing tools
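A minimal App Engine application sketch in Python, assuming the Flask framework; it would be deployed with an app.yaml containing a runtime declaration (for example, runtime: python39) and the gcloud app deploy command. Project and handler names are illustrative.

    # main.py - minimal App Engine (standard environment) application sketch.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # App Engine routes HTTP requests here and scales instances automatically.
        return "Hello from Google App Engine"

    if __name__ == "__main__":
        # Local testing only; in production App Engine runs the app behind its
        # own serving infrastructure.
        app.run(host="127.0.0.1", port=8080, debug=True)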
Unit 7: Cloud Base Case Study

7.1) Emerging Market And Cloud

● Cloud computing is applied to a range of areas including e-commerce, education, healthcare, government, telecommunications, community building, and banking.

● New developments are flourishing in emerging markets, making them attractive to both global and local cloud providers in search of new revenue opportunities.

● Accessibility of the cloud may become a chief factor in the ability of these markets to expand their global and local trade capabilities with other emerging markets.

● This will have an impact by driving job creation and increasing access to new products and business configurations.

● For governments in developing regions, the cloud can support efforts to enhance their ability to provide services to citizens in an economical and effective manner, in areas such as healthcare, education, and telecommunications.

● It offers practically unlimited potential in developing nations like India.

● Cloud in education: adoption is a bit slower in education, with only a small number of organizations saying the cloud currently has a pervasive presence in either type of economy.

● Cloud in retail: around three-quarters of the retail sector is already in the cloud, and the cloud has a strong presence in developing economies.

● Manufacturing: organizations in both developed and developing economies say the cloud has a significant presence now. Cloud computing is being used to reduce supply chain costs, connect suppliers, and support partnerships between customers and suppliers, while ensuring common standards across machines, communication protocols, and a host of other cyber-physical challenges to be met.

● Banking services: the cloud is used to make digital payments.

7.2 Implement Cloud Service

1. Define your project

2. Select the platform

3. Understand security platform

4. Select your cloud computing service provider

5. Determine service level agreements


6. Understand who owns recovery

7. Migrate in phases

8. Think ahead

7.3 Common attribute of Cloud based services


● Virtualization

● Multi-tenancy

● Network Access

● On demand

● Elastic

● Metering /Chargeback

7.4 Eucalyptus

● Eucalyptus stands for Elastic Utility Computing Architecture for Linking Programs To Useful Systems.

● It is used to build private, public, and hybrid clouds. It can also turn your own data center into a private cloud and allows you to extend the functionality to many other organizations.

● Eucalyptus is an open-source infrastructure for the implementation of cloud computing on computer clusters. It is considered one of the earliest tools developed for surge computing.

● Its name is an acronym for "elastic utility computing architecture for linking your programs to useful systems." It implements the infrastructure as a service (IaaS) methodology for solutions in private and hybrid clouds.

● It provides a platform with a single interface so that users can consume resources available in private clouds and resources available externally in public cloud services. It is designed with an extensible and modular architecture for Web services. It also implements the industry-standard Amazon Web Services (AWS) API, which helps it export a large number of APIs to users (a client sketch appears at the end of this section).

● Challenges :

○ Extensibility: Simple architecture and open internal APIs

○ Networking: Virtual private network per cloud and must function as an overlay

○ Security: must be compatible with local security policy

○ Packaging, installation, and maintenance: the system administration staff is an important constituency for updates
● Eucalyptus has the following key features:

○ Support for multiple users with the help of a single cloud

○ Support for Linux and Windows virtual machines

○ Accounting reports

○ Use of WS-Security to ensure secure communication between internal resources and processes

○ The option to configure policies and service level agreements based on users and the
environment

○ Provisions for group, user management and security groups

● Architecture

● Node controller (NC) controls the execution, inspection , and termination of VM instances on the
host where it runs.

● Cluster controller (CC) gathers information about and schedules VM execution on specific node
controllers, as well as manages virtual instance network.

● storage controller (SC) is a put/get storage service that implements Amazon’s S3 interface and
provides a way for storing and accessing VM images and user data.

● Cloud controller (CLC) is the entry point into the cloud for users and administrators. It queries
node managers for information about resources, makes high-level scheduling decisions, and
implements them by making requests to cluster controllers.
● Walrus (W) is the controller component that manages the storage service. Requests are communicated using SOAP/REST.

● Client interface: the CLC essentially acts as a translator between the internal Eucalyptus system interfaces and the defined external client interfaces.

● SLA implementation and management: implemented as an extension to the message-handling service, which can inspect, modify, and reject messages, as well as the state stored by the VM controller.
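Because Eucalyptus exposes AWS-compatible APIs, standard AWS tooling can be pointed at a private Eucalyptus endpoint. A hedged sketch using boto3; the endpoint URL, credentials, and region name are hypothetical.

    # The same client calls used against public AWS can drive the private cloud.
    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://compute.cloud.example.internal:8773/",
        aws_access_key_id="<eucalyptus-access-key>",
        aws_secret_access_key="<eucalyptus-secret-key>",
        region_name="eucalyptus",
    )

    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])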

7.5 AppScale

● AppScale is an open-source distributed software system that implements a cloud platform as a service. The goal is to provide developers with a rapid, API-driven platform that can run applications on any cloud.

● It makes applications easy to deploy and scale over cloud fabrics, and makes them portable across services.

● It is compatible with Google App Engine and executes GAE applications on premises or over other cloud infrastructure without modification.

● It executes GAE applications over Amazon EC2 and Eucalyptus, as well as Xen and KVM, and supports Python and Java.

● It abstracts and multiplexes cloud and system services across multiple applications, enabling write-once, run-anywhere program development for the cloud.

● It implements a multi-tier distributed web service stack with automatic deployment, load balancing, and scaling, along with API adapters providing alternatives for each service API.

● FEATURE:

○ It provides the ease of use and high availability that users have come to expect from public cloud platforms and infrastructures.
○ This includes elasticity, fault detection and recovery, authentication and user control, monitoring and logging, cross-cloud data and application migration, hybrid cloud multitasking, offline analytics, and disaster recovery.
