CS8791 Cloud Computing - Unit 2 Notes

UNIT II CLOUD ENABLING TECHNOLOGIES

Service Oriented Architecture – REST and Systems of Systems – Web Services –


Publish- Subscribe Model – Basics of Virtualization – Types of Virtualization –
Implementation Levels of Virtualization – Virtualization Structures – Tools and
Mechanisms – Virtualization of CPU – Memory – I/O Devices –Virtualization Support
and Disaster Recovery.

Service-oriented computing

 Service-oriented computing organizes distributed systems in terms of services,


which represent the major abstraction for building systems.
 Service orientation expresses applications and software systems as
aggregations of services that are coordinated within a service-oriented
architecture (SOA).
 Even though no specific technology is prescribed for the development of service-
oriented software systems, Web services are the de facto approach for
developing SOA.
 Web services, the fundamental component enabling cloud computing systems,
leverage the Internet as the main interaction channel between users and the
system.

What is a service?
A service encapsulates a software component that provides a set of coherent and
related functionalities that can be reused and integrated into bigger and more complex
applications. The term service is a general abstraction that encompasses several
different implementations using different technologies and protocols.

Four major characteristics:

• Boundaries are explicit.
A service-oriented application is generally composed of services that are spread across
different domains, trust authorities, and execution environments. Generally, crossing
such boundaries is costly; therefore, service invocation is explicit by design and often
leverages message passing. With respect to distributed object programming, whereby

remote method invocation is transparent, in a service-oriented computing
environment the interaction with a service is explicit and the interface of a service is
kept minimal to foster its reuse and simplify the interaction.

• Services are autonomous.


 Services are components that exist to offer functionality and are aggregated
and coordinated to build more complex system.
 They are not designed to be part of a specific system, but they can be
integrated in several software systems, even at the same time.
 With respect to object orientation, which assumes that the deployment of
applications is atomic, service orientation considers this case an exception
rather than the rule and puts the focus on the design of the service as an
autonomous component.
 The notion of autonomy also affects the way services handle failures. Services
operate in an unknown environment and interact with third-party applications.
 Therefore, minimal assumptions can be made concerning such environments:
applications may fail without notice, messages can be malformed, and clients
can be unauthorized.
 Service-oriented design addresses these issues by using transactions, durable
queues, redundant deployment and failover, and administratively managed
trust relationships among different domains.

• Services share schema and contracts, not class or interface definitions.

 Services are not expressed in terms of classes or interfaces, as happens in


object-oriented systems, but they define themselves in terms of schemas and
contracts.
 A service advertises a contract describing the structure of messages it can send
and/or receive and additional constraint—if any—on their ordering.
 Because they are not expressed in terms of types and classes, services are
more easily consumable in wider and heterogeneous environments.

 At the same time, a service orientation requires that contracts and schema
remain stable over time, since it would not be possible to propagate changes to all
its possible clients.
 To address this issue, contracts and schema are defined in a way that allows
services to evolve without breaking already deployed code.
 Technologies such as XML and SOAP provide the appropriate tools to support
such features, rather than a class definition or an interface declaration.

• Service compatibility is determined based on policy.


 Service orientation separates structural compatibility from semantic
compatibility.
 Structural compatibility is based on contracts and schema and can be validated
or enforced by machine-based techniques.
 Semantic compatibility is expressed in the form of policies that define the
capabilities and requirements for a service.
 Policies are organized in terms of expressions that must hold true to enable
the normal operation of a service.

SOA – Service-oriented architecture


• SOA is an architectural style supporting service orientation. It organizes a
software system into a collection of interacting services.
• SOA encompasses a set of design principles that structure system development
and provide means for integrating components into a coherent and
decentralized system.
• SOA based computing packages functionalities into a set of interoperable
services, which can be integrated into different software systems belonging to
separate business domains.
• There are two major roles within SOA:
– Service Provider
– Service Consumer

Service Provider
 The service provider is the maintainer of the service and the organization that
makes available one or more services for others to use.
 To advertise services, the provider can publish them in a registry, together with
a service contract that specifies the nature of the service, how to use it, the
requirements for the service, and the fees charged.
Service Consumer
 The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
 Service providers and consumers can belong to different organizations or
business domains.
 It is very common in SOA-based computing systems that components play the
roles of both service provider and service consumer.

Service Orchestration
 Services might aggregate information and data retrieved from other services or
create workflows of services to satisfy the request of a given service consumer.
 This practice is known as service orchestration, which more generally describes
the automated arrangement, coordination, and management of complex
computer systems, middleware, and services.
Service Choreography
 Another important interaction pattern is service choreography, which is the
coordinated interaction of services without a single point of control.
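To make the distinction concrete, the following minimal Python sketch (all service names and data are illustrative, with plain functions standing in for real remote services) shows an orchestrator acting as the single point of control; in a choreography, no such central coordinator would exist and each service would react to messages on its own.

```python
# Hypothetical services, represented as plain functions for illustration only.
def customer_service(customer_id):
    return {"id": customer_id, "name": "Alice"}

def billing_service(customer, amount):
    return {"invoice_for": customer["id"], "amount": amount}

def place_order(customer_id, amount):
    """Orchestrator: one workflow that coordinates several services in order."""
    customer = customer_service(customer_id)      # step 1: look up the customer
    invoice = billing_service(customer, amount)   # step 2: charge the customer
    return {"customer": customer, "invoice": invoice}

print(place_order("c-42", 99.0))
```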

SOA provides a reference model for architecting several software systems, especially
enterprise business applications and systems. The following guiding principles, which
characterize SOA platforms, are winning features within an enterprise context:
• Standardized service contract.
Services adhere to a given communication agreement, which is specified through one
or more service description documents.

• Loose coupling.
 Services are designed as self-contained components, maintain relationships
that minimize dependencies on other services, and only require being aware of
each other.
 Service contracts will enforce the required interaction among services. This
simplifies the flexible aggregation of services and enables a more agile design
strategy that supports the evolution of the enterprise business.
• Abstraction.
 A service is completely defined by service contracts and description documents.
 They hide their logic, which is encapsulated within their implementation.
 The use of service description documents and contracts removes the need to
consider the technical implementation details and provides a more intuitive
framework to define software systems within a business context.
Reusability.
 Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
 Reusability allows for a more agile design and cost-effective system
implementation and deployment.
 Therefore, it is possible to leverage third-party services to deliver required
functionality by paying an appropriate fee rather than developing the same capability
in-house.
• Autonomy.
 Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
• Lack of state.
 By providing a stateless interaction pattern (at least in principle), services
increase the chance of being reused and aggregated, especially in a scenario
in which a single service is used by multiple consumers that belong to different
administrative and business domains.
• Discoverability.
 Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered.

 Service discovery provides an effective means for utilizing third-party resources.
• Composability.
 Using services as building blocks, sophisticated and complex operations can be
implemented.
 Service orchestration and choreography provide a solid support for composing
services and achieving business goals.
SOA Technologies –
Web Services
 The first implementations of SOA have leveraged distributed object
programming technologies such as CORBA and DCOM.
 Later, Web services became the prominent technology for implementing SOA
systems and applications.
 They leverage Internet technologies and standards for building distributed
systems.
 Several aspects make Web Services the technology of choice for SOA.
 First, they allow for interoperability across different platforms and programming
languages.
 Second, they are based on well-known and vendor-independent standards such
as HTTP, SOAP, and WSDL.
 Third, they provide an intuitive and simple way to connect heterogeneous
software systems, enabling quick composition of services in a distributed
environment.

WS technology Stack

 The Web services technology stack builds on the characteristics discussed
above: interoperability across different platforms and programming languages, and
reliance on well-known, vendor-independent standards such as HTTP, SOAP, XML,
and WSDL.
 In addition, Web services provide the features required by enterprise business
applications to be used in an industrial environment.
 They define facilities for enabling service discovery, which allows system
architects to more efficiently compose SOA applications, and service metering
to assess whether a specific service complies with the contract between the
service provider and the service consumer.
 The concept behind a Web service is very simple. Using the object-oriented
abstraction as a basis, a Web service exposes a set of operations that can be
invoked by leveraging Internet-based protocols. These operations support
parameters and return values in the form of complex and simple types.
 The semantics for invoking Web service methods is expressed through
interoperable standards such as XML and WSDL, which also provide a complete
framework for expressing simple and complex types in a platform-independent
manner.
 Web services are made accessible by being hosted in a Web server; therefore,
HTTP is the most popular transport protocol used for interacting with Web
services. Figure describes the common-use case scenarios for Web services.

 System architects develop a Web service with their technology of choice and
deploy it in compatible Web or application servers.

 The service description document, expressed by means of the Web Services
Description Language (WSDL), can be either uploaded to a global registry or
attached as metadata to the service itself.
 Service consumers can look up and discover services in global catalogs using
Universal Description Discovery and Integration (UDDI) or, most likely, directly
retrieve the service metadata by interrogating the Web service first.
 The Web service description document allows service consumers to
automatically generate clients for the given service and embed them in their
existing application.
 Web services are now extremely popular, so bindings exist for any mainstream
programming language in the form of libraries or development support tools.
 This makes the use of Web services seamless and straightforward compared with
technologies such as CORBA, which require much more integration effort.
 Moreover, being interoperable, Web services constitute a better solution for
SOA than several distributed object frameworks, such as .NET
Remoting, Java RMI, and DCOM/COM+, which limit their applicability to a single
platform or environment.
 Besides the main function of enabling remote method invocation by using Web-
based and interoperable standards, Web services encompass several
technologies that, put together, facilitate the integration of heterogeneous
applications and enable service-oriented computing.
 Figure shows the Web service technologies stack that lists all the components
of the conceptual framework describing and enabling the Web services
abstraction.

Web service technologies stack (from bottom to top):
Network: HTTP, FTP, e-mail, MQ, IIOP, ...
XML-based Messaging: SOAP
Service Description: WSDL
Service Publication: UDDI (direct)
Service Discovery: UDDI (static)
Service Flow: WSFL
Cross-cutting concerns spanning all layers: Quality of Service, Management, Security

 These technologies cover all the aspects that allow Web services to operate in
a distributed environment, from the specific requirements for the networking
to the discovery of services.
 The backbone of all these technologies is XML, which is also one of the causes
of Web services’ popularity and ease of use.
 XML-based languages are used to manage the low-level interaction for Web
service method calls (SOAP), for providing metadata about the services
(WSDL), for discovery services (UDDI), and other core operations.
 In practice, the core components that enable Web services are SOAP and
WSDL.

Publish Subscribe Model


• System architects develop a Web service with their technology of choice and
deploy it in compatible Web or application servers.
• The service description document, expressed by means of Web Service
Definition Language (WSDL), can be either uploaded to a global registry or
attached as a metadata to the service itself.
• Service consumers can look up and discover services in global catalogs using
Universal Description Discovery and Integration (UDDI) or, most likely, directly
retrieve the service metadata by interrogating the Web service first.
• Together, these publish, discover, and bind interactions on the Web are referred
to as the publish-subscribe model.
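A minimal sketch of this publish-discover-bind flow, using an in-memory Python dictionary as a stand-in for a UDDI-style registry (the names and URL are illustrative, not a real UDDI API):

```python
# Toy registry standing in for UDDI: maps a service name to its metadata.
registry = {}

def publish(service_name, wsdl_url):
    """Service provider publishes its description document to the registry."""
    registry[service_name] = {"wsdl": wsdl_url}

def discover(service_name):
    """Service consumer looks up the service metadata before binding to it."""
    return registry[service_name]

# Provider side: advertise the service and its WSDL document.
publish("StockQuoteService", "http://example.com/stockquote?wsdl")

# Consumer side: discover the service; its WSDL is then used to generate a client.
metadata = discover("StockQuoteService")
print(metadata["wsdl"])   # http://example.com/stockquote?wsdl
```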
SOAP Messages
Example: a SOAP message used to invoke a Web service method that retrieves the
price of a given stock, together with the corresponding reply (a sketch is given
below).
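A hedged sketch of such an exchange, posting a hand-written SOAP 1.1 envelope over HTTP with Python's requests library; the endpoint, namespace, and GetStockPrice operation are made up for illustration.

```python
import requests

# Illustrative endpoint and namespace; a real service defines its own in WSDL.
ENDPOINT = "http://example.com/stockquote"

soap_request = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetStockPrice xmlns="http://example.com/stock">
      <StockName>ACME</StockName>
    </GetStockPrice>
  </soap:Body>
</soap:Envelope>"""

# SOAP over HTTP: the XML envelope travels as the body of an HTTP POST.
response = requests.post(
    ENDPOINT,
    data=soap_request.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stock/GetStockPrice"},
)

# The reply is another SOAP envelope carrying the result, e.g. <Price>34.5</Price>.
print(response.text)
```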

REST and Systems of Systems

Representational State Transfer (REST)


 Despite the fact that XML documents are easy to produce and process in any
platform or programming language, SOAP has often been considered quite
inefficient because of the excessive use of markup that XML imposes for
organizing the information into a well-formed document.
 Therefore, lightweight alternatives to the SOAP/XML pair have been proposed
to support Web services.
 The most relevant alternative is Representational State Transfer (REST), which
provides a model for designing network-based software systems utilizing the
client/server model and leverages the facilities provided by HTTP for inter-process
communication without additional burden.
 In a RESTful system, a client sends a request over HTTP using the standard
HTTP methods (PUT, GET, POST, and DELETE), and the server issues a
response that includes the representation of the resource (a minimal client
sketch is given at the end of this section).
 By relying on this minimal support, it is possible to provide whatever is needed
to replace the basic and most important functionality provided by SOAP, which
is method invocation.
 The GET, PUT, POST, and DELETE methods constitute a minimal set of
operations for retrieving, adding, modifying, and deleting data.

 Together with an appropriate URI organization to identify resources, all the
atomic operations required by a Web service are implemented.
 The content of data is still transmitted using XML as part of the HTTP content,
but the additional markup required by SOAP is removed.
 For this reason, REST represents a lightweight alternative to SOAP, which works
effectively in contexts where additional aspects beyond those manageable
through HTTP are absent.
 One of these aspects is security; RESTful Web services operate in an environment
where no additional security beyond the one supported by HTTP is required.
 This is not a great limitation, and RESTful Web services are quite popular and
used to deliver functionality at enterprise scale: Twitter, Yahoo! (search APIs,
maps, photos, etc.), Flickr, and Amazon.com all leverage REST.
 Besides those directly supporting Web services, other technologies that
characterize Web 2.0 contribute to enriching and empowering Web applications
and, in turn, SOA-based systems.
 These fall under the names of
 Asynchronous JavaScript and XML (AJAX),
 JavaScript Object Notation (JSON), and others.
 AJAX is a conceptual framework based on JavaScript and XML that enables
asynchronous behavior in Web applications by leveraging the computing
capabilities of modern Web browsers.
 AJAX uses XML to exchange data with Web services and applications; an
alternative to XML is JSON, which allows objects and collections of objects to be
represented in a platform-independent manner and is often preferred for
transmitting data in an AJAX context.
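Returning to the RESTful interaction model described above, here is a minimal client sketch using Python's requests library against a hypothetical stock resource (URIs and fields are illustrative): each HTTP verb maps directly to one of the retrieve/add/modify/delete operations.

```python
import requests

# Hypothetical collection URI; each stock is identified by its own URI.
BASE = "http://example.com/stocks"

# Create (POST): add a new stock resource to the collection.
requests.post(BASE, json={"symbol": "ACME", "price": 34.5})

# Retrieve (GET): fetch the representation of one resource.
acme = requests.get(f"{BASE}/ACME").json()
print(acme["price"])

# Modify (PUT): replace the resource with an updated representation.
requests.put(f"{BASE}/ACME", json={"symbol": "ACME", "price": 36.0})

# Delete (DELETE): remove the resource.
requests.delete(f"{BASE}/ACME")
```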

Basics of Virtualization
 Virtualization technology is one of the fundamental components of cloud
computing, especially in regard to infrastructure-based services.
 Virtualization allows the creation of a secure, customizable, and isolated
execution environment for running applications, even if they are untrusted,
without affecting other users’ applications.
 The basis of this technology is the ability of a computer program, or a
combination of software and hardware, to emulate an executing environment
separate from the one that hosts such programs.
 For example, we can run Windows OS on top of a virtual machine, which itself
is running on Linux OS.

 Virtualization provides a great opportunity to build elastically scalable systems
that can provision additional capability with minimum costs.
 Therefore, virtualization is widely used to deliver customizable computing
environments on demand.
 Virtualization is a large umbrella of technologies and concepts that are meant
to provide an abstract environment—whether virtual hardware or an operating
system—to run applications.
 The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-
Service (IaaS) solutions for cloud computing.
 In fact, virtualization technologies have a long trail in the history of computer
science and have been available in many flavors by providing virtual
environments at the operating system level, the programming language level,
and the application level.
 Virtualization technologies provide a virtual environment for not only executing
applications but also for storage, memory, and networking.
 Since its inception, virtualization has been sporadically explored and adopted,
but in the last few years there has been a consistent and growing trend to
leverage this technology.
 Virtualization technologies have gained renewed interest recently due to the
confluence of several phenomena:
Increased performance and computing capacity.
 Nowadays, the average end-user desktop PC is powerful enough to meet
almost all the needs of everyday computing, with extra capacity that is rarely
used.
 Almost all these PCs have resources enough to host a virtual machine manager
and execute a virtual machine with by far acceptable performance.
 The same consideration applies to the high-end side of the PC market, where
supercomputers can provide immense compute power that can accommodate
the execution of hundreds or thousands of virtual machines.
Underutilized hardware and software resources.
 Hardware and software underutilization is occurring due to (1) increased
performance and computing capacity, and (2) the effect of limited or sporadic use of
resources.
 Computers today are so powerful that in most cases only a fraction of their
capacity is used by an application or the system.
 Moreover, if we consider the IT infrastructure of an enterprise, many computers
are only partially utilized whereas they could be used without interruption on a
24/7/365 basis.
 For example, desktop PCs mostly devoted to office automation tasks and used
by administrative staff are only used during work hours, remaining completely
unused overnight.
 Using these resources for other purposes after hours could improve the
efficiency of the IT infrastructure. To transparently provide such a service, it

 would be necessary to deploy a completely separate environment, which can
be achieved through virtualization.
Lack of space.
 The continuous need for additional capacity, whether storage or compute
power, makes data centers grow quickly.
 Companies such as Google and Microsoft expand their infrastructures by
building data centers as large as football fields that are able to host thousands
of nodes.
 Although this is viable for IT giants, in most cases enterprises cannot afford to
build another data center to accommodate additional resource capacity.
 This condition, along with hardware underutilization, has led to the diffusion of
a technique called server consolidation, for which virtualization technologies are
fundamental.
Greening initiatives.
 Recently, companies are increasingly looking for ways to reduce the amount of
energy they consume and to reduce their carbon footprint.
 Data centers are one of the major power consumers; they contribute
consistently to the impact that a company has on the environment.
 Maintaining a data center operation not only involves keeping servers on, but
a great deal of energy is also consumed in keeping them cool.
 Infrastructures for cooling have a significant impact on the carbon footprint of
a data center.
 Hence, reducing the number of servers through server consolidation will
definitely reduce the impact of cooling and power consumption of a data center.
Virtualization technologies can provide an efficient way of consolidating servers.
Rise of administrative costs.
 Power consumption and cooling costs have now become higher than the cost
of IT equipment.
 Moreover, the increased demand for additional capacity, which translates into
more servers in a data centre, is also responsible for a significant increment in
administrative costs.
 Computers, in particular servers, do not operate all on their own, but they
require care and feeding from system administrators.
 Common system administration tasks include hardware monitoring, defective
hardware replacement, server setup and updates, server resources monitoring,
and backups.
 These are labour-intensive operations, and the higher the number of servers
that have to be managed, the higher the administrative costs.
 Virtualization can help reduce the number of required servers for a given
workload, thus reducing the cost of the administrative personnel.

Implementation levels of virtualization
Virtualization

Defn: is a computer architecture technology by which multiple virtual machines (VMs)


are multiplexed in the same hardware machine
 The purpose of a VM is to enhance resource sharing by many users and improve
computer performance (in terms of resource utilization and application flexibility).

Levels of Virtualization Implementation


 A traditional computer runs with a host operating system specially tailored for its
hardware architecture

Fig-3.8: Traditional computer


 After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host
OS.
 This is done by adding an additional software layer, called a virtualization layer,
hypervisor, or virtual machine monitor (VMM).

Fig-3.9: After virtualization

 The main function of the software layer for virtualization is to
virtualize the physical hardware of a host machine into virtual
resources to be used by the VMs.
 virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
 The operational levels at which virtualization layers can be included are
1. Instruction set architecture (ISA) level
2. Hardware abstraction level(HAL)
3. Operating system level/Server Virtualization
4. Library support level/Middleware Support level
5. Application level

Fig-3.10: Various levels of Virtualization


Instruction Set Architecture (ISA) level
 ISA level virtualization is performed by emulating a given ISA by the ISA of
the host machine
 Therefore, it is possible to run a large amount of legacy binary code written
for various processors on any given new hardware host machine.
 There are two approaches
i) Code Interpretation
 It is the basic emulation method.
 An interpreter program interprets the source instructions to target instructions
one by one.
 This process is relatively slow.
ii) Dynamic Binary Translation
 translates basic blocks of dynamic source instructions to target instructions
 Instruction set emulation requires binary translation and optimization.
 A virtual instruction set architecture (V-ISA) requires adding a processor-specific
software translation layer to the compiler.
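A toy illustration of the code-interpretation approach, using a made-up two-instruction source ISA (not any real processor): each source instruction is decoded and carried out one at a time by host code, which is why pure interpretation is slower than translating whole basic blocks.

```python
# Toy "source ISA" program: (opcode, operand) pairs for the machine being emulated.
program = [("LOAD", 5), ("ADD", 3), ("ADD", 2)]

def interpret(instructions):
    """Code interpretation: emulate one source instruction at a time on the host."""
    acc = 0  # emulated accumulator register of the source machine
    for opcode, operand in instructions:
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        else:
            raise ValueError(f"unknown instruction {opcode}")
    return acc

print(interpret(program))  # 10
```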
Hardware Abstraction Level
 Generates a virtual hardware environment for a VM.
 The idea is to virtualize a computer's resources, such as its processors,
memory, and I/O devices.
 The intention is to upgrade the hardware utilization rate by multiple users
concurrently.
 The idea was implemented in the IBM VM/370 in the 1960s.
 Recently, the Xen hypervisor has been applied to virtualize x86-based machines to
run Linux or other guest OS applications
Operating System Level/Container Based Virtualization/Server
Virtualization/ Virtualization Support at the OS Level
Defn: refers to an abstraction layer between traditional OS and user applications

Why OS-Level Virtualization?


 There are many issues in hardware level virtualization. They are
 it is slow to initialize a hardware-level VM because each VM creates its own
image from scratch
 storing the VM images also becomes an issue
 full virtualization at the hardware level leads to slow performance and low
density,
 Therefore OS-level virtualization provides a feasible solution for these hardware-
level virtualization issues
OS-Level Virtualization
 inserts a virtualization layer inside an operating system to partition a machine’s
physical resources
 It enables multiple isolated VMs within a single operating system kernel.

 Also called a virtual execution environment (VE), virtual private system (VPS),
a container, or single-OS-image virtualization.
 VEs look like real servers
 most OS-level virtualization systems are Linux-based

Fig-3.11: OpenVZ virtualization layer inside the host OS, which provides


some OS images to create VMs
Advantages of OS-Level Virtualization
i. have minimal startup/shutdown costs, low resource requirements, and high
scalability
ii. it is possible for a VM and its host environment to synchronize state changes when
necessary.
These advantages are achieved by the following:

(1) All OS-level VMs on the same physical machine share a single operating
system kernel
(2) the virtualization layer can be designed in a way that allows processes in VMs
to access as many resources of the host machine as possible, but never to modify
them
Disadvantages of OS-Level Virtualization
i. all the VMs at operating system level on a single container must have the same
kind of guest operating system.
Library Level Virtualization/Middleware Support for Virtualization
 also known as user-level Application Binary Interface (ABI) or API
emulation
 can create execution environments for running alien programs on a
platform rather than creating a VM to run the entire operating system.

 API call interception and remapping are the key functions performed.
 Different library-level virtualization systems are
a. Windows Application Binary Interface (WABI): middleware to convert
Windows system calls to Solaris system calls
b. Lxrun: a system call emulator that enables Linux applications written for x86
hosts to run on UNIX systems.
c. WINE (Windows Emulator): offers library support for virtualizing x86
processors to run Windows applications on UNIX hosts.
d. Visual MainWin: Offers a compiler support system to develop Windows
applications using Visual Studio to run on some UNIX hosts.
e. vCUDA
 CUDA is a programming model and library for general-purpose GPUs.
 CUDA applications are difficult to run on hardware-level VMs directly.
 vCUDA virtualizes the CUDA library and can be installed on guest OSes.
 When a CUDA application runs on a guest OS and issues a call to the CUDA
API, vCUDA intercepts the call and redirects it to the CUDA API
running on the host OS.
 vCUDA employs a client-server model to implement CUDA virtualization
 It consists of three user space components
1. the vCUDA library
 resides in the guest OS as a substitute for the standard CUDA library.
 It is responsible for intercepting and redirecting API calls from the
client to the stub.
 vCUDA also creates vGPUs and manages them.
2. a virtual GPU (vGPU) in the guest OS, which acts as a client. Functionality
of a vGPU:
 It abstracts the GPU structure and gives applications a uniform view of
the underlying hardware

 When a CUDA application in the guest OS allocates a device's memory,
the vGPU can return a local virtual address to the application and notify
the remote stub to allocate the real device memory.
 The vGPU is responsible for storing the CUDA API flow.
3. the vCUDA stub in the host OS (which acts as a server)
 receives and interprets remote requests and creates a corresponding
execution context for the API calls from the guest OS, then returns the
results to the guest OS.
 also manages actual physical resource allocation

Fig-3.12: Basic concept of the vCUDA architecture
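The Python sketch below mimics the vCUDA client-server idea at a very high level; it is not the real CUDA or vCUDA API, just an illustration of a guest-side wrapper library intercepting a call and redirecting it to a stub on the host, which performs the real allocation.

```python
class HostStub:
    """Host-side stub (server role): owns the real device and serves requests."""
    def __init__(self):
        self.allocations = {}
        self.next_handle = 0

    def handle_request(self, api_name, args):
        """Create an execution context for the API call forwarded from the guest."""
        if api_name == "malloc":
            handle = self.next_handle
            self.allocations[handle] = bytearray(args["size"])  # "device" memory
            self.next_handle += 1
            return handle
        raise NotImplementedError(api_name)

class GuestCudaLibrary:
    """Guest-side wrapper library (client role): substitutes for the real library."""
    def __init__(self, stub):
        self.stub = stub  # in reality a network or shared-memory channel

    def malloc(self, size):
        # Intercept the call and redirect it to the stub running on the host OS.
        return self.stub.handle_request("malloc", {"size": size})

stub = HostStub()
guest_lib = GuestCudaLibrary(stub)
handle = guest_lib.malloc(1024)   # the guest application sees only a local handle
print(handle)                     # 0
```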


User-Application Level
 Virtualization at the application level virtualizes an application as a VM.
 On a traditional OS, an application runs as a process; therefore, this is also called
process-level virtualization.
 Most popular approach is to deploy(i.e., install) high level language (HLL) VMs.
 In this Level, the virtualization layer sits as an application program on top
of the operating system, and the layer exports an abstraction of a VM
that can run programs written and compiled to a particular abstract
machine definition. E.g. Java Virtual Machine (JVM).
 Other forms of application-level virtualization are known as application
isolation, application sandboxing, or application streaming.
Relative Merits of Different Approaches

 Below table shows the Relative Merits of Virtualization at Various Levels (More
“X”’s Means Higher Merit, with a Maximum of 5 X’s).

In the table, "Implementation Complexity" refers to the cost to implement a particular
virtualization level, and "Application Isolation" refers to the effort required to isolate
resources committed to different VMs.

Fig-3.13: Merits of Different Approaches


VMM Design Requirements and Providers
Design requirements of VMM are
1. Provides an environment for programs that is essentially identical to the original
machine.
2. Programs running in this environment should show at worst only minor decreases
in speed.
3. a VMM should be in complete control of the system resources. It includes
a) The VMM is responsible for allocating hardware resources for programs
b) it is not possible for a program to access any resource not explicitly allocated
to it
c) it is possible under certain circumstances for a VMM to regain control of
resources already allocated.
Design requirements in terms of permitted differences are
1. differences caused by the availability of system resources (arises when
more than one VM is running on the same machine)
2. differences caused by timing dependencies

Virtualization Structures/Tools and Mechanisms
Depending on the position of the virtualization layer VM architectures are classified
into
1. Hypervisor architecture
2. Para-virtualization
3. Host-based virtualization

Hypervisor and Xen Architecture


Hypervisor
 Hypervisor supports hardware-level virtualization.
 Hypervisor/VMM software sits directly between the physical hardware and its OS.
Based on their functionality, hypervisors are classified into
a) a micro-kernel architecture like the Microsoft Hyper-V
 includes only the basic and unchanging functions (such as physical memory
management and processor scheduling).
 The device drivers and other changeable components are outside the
hypervisor
b) a monolithic hypervisor architecture like the VMware ESX.
 Includes all functions and device drivers.
 the size of the hypervisor code of a micro-kernel hypervisor is smaller
than that of a monolithic hypervisor.
XEN Architecture
 is an open source hypervisor program
 is a micro-kernel hypervisor
 does not include any device drivers natively
 Provides a virtual environment located between the hardware and the OS.
 The core components of a Xen system are the hypervisor, kernel, and applications
 In the Xen hypervisor, not all guest OSes are created equal.
 The guest OS that has control ability is called Domain 0, and the others are
called Domain U.
Domain 0
 is a privileged guest OS of Xen.

 It is first loaded when Xen boots without any file system drivers being available.
 Domain 0 is designed to access hardware directly and manage devices.
 one of the responsibilities of Domain 0 is to allocate and map hardware
resources for the guest domains (the Domain U domains).
 Xen is based on Linux and its security level is C2. Therefore, security policies are
needed to improve the security of Domain 0.
 Pictorial representation of XEN architecture

Fig-3.14:Xen Architecture

Binary Translation with Full Virtualization


Hardware virtualization can be classified into two categories based on implementation
technologies
1. Full virtualization
 does not need to modify the host OS.
 relies on binary translation to trap and to virtualize the execution of certain
sensitive, nonvirtualizable instructions
2. Host-based virtualization.
 both a host OS and a guest OS are used.
 A virtualization software layer is built between the host OS and guest OS.
Full Virtualization
 In full virtualization, noncritical instructions run on the hardware directly while
critical instructions are discovered and replaced with traps into the VMM to be
emulated by software.
 Both the hypervisor and VMM are considered full virtualization
 Why are only critical instructions trapped into the VMM?
 binary translation can incur a large performance overhead

 Noncritical instructions do not control hardware or threaten the security of the
system, but critical instructions do.
 Running noncritical instructions on hardware not only can promote efficiency,
but also can ensure system security.
Note:
o The traditional x86 processor offers four instruction execution rings: Rings 0,1,
2, and 3.
o The lower the ring number, the higher the privilege of instruction being
executed.
o The OS is responsible for managing the hardware and the privileged
instructions to execute at Ring 0, while user-level applications run at Ring 3.
Binary Translation of Guest OS Requests Using a VMM
 This approach was implemented by VMware
 Pictorial representation of this implementation

Fig-3.15: Indirect execution of complex instructions via binary translation


of guest OS requests using the VMM plus direct Execution of simple
instructions on the same host
 VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
 The VMM scans the instruction stream and identifies the privileged, control- and
behavior-sensitive instructions. Then these instructions are trapped into the VMM,
which emulates the behavior of these instructions.
 The emulation method is called binary translation.
 Full virtualization combines binary translation and direct execution.
 The guest OS is unaware that it is being virtualized.
 The performance of full virtualization on the x86 architecture is typically 80 percent
to 97 percent that of the host machine.

 Drawback: the performance of full virtualization may not be ideal, because
binary translation is time-consuming.
Host-Based Virtualization
 The virtualization layer is installed on top of the host OS.
 The host OS is still responsible for managing the hardware.
 The guest OSes are installed and run on top of the virtualization layer. Dedicated
applications may run on the VMs
 Advantages
 the user can install this VM architecture without modifying the host OS.
 the host-based approach appeals to many host machine configurations
 disadvantage
 performance is low (when an application requests hardware access, it
involves four layers of mapping, which downgrades performance significantly)
 when the ISA of a guest OS is different from the ISA of the underlying
hardware, binary translation must be adopted.
Para-Virtualization with Compiler Support
 Para-virtualization modifies the guest operating systems.
 It reduces the virtualization overhead and improves performance by modifying
only the guest OS kernel.
 concept of a para-virtualized VM architecture

Fig-3.16:para-virtualized VM Architecture

 guest operating systems are paravirtualized and are assisted by an intelligent


compiler to replace the nonvirtualizable OS instructions by hypercalls.

Fig-3.17: para-virtualized guest OS assisted by an intelligent compiler to
replace nonvirtualizable OS instructions by hypercalls
 The guest OS kernel is modified to replace the privileged and sensitive
instructions with hypercalls to the hypervisor or VMM.
 The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0.
 This implies that the guest OS may not be able to execute some privileged and
sensitive instructions.
 The privileged instructions are implemented by hypercalls to the hypervisor.
 After replacing the instructions with hypercalls, the modified guest OS emulates
the behavior of the original guest OS
 E.g., KVM(Kernel-Based VM)
 is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel
 Memory management and scheduling activities are carried out by the existing
Linux kernel and KVM does the rest
 KVM is a hardware-assisted para-virtualization tool, which improves
performance and supports unmodified guest OSes such as Windows, Linux,
Solaris, and other UNIX variants.
 Pictorial representation of KVM

Fig-3.18: KVM Architecture

Para-Virtualization Architecture
Advantage
 Performance is high
Disadvantages
 compatibility and portability issues
 the cost of maintaining para-virtualized OSes is high, because they may require
deep OS kernel modifications
Table-3.1: Difference between full and para virtualization
1. Full virtualization intercepts and emulates privileged and sensitive instructions at
runtime; para virtualization replaces them with hypercalls at compile time.
2. In full virtualization, the guest OS is not aware that it is virtualized; in para
virtualization, the guest OS is aware that it is virtualized.
3. Full virtualization gives lower performance; para virtualization gives higher
performance.

Virtualization of CPU, Memory and I/O devices


 x86 processors employ a special running mode and instructions, known as
hardware-assisted virtualization.

H/W Support Virtualization


 Modern OS & processors allow multiple processes to run simultaneously.
 If no protection mechanism is given, then all instructions from different processes
will access h/w directly and cause a system crash
 Therefore, processors provide two modes of operation. They are
o User mode
o Supervisor mode
 Instructions running in supervisor mode are called privileged instructions; others
are non-privileged instructions.
 There are many h/w virtualization products available.
 Example
o VMware Workstation is VM software that allows users to set up multiple x86 and
x86-64 virtual computers and to run one or more VMs simultaneously with the host
OS.
o Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
o KVM can support hardware-assisted virtualization using the Intel VT-x or AMD-v
and VirtIO framework.

Fig-3.19: Intel hardware support for virtualization of processor,


memory, and I/O devices
 Intel provides a hardware-assist technique to make virtualization easy and
improve performance
 For processor virtualization, Intel offers the VT-x or VT-i technique.
 VT-x adds a privileged mode (VMX Root Mode) and some instructions to
processors. This enhancement traps all sensitive instructions in the VMM
automatically.
 For memory virtualization, Intel offers the EPT (Extended Page Table), which
translates the virtual address to the machine’s physical addresses to improve
performance.
 For I/O virtualization, Intel implements VT-d (virtualization for direct I/O) and
VT-c (virtualization for connectivity).

CPU Virtualization
 A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor.
 Unprivileged instructions of VMs run directly on the host machine.
 Critical instructions should be handled carefully for correctness and stability.
 The critical instructions are divided into three categories:
o Privileged instructions
o Control-sensitive instructions
o Behavior-sensitive instructions
 Privileged instructions execute in a privileged mode and will be trapped if
executed outside this mode.
 Control-sensitive instructions attempt to change the configuration of resources
used.
 Behavior-sensitive instructions have different behaviors depending on the
configuration of resources, including the load and store operations over the
virtual memory.
 A CPU architecture is virtualizable if
o the VM's privileged and unprivileged instructions run in the CPU's user mode, and
o the VMM runs in supervisor mode.
 When privileged instructions are executed by a VM, they are trapped by the VMM
(i.e., it acts as a unified mediator), as the sketch below illustrates.
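A toy trap-and-emulate sketch (no real instruction set, only the control flow): privileged instructions issued by the VM, which runs in user mode, trap into the VMM, and the VMM emulates them against the VM's virtual state; unprivileged instructions run directly.

```python
class VMM:
    """Runs in supervisor mode and mediates all privileged operations."""
    def __init__(self):
        self.virtual_state = {}   # per-VM virtual CPU state

    def trap(self, vm_id, instruction):
        """Entered when the hardware traps a privileged instruction from a VM."""
        if instruction == "DISABLE_INTERRUPTS":
            # Emulate the effect on the VM's virtual state, not on the real CPU.
            self.virtual_state[vm_id] = {"interrupts_enabled": False}

def vm_execute(vmm, vm_id, instruction, privileged):
    """The VM (guest) runs in the CPU's user mode."""
    if privileged:
        # A privileged instruction in user mode: the CPU traps into the VMM.
        vmm.trap(vm_id, instruction)
    else:
        pass  # unprivileged instructions run directly on the host processor

vmm = VMM()
vm_execute(vmm, vm_id=0, instruction="ADD", privileged=False)
vm_execute(vmm, vm_id=0, instruction="DISABLE_INTERRUPTS", privileged=True)
print(vmm.virtual_state)  # {0: {'interrupts_enabled': False}}
```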
Hardware-Assisted CPU Virtualization
 An additional privilege mode (some people call it Ring -1) is
added to x86 processors.
 Therefore, operating systems can still run at Ring 0 and the hypervisor runs at
Ring -1.
 All the privileged and sensitive instructions are trapped in the hypervisor
automatically. This technique removes the difficulty of implementing binary
translation of full virtualization.
 E.g.: Intel Hardware-Assisted CPU Virtualization

Fig-3.20: Intel hardware-assisted CPU virtualization
 Intel calls the privilege level of x86 processors the VMX Root Mode
 Transition from the hypervisor to the guest OS and from guest OS to hypervisor
occurs through VM entry and VM exit
Memory Virtualization
 In traditional execution environment, OS maps virtual memory to machine memory
using page tables (called as one stage mapping) and also makes use of MMU and
TLB.
 In virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM & dynamically allocating it
to the physical memory of VMs
 A two-stage mapping process should be maintained by the guest OS and the VMM:
i. Stage I: guest virtual memory → guest physical memory
ii. Stage II: guest physical memory → host machine memory

Fig-3.21: Two-stage Mapping process

 Each page table of the guest OS has a separate page table in the VMM; the
VMM page table is called the shadow page table.
 Nested page tables add another layer of indirection to virtual memory.
 VMware uses shadow page tables to perform virtual-memory-to-machine-memory
address translation.
 The processor uses TLB hardware to map the virtual memory directly to the
machine memory to avoid the two levels of translation on every access.
 Example: Extended Page Table by Intel for Memory Virtualization
 Intel offers a Virtual Processor ID (VPID) to improve use of the TLB.
 The page table in the guest is used for converting a guest virtual address (GVA) to
a guest physical address (GPA).
 The guest physical address (GPA) can then be converted to the host physical
address (HPA) using the EPT.
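A toy illustration of the two-stage mapping, with tiny dictionary "page tables" and made-up addresses: the guest page table maps GVA to GPA, the VMM/EPT maps GPA to HPA, and a shadow page table pre-combines the two so that a single lookup suffices.

```python
# Stage I: guest page table, maintained by the guest OS (GVA -> GPA).
guest_page_table = {0x1000: 0x4000, 0x2000: 0x5000}

# Stage II: VMM / EPT mapping, maintained by the hypervisor (GPA -> HPA).
ept = {0x4000: 0x9000, 0x5000: 0xA000}

def translate(gva):
    """Two-stage translation: GVA -> GPA -> HPA."""
    gpa = guest_page_table[gva]
    return ept[gpa]

# A shadow page table collapses both stages into one map (GVA -> HPA),
# which is what the MMU/TLB can then use directly.
shadow_page_table = {gva: ept[gpa] for gva, gpa in guest_page_table.items()}

print(hex(translate(0x1000)))           # 0x9000
print(hex(shadow_page_table[0x2000]))   # 0xa000
```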

Fig-3.22: Memory Virtualization


I/O Virtualization
 I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware.
 There are three ways to perform I/O virtualization:

o Full Device emulation
o Para virtualization
o Direct I/O Virtualization
Full Device Emulation

Fig-3.23: Device emulation for I/O virtualization


 The first approach to I/O virtualization.
 Emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software.
 This software is located in the VMM and acts as a virtual device.
 I/O access requests of the guest OS are trapped in the VMM, which interacts with
the I/O devices.
Para virtualization
 Used in Xen. It consists of a frontend driver and a backend driver.
 The frontend driver runs in Domain U and the backend driver runs in Domain 0.
 They interact with each other via a shared memory.
 The frontend driver manages the I/O requests of the guest OS, and the backend
driver is responsible for managing the real I/O devices and multiplexing the I/O
data of different VMs (a minimal sketch of this split-driver model follows).
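A minimal sketch of the split-driver model (a Python deque stands in for the Xen shared-memory ring, and the "device" is just a dictionary): the frontend in Domain U only enqueues requests, while the backend in Domain 0 dequeues them and performs the real I/O.

```python
from collections import deque

shared_ring = deque()   # stands in for the shared memory ring between the domains

class FrontendDriver:
    """Runs in Domain U: queues the guest OS's I/O requests on the shared ring."""
    def write_block(self, block_no, data):
        shared_ring.append({"op": "write", "block": block_no, "data": data})

class BackendDriver:
    """Runs in Domain 0: owns the real device and multiplexes requests from VMs."""
    def __init__(self):
        self.disk = {}   # stand-in for the physical block device

    def process_ring(self):
        while shared_ring:
            req = shared_ring.popleft()
            if req["op"] == "write":
                self.disk[req["block"]] = req["data"]  # perform the real I/O

frontend = FrontendDriver()
backend = BackendDriver()
frontend.write_block(7, b"hello")   # the guest issues an I/O request
backend.process_ring()              # Domain 0 completes it on the real device
print(backend.disk)                 # {7: b'hello'}
```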

Fig-3.24: Para virtualization

Direct I/O Virtualization
 VMs access the I/O devices directly.
Self-Virtualized I/O (SV-IO)
 Goal: harness the rich resources of a multicore processor.
 All tasks associated with virtualizing an I/O device are encapsulated in SV-IO.
 Provides virtual devices and an associated access API to VMs and a management
API to the VMM.
 SV-IO defines a virtual interface (VIF) for every kind of virtualized I/O device,
such as a virtual network interface, virtual block device (disk), virtual camera
device, etc.
 The guest OS interacts with the VIFs through VIF device drivers.

What is Disaster Recovery?


 Disaster recovery is an organization’s method of regaining access and
functionality to its IT infrastructure after events like a natural disaster or cyber
attack.
 A variety of disaster recovery (DR) methods can be part of a disaster recovery
plan.

How does disaster recovery work?


• Disaster recovery relies upon the replication of data and computer processing
in an off-premises location not affected by the disaster.
• When servers go down because of a natural disaster, equipment failure or
cyber attack, a business needs to recover lost data from a second location
where the data is backed up.
• Ideally, an organization can transfer its computer processing to that remote
location as well in order to continue operations.
5 top elements of an effective disaster recovery plan

• Disaster recovery team: This assigned group of specialists will be


responsible for creating, implementing and managing the disaster recovery
plan.

• This plan should define each team member’s role and responsibilities. In the
event of a disaster, the recovery team should know how to communicate with
each other, employees, vendors, and customers.
• Risk evaluation: Assess potential hazards that put your organization at risk.
Depending on the type of event, strategize what measures and resources will
be needed to resume business.
• For example, in the event of a cyber attack, what data protection measures will
the recovery team have in place to respond?
• Business-critical asset identification: A good disaster recovery plan
includes documentation of which systems, applications, data, and other
resources are most critical for business continuity, as well as the necessary
steps to recover data.
• Backups: Determine what needs backup (or to be relocated), who should
perform backups, and how backups will be implemented.
• Include a recovery point objective (RPO) that states the frequency of backups
and a recovery time objective (RTO) that defines the maximum amount of
downtime allowable after a disaster (a small worked check is sketched after this
list).
• These metrics create limits to guide the choice of IT strategy, processes and
procedures that make up an organization’s disaster recovery plan.
• The amount of downtime an organization can handle and how frequently the
organization backs up its data will inform the disaster recovery strategy.
• Testing and optimization: The recovery team should continually test and
update its strategy to address ever-evolving threats and business needs.
• By continually ensuring that a company is ready to face the worst-case
scenarios in disaster situations, it can successfully navigate such challenges.
• In planning how to respond to a cyber attack, for example, it’s important that
organizations continually test and optimize their security and data protection
strategies and have protective measures in place to detect potential security
breaches.
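The sketch below, referenced from the Backups item above, shows the simple arithmetic behind these two metrics with made-up numbers: the backup interval must not exceed the RPO, and the estimated restore time must not exceed the RTO.

```python
def plan_meets_objectives(backup_interval_h, estimated_restore_h, rpo_h, rto_h):
    """RPO bounds acceptable data loss; RTO bounds acceptable downtime (hours)."""
    meets_rpo = backup_interval_h <= rpo_h      # worst-case window of lost data
    meets_rto = estimated_restore_h <= rto_h    # worst-case downtime
    return meets_rpo and meets_rto

# Illustrative numbers: backups every 6 h, restore takes about 2 h, while the
# business tolerates at most 4 h of lost data and 3 h of downtime.
print(plan_meets_objectives(backup_interval_h=6, estimated_restore_h=2,
                            rpo_h=4, rto_h=3))   # False: backups are too infrequent
```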
How to build a disaster recovery team
• Crisis management: This leadership role commences recovery plans,
coordinates efforts throughout the recovery process, and resolves problems or
delays that emerge.
• Business continuity: The expert overseeing this ensures that the recovery
plan aligns with the company’s business needs, based on the business impact
analysis.

• Impact assessment and recovery: The team responsible for this area of
recovery has technical expertise in IT infrastructure including servers, storage,
databases and networks.
• IT applications:
This role monitors which application activities should be implemented based on a
restorative plan. Tasks include application integrations, application settings and
configuration, and data consistency.
What are the types of disaster recovery?
• Back-up: This is the simplest type of disaster recovery and entails storing data
off site or on a removable drive. However, just backing up data provides only
minimal business continuity help, as the IT infrastructure itself is not backed
up.

• Cold Site: In this type of disaster recovery, an organization sets up a basic
infrastructure in a second, rarely used facility that provides a place for
employees to work after a natural disaster or fire.
• It can help with business continuity because business operations can continue,
but it does not provide a way to protect or recover important data, so a cold
site must be combined with other methods of disaster recovery.
• Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites
are time-consuming to set up and more expensive than cold sites, but they
dramatically reduce down time.
• Disaster Recovery as a Service (DRaaS): In the event of a disaster or
ransomware attack, a DRaaS provider moves an organization’s computer
processing to its own cloud infrastructure, allowing a business to continue
operations seamlessly from the vendor’s location, even if an organization’s
servers are down.
• DRaaS plans are available through either subscription or pay-per-use models.
• There are pros and cons to choosing a local DRaaS provider: latency will be
lower after transferring to DRaaS servers that are closer to an organization’s
location, but in the event of a widespread natural disaster, a DRaaS that is
nearby may be affected by the same disaster.
• Back Up as a Service: Similar to backing up data at a remote location, with
Back Up as a Service, a third party provider backs up an organization’s data,
but not its IT infrastructure.
• Datacenter disaster recovery: The physical elements of a data center can
protect data and contribute to faster disaster recovery in certain types of
disasters.

• For instance, fire suppression tools will help data and computer equipment
survive a fire.
• A backup power source will help businesses sail through power outages without
grinding operations to a halt. Of course, none of these physical disaster
recovery tools will help in the event of a cyber attack.
• Point-in-time copies: Point-in-time copies, also known as point-in-time
snapshots, make a copy of the entire database at a given time.
• Data can be restored from this back-up, but only if the copy is stored off site
or on a virtual machine that is unaffected by the disaster.

• Instant recovery: Instant recovery is similar to point-in-time copies, except
that instead of copying a database, instant recovery takes a snapshot of an
entire virtual machine.

SEVEN LAYERS OF VIRTUALIZATION


The virtualization layer partitions the physical resources of the underlying
physical server into multiple virtual machines with different workloads. The fascinating
thing about this virtualization layer is that it schedules and allocates the physical
resources and makes each virtual machine think that it totally owns the whole
underlying hardware's physical resources (processors, disks, RAM, etc.).
Virtual machine technology makes it very flexible and easy to manage
resources in cloud computing environments, because it improves the utilization of
such resources by multiplexing many virtual machines on one physical host (server
consolidation). These machines can be scaled up and down on demand with a high
level of resource abstraction.
Virtualization enables highly reliable and agile deployment mechanisms and
management of services, providing on-demand cloning and live migration services,
which improve reliability. Accordingly, having an effective management suite for
managing the virtual machine infrastructure is critical for any cloud computing
infrastructure-as-a-service (IaaS) vendor.

Access virtualization: Allows applications to work with remote client devices without
change, even though those remote devices were never thought of or available
when the application was written. XenDesktop from Citrix is an example of products
that work in this layer of virtualization.

Application virtualization: Allows applications written for one version of an OS to
happily execute in another environment; these environments are often a brand new
OS version or a wholly different OS. This type of software makes it possible for an
application written for Windows XP to work just fine on Windows 7 or Windows 8.
AppZero fits into this layer of virtualization, as do XenApp from Citrix, App-V from
Microsoft and VMware ThinApp.

Processing virtualization: Allows one system to support workloads as if it were
several systems, or permits one workload to span several systems as if it were
one computing resource. VM software is one of five different types of software
that live at this layer. One of today's hottest catchphrases, the software-defined
datacenter (SDDC), is essentially the use of this kind of software, combined with
some of the other virtualization layers. Citrix XenServer, Microsoft Hyper-V and
VMware vServer are all examples of VM software that lives in this layer of
virtualization. Adaptive Computing Moab and IBM Platform Computing LSF are both
examples of cluster managers that also live at this layer of virtualization.

Storage virtualization: Allows workloads to access storage without having to
know where the data is stored, what kind of device is storing the data, or whether
the storage is attached directly to the system hosting the workload, to a storage
server just down the LAN, or to storage in the cloud. Another of today's
most talked-about catchphrases, software-defined storage (SDS), is an
example of this technology. Open-E DSS and VMware VSAN are examples of storage
virtualization technology.
Network virtualization: Allows systems to work with other systems safely
and securely, without having to worry too much about the details of the
underlying network. Yet another current catchphrase, software-defined networking
(SDN), is an implementation of network virtualization. Products that offer network
virtualization include the Cisco Extensible Network Controller (XNC)
and Juniper cloud.
Management of Virtualized environments: Allows IT administrators and
operators to easily monitor and manage virtual environments across boundaries. The
boundaries can include the physical location of systems; OSes in use; applications or
workloads in use; network topology; storage implementation; and the way client
systems connect with the applications. This is an important part of SDN, SDS and
SDDC, and a number of companies offer management and monitoring software.
Security for Virtualized environments: Monitors and protects all of the other
layers of virtualization so that only approved use can be made of the resources. Like
management of virtualized environments, this layer is a crucial part of SDN, SDS and
SDDC. Bitdefender, Kaspersky, Trend Micro, McAfee and many others play in this
area of the virtualization market.
