3. SCHEME OF PRESENTATION
2.1 Service-oriented Architecture
2.2 REST And Systems Of Systems
2.3 Web Services
2.4 Publish-subscribe Model
2.5 Basics Of Virtualization
2.6 Characteristics Of Virtualization
2.7 Types Of Virtualization
4. 2.1 SERVICE ORIENTED ARCHITECTURE
Technology has advanced at breakneck speed over the past decade, and many changes are still occurring. Amid this churn, however, the value of building systems in terms of services has grown in acceptance, and it has become a core idea of most distributed systems.
Loose coupling and support for heterogeneous implementations make services more attractive than distributed objects.
Web services move beyond merely helping various types of applications to exchange information.
5. CONT..,
The technology also plays an increasingly important role in accessing, programming, and integrating a set of new and existing applications.
In general, SOA is about how to design a software system that makes use of the services of new or legacy applications through their published or discoverable interfaces. These applications are often distributed over networks.
SOA also aims to make service interoperability extensible and effective. It promotes architectural styles such as loose coupling, published interfaces, and a standard communication model in order to support this goal.
6. CONT..,
The World Wide Web Consortium (W3C) defines SOA as a form of distributed systems architecture characterized by the following properties:
Logical view
Message orientation
Description orientation
Granularity
Network orientation
Platform-neutral
7. 2.2 REST And Systems of Systems
REST is a software architecture style for distributed systems, particularly distributed hypermedia systems such as the World Wide Web.
It has gained popularity among enterprises such as Google, Amazon, and Yahoo!, and especially among social networks such as Facebook and Twitter, because of its simplicity and its ease of being published and consumed by clients.
REST was introduced and explained by Roy Thomas Fielding, one of the principal authors of the HTTP specification, in his doctoral dissertation in 2000, and was developed in parallel with the HTTP/1.1 protocol.
8. CONT..,
The REST architectural style is based on four principles:
Resource Identification through URIs
Uniform, Constrained Interface
Self-Descriptive Message
Stateless Interactions
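As a concrete illustration of these four principles, the minimal sketch below (assuming a hypothetical service at api.example.com exposing a /books/42 resource) manipulates a URI-identified resource through the uniform HTTP interface, with each message self-descriptive and each interaction stateless.

```python
# Minimal REST interaction sketch using only Python's standard library.
# The host and resource URI are hypothetical placeholders.
import http.client
import json

HOST = "api.example.com"   # hypothetical host; substitute a real service to run
RESOURCE = "/books/42"     # resource identification through a URI

conn = http.client.HTTPSConnection(HOST)

# Uniform, constrained interface: GET retrieves the resource's representation.
conn.request("GET", RESOURCE, headers={"Accept": "application/json"})
resp = conn.getresponse()
print(resp.status, resp.read())

# Self-descriptive message: the Content-Type header tells the server
# how to interpret the body of this PUT.
body = json.dumps({"title": "RESTful Web Services"})
conn.request("PUT", RESOURCE, body=body,
             headers={"Content-Type": "application/json"})
print(conn.getresponse().read())

# Stateless interaction: every request is complete in itself; the server
# keeps no session between the calls above and this DELETE.
conn.request("DELETE", RESOURCE)
print(conn.getresponse().status)
conn.close()
```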
9. CONT..,
RESTful web services can be considered an alternative to the SOAP stack, or "big web services," because of their simplicity, lightweight nature, and integration with HTTP.
With the help of URIs and hyperlinks, REST has shown that it is possible to discover web resources without an approach based on registration in a centralized repository.
Recently, the Web Application Description Language (WADL) has been proposed as an XML vocabulary to describe RESTful web services, enabling them to be discovered and accessed immediately by potential clients.
10. CONT..,
However, there is not yet a wide variety of toolkits for developing RESTful applications.
Also, restrictions on GET length, which does not allow encoding more than 4 KB of data in the resource URI, can create problems, because the server would reject such malformed URIs or may even crash.
REST is not a standard. It is a design and architectural style for large-scale distributed systems.
11. DIAGRAM - REST INTERACTION BETWEEN USER AND SERVER IN HTTP SPECIFICATION
12. 2.3 WEB SERVICES
In an SOA paradigm, software capabilities are delivered and consumed via loosely coupled, reusable, coarse-grained, discoverable, and self-contained services interacting via a message-based communication model.
The web has been a medium for connecting remote clients with applications for years, and more recently, integrating applications across the Internet has gained in popularity.
The term "web service" often refers to a self-contained, self-describing, modular application designed to be used by and accessible to other software applications across the web.
14. CONT..,
Once a web service is deployed, other applications and other web services can discover and invoke the deployed service.
In fact, a web service is one of the most common instances of an SOA implementation.
The W3C working group defines a web service as a software system designed to support interoperable machine-to-machine interaction over a network.
According to this definition, a web service has an interface described in a machine-processable format (specifically, the Web Services Description Language, or WSDL).
15. CONT..,
Other systems interact with the web service in a manner prescribed by its description, using SOAP messages typically conveyed over HTTP with an XML serialization, in conjunction with other web-related standards.
The technologies that make up the core of today's web services are as follows:
Simple Object Access Protocol (SOAP)
Web Services Description Language (WSDL)
Universal Description, Discovery, and Integration (UDDI)
16. CONT..,
SOAP is an extension, and an evolved version, of XML-RPC, a simple and effective remote procedure call protocol introduced in 1999 that uses XML to encode its calls and HTTP as a transport mechanism.
According to its conventions, a procedure executed on the server and the value it returned were formatted in XML.
However, XML-RPC was not fully aligned with the latest XML standardization.
17. CONT..,
Moreover, it did not allow developers to extend the request or response format of an XML-RPC call.
As XML Schema became a W3C recommendation in 2001, SOAP mainly describes the protocol between interacting parties and leaves the data format of the exchanged messages to XML Schema to handle.
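Since SOAP grew out of XML-RPC, it helps to see the older model concretely. The sketch below is a minimal, self-contained example using Python's standard xmlrpc modules; the add procedure and the port are invented for illustration, but the mechanics are the protocol's own: the call is encoded as XML and carried over HTTP.

```python
# XML-RPC in a nutshell: a procedure call is encoded as XML and sent over HTTP.
# The procedure name and port are illustrative placeholders.
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

# -- Server side: expose a procedure over HTTP ---------------------------------
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# -- Client side: the proxy marshals the call into an XML payload --------------
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # -> 5; both request and response travel as XML
```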
18. CONT..,
The major difference between web service technology and other technologies such as J2EE, CORBA, and CGI scripting is its standardization: it is based on standardized XML, providing a language-neutral representation of data.
Most web services transmit messages over HTTP, making them available as Internet-scale applications.
In addition, unlike CORBA and J2EE, using HTTP as the tunneling protocol enables web services to communicate remotely through firewalls and proxies.
19. CONT..,
SOAP-based web services are also referred to as "big web services". RESTful services can also be considered web services, in an HTTP context.
SOAP-based web service interaction can be either synchronous or asynchronous, making it suitable for both request-response and one-way exchange patterns, thus increasing web service availability in case of failure.
20. 2.3.1 WS-I Protocol Stack
Unlike RESTful web services, which do not cover QoS and contractual properties, several optional specifications have been proposed for SOAP-based web services to define nonfunctional requirements and to guarantee a certain level of quality in message communication, as well as reliable, transactional policies: WS-Security, WS-Agreement, WS-Reliable Messaging, WS-Transaction, and WS-Coordination, as shown in the figure below.
22. CONT..,
SOAP messages are encoded using XML, which requires that all self-described data be sent as ASCII strings.
The description takes the form of start and end tags, which often constitute half or more of the message's bytes.
Transmitting data using XML leads to considerable transmission overhead, increasing the amount of transferred data by a factor of 4 to 10.
Moreover, XML processing is compute- and memory-intensive and grows with both the total size of the data and the number of data fields, making web services inappropriate for use by limited-profile devices such as handheld PDAs and mobile phones.
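The overhead claim is easy to check with a toy measurement. The sketch below, using an invented three-field record, compares a packed binary encoding against the same record as a self-describing XML element; the tags alone multiply the payload size.

```python
# Toy illustration of XML encoding overhead for a record of three numbers.
# The record layout and tag names are made up for the example.
import struct

fields = (42, 3.14, 7)

# Packed binary: one int, one double, one int -> 16 bytes.
binary = struct.pack("<idi", *fields)

# The same record as self-describing XML: the tags dominate the payload.
xml = (b"<reading><id>42</id><temp>3.14</temp>"
       b"<unit>7</unit></reading>")

print(len(binary), len(xml))   # 16 vs 61 bytes: roughly a 4x blow-up,
                               # before any XML parsing cost is counted
```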
23. CONT..,
Web services provide on-the-fly software composition through the use of loosely coupled, reusable software components.
Using the Business Process Execution Language for Web Services (BPEL4WS), a standard executable language for specifying interactions between web services recommended by OASIS, web services can be composed together into more complex web services and workflows.
BPEL4WS is an XML-based language, built on top of the web service specifications, which is used to define and manage long-lived service orchestrations or processes.
24. CONT..,
In BPEL, a business process is a large-grained stateful service that executes steps to complete a business goal. That goal can be the completion of a business transaction or fulfillment of the job of a service.
The steps in the BPEL process execute activities (represented by BPEL language elements) to accomplish work.
Those activities are centered on invoking partner services to perform tasks (their job) and return results back to the process.
BPEL enables organizations to automate their business processes by orchestrating services.
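BPEL itself is an XML vocabulary, but the orchestration pattern it expresses, a stateful process invoking partner services in sequence and feeding results forward, can be sketched in ordinary code. The sketch below is a loose analogy under invented partner-service names, not BPEL syntax.

```python
# A loose, code-level analogy of a BPEL orchestration: a stateful process
# invokes partner services in sequence and passes results forward.
# The partner services here are invented stand-ins for remote web services.

def check_inventory(order):           # partner service 1 (stand-in)
    return {"in_stock": True, **order}

def charge_customer(order):           # partner service 2 (stand-in)
    return {"charged": order["in_stock"], **order}

def ship(order):                      # partner service 3 (stand-in)
    return {"shipped": order["charged"], **order}

def order_process(order):
    """The 'process': large-grained, stateful, pursuing one business goal."""
    state = check_inventory(order)    # <invoke> a partner, keep the state
    state = charge_customer(state)    # results flow into the next activity
    return ship(state)                # final activity completes the goal

print(order_process({"item": "book", "qty": 1}))
```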
25. CONT..,
Workflow in a grid context is defined as "the automation of the processes, which involves the orchestration of a set of Grid services, agents and actors that must be combined together to solve a problem or to define a new service."
The jBPM project, built for the JBoss open source middleware platform, is an example of a workflow management and business process execution system.
Another workflow system, Taverna, has been extensively used in life science applications.
There are a variety of tools for developing and deploying web services in different languages, among them SOAP engines such as Apache Axis for Java, gSOAP for C++, the Zolera SOAP Infrastructure (ZSI) for Python, and Axis2/Java and Axis2/C.
26. CONT..,
These toolkits, consisting of a SOAP engine and WSDL tools for generating client stubs, considerably hide the complexity of web service application development and integration.
As there is no standard SOAP mapping for any of the aforementioned languages, two different implementations of SOAP may produce different encodings for the same objects.
Since SOAP can combine the strengths of XML and HTTP as a standard transmission protocol for data, it is an attractive technology for heterogeneous distributed computing environments, such as grids and clouds, to ensure interoperability.
Open Grid Services Architecture (OGSA) grid services are extensions of web services, and in new grid middleware, such as Globus Toolkit 4 and its latest release, GT5, they are pure standard web services. Amazon S3, as a cloud-based persistent storage service, is accessible through both a SOAP and a REST interface.
27. CONT..,
However, REST is the preferred mechanism for communicating with S3, due to the difficulties of processing large binary objects in the SOAP API and, in particular, the limitation that SOAP puts on the size of the objects to be managed and processed.
A SOAP message consists of an envelope used by the applications to enclose the information that needs to be sent. An envelope contains a header and a body block. The EncodingStyle element refers to the URI address of an XML schema for encoding elements of the message.
Each element of a SOAP message may have a different encoding, but unless otherwise specified, the encoding of the whole message is as defined in the XML schema of the root element.
28. CONT..,
The header is an optional part of a SOAP message that may contain auxiliary information, as mentioned earlier, which does not exist in this example.
The body of a SOAP request-response message contains the main information of the conversation, formatted in one or more XML blocks.
In this example, the client is calling CreateBucket of the Amazon S3 web service interface. In case of an error in service invocation, a SOAP message including a Fault element in the body will be forwarded to the service client as a response, as an indicator of a protocol-level error.
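The SOAP listing the slide refers to is not reproduced in this transcript, so the sketch below reconstructs the shape being described, an Envelope with an optional Header and a Body carrying the call, as a Python string. The CreateBucket payload and namespace are schematic assumptions, not a verbatim copy of Amazon's schema.

```python
# Schematic reconstruction of the SOAP envelope structure described above.
# The CreateBucket body and namespace are illustrative assumptions, not
# an exact copy of Amazon's S3 SOAP schema.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <!-- Header: optional, carries auxiliary information (omitted here) -->
  <soap:Body>
    <CreateBucket xmlns="http://doc.s3.amazonaws.com/2006-03-01">
      <Bucket>example-bucket</Bucket>
    </CreateBucket>
  </soap:Body>
</soap:Envelope>"""

# A protocol-level error would come back as a Fault element in the body:
fault = """<soap:Body>
  <soap:Fault>
    <faultcode>soap:Client</faultcode>
    <faultstring>Bucket name already exists</faultstring>
  </soap:Fault>
</soap:Body>"""

print(envelope)
print(fault)
```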
29. 2.3.2 WS-* CORE SOAP HEADER STANDARDS
There are many categories, and several overlapping standards in each category. Many are expanded with XML, WSDL, SOAP, BPEL, WS-Security, UDDI, WSRF, and WSRP.
The number and complexity of the WS-* standards have contributed to the growing trend of using REST instead of web services.
It was a brilliant idea to achieve interoperability through self-describing messages, but experience showed that it was too hard to build the required tooling with the required performance and short implementation time.
31. 2.4 PUBLISH-SUBSCRIBE MODEL
An important concept here is "publish-subscribe," which describes a particular model for linking source and destination on a message bus.
Here the producer of the message (the publisher) labels the message in some fashion; often this is done by associating one or more topic names from a (controlled) vocabulary.
Then the receivers of the message (the subscribers) specify the topics for which they wish to receive associated messages.
Alternatively, one can use content-based delivery systems, where the content is queried in some format such as SQL.
32. CONT..,
The use of topic- or content-based message selection is termed message filtering.
Note that in each of these cases, we find a many-to-many relationship between publishers and subscribers.
Publish-subscribe messaging middleware allows straightforward implementation of notification- or event-based programming models.
The messages could, for example, be labeled by the desired notifying topic (e.g., an error or completion code) and contain content elaborating on the notification.
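A topic-based publish-subscribe bus can be sketched in a few lines; the broker below is an in-process illustration of the model (not real messaging middleware), and the topic names are invented.

```python
# Minimal topic-based publish-subscribe broker: publishers label messages
# with a topic, subscribers register interest in topics, and the broker
# filters and delivers. Topic names are invented for the example.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Message filtering: only subscribers of this topic are notified.
        for callback in self._subscribers[topic]:
            callback(message)

bus = Broker()
# Many-to-many: several subscribers may watch the same topic.
bus.subscribe("job/error", lambda m: print("logger saw:", m))
bus.subscribe("job/error", lambda m: print("alerter saw:", m))
bus.subscribe("job/done", lambda m: print("archiver saw:", m))

bus.publish("job/error", {"code": 42, "detail": "disk full"})
bus.publish("job/done", {"id": "task-7"})
```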
34. 2.5 BASICS OF VIRTUALIZATION
Virtualization is a large umbrella of technologies and concepts that are meant to provide an abstract environment, whether virtual hardware or an operating system, in which to run applications.
The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing.
In fact, virtualization technologies have a long trail in the history of computer science and have been available in many flavors, providing virtual environments at the operating system level, the programming language level, and the application level.
35. CONT..,
Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.
Since its inception, virtualization has been sporadically explored and adopted, but in the last few years there has been a consistent and growing trend to leverage this technology.
36. CONT..,
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena:
Increased performance and computing capacity.
Underutilized hardware and software resources.
Lack of space.
Greening initiatives.
Rise of administrative costs.
37. CONT..,
These can be considered the major causes of the diffusion of hardware virtualization solutions, as well as of the other kinds of virtualization.
The first step toward consistent adoption of virtualization technologies was made with the widespread use of virtual machine-based programming languages: in 1995 Sun released Java, which soon became popular among developers.
The ability to integrate small Java applications, called applets, made Java a very successful platform, and with the beginning of the new millennium Java played a significant role in the application server market segment, thus demonstrating that the existing technology was ready to support the execution of managed code for enterprise-class applications.
38. CONT..,
In 2002 Microsoft released the first version of the .NET Framework, which was Microsoft's alternative to the Java technology.
Based on the same principles as Java, able to support multiple programming languages, and featuring complete integration with other Microsoft technologies, the .NET Framework soon became the principal development platform for the Microsoft world and quickly became popular among developers.
39. CONT..,
In 2006, two of the three "official languages" used for development at Google, Java and Python, were based on the virtual machine model.
This trend of shifting toward virtualization from a programming language perspective demonstrated an important fact: the technology was ready to support virtualized solutions without a significant performance overhead.
This paved the way for another, more radical form of virtualization that has now become a fundamental requisite for any data center management infrastructure.
40. 2.6 CHARACTERISTICS OF VIRTUALIZATION
Virtualization is a broad concept that refers to the creation of a virtual version of something, whether hardware, a software environment, storage, or a network.
In a virtualized environment there are three major components: guest, host, and virtualization layer.
The guest represents the system component that interacts with the virtualization layer, rather than with the host, as would normally happen.
The host represents the original environment where the guest is supposed to be managed.
The virtualization layer is responsible for recreating the same or a different environment where the guest will operate.
42. CONT..,
The main characteristic common to all these different implementations is the fact that the virtual environment is created by means of a software program.
The ability to use software to emulate such a wide variety of environments creates many opportunities that were previously less attractive because of the excessive overhead introduced by the virtualization layer.
The technologies of today allow profitable use of virtualization and make it possible to fully exploit the advantages that come with it. Such advantages have always been characteristic of virtualized solutions.
44. i) Increased security
The ability to control the execution of a guest in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment.
The virtual machine represents an emulated environment in which the guest is executed.
All the operations of the guest are generally performed against the virtual machine, which then translates and applies them to the host.
This level of indirection allows the virtual machine manager to control and filter the activity of the guest, thus preventing some harmful operations from being performed.
45. CONT..,
Resources exposed by the host can then be hidden or simply protected from the guest.
Moreover, sensitive information that is contained in the host can be naturally hidden without the need to install complex security policies.
Increased security is a requirement when dealing with untrusted code. For example, applets downloaded from the Internet run in a sandboxed version of the Java Virtual Machine (JVM), which provides them with limited access to the hosting operating system's resources.
Both the JVM and the .NET runtime provide extensive security policies for customizing the execution environment of applications.
46. CONT..,
Hardware virtualization solutions such as VMware Desktop, VirtualBox, and Parallels provide the ability to create a virtual computer with customized virtual hardware, on top of which a new operating system can be installed.
By default, the file system exposed by the virtual computer is completely separated from that of the host machine.
This makes it the perfect environment for running applications without affecting other users in the environment.
47. ii) Managed execution
Virtualization of the execution environment not only allows increased security, but a wider range of features can also be implemented. In particular, the most relevant features are:
Sharing
Aggregation
Emulation
Isolation
49. CONT..,
Sharing: Virtualization allows the creation of separate computing environments within the same host. In this way it is possible to fully exploit the capabilities of a powerful host, which would otherwise be underutilized.
Sharing is a particularly important feature in virtualized data centers, where this basic feature is used to reduce the number of active servers and limit power consumption.
50. CONT..,
Aggregation: Not only is it possible to share physical resources among several guests, but virtualization also allows aggregation, which is the opposite process.
A group of separate hosts can be tied together and represented to guests as a single virtual host.
This function is naturally implemented in middleware for distributed computing, a classical example being cluster management software, which harnesses the physical resources of a homogeneous group of machines and represents them as a single resource.
51. CONT..,
Emulation: Guest programs are executed within an environment that is controlled by the virtualization layer, which ultimately is a program.
This allows for controlling and tuning the environment that is exposed to guests. For instance, a completely different environment with respect to the host can be emulated, thus allowing the execution of guest programs requiring specific characteristics that are not present in the physical host.
This feature becomes very useful for testing purposes, where a specific guest has to be validated against different platforms or architectures and the wide range of options is not easily accessible during development.
52. CONT..,
Again, hardware virtualization solutions are able to provide virtual hardware and emulate a particular kind of device, such as Small Computer System Interface (SCSI) devices for file I/O, without the hosting machine having such hardware installed.
Old and legacy software that does not meet the requirements of current systems can be run on emulated hardware without any need to change the code.
This is possible either by emulating the required hardware architecture or within a specific operating system sandbox, such as the MS-DOS mode in Windows 95/98.
Another example of emulation is an arcade-game emulator that allows us to play arcade games on a normal personal computer.
53. CONT..,
Isolation: Virtualization allows providing guests, whether they are operating systems, applications, or other entities, with a completely separate environment in which they are executed.
The guest program performs its activity by interacting with an abstraction layer, which provides access to the underlying resources.
Isolation brings several benefits. First, it allows multiple guests to run on the same host without interfering with each other. Second, it provides a separation between the host and the guest: the virtual machine can filter the activity of the guest and prevent harmful operations against the host.
54. CONT..,
Besides these characteristics, another important capability enabled by virtualization is performance tuning.
This feature is a reality at present, given the considerable advances in hardware and software supporting virtualization.
It becomes easier to control the performance of the guest by finely tuning the properties of the resources exposed through the virtual environment.
This capability provides a means to effectively implement a quality-of-service (QoS) infrastructure that more easily fulfills the service-level agreement (SLA) established for the guest.
55. CONT..,
For instance, software-implementing hardware virtualization solutions can
expose to a guest operating system only a fraction of the memory of the
host machine or set the maximum frequency of the processor of the virtual
machine.
Another advantage of managed execution is that sometimes it allows easy
capturing of the state of the guest program, persisting it, and resuming its
execution.
56. CONT..,
This, for example, allows virtual machine managers such as Xen
Hypervisor to stop the execution of a guest operating system, move
its virtual image into another machine, and resume its execution in a
completely transparent manner.
This technique is called virtual machine migration and constitutes
an important feature in virtualized data centers for optimizing their
efficiency in serving application demands.
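A live migration can be sketched with the same libvirt bindings; host2 and guest1 are hypothetical names, and the underlying hypervisor (Xen, KVM, etc.) must support live migration:

    import libvirt

    src = libvirt.open("qemu:///system")            # source host
    dst = libvirt.open("qemu+ssh://host2/system")   # hypothetical destination
    dom = src.lookupByName("guest1")                # hypothetical running guest

    # Move the running VM to host2 without stopping the guest.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)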
57. iii) Portability
The concept of portability applies in different ways according to the
specific type of virtualization considered.
In the case of a hardware virtualization solution, the guest is packaged
into a virtual image that, in most cases, can be safely moved and
executed on top of different virtual machines.
Except for the file size, this happens with the same simplicity with which we can display a picture on different computers.
Virtual images are generally proprietary formats that require a specific
virtual machine manager to be executed.
58. CONT..,
In the case of programming-level virtualization, as implemented by the
JVM or the .NET runtime, the binary code representing application
components (jars or assemblies) can be run without any recompilation on
any implementation of the corresponding virtual machine.
This makes the application development cycle more flexible and application
deployment very straightforward: One version of the application, in most
cases, is able to run on different platforms with no changes.
Finally, portability allows having your own system always with you and
ready to use as long as the required virtual machine manager is available.
This requirement is, in general, less stringent than having all the
applications and services you need available to you anywhere you go.
59. 2.7 TYPES OF VIRTUALIZATION
Virtualization covers a wide range of emulation techniques that are
applied to different areas of computing.
The first classification is based on the service or entity that is being emulated. Virtualization is mainly used to emulate execution
environments, storage, and networks.
Among these categories, execution virtualization constitutes the oldest,
most popular, and most developed area.
Therefore, it deserves major investigation and a further categorization. In
particular we can divide these execution virtualization techniques into
two major categories by considering the type of host they require.
60. CONT..,
Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware.
System-level techniques are implemented directly on hardware and do
not require—or require a minimum of support from—an existing
operating system.
Within these two categories we can list various techniques that offer the
guest a different type of virtual computation environment:
Bare hardware, operating system resources, low-level programming
language, and application libraries.
62. 2.7.1 EXECUTION VIRTUALIZATION
Execution virtualization includes all techniques that aim to emulate an
execution environment that is separate from the one hosting the
virtualization layer.
All these techniques concentrate their interest on providing support for
the execution of programs, whether these are the operating system, a
binary specification of a program compiled against an abstract machine
model, or an application.
Therefore, execution virtualization can be implemented directly on top of
the hardware by the operating system, an application, or libraries
dynamically or statically linked to an application image.
63. i) MACHINE REFERENCE MODEL
Virtualizing an execution environment at different levels of the
computing stack requires a reference model that defines the interfaces
between the levels of abstractions, which hide implementation details.
From this perspective, virtualization techniques actually replace one of
the layers and intercept the calls that are directed toward it.
Therefore, a clear separation between layers simplifies their
implementation, which only requires the emulation of the interfaces and
a proper interaction with the underlying layer.
At the bottom layer, the model for the hardware is expressed in terms of
the Instruction Set Architecture (ISA), which defines the instruction set
for the processor, registers, memory, and interrupt management.
64. CONT..,
ISA is the interface between hardware and software, and it is important to
the operating system (OS) developer (System ISA) and developers of
applications that directly manage the underlying hardware (User ISA).
The application binary interface (ABI) separates the operating system layer
from the applications and libraries, which are managed by the OS.
ABI covers details such as low-level data types, alignment, and calling conventions, and defines a format for executable programs.
System calls are defined at this level. This interface allows portability of
applications and libraries across operating systems that implement the same
ABI.
65. CONT..,
The highest level of abstraction is represented by the application
programming interface (API), which interfaces applications to
libraries and/or the underlying operating system.
For any operation to be performed at the application level through the API, the ABI and ISA are responsible for making it happen.
The high-level abstraction is converted into machine-level instructions to perform the actual operations supported by the processor.
The machine-level resources, such as processor registers and main memory capacity, are used to perform the operation at the hardware level of the central processing unit (CPU).
66. CONT..,
This layered approach simplifies the development and implementation of computing systems and simplifies the implementation of multitasking and the coexistence of multiple executing environments.
In fact, such a model not only requires limited knowledge of the entire
computing stack, but it also provides ways to implement a minimal
security model for managing and accessing shared resources.
For this purpose, the instruction set exposed by the hardware has been
divided into different security classes that define who can operate with
them.
The first distinction can be made between privileged and nonprivileged
instructions.
67. CONT..,
Non-privileged instructions are those instructions that can be used without interfering with other tasks because they do not access shared resources.
This category contains, for example, all the floating-point, fixed-point, and arithmetic instructions.
Privileged instructions are those that are executed under specific restrictions and are mostly used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state.
For instance, behavior-sensitive instructions are those that operate on the I/O,
whereas control-sensitive instructions alter the state of the CPU registers.
Some types of architecture feature more than one class of privileged
instructions and implement a finer control of how these instructions can be
accessed.
68. CONT.., Diagram: Security rings
and privilege modes
For instance, a possible implementation features a hierarchy of privileges in the form of ring-based security:
Ring 0, Ring 1, Ring 2, and Ring 3, where Ring 0 is the most privileged level and Ring 3 the least privileged.
Ring 0 is used by the kernel of the OS, Rings 1 and 2 are used by OS-level services, and Ring 3 is used by the user.
Recent systems support only two levels, with
Ring 0 for supervisor mode and Ring 3 for
user mode.
69. CONT..,
All the current systems support at least two different execution modes:
supervisor mode and user mode.
The first mode denotes an execution mode in which all the instructions
(privileged and non-privileged) can be executed without any restriction.
This mode, also called master mode or kernel mode, is generally used by the operating system (or the hypervisor) to perform sensitive operations on hardware-level resources.
In user mode, there are restrictions on controlling machine-level resources. If code running in user mode invokes a privileged instruction, a hardware interrupt occurs and traps the potentially harmful execution of the instruction.
70. CONT..,
Despite this, there might be some instructions that can be invoked as
privileged instructions under some conditions and as non-privileged
instructions under other conditions.
The distinction between user and supervisor mode allows us to
understand the role of the hypervisor and why it is called that.
Conceptually, the hypervisor runs above the supervisor mode; hence the prefix hyper- is used.
In reality, hypervisors are run in supervisor mode, and the division
between privileged and non-privileged instructions has posed challenges
in designing virtual machine managers.
71. CONT..,
It is expected that all the sensitive instructions will be executed in
privileged mode, which requires supervisor mode in order to avoid traps.
Without this assumption it is impossible to fully emulate and manage the
status of the CPU for guest operating systems.
Unfortunately, this is not true for the original x86 ISA, which allows 17 sensitive instructions to be called in user mode.
This prevents multiple operating systems managed by a single hypervisor from being isolated from each other, since they are able to access the privileged state of the processor and change it.
72. CONT..,
More recent implementations of ISA (Intel VT and AMD Pacifica) have
solved this problem by redesigning such instructions as privileged ones.
By keeping in mind this reference model, it is possible to explore and
better understand the various techniques utilized to virtualize execution
environments and their relationships to the other components of the
system.
73. ii) HARDWARE-LEVEL VIRTUALIZATION
Hardware-level virtualization is a virtualization technique that provides an
abstract execution environment in terms of computer hardware on top of
which a guest operating system can be run.
In this model, the guest is represented by the operating system, the host by the
physical computer hardware, the virtual machine by its emulation, and the
virtual machine manager by the hypervisor.
The hypervisor is generally a program or a combination of software and
hardware that allows the abstraction of the underlying physical hardware.
Hardware-level virtualization is also called system virtualization, since it
provides ISA to virtual machines, which is the representation of the hardware
interface of a system. This is to differentiate it from process virtual machines,
which expose ABI to virtual machines.
74. CONT..,
Hypervisors - A fundamental element of hardware virtualization is the
hypervisor, or virtual machine manager (VMM). It recreates a hardware
environment in which guest operating systems are installed. There are two
major types of hypervisor: Type I and Type II.
Type I hypervisors run directly on top of the hardware. Therefore, they
take the place of the operating systems and interact directly with the ISA
interface exposed by the underlying hardware, and they emulate this
interface in order to allow the management of guest operating systems.
This type of hypervisor is also called a native virtual machine since it
runs natively on hardware.
75. CONT.., Diagram: the two types of hypervisor
Type II hypervisors require the
support of an operating system to
provide virtualization services.
This means that they are programs
managed by the operating system,
which interact with it through the ABI
and emulate the ISA of virtual
hardware for guest operating systems.
This type of hypervisor is also called a
hosted virtual machine since it is
hosted within an operating system.
76. CONT..,
Conceptually, a virtual machine manager is internally organized as described in the figure below.
Three main modules, dispatcher, allocator, and interpreter, coordinate their
activity in order to emulate the underlying hardware.
The dispatcher constitutes the entry point of the monitor and reroutes the
instructions issued by the virtual machine instance to one of the two other
modules.
The allocator is responsible for deciding the system resources to be provided
to the VM: whenever a virtual machine tries to execute an instruction that
results in changing the machine resources associated with that VM, the
allocator is invoked by the dispatcher.
78. CONT..,
The interpreter module consists of interpreter routines. These are executed
whenever a virtual machine executes a privileged instruction: a trap is
triggered and the corresponding routine is executed.
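The following toy sketch shows this three-module organization; the class, the opcodes, and the instruction attributes are all invented for illustration and are not taken from any real VMM:

    class ToyVMM:
        """Illustrative dispatcher/allocator/interpreter organization."""

        def __init__(self):
            # One interpreter routine per privileged instruction (made-up opcodes).
            self.interpreter_routines = {"IN": self.emulate_io, "OUT": self.emulate_io}

        def dispatch(self, vm, instr):
            # Entry point of the monitor: reroute each trapped instruction.
            if instr.changes_machine_resources:
                return self.allocate(vm, instr)        # resource-changing request
            routine = self.interpreter_routines.get(instr.opcode)
            if routine is not None:
                return routine(vm, instr)              # privileged-instruction trap
            return vm.execute_directly(instr)          # everything else runs as-is

        def allocate(self, vm, instr):
            """Decide which system resources are provided to this VM."""

        def emulate_io(self, vm, instr):
            """Interpreter routine run when a privileged I/O instruction traps."""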
The design and architecture of a virtual machine manager, together with
the underlying hardware design of the host machine, determine the full
realization of hardware virtualization, where a guest operating system can
be transparently executed on top of a VMM as though it were run on the
underlying hardware.
The criteria that need to be met by a virtual machine manager to
efficiently support virtualization were established by Goldberg and Popek
in 1974.
79. CONT..,
Three properties have to be satisfied:
Equivalence. A guest running under the control of a virtual machine
manager should exhibit the same behavior as when it is executed
directly on the physical host.
Resource control. The virtual machine manager should be in
complete control of virtualized resources.
Efficiency. A statistically dominant fraction of the machine
instructions should be executed without intervention from the virtual
machine manager.
80. CONT..,
The major factor that determines whether these properties are satisfied is the layout of the ISA of the host running a virtual machine manager.
Popek and Goldberg provided a classification of the instruction set
and proposed three theorems that define the properties that hardware
instructions need to satisfy in order to efficiently support
virtualization.
81. THEOREMS
THEOREM 1
For any conventional third-generation computer, a VMM may be
constructed if the set of sensitive instructions for that computer is a subset of
the set of privileged instructions.
THEOREM 2
A conventional third-generation computer is recursively virtualizable if:
• It is virtualizable and
• A VMM without any timing dependencies can be constructed for it.
THEOREM 3
A hybrid VMM may be constructed for any conventional third-
generation machine in which the set of user-sensitive instructions is a subset
of the set of privileged instructions.
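Theorem 1 reduces to a set-inclusion test, which the small check below makes concrete; the instruction names are illustrative only (POPF is the classic x86 example of a sensitive but unprivileged instruction):

    # A VMM can be constructed if every sensitive instruction is also privileged.
    def virtualizable(sensitive, privileged):
        return sensitive <= privileged   # subset test

    sensitive = {"LGDT", "POPF", "MOV_CR0"}
    privileged = {"LGDT", "MOV_CR0", "HLT"}

    print(virtualizable(sensitive, privileged))  # False: POPF escapes the trap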
84. HARDWARE-ASSISTED VIRTUALIZATION
This term refers to a scenario in which the hardware provides architectural
support for building a virtual machine manager able to run a guest
operating system in complete isolation.
This technique was originally introduced in the IBM System/370. At
present, examples of hardware-assisted virtualization are the extensions to
the x86-64 bit architecture introduced with Intel VT (formerly known as
Vanderpool) and AMD V (formerly known as Pacifica).
These extensions, which differ between the two vendors, are meant to
reduce the performance penalties experienced by emulating x86 hardware
with hypervisors. Before the introduction of hardware-assisted
virtualization, software emulation of x86 hardware was significantly
costly from the performance point of view.
85. CONT..,
The reason for this is that by design the x86 architecture did not meet the
formal requirements introduced by Popek and Goldberg, and early products
were using binary translation to trap some sensitive instructions and provide
an emulated version.
Products such as VMware Virtual Platform, introduced in 1999 by VMware,
which pioneered the field of x86 virtualization, were based on this
technique.
After 2006, Intel and AMD introduced processor extensions, and a wide
range of virtualization solutions took advantage of them: Kernel-based
Virtual Machine (KVM), VirtualBox, Xen, VMware, Hyper-V, Sun xVM,
Parallels, and others.
86. FULL VIRTUALIZATION
Full virtualization refers to the ability to run a program, most likely an
operating system, directly on top of a virtual machine and without any
modification, as though it were run on the raw hardware.
To make this possible, virtual machine managers are required to provide a
complete emulation of the entire underlying hardware. The principal
advantage of full virtualization is complete isolation, which leads to
enhanced security, ease of emulation of different architectures, and
coexistence of different systems on the same platform.
Whereas it is a desired goal for many virtualization solutions, full
virtualization poses important concerns related to performance and
technical implementation.
87. CONT..,
A key challenge is the interception of privileged instructions such as I/O
instructions: Since they change the state of the resources exposed by the
host, they have to be contained within the virtual machine manager.
A simple solution to achieve full virtualization is to provide a virtual environment for all the instructions, thus posing some limits on performance.
A successful and efficient implementation of full virtualization is
obtained with a combination of hardware and software, not allowing
potentially harmful instructions to be executed directly on the host. This
is what is accomplished through hardware-assisted virtualization.
88. PARA VIRTUALIZATION
This is a non-transparent virtualization solution that allows implementing thin virtual machine managers.
Para virtualization techniques expose a software interface to the virtual
machine that is slightly modified from the host and, as a consequence,
guests need to be modified.
The aim of para virtualization is to provide the capability to demand the
execution of performance-critical operations directly on the host, thus
preventing performance losses that would otherwise be experienced in
managed execution.
This allows a simpler implementation of virtual machine managers that
have to simply transfer the execution of these operations, which were
hard to virtualize, directly to the host.
To take advantage of such an opportunity, guest operating systems need to be modified and explicitly ported, by remapping the performance-critical operations through the virtual machine software interface.
89. PARTIAL VIRTUALIZATION
Partial virtualization provides a partial emulation of the underlying hardware, thus not allowing the complete execution of the guest operating system in complete isolation.
Partial virtualization allows many applications to run transparently, but not all the
features of the operating system can be supported, as happens with full virtualization.
An example of partial virtualization is address space virtualization used in time-sharing systems; this allows multiple applications and users to run concurrently in a separate memory space, but they still share the same hardware resources (disk, processor, and network).
Historically, partial virtualization has been an important milestone for achieving full
virtualization, and it was implemented on the experimental IBM M44/44X.
Address space virtualization is a common feature of contemporary operating systems.
90. OPERATING SYSTEM-LEVEL VIRTUALIZATION
Operating system-level virtualization offers the opportunity to create different and
separated execution environments for applications that are managed concurrently.
Unlike hardware virtualization, there is no virtual machine manager or hypervisor; the virtualization is done within a single operating system, where the OS kernel allows for multiple isolated user-space instances.
The kernel is also responsible for sharing the system resources among instances
and for limiting the impact of instances on each other.
A user space instance in general contains a proper view of the file system, which is
completely isolated, and separate IP addresses, software configurations, and access
to devices.
Operating systems supporting this type of virtualization are general-purpose, time-
shared operating systems with the capability to provide stronger namespace and
resource isolation.
91. CONT.,
This virtualization technique can be considered an evolution of the chroot
mechanism in Unix systems.
The chroot operation changes the file system root directory for a process and its
children to a specific directory.
As a result, the process and its children cannot have access to other portions of the
file system than those accessible under the new root directory. Because Unix
systems also expose devices as parts of the file system, by using this method it is
possible to completely isolate a set of processes.
Following the same principle, operating system-level virtualization aims to
provide separated and multiple execution containers for running applications.
Compared to hardware virtualization, this strategy imposes little or no overhead
because applications directly use OS system calls and there is no need for
emulation.
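The chroot mechanism is easy to demonstrate from Python; the jail path below is hypothetical, and the call requires root privileges:

    import os

    os.chroot("/srv/jail")   # /srv/jail becomes the new file-system root
    os.chdir("/")            # re-anchor the working directory inside the jail

    # From here on, "/etc/passwd" resolves to /srv/jail/etc/passwd on the host;
    # files outside the jail are no longer reachable through the file system.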
92. PROGRAMMING LANGUAGE-LEVEL VIRTUALIZATION
Programming language-level virtualization is mostly used to achieve ease of
deployment of applications, managed execution, and portability across different
platforms and operating systems.
It consists of a virtual machine executing the byte code of a program, which is
the result of the compilation process. Compilers implemented and used this
technology to produce a binary format representing the machine code for an
abstract architecture.
The characteristics of this architecture vary from implementation to
implementation. Generally these virtual machines constitute a simplification of
the underlying hardware instruction set and provide some high-level instructions
that map some of the features of the languages compiled for them.
At runtime, the byte code can be either interpreted or compiled on the fly (jitted) against the underlying hardware instruction set.
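CPython makes such byte code easy to inspect: the standard-library dis module disassembles a function into the instructions its virtual machine executes (exact opcode names vary across CPython versions):

    import dis

    def add(a, b):
        return a + b

    # Print the abstract-machine instructions behind add(): two loads,
    # an addition opcode, and a return.
    dis.dis(add)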
93. CONT.,
The main advantage of programming-level virtual machines, also called process virtual
machines, is the ability to provide a uniform execution environment across different platforms.
Programs compiled into byte code can be executed on any operating system and platform for
which a virtual machine able to execute that code has been provided.
From a development life-cycle point of view, this simplifies the development and deployment efforts, since it is not necessary to provide different versions of the same code.
The implementation of the virtual machine for different platforms is still a costly task, but it is
done once and not for any application.
Moreover, process virtual machines allow for more control over the execution of programs
since they do not provide direct access to the memory.
Security is another advantage of managed programming languages; by filtering the I/O
operations, the process virtual machine can easily support sandboxing of applications.
As an example, both Java and .NET provide an infrastructure for pluggable security policies
and code access security frameworks. All these advantages come with a price: performance.
94. APPLICATION-LEVEL VIRTUALIZATION
Application-level virtualization is a technique allowing applications to be
run in runtime environments that do not natively support all the features
required by such applications.
In this scenario, applications are not installed in the expected runtime
environment but are run as though they were.
In general, these techniques are mostly concerned with partial file systems,
libraries, and operating system component emulation.
Such emulation is performed by a thin layer—a program or an operating
system component—that is in charge of executing the application.
Emulation can also be used to execute program binaries compiled for
different hardware architectures.
95. CONT.,
In this case, one of the following strategies can be implemented:
• Interpretation: In this technique every source instruction is interpreted by an
emulator for executing native ISA instructions, leading to poor performance.
Interpretation has a minimal startup cost but a huge overhead, since each
instruction is emulated.
• Binary translation: In this technique every source instruction is converted to native instructions with equivalent functions. After a block of instructions is translated, it is cached and reused. Binary translation has a large initial overhead cost, but over time its performance improves, since previously translated instruction blocks are executed directly (see the sketch after this list).
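The toy sketch below contrasts the two strategies for a made-up two-opcode instruction set: the interpreter decodes every instruction on every run, while the translator pays a one-time cost to turn a block into a native Python function and then reuses the cached result:

    def interpret(program, acc=0):
        for op, arg in program:            # every instruction decoded every time
            if op == "ADD":
                acc += arg
            elif op == "MUL":
                acc *= arg
        return acc

    _translation_cache = {}

    def translate_block(block):
        # "Translate" the block once into an equivalent native function.
        body = "\n".join(
            f"    acc {'+=' if op == 'ADD' else '*='} {arg}" for op, arg in block
        )
        namespace = {}
        exec(f"def _block(acc):\n{body}\n    return acc", namespace)
        return namespace["_block"]

    def run_translated(block, acc=0):
        key = tuple(block)
        if key not in _translation_cache:      # large one-time translation cost
            _translation_cache[key] = translate_block(block)
        return _translation_cache[key](acc)    # later runs execute directly

    prog = [("ADD", 2), ("MUL", 3), ("ADD", 1)]
    assert interpret(prog) == run_translated(prog) == 7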
96. CONT.,
Application virtualization is a good solution in the case of missing libraries
in the host operating system; in this case a replacement library can be linked
with the application, or library calls can be remapped to existing functions
available in the host system.
Another advantage is that in this case the virtual machine manager is much
lighter since it provides a partial emulation of the runtime environment
compared to hardware virtualization.
Moreover, this technique allows incompatible applications to run together.
Compared to programming-level virtualization, which works across all the
applications developed for that virtual machine, application-level
virtualization works for a specific environment: It supports all the
applications that run on top of a specific environment.
97. OTHER TYPES OF VIRTUALIZATION
Storage virtualization
Network virtualization
Desktop virtualization
Application server virtualization
98. STORAGE VIRTUALIZATION
Storage virtualization is a system administration practice that allows
decoupling the physical organization of the hardware from its logical
representation. Using this technique, users do not have to be worried about
the specific location of their data, which can be identified using a logical
path.
Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system. There are different techniques for storage virtualization, one of the most popular being network-based virtualization by means of storage area networks (SANs). SANs use a network-accessible device through a large-bandwidth connection to provide storage facilities.
99. NETWORK VIRTUALIZATION
Network virtualization combines hardware appliances and specific software for the creation
and management of a virtual network.
Network virtualization can aggregate different physical networks into a single logical
network (external network virtualization) or provide network-like functionality to an
operating system partition (internal network virtualization).
The result of external network virtualization is generally a virtual LAN (VLAN). A VLAN is
an aggregation of hosts that communicate with each other as though they were located under
the same broadcasting domain.
Internal network virtualization is generally applied together with hardware and operating
system-level virtualization, in which the guests obtain a virtual network interface to
communicate with.
There are several options for implementing internal network virtualization: The guest can share the same network interface of the host and use Network Address Translation (NAT) to access the network; the virtual machine manager can emulate, and install on the host, an additional network device, together with the driver; or the guest can have a private network only with the host.
100. DESKTOP VIRTUALIZATION
Desktop virtualization abstracts the desktop environment available on a personal
computer in order to provide access to it using a client/server approach.
Desktop virtualization provides the same outcome as hardware virtualization but serves a different purpose.
Similarly to hardware virtualization, desktop virtualization makes accessible a
different system as though it were natively installed on the host, but this system
is remotely stored on a different host and accessed through a network
connection.
Moreover, desktop virtualization addresses the problem of making the same
desktop environment accessible from everywhere.
Although the term desktop virtualization strictly refers to the ability to remotely
access a desktop environment, generally the desktop environment is stored in a
remote server or a data center that provides a high-availability infrastructure and
ensures the accessibility and persistence of the data.
101. CONT.,
In this scenario, an infrastructure supporting hardware virtualization is fundamental to provide
access to multiple desktop environments hosted on the same server; a specific desktop
environment is stored in a virtual machine image that is loaded and started on demand when a
client connects to the desktop environment.
This is a typical cloud computing scenario in which the user leverages the virtual
infrastructure for performing the daily tasks on his computer.
The advantages of desktop virtualization are high availability, persistence, accessibility, and
ease of management. Security issues can prevent the use of this technology.
The basic services for remotely accessing a desktop environment are implemented in software
components such as Windows Remote Services, VNC, and X Server.
Infrastructures for desktop virtualization based on cloud computing solutions include Sun
Virtual Desktop Infrastructure (VDI), Parallels Virtual Desktop Infrastructure (VDI), Citrix
XenDesktop, and others.
102. APPLICATION SERVER VIRTUALIZATION
Application server virtualization abstracts a collection of application
servers that provide the same services as a single virtual application server
by using load-balancing strategies and providing a high-availability
infrastructure for the services hosted in the application server.
This is a particular form of virtualization and serves the same purpose of
storage virtualization: providing a better quality of service rather than
emulating a different environment.
103. 2.8 IMPLEMENTATION LEVELS OF VIRTUALIZATION
Virtualization is a computer architecture technology by which multiple
virtual machines (VMs) are multiplexed in the same hardware machine.
The idea of VMs can be dated back to the 1960s.
The purpose of a VM is to enhance resource sharing by many users and
improve computer performance in terms of resource utilization and
application flexibility.
Hardware resources (CPU, memory, I/O devices, etc.) or software
resources (operating system and software libraries) can be virtualized in
various functional layers.
This virtualization technology has been revitalized as the demand for
distributed and cloud computing increased sharply in recent years.
104. CONT.,
The idea is to separate the hardware from the software to yield better system
efficiency.
For example, computer users gained access to much enlarged memory space
when the concept of virtual memory was introduced.
Similarly, virtualization techniques can be applied to enhance the use of
compute engines, networks, and storage.
According to a 2009 Gartner Report, virtualization was the top strategic
technology poised to change the computer industry.
With sufficient storage, any computer platform can be installed in another
host computer, even if they use processors with different instruction sets and
run with distinct operating systems on the same hardware.
105. 2.8.1 LEVELS OF VIRTUALIZATION IMPLEMENTATION
A traditional computer runs with a host operating system specially tailored for its hardware architecture, as shown in Figure (a) below.
After virtualization, different user applications managed by their own
operating systems (guest OS) can run on the same hardware, independent
of the host OS.
This is often done by adding additional software, called a virtualization
layer as shown in Figure 3.1(b).
This virtualization layer is known as hypervisor or virtual machine
monitor (VMM).
The VMs are shown in the upper boxes, where applications run with
their own guest OS over the virtualized CPU, memory, and I/O
resources.
108. CONT.,
The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs.
This can be implemented at various operational levels, as we will discuss
shortly.
The virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
Common virtualization layers include the instruction set architecture (ISA)
level, hardware level, operating system level, library support level, and
application level.
110. 2.8.1.1 Instruction Set Architecture Level
At the ISA level, virtualization is performed by emulating a given ISA by
the ISA of the host machine.
For example, MIPS binary code can run on an x86-based host machine
with the help of ISA emulation.
With this approach, it is possible to run a large amount of legacy binary
code written for various processors on any given new hardware host
machine.
Instruction set emulation leads to virtual ISAs created on any hardware
machine.
The basic emulation method is through code interpretation. An interpreter
program interprets the source instructions to target instructions one by
one.
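A minimal fetch-decode-execute loop over a made-up two-register ISA shows the idea; the opcodes and registers are invented for the example:

    def emulate(program):
        regs = {"r0": 0, "r1": 0}
        pc = 0
        while pc < len(program):
            op, *args = program[pc]   # fetch and decode one source instruction
            if op == "LOADI":         # LOADI reg, imm
                regs[args[0]] = args[1]
            elif op == "ADD":         # ADD dst, src
                regs[args[0]] += regs[args[1]]
            pc += 1                   # one by one: no translation, no caching
        return regs

    print(emulate([("LOADI", "r0", 5), ("LOADI", "r1", 7), ("ADD", "r0", "r1")]))
    # {'r0': 12, 'r1': 7}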
111. CONT.,
One source instruction may require tens or hundreds of native target
instructions to perform its function.
Obviously, this process is relatively slow. For better performance, dynamic
binary translation is desired.
This approach translates basic blocks of dynamic source instructions to
target instructions.
The basic blocks can also be extended to program traces or super blocks to
increase translation efficiency. Instruction set emulation requires binary
translation and optimization.
A virtual instruction set architecture (V-ISA) thus requires adding a
processor-specific software translation layer to the compiler.
112. 2.8.1.2 Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware.
On the one hand, this approach generates a virtual hardware environment for a
VM.
On the other hand, the process manages the underlying hardware through
virtualization.
The idea is to virtualize a computer’s resources, such as its processors, memory,
and I/O devices.
The intention is to upgrade the hardware utilization rate by multiple users
concurrently.
The idea was implemented in the IBM VM/370 in the 1960s. More recently, the
Xen hypervisor has been applied to virtualize x86-based machines to run Linux or
other guest OS applications.
113. 2.8.1.3 Operating System Level
This refers to an abstraction layer between traditional OS and user
applications.
OS-level virtualization creates isolated containers on a single physical server, with OS instances that utilize the hardware and software in data centers.
The containers behave like real servers. OS-level virtualization is
commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users.
It is also used, to a lesser extent, in consolidating server hardware by
moving services on separate hosts into containers or VMs on one server.
114. 2.8.1.4 Library Support Level
Most applications use APIs exported by user-level libraries rather than
using lengthy system calls by the OS.
Since most systems provide well-documented APIs, such an interface
becomes another candidate for virtualization.
Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of a system
through API hooks.
The software tool WINE has implemented this approach to support
Windows applications on top of UNIX hosts.
Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware acceleration.
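API hooking is easy to picture with a toy sketch: intercept a library call, add behavior, and forward to the real implementation, which is in spirit what layers such as WINE do for whole API surfaces (the hook here is only a logging example):

    import time

    _real_sleep = time.sleep        # keep a handle on the original API

    def hooked_sleep(seconds):
        # Hook on the communication link: observe/remap, then forward the call.
        print(f"[hook] sleep({seconds}) intercepted")
        _real_sleep(seconds)

    time.sleep = hooked_sleep       # install the hook
    time.sleep(0.1)                 # callers now go through the hook transparently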
115. 2.8.1.5 User-Application Level
Virtualization at the application level virtualizes an application as a VM.
On a traditional OS, an application often runs as a process. Therefore,
application-level virtualization is also known as process-level virtualization.
The most popular approach is to deploy high level language (HLL) VMs. In
this scenario, the virtualization layer sits as an application program on top of
the operating system, and the layer exports an abstraction of a VM that can
run programs written and compiled to a particular abstract machine definition.
Any program written in the HLL and compiled for this VM will be able to
run on it. The Microsoft .NET CLR and Java Virtual Machine (JVM) are two
good examples of this class of VM.
116. CONT.,
Other forms of application-level virtualization are known as application
isolation, application sandboxing, or application streaming.
The process involves wrapping the application in a layer that is isolated
from the host OS and other applications.
The result is an application that is much easier to distribute and remove
from user workstations.
An example is the LANDesk application virtualization platform, which deploys software applications as self-contained, executable files in an isolated environment without requiring installation, system modifications, or elevated security privileges.
117. 2.8.2 VMM DESIGN REQUIREMENTS AND
PROVIDERS
As mentioned earlier, hardware-level virtualization inserts a layer
between real hardware and traditional operating systems.
This layer is commonly called the Virtual Machine Monitor (VMM) and
it manages the hardware resources of a computing system.
Each time programs access the hardware, the VMM captures the process.
In this sense, the VMM acts as a traditional OS.
One hardware component, such as the CPU, can be virtualized as several
virtual copies. Therefore, several traditional operating systems which are
the same or different can sit on the same set of hardware simultaneously.
118. CONT.,
There are three requirements for a VMM:
First, a VMM should provide an environment for programs which is essentially identical to the original machine.
Second, programs run in this environment should show, at worst, only minor decreases in speed.
Third, a VMM should be in complete control of the system resources.
Any program run under a VMM should exhibit a function identical to that
which it runs on the original machine directly.
Two possible exceptions in terms of differences are permitted with this requirement: differences caused by the availability of system resources and differences caused by timing dependencies. The former arises when more than one VM is running on the same machine.
119. CONT.,
The hardware resource requirements, such as memory, of each VM are
reduced, but the sum of them is greater than that of the real machine
installed.
The latter qualification is required because of the intervening level of
software and the effect of any other VMs concurrently existing on the
same hardware.
Obviously, these two differences pertain to performance, while the
function a VMM provides stays the same as that of a real machine.
However, the identical environment requirement excludes the behavior
of the usual time-sharing operating system from being classed as a
VMM.
120. CONT.,
A VMM should demonstrate efficiency in using the VMs. Compared with
a physical machine, no one prefers a VMM if its efficiency is too low.
Traditional emulators and complete software interpreters (simulators)
emulate each instruction by means of functions or macros.
Such a method provides the most flexible solutions for VMMs.
However, emulators or simulators are too slow to be used as real
machines.
To guarantee the efficiency of a VMM, a statistically dominant subset of
the virtual processor’s instructions needs to be executed directly by the
real processor, with no software intervention by the VMM.
121. CONT.,
Complete control of these resources by a VMM includes the following
aspects:
(1) The VMM is responsible for allocating hardware resources for
programs;
(2) it is not possible for a program to access any resource not explicitly
allocated to it; and
(3) it is possible under certain circumstances for a VMM to regain control of resources already allocated. Not all processors satisfy these requirements for a VMM.
122. CONT.,
A VMM is tightly related to the architectures of processors. It is difficult
to implement a VMM for some types of processors, such as the x86.
Specific limitations include the inability to trap on some privileged
instructions.
If a processor is not primarily designed to support virtualization, it is necessary to modify the hardware to satisfy the three requirements for a VMM.
This is known as hardware-assisted virtualization.
123. 2.8.3 VIRTUALIZATION SUPPORT AT THE OS LEVEL
With the help of VM technology, a new computing mode known as cloud
computing is emerging.
Cloud computing is transforming the computing landscape by shifting
the hardware and staffing costs of managing a computational center to
third parties, just like banks.
However, cloud computing has at least two challenges. The first is the
ability to use a variable number of physical machines and VM instances
depending on the needs of a problem.
124. CONT.,
For example, a task may need only a single CPU during some
phases of execution but may need hundreds of CPUs at other times.
The second challenge concerns the slow operation of instantiating
new VMs.
Currently, new VMs originate either as fresh boots or as replicates
of a template VM, unaware of the current application state.
Therefore, to better support cloud computing, a large amount of
research and development should be done.
125. 2.8.3.1 Why OS-Level Virtualization?
As mentioned earlier, it is slow to initialize a hardware-level VM
because each VM creates its own image from scratch.
In a cloud computing environment, perhaps thousands of VMs need to
be initialized simultaneously.
Besides slow operation, storing the VM images also becomes an issue.
As a matter of fact, there is considerable repeated content among VM
images.
Moreover, full virtualization at the hardware level also has the
disadvantages of slow performance and low density, and the need for
para-virtualization to modify the guest OS.
To reduce the performance overhead of hardware-level virtualization,
even hardware modification is needed. OS-level virtualization provides a
feasible solution for these hardware-level virtualization issues.
126. CONT.,
Operating system virtualization inserts a virtualization layer inside an
operating system to partition a machine’s physical resources.
It enables multiple isolated VMs within a single operating system kernel.
This kind of VM is often called a virtual execution environment (VE),
Virtual Private System (VPS), or simply container.
From the user's point of view, VEs look like real servers. This means a VE has its own set of processes, file system, user accounts, network interfaces with IP addresses, routing tables, firewall rules, and other personal settings.
Although VEs can be customized for different people, they share the same
operating system kernel. Therefore, OS-level virtualization is also called
single-OS image virtualization.
Figure: The OpenVZ virtualization layer inside the host OS, which provides OS images to create VMs quickly, illustrating operating system virtualization from the point of view of a machine stack.
128. 2.8.3.2 Advantages of OS Extensions
Compared to hardware-level virtualization, the benefits of OS extensions are
twofold:
(1) VMs at the operating system level have minimal startup/shutdown costs,
low resource requirements, and high scalability; and
(2) for an OS-level VM, it is possible for a VM and its host environment to synchronize state changes when necessary.
129. CONT.,
These benefits can be achieved via two mechanisms of OS-level
virtualization:
(1) All OS-level VMs on the same physical machine share a single
operating system kernel; and
(2) the virtualization layer can be designed in a way that allows processes
in VMs to access as many resources of the host machine as possible, but
never to modify them.
In cloud computing, the first and second benefits can be used to
overcome the defects of slow initialization of VMs at the hardware level,
and being unaware of the current application state, respectively.
130. 2.8.3.3 Disadvantages of OS Extensions
The main disadvantage of OS extensions is that all the VMs at operating
system level on a single container must have the same kind of guest
operating system.
That is, although different OS-level VMs may have different operating
system distributions, they must pertain to the same operating system
family.
For example, a Windows distribution such as Windows XP cannot run on
a Linux-based container.
However, users of cloud computing have various preferences. Some
prefer Windows and others prefer Linux or other operating systems.
Therefore, there is a challenge for OS-level virtualization in such cases.
131. CONT.,
The virtualization layer is inserted inside the OS to partition the
hardware resources for multiple VMs to run their applications in
multiple virtual environments.
To implement OS-level virtualization, isolated execution environments (VMs) should be created based on a single OS kernel.
Furthermore, the access requests from a VM need to be redirected to the
VM’s local resource partition on the physical machine.
For example, the chroot command in a UNIX system can create several
virtual root directories within a host OS. These virtual root directories
are the root directories of all VMs created.
132. CONT.,
There are two ways to implement virtual root directories: duplicating
common resources to each VM partition; or sharing most resources with
the host environment and only creating private resource copies on the
VM on demand.
The first way incurs significant resource costs and overhead on a
physical machine.
This issue neutralizes the benefits of OS-level virtualization, compared
with hardware-assisted virtualization. Therefore, OS-level virtualization
is often a second choice.
133. 2.8.3.4 Virtualization on Linux or Windows Platforms
By far, most reported OS-level virtualization systems are Linux-based.
Virtualization support on the Windows-based platform is still in the research
stage.
The Linux kernel offers an abstraction layer to allow software processes to
work with and operate on resources without knowing the hardware details.
New hardware may need a new Linux kernel to support it. Therefore, different Linux platforms use patched kernels to provide special support for extended functionality.
However, most Linux platforms are not tied to a special kernel. In such a case, a host can run several VMs simultaneously on the same hardware. Two OS tools (Linux vServer and OpenVZ) support Linux platforms to run other platform-based applications through virtualization.
Table: Virtualization support for Linux and Windows platforms, summarizing several examples of OS-level virtualization tools developed in recent years.
135. 2.8.4 MIDDLEWARE SUPPORT FOR VIRTUALIZATION
Library-level virtualization is also known as user-level Application Binary
Interface (ABI) or API emulation.
This type of virtualization can create execution environments for running
alien programs on a platform rather than creating a VM to run the entire
operating system.
API call interception and remapping are the key functions performed.
This section provides an overview of several library-level virtualization
systems: namely the Windows Application Binary Interface (WABI), lxrun,
WINE, Visual MainWin, and vCUDA, which are summarized in Table.
137. CONT.,
The WABI offers middleware to convert Windows system calls to
Solaris system calls.
Lxrun is really a system call emulator that enables Linux
applications written for x86 hosts to run on UNIX systems.
Similarly, Wine offers library support for virtualizing x86 processors
to run Windows applications on UNIX hosts.
Visual MainWin offers a compiler support system to develop
Windows applications using Visual Studio to run on some UNIX
hosts.
138. 2.9 VIRTUALIZATION STRUCTURES/TOOLS AND
MECHANISMS
In general, there are three typical classes of VM architecture.
Before virtualization, the operating system manages the hardware.
After virtualization, a virtualization layer is inserted between the hardware
and the operating system.
In such a case, the virtualization layer is responsible for converting
portions of the real hardware into virtual hardware.
139. CONT.,
Therefore, different operating systems such as Linux and Windows
can run on the same physical machine, simultaneously.
Depending on the position of the virtualization layer, there are
several classes of VM architectures, namely the hypervisor
architecture, para-virtualization, and host-based virtualization.
The hypervisor is also known as the VMM (Virtual Machine
Monitor). They both perform the same virtualization operations.
140. 2.9.1 HYPERVISOR AND XEN ARCHITECTURE
The hypervisor supports hardware-level virtualization on bare metal
devices like CPU, memory, disk and network interfaces.
The hypervisor software sits directly between the physical hardware
and its OS. This virtualization layer is referred to as either the VMM
or the hypervisor.
The hypervisor provides hyper-calls for the guest OSes and applications.
Depending on the functionality, a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V, or a monolithic hypervisor architecture like VMware ESX for server virtualization.
141. CONT.,
A micro-kernel hypervisor includes only the basic and unchanging
functions (such as physical memory management and processor
scheduling).
The device drivers and other changeable components are outside the
hypervisor.
A monolithic hypervisor implements all the aforementioned functions,
including those of the device drivers.
Therefore, the size of the hypervisor code of a micro-kernel hypervisor is
smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into
virtual resources dedicated for the deployed VM to use.
142. 2.9.1.1 The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge
University.
Xen is a micro-kernel hypervisor, which separates the policy from the
mechanism.
The Xen hypervisor implements all the mechanisms, leaving the policy to be
handled by Domain 0.
Xen does not include any device drivers natively. It just provides a mechanism
by which a guest OS can have direct access to the physical devices.
As a result, the size of the Xen hypervisor is kept rather small. Xen provides a
virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen
hypervisors, among them are Citrix XenServer and Oracle VM.
Figure: The Xen architecture’s special Domain 0 for control and I/O, and several guest domains for user applications.
144. CONT.,
The core components of a Xen system are the hypervisor, kernel, and
applications. The organization of the three components is important.
Like other virtualization systems, many guest OSes can run on top of the
hypervisor.
However, not all guest OSes are created equal, and one in particular
controls the others.
The guest OS, which has control ability, is called Domain 0, and the
others are called Domain U. Domain 0 is a privileged guest OS of Xen.
It is first loaded when Xen boots without any file system drivers being
available. Domain 0 is designed to access hardware directly and manage
devices.
145. CONT.,
Therefore, one of the responsibilities of Domain 0 is to allocate and map
hardware resources for the guest domains (the Domain U domains).
For example, Xen is based on Linux and its security level is C2. Its
management VM is named Domain 0, which has the privilege to manage
other VMs implemented on the same host.
If Domain 0 is compromised, the hacker can control the entire system. So, in
the VM system, security policies are needed to improve the security of
Domain 0.
Domain 0, behaving as a VMM, allows users to create, copy, save, read,
modify, share, migrate, and roll back VMs as easily as manipulating a file,
which flexibly provides tremendous benefits for users.
Unfortunately, it also brings a series of security problems during the software
life cycle and data lifetime.
146. CONT.,
Traditionally, a machine’s lifetime can be envisioned as a straight line
where the current state of the machine is a point that progresses
monotonically as the software executes.
During this time, configuration changes are made, software is installed, and patches are applied.
In such an environment, the VM state is akin to a tree: At any point,
execution can go into N different branches where multiple instances of a
VM can exist at any point in this tree at any given time.
VMs are allowed to roll back to previous states in their execution (e.g.,
to fix configuration errors) or rerun from the same point many times
(e.g., as a means of distributing dynamic content or circulating a “live”
system image).
147. 2.9.2 BINARY TRANSLATION WITH FULL
VIRTUALIZATION
Depending on implementation technologies, hardware virtualization can
be classified into two categories: full virtualization and host-based
virtualization.
Full virtualization does not need to modify the host OS. It relies on binary
translation to trap and to virtualize the execution of certain sensitive,
nonvirtualizable instructions.
The guest OSes and their applications consist of noncritical and critical
instructions.
In a host-based system, both a host OS and a guest OS are used, and a virtualization software layer is built between the host OS and the guest OS. These two classes of VM architecture are introduced next.
148. 2.9.2.1 Full Virtualization
With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with traps
into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full
virtualization.
Why are only critical instructions trapped into the VMM? This is because
binary translation can incur a large performance overhead.
Noncritical instructions do not control hardware or threaten the security
of the system, but critical instructions do.
Therefore, running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security.
149. 2.9.2.2 Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software
companies. As shown in Figure, VMware puts the VMM at Ring 0 and
the guest OS at Ring 1.
The VMM scans the instruction stream and identifies the privileged,
control- and behavior-sensitive instructions.
When these instructions are identified, they are trapped into the VMM,
which emulates the behavior of these instructions.
The method used in this emulation is called binary translation.
Therefore, full virtualization combines binary translation and direct
execution.
The guest OS is completely decoupled from the underlying hardware.
Consequently, the guest OS is unaware that it is being virtualized.
Figure: Indirect execution of complex instructions via binary translation of guest OS requests using the VMM, plus direct execution of simple instructions on the same host.
151. CONT.,
The performance of full virtualization may not be ideal, because it
involves binary translation which is rather time-consuming.
In particular, the full virtualization of I/O-intensive applications is really a big challenge.
Binary translation employs a code cache to store translated hot
instructions to improve performance, but it increases the cost of
memory usage.
At the time of this writing, the performance of full virtualization on the
x86 architecture is typically 80 percent to 97 percent that of the host
machine.
152. 2.9.2.3 Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of
the host OS.
This host OS is still responsible for managing the hardware. The guest
OS’es are installed and run on top of the virtualization layer.
Dedicated applications may run on the VMs. Certainly, some other
applications can also run with the host OS directly.
This host-based architecture has some distinct advantages, as enumerated next.
First, the user can install this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to provide device drivers and other low-level services. This will simplify the VM design and ease its deployment.
153. CONT.,
Second, the host-based approach appeals to many host machine
configurations.
Compared to the hypervisor/VMM architecture, the performance of the
host-based architecture may also be low.
When an application requests hardware access, it involves four layers of
mapping which downgrades performance significantly.
When the ISA of a guest OS is different from the ISA of the underlying
hardware, binary translation must be adopted.
Although the host-based architecture has flexibility, the performance is
too low to be useful in practice.
154. 2.9.3 PARA-VIRTUALIZATION WITH COMPILER
SUPPORT
Para-virtualization needs to modify the guest operating systems. A
para-virtualized VM provides special APIs requiring substantial OS
modifications in user applications.
Performance degradation is a critical issue of a virtualized system.
No one wants to use a VM if it is much slower than using a physical
machine.
The virtualization layer can be inserted at different positions in a
machine software stack.
However, para-virtualization attempts to reduce the virtualization
overhead, and thus improve performance by modifying only the
guest OS kernel.
155. Para-virtualized VM architecture, which involves modifying the
guest OS kernel to replace non-virtualizable instructions with
hyper-calls for the hypervisor or the VMM to carry out the
virtualization process
FIGURE (A)
156. The use of a para-virtualized guest OS assisted by an
intelligent compiler to replace non-virtualizable OS
instructions by hyper-calls.
FIGURE (B)
157. CONT.,
The figure illustrates the concept of a para-virtualized VM architecture. The
guest operating systems are para-virtualized.
They are assisted by an intelligent compiler that replaces the
non-virtualizable OS instructions with hyper-calls, as illustrated in the figure.
The traditional x86 processor offers four instruction execution rings:
Rings 0, 1, 2, and 3.
The lower the ring number, the higher the privilege of instruction being
executed.
The OS is responsible for managing the hardware and executing privileged
instructions at Ring 0, while user-level applications run at
Ring 3. The best example of para-virtualization is KVM,
described below.
158. 2.9.3.1 Para-Virtualization Architecture
When the x86 processor is virtualized, a virtualization layer is
inserted between the hardware and the OS.
According to the x86 ring definition, the virtualization layer should
also be installed at Ring 0.
Different instructions at Ring 0 may cause some problems.
In the figure, we show that para-virtualization replaces
non-virtualizable instructions with hyper-calls that communicate directly
with the hypervisor or VMM. However, when the guest OS kernel is
modified for virtualization, it can no longer run on the hardware
directly.
159. CONT.,
Although para-virtualization reduces the overhead, it incurs other
problems.
First, its compatibility and portability may be in doubt, because it must
support the unmodified OS as well.
Second, the cost of maintaining para-virtualized OSes is high, because
they may require deep OS kernel modifications.
Finally, the performance advantage of para-virtualization varies greatly
due to workload variations. Compared with full virtualization,
para-virtualization is relatively easy and more practical.
The main problem in full virtualization is its low performance in binary
translation. To speed up binary translation is difficult.
Therefore, many virtualization products employ the para-virtualization
architecture. The popular Xen, KVM, and VMware ESX are good
examples.
160. 2.9.3.2 KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux
version 2.6.20 kernel.
Memory management and scheduling activities are carried out by
the existing Linux kernel.
The KVM does the rest, which makes it simpler than the hypervisor
that controls the entire machine.
KVM is a hardware-assisted para-virtualization tool, which
improves performance and supports unmodified guest OSes such as
Windows, Linux, Solaris, and other UNIX variants.
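As a concrete illustration, the sketch below shows how a user-space program on Linux asks the KVM kernel module for a VM and a VCPU through ioctl calls on /dev/kvm. Guest memory setup, register state, the KVM_RUN loop, and most error handling are omitted, so this is a minimal sketch rather than a working launcher.

    /* Minimal sketch: creating a VM and one VCPU through the Linux KVM API. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR);  /* talk to the kernel module */
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);      /* one fd per VM */
        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0); /* one fd per VCPU */

        /* Scheduling and memory management for this VM are left to the
         * host Linux kernel, as described above; KVM does the rest. */
        printf("vm fd=%d, vcpu fd=%d\n", vmfd, vcpufd);
        return 0;
    }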
161. 2.9.3.3 Para-Virtualization with Compiler Support
Unlike the full virtualization architecture which intercepts and
emulates privileged and sensitive instructions at runtime,
para-virtualization handles these instructions at compile time.
The guest OS kernel is modified to replace the privileged and
sensitive instructions with hyper-calls to the hypervisor or VMM.
Xen assumes such a para-virtualization architecture. The guest OS
running in a guest domain may run at Ring 1 instead of at Ring 0.
162. CONT.,
This implies that the guest OS may not be able to execute some
privileged and sensitive instructions.
The privileged instructions are implemented by hyper-calls to the
hypervisor.
After replacing the instructions with hyper-calls, the modified guest
OS emulates the behavior of the original guest OS.
On a UNIX system, a system call involves an interrupt or service
routine. The hyper-calls apply a dedicated service routine in Xen.
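The guest side of a hyper-call can be sketched as follows, modeled on the convention used by Linux's KVM guest support: the hyper-call number is passed in RAX and the VMCALL instruction transfers control to the hypervisor's dedicated service routine (Xen historically entered through the 82h software interrupt or a hypercall page instead). Executing VMCALL outside a hardware-assisted guest raises an invalid-opcode fault, so treat this as illustrative.

    /* Sketch of a guest-side hyper-call (x86, Intel convention).
     * RAX carries the hyper-call number in; the result returns in RAX. */
    static inline long hypercall0(unsigned int nr)
    {
        long ret;
        __asm__ volatile("vmcall"          /* exit into the hypervisor */
                         : "=a"(ret)
                         : "a"((long)nr)
                         : "memory");
        return ret;
    }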
163. 2.10 VIRTUALIZATION OF CPU, MEMORY, AND I/O
DEVICES
To support virtualization, processors such as the x86 employ a
special running mode and instructions, known as hardware-assisted
virtualization.
In this way, the VMM and guest OS run in different modes and all
sensitive instructions of the guest OS and its applications are trapped
in the VMM.
To save processor states, mode switching is completed by hardware.
For the x86 architecture, Intel and AMD have proprietary
technologies for hardware-assisted virtualization.
164. 2.10.1 HARDWARE SUPPORT FOR VIRTUALIZATION
Modern operating systems and processors permit multiple processes to run
simultaneously.
If there is no protection mechanism in a processor, all instructions from
different processes will access the hardware directly and cause a system
crash.
Therefore, all processors have at least two modes, user mode and
supervisor mode, to ensure controlled access of critical hardware.
Instructions running in supervisor mode are called privileged instructions.
Other instructions are unprivileged instructions.
In a virtualized environment, it is more difficult to make OSes and
applications run correctly because there are more layers in the machine
stack.
165. CONT.,
At the time of this writing, many hardware virtualization products were
available.
The VMware Workstation is a VM software suite for x86 and x86-64
computers.
This software suite allows users to set up multiple x86 and x86-64 virtual
computers and to use one or more of these VMs simultaneously with the host
operating system.
The VMware Workstation assumes the host-based virtualization. Xen is a
hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
Actually, Xen modifies Linux as the lowest and most privileged layer, or a
hypervisor.
166. CONT.,
One or more guest OSes can run on top of the hypervisor. KVM
(Kernel-based Virtual Machine) is a Linux kernel virtualization
infrastructure.
KVM can support hardware-assisted virtualization and
para-virtualization by using Intel VT-x or AMD-V and the VirtIO
framework, respectively.
The VirtIO framework includes a paravirtual Ethernet card, a disk
I/O controller, a balloon device for adjusting guest memory usage,
and a VGA graphics interface using VMware drivers.
167. 2.10.2 CPU VIRTUALIZATION
A VM is a duplicate of an existing computer system in which a majority of the
VM instructions are executed on the host processor in native mode.
Thus, unprivileged instructions of VMs run directly on the host machine for
higher efficiency.
Other critical instructions should be handled carefully for correctness and stability.
The critical instructions are divided into three categories: privileged instructions,
control-sensitive instructions, and behavior-sensitive instructions.
Privileged instructions execute in a privileged mode and will be trapped if
executed outside this mode.
Control-sensitive instructions attempt to change the configuration of resources
used.
Behavior-sensitive instructions have different behaviors depending on the
configuration of resources, including the load and store operations over the virtual
memory.
168. CONT.,
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged
and unprivileged instructions in the CPU’s user mode while the VMM runs in
supervisor mode.
When the privileged instructions including control- and behavior-sensitive instructions
of a VM are executed, they are trapped in the VMM.
In this case, the VMM acts as a unified mediator for hardware access from different
VMs to guarantee the correctness and stability of the whole system.
However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control-sensitive and behavior-sensitive instructions are
privileged instructions.
On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such as SGDT and
SMSW, are not privileged instructions.
When these instructions execute in virtualization, they cannot be trapped in the VMM.
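This weakness can be observed directly. The fragment below (x86 Linux) executes SMSW at user level; because SMSW is sensitive but not privileged, it completes at Ring 3 without trapping, so a classic trap-and-emulate VMM never gets a chance to intercept it. (On recent CPUs the UMIP feature may cause the kernel to block or emulate such reads.)

    /* Why classic x86 is hard to virtualize: SMSW reads the machine
     * status word (low bits of CR0) yet is NOT a privileged instruction,
     * so at user level it completes without trapping. x86 only. */
    #include <stdio.h>

    int main(void) {
        unsigned long msw = 0;
        __asm__ volatile("smsw %0" : "=r"(msw)); /* sensitive, unprivileged */
        printf("machine status word read at user level: 0x%lx\n", msw);
        return 0;
    }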
169. CONT.,
On a native UNIX-like system, a system call triggers the 80h interrupt and passes
control to the OS kernel.
The interrupt handler in the kernel is then invoked to process the system call. On a
para-virtualization system such as Xen, a system call in the guest OS first triggers the
80h interrupt normally.
Almost at the same time, the 82h interrupt in the hypervisor is triggered. Incidentally,
control is passed on to the hypervisor as well.
When the hypervisor completes its task for the guest OS system call, it passes control
back to the guest OS kernel.
Certainly, the guest OS kernel may also invoke the hyper-call while it’s running.
Although para-virtualization of a CPU lets unmodified applications run in the VM, it
causes a small performance penalty.
170. 2.10.2.1 Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full
virtualization and para-virtualization are complicated.
Intel and AMD add an additional mode called privilege mode level
(some people call it Ring -1) to x86 processors.
Therefore, operating systems can still run at Ring 0 and the hypervisor
can run at Ring -1.
All the privileged and sensitive instructions are trapped in the
hypervisor automatically. This technique removes the difficulty of
implementing binary translation of full virtualization.
It also lets the operating system run in VMs without modification.
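On a Linux host, whether these extensions are present can be read from the CPU feature flags: the "vmx" flag marks Intel VT-x and "svm" marks AMD-V. A small sketch:

    /* Check /proc/cpuinfo (Linux, x86) for the hardware-assisted
     * virtualization extensions discussed above. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("fopen"); return 1; }
        char line[4096];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "flags", 5) == 0) {  /* first CPU's flag list */
                printf("Intel VT-x (vmx): %s\n", strstr(line, " vmx") ? "yes" : "no");
                printf("AMD-V (svm): %s\n", strstr(line, " svm") ? "yes" : "no");
                break;
            }
        }
        fclose(f);
        return 0;
    }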
172. 2.10.3 MEMORY VIRTUALIZATION
Virtual memory virtualization is similar to the virtual memory support
provided by modern operating systems.
In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables,
which is a one-stage mapping from virtual memory to machine memory.
All modern x86 CPUs include a memory management unit (MMU) and a
translation lookaside buffer (TLB) to optimize virtual memory
performance.
However, in a virtual execution environment, virtual memory
virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs.
173. CONT.,
That means a two-stage mapping process should be maintained by the
guest OS and the VMM, respectively: virtual memory to physical memory
and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is
transparent to the guest OS.
The guest OS continues to control the mapping of virtual addresses to the
physical memory addresses of VMs.
But the guest OS cannot directly access the actual machine memory. The
VMM is responsible for mapping the guest physical memory to the actual
machine memory.
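The two-stage mapping can be modeled with two toy page tables, as in the sketch below. The page numbers are invented and real page tables are multi-level, but the composition of the guest's mapping with the VMM's mapping is the essential point.

    /* Toy model of the two-stage mapping: guest OS maps virtual pages to
     * guest-physical pages; the VMM maps those to machine pages. */
    #include <stdio.h>

    #define PAGES 8
    static int guest_pt[PAGES] = {3, 1, 7, 2, 0, 5, 6, 4}; /* VA -> guest PA */
    static int vmm_pt[PAGES]   = {6, 4, 0, 2, 7, 1, 3, 5}; /* guest PA -> machine */

    static int translate(int va_page) {
        int gpa_page = guest_pt[va_page]; /* stage 1: guest OS page table */
        return vmm_pt[gpa_page];          /* stage 2: VMM page table */
    }

    int main(void) {
        for (int p = 0; p < PAGES; p++)
            printf("VA page %d -> machine page %d\n", p, translate(p));
        return 0;
    }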
175. CONT.,
Since each page table of the guest OSes has a separate page table in the
VMM corresponding to it, the VMM page table is called the shadow page
table.
Nested page tables add another layer of indirection to virtual memory.
The MMU already handles virtual-to-physical translations as defined by
the OS.
Then the physical memory addresses are translated to machine addresses
using another set of page tables defined by the hypervisor.
Since modern operating systems maintain a set of page tables for every
process, the shadow page tables will get flooded. Consequently, the
performance overhead and cost of memory will be very high.
176. CONT.,
VMware uses shadow page tables to perform
virtual-memory-to-machine-memory address translation.
Processors use TLB hardware to map the virtual memory directly to the
machine memory to avoid the two levels of translation on every access.
When the guest OS changes the virtual memory to a physical memory
mapping, the VMM updates the shadow page tables to enable a direct
lookup.
The AMD Barcelona processor has featured hardware-assisted memory
virtualization since 2007.
It provides hardware assistance to the two-stage address translation in a
virtual execution environment by using a technology called nested
paging.
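The following toy sketch models the shadow-page-table idea: when the VMM catches a guest page-table update, it composes the new virtual-to-physical entry with its own physical-to-machine table and stores the resulting direct virtual-to-machine entry, so the hardware can later translate in one step. The data structures are illustrative, not VMware's.

    /* Toy shadow page table: the VMM keeps a direct VA -> machine table
     * in sync with the guest's VA -> guest-PA table. */
    #include <stdio.h>

    #define PAGES 8
    static int guest_pt[PAGES];                           /* VA -> guest PA */
    static int vmm_pt[PAGES] = {6, 4, 0, 2, 7, 1, 3, 5};  /* guest PA -> machine */
    static int shadow_pt[PAGES];                          /* VA -> machine */

    /* Called when the VMM catches a guest page-table update. */
    static void guest_map(int va_page, int gpa_page) {
        guest_pt[va_page]  = gpa_page;
        shadow_pt[va_page] = vmm_pt[gpa_page]; /* keep direct lookup in sync */
    }

    int main(void) {
        guest_map(0, 3);
        guest_map(1, 5);
        printf("VA page 0 -> machine page %d (one-step lookup)\n", shadow_pt[0]);
        printf("VA page 1 -> machine page %d\n", shadow_pt[1]);
        return 0;
    }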
177. 2.10.4 I/O VIRTUALIZATION
I/O virtualization involves managing the routing of I/O requests
between virtual devices and the shared physical hardware.
At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O.
Full device emulation is the first approach for I/O virtualization.
Generally, this approach emulates well-known, real-world devices.
All the functions of a device or bus infrastructure, such as device
enumeration, identification, interrupts, and DMA, are replicated in
software.
This software is located in the VMM and acts as a virtual device. The
I/O access requests of the guest OS are trapped in the VMM which
interacts with the I/O devices.
178. Diagram: Device emulation for I/O virtualization
implemented inside the middle layer that maps real
I/O devices into the virtual devices for the guest
device driver to use.
179. CONT.,
A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates.
The para-virtualization method of I/O virtualization is typically used in Xen. It
is also known as the split driver model consisting of a frontend driver and a
backend driver.
The frontend driver is running in Domain U and the backend driver is running
in Domain 0. They interact with each other via a block of shared memory.
The frontend driver manages the I/O requests of the guest OSes and the backend
driver is responsible for managing the real I/O devices and multiplexing the I/O
data of different VMs.
Although para-I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
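A toy model of the split driver is sketched below: the frontend produces I/O requests into a shared ring and the backend consumes them on behalf of the real device. Real Xen rings live in granted shared memory and use event-channel notifications; this in-process version only illustrates the producer/consumer structure.

    /* Toy split-driver model: frontend (Domain U) produces requests into
     * a shared ring; backend (Domain 0) drains them to the real device. */
    #include <stdio.h>

    #define RING_SIZE 4
    struct shared_ring {
        int req[RING_SIZE];
        unsigned prod, cons;   /* frontend producer / backend consumer */
    };

    static int frontend_submit(struct shared_ring *r, int request) {
        if (r->prod - r->cons == RING_SIZE) return -1; /* ring full */
        r->req[r->prod++ % RING_SIZE] = request;
        return 0;
    }

    static void backend_drain(struct shared_ring *r) {
        while (r->cons != r->prod) {
            int request = r->req[r->cons++ % RING_SIZE];
            printf("backend: issuing request %d to the real device\n", request);
        }
    }

    int main(void) {
        struct shared_ring ring = {0};
        frontend_submit(&ring, 101);  /* e.g., read block 101 */
        frontend_submit(&ring, 102);
        backend_drain(&ring);
        return 0;
    }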
180. CONT.,
Direct I/O virtualization lets the VM access devices directly. It can achieve
close-to-native performance without high CPU costs.
However, current direct I/O virtualization implementations focus on networking for
mainframes.
There are many challenges for commodity hardware devices. For example, when a
physical device is reclaimed for later reassignment (as required by workload
migration), it may have been set to an arbitrary state (e.g., DMA to some
arbitrary memory locations) that can make it function incorrectly or even
crash the whole system.
Since software-based I/O virtualization requires a very high overhead of device
emulation, hardware-assisted I/O virtualization is critical.
Intel VT-d supports the remapping of I/O DMA transfers and device-generated
interrupts. The architecture of VT-d provides the flexibility to support multiple
usage models that may run unmodified, special-purpose, or “virtualization-aware”
guest OSes.
181. CONT.,
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO). The key
idea of SV-IO is to harness the rich resources of a multicore processor.
All tasks associated with virtualizing an I/O device are encapsulated in SV-IO. It
provides virtual devices and an associated access API to VMs and a management
API to the VMM.
SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device,
such as virtual network interfaces, virtual block devices (disk), virtual camera
devices, and others.
The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists
of two message queues. One is for outgoing messages to the devices and the other
is for incoming messages from the devices.
In addition, each VIF has a unique ID for identifying it in SV-IO.
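A VIF as described above might be declared as in the sketch below; the field names are assumptions made for illustration, not the actual SV-IO definitions.

    /* Sketch of an SV-IO virtual interface (VIF): a unique ID plus two
     * message queues, one per direction. Field names are hypothetical. */
    #define VIF_QLEN 16

    struct msg_queue {
        void *msgs[VIF_QLEN];
        unsigned head, tail;
    };

    struct vif {
        unsigned id;            /* unique ID identifying the VIF in SV-IO */
        struct msg_queue out_q; /* outgoing messages to the device */
        struct msg_queue in_q;  /* incoming messages from the device */
    };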
182. 2.10.5 VIRTUALIZATION IN MULTI-CORE
PROCESSORS
Virtualizing a multi-core processor is relatively more complicated than
virtualizing a unicore processor.
Though multicore processors are claimed to have higher performance by
integrating multiple processor cores in a single chip, multicore
virtualization has raised some new challenges to computer architects,
compiler constructors, system designers, and application programmers.
There are mainly two difficulties: Application programs must be
parallelized to use all cores fully, and software must explicitly assign
tasks to the cores, which is a very complex problem.
183. CONT.,
Concerning the first challenge, new programming models, languages, and
libraries are needed to make parallel programming easier.
The second challenge has spawned research involving scheduling
algorithms and resource management policies.
Yet these efforts cannot balance well among performance, complexity, and
other issues.
What is worse, as technology scales, a new challenge called dynamic
heterogeneity is emerging, in which fat CPU cores and thin GPU cores are
mixed on the same chip, further complicating multi-core resource
management.
The dynamic heterogeneity of hardware infrastructure mainly comes from
less reliable transistors and the increased complexity of using them.
184. 2.10.5.1 Physical versus Virtual Processor Cores
Wells et al. proposed a multicore virtualization method to allow
hardware designers to get an abstraction of the low-level details of
the processor cores.
This technique alleviates the burden and inefficiency of managing
hardware resources by software.
It is located under the ISA and remains unmodified by the operating
system or VMM (hypervisor).
The figure below illustrates the technique of a software-visible VCPU
moving from one core to another and temporarily suspending
execution of a VCPU when there are no appropriate cores on which
it can run.
185. Diagram : Multicore virtualization method that
exposes four VCPUs to the software, when only
three cores are actually present.
186. 2.10.5.2 Virtual Hierarchy
The emerging many-core chip multiprocessors (CMPs) provide a new
computing landscape.
Instead of supporting time-sharing jobs on one or a few cores, we can use
the abundant cores in a space-sharing manner, where single-threaded or
multithreaded jobs are simultaneously assigned to separate groups of cores
for long time intervals.
This idea was originally suggested by Marty and Hill. To optimize for
space-shared workloads, they propose using virtual hierarchies to overlay a
coherence and caching hierarchy onto a physical processor.
Unlike a fixed physical hierarchy, a virtual hierarchy can adapt to fit how the
work is space shared for improved performance and performance isolation.
187. CONT.,
Today’s many-core CMPs use a physical hierarchy of two or more cache
levels that statically determine the cache allocation and mapping.
A virtual hierarchy is a cache hierarchy that can adapt to fit the workload
or mix of workloads.
The hierarchy’s first level locates data blocks close to the cores needing
them for faster access, establishes a shared-cache domain, and establishes
a point of coherence for faster communication.
When a miss leaves a tile, it first attempts to locate the block (or sharers)
within the first level. The first level can also provide isolation between
independent workloads. A miss at the L1 cache can invoke the L2 access.
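The two-level lookup can be sketched as follows. The tile-selection rule is invented for illustration and does not reproduce Marty and Hill's coherence protocol; the point is only that a miss searches the VM's own first-level domain before touching the globally shared second level.

    /* Toy two-level virtual hierarchy lookup. */
    #include <stdbool.h>
    #include <stdio.h>

    #define TILES 4

    static bool probe_tile(int tile, long block) {
        /* Pretend tag check: a block "lives" in the tile its low bits select. */
        return (block % TILES) == tile;
    }

    static void access_block(int domain_tiles[], int n, long block) {
        for (int i = 0; i < n; i++)        /* level 1: the VM's own tiles */
            if (probe_tile(domain_tiles[i], block)) {
                printf("block %ld: level-1 hit in tile %d\n", block, domain_tiles[i]);
                return;
            }
        /* level 2: globally shared directory/memory */
        printf("block %ld: level-1 miss, going to shared level 2\n", block);
    }

    int main(void) {
        int vm0_tiles[] = {0, 1};          /* tiles assigned to this VM */
        access_block(vm0_tiles, 2, 1);     /* hits within the VM's domain */
        access_block(vm0_tiles, 2, 2);     /* leaves the domain: level-2 access */
        return 0;
    }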
188. CONT.,
The idea is illustrated in the figure below. Space sharing is applied to assign
three workloads to three clusters of virtual cores: namely VM0 and VM3
for database workload, VM1 and VM2 for web server workload, and
VM4–VM7 for middleware workload.
The basic assumption is that each workload runs in its own VM.
However, space sharing applies equally within a single operating system.
Statically distributing the directory among tiles can do much better,
provided operating systems or hypervisors carefully map virtual pages to
physical frames.
Marty and Hill suggested a two-level virtual coherence and caching
hierarchy that harmonizes with the assignment of tiles to the virtual
clusters of VMs.
190. CONT.,
The figure below illustrates a logical view of such a virtual cluster hierarchy in two levels.
Each VM operates in an isolated fashion at the first level.
This will minimize both miss access time and performance interference with other
workloads or VMs.
Moreover, the shared resources of cache capacity, interconnect links, and miss handling
are mostly isolated between VMs.
The second level maintains a globally shared memory. This facilitates dynamically
repartitioning resources without costly cache flushes.
Furthermore, maintaining globally shared memory minimizes changes to existing system
software and allows virtualization features such as content-based page sharing.
A virtual hierarchy adapts to space-shared workloads like multiprogramming and server
consolidation. This many-core mapping scheme can also optimize for space-shared
multiprogrammed workloads in a single-OS environment.
192. 2.11 VIRTUALIZATION SUPPORT AND DISASTER
RECOVERY
One very distinguishing feature of cloud computing infrastructure is the
use of system virtualization and the modification to provisioning tools.
Virtualization of servers on a shared cluster can consolidate web services.
As the VMs are the containers of cloud services, the provisioning tools
will first find the corresponding physical machines and deploy the VMs
to those nodes before scheduling the service to run on the virtual nodes.
In addition, in cloud computing, virtualization also means the resources
and fundamental infrastructure are virtualized.
193. CONT.,
The user will not care about the computing resources that are used for
providing the services.
Cloud users do not need to know and have no way to discover physical
resources that are involved while processing a service request.
Also, application developers do not care about some infrastructure
issues such as scalability and fault tolerance (i.e., they are virtualized).
Application developers focus on service logic.
195. 2.11.1 HARDWARE VIRTUALIZATION
In many cloud computing systems, virtualization software is used to virtualize
the hardware.
System virtualization software is a special kind of software which simulates
the execution of hardware and runs even unmodified operating systems.
Cloud computing systems use virtualization software as the running
environment for legacy software such as old operating systems and unusual
applications.
Virtualization software is also used as the platform for developing new cloud
applications that enable developers to use any operating systems and
programming environments they like.
The development environment and deployment environment can now be the
same, which eliminates some runtime problems.
196. CONT.,
Some cloud computing providers have used virtualization technology to
provide this service for developers.
As mentioned before, system virtualization software simulates the
hardware so that an unmodified operating system, one that would usually
run directly on bare hardware, can run on top of software.
Table 1 lists some of the system virtualization software in wide use at
the time of this writing.
Currently, the VMs installed on a cloud computing platform are mainly
used for hosting third-party programs.
VMs provide flexible runtime services to free users from worrying about
the system environment.
198. CONT.,
Using VMs in a cloud computing platform ensures extreme flexibility for
users.
As the computing resources are shared by many users, a method is
required to maximize the users’ privileges and still keep them separated
safely.
Traditional sharing of cluster resources depends on the user and group
mechanism on a system. Such sharing is not flexible.
Users cannot customize the system for their special purposes. Operating
systems cannot be changed. The separation is not complete.
199. CONT.,
An environment that meets one user’s requirements often cannot satisfy
another user. Virtualization allows users to have full privileges while
keeping them separate.
Users have full access to their own VMs, which are completely separate
from other users’ VMs.
Multiple VMs can be mounted on the same physical server. Different
VMs may run with different OSes.
We also need to establish the virtual disk storage and virtual networks
needed by the VMs.
200. CONT.,
The virtualized resources form a resource pool. The virtualization is
carried out by special servers dedicated to generating the virtualized
resource pool.
The virtualized infrastructure (black box in the middle) is built with
many virtualizing integration managers.
These managers handle loads, resources, security, data, and provisioning
functions. The figure shows two VM platforms.
Each platform carries out a virtual solution to a user job. All cloud
services are managed in the boxes at the top.
201. Figure 2.11.1.1 Recovery overhead of a conventional disaster
recovery scheme, compared with that required to recover from
live migration of VMs
202. 2.11.2 VIRTUALIZATION SUPPORT IN PUBLIC
CLOUDS
Armbrust et al. have assessed, in Table 1, three public clouds in the
context of virtualization support: AWS, Microsoft Azure, and GAE.
AWS provides extreme flexibility (VMs) for users to execute their own
applications.
GAE provides limited application-level virtualization for users to build
applications only based on the services that are created by Google.
Microsoft provides programming-level virtualization (.NET
virtualization) for users to build their applications.
203. CONT.,
The VMware tools apply to workstations, servers, and virtual
infrastructure. The Microsoft tools are used on PCs and some special
servers.
The Xen-Enterprise tool applies only to Xen-based servers. Everyone is
interested in the cloud; the entire IT industry is moving toward the vision of
the cloud.
Virtualization leads to high availability (HA), disaster recovery, dynamic
load leveling, and rich provisioning support.
Both cloud computing and utility computing leverage the benefits of
virtualization to provide a scalable and autonomous computing
environment.
204. 2.11.3 STORAGE VIRTUALIZATION FOR GREEN DATA
CENTERS
IT power consumption in the United States has more than doubled to 3
percent of the total energy consumed in the country.
The large number of data centers in the country has contributed to this
energy crisis to a great extent.
More than half of the companies in the Fortune 500 are actively
implementing new corporate energy policies.
Recent surveys from both IDC and Gartner confirm that virtualization
has had a great impact on cost reduction through reduced power
consumption in physical computing systems.
205. CONT.,
This alarming situation has made the IT industry become more
energy-aware.
With little evolution of alternate energy resources, there is an
imminent need to conserve power in all computers.
Virtualization and server consolidation have already proven handy
in this aspect.
Green data centers and benefits of storage virtualization are
considered to further strengthen the synergy of green computing.
206. 2.11.4 VIRTUALIZATION FOR IAAS
VM technology has increased in ubiquity. This has enabled users to
create customized environments atop physical infrastructure for cloud
computing.
Use of VMs in clouds has the following distinct benefits:
(1) System administrators consolidate workloads of underutilized servers
in fewer servers;
(2) VMs have the ability to run legacy code without interfering with other
APIs;
(3) VMs can be used to improve security through creation of sandboxes for
running applications with questionable reliability;
(4) virtualized cloud platforms can apply performance isolation, letting
providers offer some guarantees and better QoS to customer applications.
207. 2.11.5 VM CLONING FOR DISASTER RECOVERY
VM technology requires an advanced disaster recovery scheme.
One scheme is to recover one physical machine by another physical machine.
The second scheme is to recover one VM by another VM. As shown in the
top timeline of Figure 2.11.1.1, traditional disaster recovery from one
physical machine to another is rather slow, complex, and expensive.
Total recovery time is attributed to the hardware configuration, installing and
configuring the OS, installing the backup agents, and the long time to restart
the physical machine.
208. CONT.,
To recover a VM platform, the installation and configuration times for
the OS and backup agents are eliminated.
Therefore, we end up with a much shorter disaster recovery time, about
40 percent of that to recover the physical machines.
Virtualization aids in fast disaster recovery by VM encapsulation.
The cloning of VMs offers an effective solution.
209. CONT.,
The idea is to make a clone VM on a remote server for every
running VM on a local server.
Among all the clone VMs, only one needs to be active. The remote
VM should be in a suspended mode.
A cloud control center should be able to activate this clone VM in
case of failure of the original VM, taking a snapshot of the VM to
enable live migration in a minimal amount of time.
The migrated VM can run on a shared Internet connection.
210. CONT.,
Only updated data and modified states are sent to the suspended VM
to update its state.
The Recovery Point Objective (RPO) and Recovery Time
Objective (RTO) are affected by the number of snapshots taken.
Security of the VMs should be enforced during live migration of
VMs.
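A toy sketch of this incremental update: the hypervisor marks pages dirtied by the primary VM, and each snapshot sends only those pages to the suspended clone. Real systems track dirty pages through write protection or hardware dirty bits; the structures here are illustrative.

    /* Toy dirty-page delta sync from a primary VM to its suspended clone. */
    #include <stdio.h>
    #include <string.h>

    #define PAGES 8
    #define PAGE_SIZE 16

    static char primary[PAGES][PAGE_SIZE];
    static char clone[PAGES][PAGE_SIZE];
    static int dirty[PAGES];

    static void guest_write(int page, const char *data) {
        strncpy(primary[page], data, PAGE_SIZE - 1);
        dirty[page] = 1;                  /* hypervisor marks the page dirty */
    }

    static void sync_to_clone(void) {
        int sent = 0;
        for (int p = 0; p < PAGES; p++)
            if (dirty[p]) {               /* send only modified state */
                memcpy(clone[p], primary[p], PAGE_SIZE);
                dirty[p] = 0;
                sent++;
            }
        printf("snapshot: sent %d of %d pages to the clone\n", sent, PAGES);
    }

    int main(void) {
        guest_write(2, "balance=100");
        guest_write(5, "log entry 1");
        sync_to_clone();                  /* only 2 of 8 pages travel */
        return 0;
    }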
211. 2.12 PROS AND CONS OF VIRTUALIZATION
Virtualization has now become extremely popular and widely used, especially in
cloud computing.
The primary reason for its wide success is the elimination of technology barriers
that prevented virtualization from being an effective and viable solution in the
past.
The most relevant barrier has been performance.
Today, the widespread diffusion of Internet connectivity and the advancements
in computing technology have made virtualization an interesting opportunity to
deliver on-demand IT infrastructure and services.
Despite its renewed popularity, this technology has benefits and also drawbacks.
212. 2.12.1 ADVANTAGES OF VIRTUALIZATION
Managed execution and isolation are perhaps the most important advantages
of virtualization.
In the case of techniques supporting the creation of virtualized execution
environments, these two characteristics allow building secure and controllable
computing environments.
A virtual execution environment can be configured as a sandbox, thus
preventing any harmful operation to cross the borders of the virtual host.
Moreover, allocation of resources and their partitioning among different
guests is simplified, since the virtual host is controlled by a program.
This enables fine-tuning of resources, which is very important in a server
consolidation scenario and is also a requirement for effective quality of
service.
213. CONT.,
Portability is another advantage of virtualization, especially for execution
virtualization techniques.
Virtual machine instances are normally represented by one or more files that
can be easily transported with respect to physical systems.
Moreover, they also tend to be self-contained since they do not have other
dependencies besides the virtual machine manager for their use.
Portability and self-containment simplify their administration. Java programs
are “compiled once and run everywhere”; they only require that the Java
virtual machine be installed on the host.
The same applies to hardware-level virtualization. It is in fact possible to
build our own operating environment within a virtual machine instance and
bring it with us wherever we go, as though we had our own laptop.
214. CONT.,
This concept is also an enabler for migration techniques in a server
consolidation scenario.
Portability and self-containment also contribute to reducing the costs of
maintenance, since the number of hosts is expected to be lower than the
number of virtual machine instances.
Since the guest program is executed in a virtual environment, there is very
limited opportunity for the guest program to damage the underlying
hardware.
Moreover, it is expected that there will be fewer virtual machine managers
with respect to the number of virtual machine instances managed.
215. CONT.,
Finally, by means of virtualization it is possible to achieve a more efficient
use of resources.
Multiple systems can securely coexist and share the resources of the
underlying host, without interfering with each other.
This is a prerequisite for server consolidation, which allows adjusting the
number of active physical resources dynamically according to the current
load of the system, thus creating the opportunity to save energy and
reduce the impact on the environment.
216. 2.12.2 THE OTHER SIDE OF THE COIN:
DISADVANTAGES
Virtualization also has downsides.
The most evident is represented by a performance decrease of guest
systems as a result of the intermediation performed by the virtualization
layer.
In addition, the abstraction layer introduced by virtualization
management software can lead to very inefficient utilization of the
host or a degraded user experience.
Less evident, but perhaps more dangerous, are the implications for
security, which are mostly due to the ability to emulate a different
execution environment.
217. 2.12.2.1 Performance degradation
Performance is definitely one of the major concerns in using virtualization
technology.
Since virtualization interposes an abstraction layer between the guest and the
host, the guest can experience increased latencies.
For instance, in the case of hardware virtualization, where the intermediate
emulates a bare machine on top of which an entire system can be installed,
the causes of performance degradation can be traced back to the overhead
introduced by the following activities:
• Maintaining the status of virtual processors
• Support of privileged instructions (trap and simulate privileged
instructions)
• Support of paging within VM
• Console functions
218. CONT.,
Furthermore, when hardware virtualization is realized through a program
that is installed or executed on top of the host operating system, a major
source of performance degradation is the fact that the virtual
machine manager is executed and scheduled together with other
applications, thus sharing the resources of the host with them.
Similar considerations apply to virtualization technologies at higher
levels, such as programming language virtual machines (Java, .NET,
and others).
Binary translation and interpretation can slow down the execution of
managed applications.
219. CONT.,
Moreover, because their execution is filtered by the runtime environment,
access to memory and other physical resources can represent sources of
performance degradation.
These concerns are becoming less and less important thanks to technology
advancements and the ever-increasing computational power available
today.
For example, specific techniques for hardware virtualization such as
para-virtualization can increase the performance of the guest program by
offloading most of its execution to the host without any change.
In programming-level virtual machines such as the JVM or .NET,
compilation to native code is offered as an option when performance is a
serious concern.
220. 2.12.2.2 Inefficiency and degraded user experience
Virtualization can sometimes lead to an inefficient use of the host. In
particular, some of the specific features of the host cannot be exposed by
the abstraction layer and thus become inaccessible.
In the case of hardware virtualization, this could happen for device
drivers:
The virtual machine can sometimes simply provide a default graphic card
that maps only a subset of the features available in the host.
In the case of programming-level virtual machines, some of the features of
the underlying operating systems may become inaccessible unless specific
libraries are used.
221. CONT.,
For example, in the first version of Java the support for graphic
programming was very limited and the look and feel of applications
was very poor compared to native applications.
These issues have been resolved by providing a new framework called
Swing for designing the user interface, and further improvements have
been done by integrating support for the OpenGL libraries in the
software development kit.
222. 2.12.2.3 Security holes and new threats
Virtualization opens the door to a new and unexpected form of phishing.
The capability of emulating a host in a completely transparent manner led
the way to malicious programs that are designed to extract sensitive
information from the guest.
In the case of hardware virtualization, malicious programs can preload
themselves before the operating system and act as a thin virtual machine
manager toward it.
The operating system is then controlled and can be manipulated to extract
sensitive information of interest to third parties.
223. CONT.,
Examples of these kinds of malware are BluePill and SubVirt. BluePill,
malware targeting the AMD processor family, moves the execution of the
installed OS within a virtual machine.
The original version of SubVirt was developed as a prototype by Microsoft
in collaboration with the University of Michigan.
SubVirt infects the guest OS, and when the virtual machine is rebooted, it
gains control of the host.
The diffusion of such kinds of malware is facilitated by the fact that
originally, hardware and CPUs were not manufactured with virtualization in
mind.
In particular, the existing instruction sets cannot be simply changed or
updated to suit the needs of virtualization.
224. CONT.,
Recently, both Intel and AMD have introduced hardware support for
virtualization with Intel VT and AMD Pacifica, respectively.
The same considerations can be made for programming-level virtual
machines:
Modified versions of the runtime environment can access sensitive
information or monitor the memory locations utilized by guest applications
while these are executed.
To make this possible, the original version of the runtime environment needs
to be replaced by the modified one, which can generally happen if the
malware is run within an administrative context or a security hole of the host
operating system is exploited.