Unit 3
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
2. Network Virtualization: The ability to run multiple virtual networks, each with
a separate control and data plane, co-existing on top of one physical network. The
virtual networks can be managed by individual parties that are kept isolated from
each other. Network virtualization provides a facility to create and provision virtual
networks, logical switches, routers, firewalls, load balancers, Virtual Private
Networks (VPNs), and workload security within days or even hours.
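As a concrete illustration of provisioning a virtual network purely in software, here is a minimal sketch using the Docker SDK for Python (an assumption on my part: it requires a running Docker daemon and the docker package, installed with pip install docker); it creates an isolated bridge network in seconds, with no physical rewiring.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Create an isolated virtual (bridge) network; no physical rewiring needed.
net = client.networks.create("demo-isolated-net", driver="bridge")
print("Created virtual network:", net.name, net.id)

# Containers attached to this network can reach each other but stay
# isolated from containers on other virtual networks.
net.remove()  # clean up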
3. Desktop Virtualization: Desktop virtualization allows the users' OS to be
stored remotely on a server in the data center. It allows a user to access their
desktop virtually, from any location, on a different machine. Users who want a
specific operating system other than Windows Server will need a virtual
desktop. The main benefits of desktop virtualization are user mobility, portability,
and easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are
managed by a virtual storage system. The servers aren't aware of exactly where
their data is stored and instead function more like worker bees in a hive. It allows
storage from multiple sources to be managed and utilized as a single
repository. Storage virtualization software maintains smooth operations, consistent
performance, and a continuous suite of advanced functions despite changes,
breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of
server resources takes place. Here, the central server (physical server) is divided into
multiple different virtual servers by changing their identity numbers and processors,
so each virtual server can run its own operating system in an isolated manner, while
each sub-server knows the identity of the central server. This increases
performance and reduces operating cost by dividing the main server's
resources among the sub-servers. It is beneficial for virtual machine migration,
reducing energy consumption, reducing infrastructure costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is
collected from various sources and managed in a single place, without users needing
to know technical details such as how the data is collected, stored, and formatted.
The data is arranged logically, so that its virtual view can be accessed remotely by
interested people, stakeholders, and users through various cloud services.
Many big companies provide data virtualization services, such as Oracle, IBM,
AtScale, CData, etc.
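To make the idea concrete, here is a toy Python sketch (all names and data are invented for illustration) in which a single virtual view hides whether records come from a CSV export or a JSON feed:

import csv, io, json

# Source 1: CSV data (say, an export from a legacy system).
csv_source = "id,name\n1,Alice\n2,Bob\n"
# Source 2: JSON data (say, from a web service).
json_source = '[{"id": 3, "name": "Carol"}]'

def virtual_view():
    # Yield records in one canonical shape, hiding how each source
    # collects, stores, and formats its data.
    for row in csv.DictReader(io.StringIO(csv_source)):
        yield {"id": int(row["id"]), "name": row["name"]}
    for row in json.loads(json_source):
        yield {"id": row["id"], "name": row["name"]}

# Consumers query the unified view without knowing the sources' details.
for record in virtual_view():
    print(record)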
Drawbacks of Server Virtualization
• The major drawback of server virtualization is that all websites that are hosted by
the server will cease to exist if the server goes offline.
• The performance of virtualized environments is difficult to measure.
• It consumes a significant amount of RAM.
• Setting it up and maintaining it are challenging.
• Virtualization is not supported for many essential databases and apps.
Desktop Virtualization
Desktop virtualization is technology that lets users simulate a workstation load to
access a desktop from a connected device. It separates the desktop environment and
its applications from the physical client device used to access it. Desktop
virtualization is a key element of digital workspaces and depends on application
virtualization.
Since the user device is basically a display, keyboard, and mouse, a lost or stolen
device presents a reduced risk to the organization. All user data and programs exist on
the desktop virtualization server, not on client devices.
Local desktop virtualization means the operating system runs on a client device
using hardware virtualization, and all processing and workloads occur on local
hardware. This type of desktop virtualization works well when users do not need a
continuous network connection and can meet application computing requirements
with local system resources. However, because this requires processing to be done
locally, you cannot use local desktop virtualization to share VMs or resources across a
network with thin clients or mobile devices.
VDI simulates the familiar desktop computing model as virtual desktop sessions that run
on VMs, either in an on-premises data center or in the cloud. Organizations that adopt this
model manage the desktop virtualization server as they would any other application
server on-premises. Since all end-user computing is moved from users back into the data
center, the initial deployment of servers to run VDI sessions can be a considerable
investment, tempered by eliminating the need to constantly refresh end-user devices.
RDS is often used where a limited number of applications need to be virtualized, rather
than a full Windows, Mac, or Linux desktop. In this model, applications are streamed to
the local device, which runs its own OS. Because only applications are virtualized, RDS
systems can offer a higher density of users per VM.
DaaS shifts the burden of providing desktop virtualization to service providers, which
greatly alleviates the IT burden of providing virtual desktops. Organizations that wish to
move IT spending from capital expenses to operational expenses will appreciate the
predictable monthly costs that DaaS providers base their business model on.
In server virtualization, a server OS and its applications are abstracted into a VM from the
underlying hardware by a hypervisor. Multiple VMs can run on a single server, each with
its own server OS, applications, and all the application dependencies required to execute
as if it were running on bare metal.
Desktop virtualization abstracts the client software (OS and applications) from a physical
thin client, which connects to applications and data remotely, typically via the
internet. This abstraction enables users to utilize any number of devices to access
their virtual desktop. Desktop virtualization can greatly increase an organization's
need for bandwidth, depending on the number of concurrent users during peak periods.
Benefits of desktop virtualization
Virtualizing desktops provides many potential benefits that can vary depending upon
the deployment model you choose.
Simpler administration. Desktop virtualization can make it easier for IT teams to
manage employee computing needs. Your business can maintain a single VM
template for employees within similar roles or functions instead of maintaining
individual computers that must be reconfigured, updated, or patched whenever
software changes need to be made. This saves time and IT resources.
Cost savings. Many virtual desktop solutions allow you to shift more of your IT
budget from capital expenditures to operating expenditures. Because compute-
intensive applications run on data center servers rather than on end-user devices,
desktop virtualization can extend the life of older or less powerful end-user devices.
On-premises virtual desktop solutions may require a significant initial investment in
server hardware, hypervisor software, and other infrastructure, making cloud-based
DaaS, wherein you simply pay a regular usage-based charge, a more attractive option.
Support for a broad variety of device types. Virtual desktops can support remote
desktop access from a wide variety of devices, including laptop and desktop
computers, thin clients, zero clients, tablets, and even some mobile phones. You can
use virtual desktops to deliver workstation-like experiences and access to the full
desktop anywhere, anytime, regardless of the operating system native to the end user
device.
Agility and scalability. It’s quick and easy to deploy new VMs or serve new
applications whenever necessary, and it is just as easy to delete them when they’re no
longer needed.
Better end-user experiences. When you implement desktop virtualization, your end
users will enjoy a feature-rich experience without sacrificing functionality they've
come to rely on, like printing or access to USB ports.
Network Virtualization
2. VM Network
• Consists of virtual switches.
• Provides connectivity to the hypervisor kernel.
• Connects to the physical network.
• Resides inside the physical server.
STORAGE VIRTUALIZATION
Storage virtualization is the pooling of physical storage from multiple storage devices
into what appears to be a single storage device -- or pool of available storage
capacity. A central console manages the storage.
The technology relies on software to identify available storage capacity from physical
devices and to then aggregate that capacity as a pool of storage that can be used by
traditional architecture servers or in a virtual environment by virtual machines (VMs).
The virtual storage software intercepts input/output (I/O) requests from physical or
virtual machines and sends those requests to the appropriate physical location of the
storage devices that are part of the overall pool of storage in the virtualized
environment. To the user, the various storage resources that make up the pool are
unseen, so the virtual storage appears like a single physical drive, share or logical unit
number (LUN) that can accept standard reads and writes.
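This mapping idea can be sketched in a few lines of Python. This is a conceptual illustration only (the block size, device sizes, and names are invented): a software layer redirects reads and writes on a virtual device to physical blocks somewhere in the pool, invisibly to the caller.

BLOCK = 512  # bytes per block (invented for this sketch)

class StoragePool:
    def __init__(self, devices):
        # Each "device" is an in-memory bytearray standing in for a physical disk.
        self.devices = devices
        # Map: virtual block number -> (device index, physical block number).
        self.mapping = {}
        self.free = [(d, b) for d, dev in enumerate(devices)
                     for b in range(len(dev) // BLOCK)]

    def write(self, vblock, data):
        if vblock not in self.mapping:
            self.mapping[vblock] = self.free.pop(0)  # allocate on first write
        d, b = self.mapping[vblock]
        self.devices[d][b * BLOCK:(b + 1) * BLOCK] = data.ljust(BLOCK, b"\0")

    def read(self, vblock):
        d, b = self.mapping[vblock]   # the caller never sees which device this is
        return bytes(self.devices[d][b * BLOCK:(b + 1) * BLOCK])

# Two "physical disks" pooled into what looks like a single drive.
pool = StoragePool([bytearray(4 * BLOCK), bytearray(4 * BLOCK)])
pool.write(0, b"hello")
print(pool.read(0)[:5])   # b'hello', wherever it physically lives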
Block-based or block access storage -- storage resources typically accessed via a Fibre
Channel (FC) or Internet Small Computer System Interface (iSCSI) storage area
network (SAN) -- is more frequently virtualized than file-based storage systems.
Block-based systems abstract the logical storage, such as a drive partition, from the
actual physical memory blocks in a storage device, such as a hard disk drive (HDD)
or solid-state memory device. Because it operates in a similar fashion to the native
drive software, there's less overhead for read and write processes, so block storage
systems will perform better than file-based systems.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need
to understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the
WAN, WAN accelerators cache the data locally and present re-requested blocks at
LAN speed, without impacting WAN performance.
SAN and NAS: Storage is presented over the network to the operating
system. NAS presents the storage as file operations (like NFS). SAN technologies
present the storage as block-level storage (like Fibre Channel). SAN technologies
receive the operating instructions just as if the storage were a locally attached
device.
Storage Tiering: Using the storage pool concept as a stepping stone, storage
tiering analyzes the most commonly used data and places it on the highest-performing
storage pool. The least-used data is placed on the weakest-performing storage
pool.
This operation is done automatically without any interruption of service to the data
consumer.
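A toy sketch of automatic tiering, assuming a simple access-count heuristic (the threshold and names are invented for illustration):

from collections import Counter

fast_tier, slow_tier = {}, {}   # stand-ins for an SSD pool and an HDD pool
access_counts = Counter()
HOT_THRESHOLD = 3               # accesses before data counts as "hot"

def read(key):
    access_counts[key] += 1
    tier = fast_tier if key in fast_tier else slow_tier
    value = tier[key]
    rebalance(key)
    return value

def rebalance(key):
    # Promote hot data to the fast tier, transparently to the consumer.
    if access_counts[key] >= HOT_THRESHOLD and key in slow_tier:
        fast_tier[key] = slow_tier.pop(key)

slow_tier["report.pdf"] = b"...contents..."
for _ in range(3):
    read("report.pdf")
print("report.pdf now in fast tier:", "report.pdf" in fast_tier)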
1. Data is stored in more convenient locations, away from the specific host. In
the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication,
deduplication, and disaster recovery functionality.
3. By abstracting the storage level, IT operations become more flexible
in how storage is provided, partitioned, and protected.
The components needed for using OS Virtualization in the infrastructure are given below:
The first component is the OS Virtualization server. This server is the central point in
the OS Virtualization infrastructure. The server manages the streaming of the
information on the virtual disks for the clients and also determines which client will be
connected to which virtual disk (this information is stored in a database). The
server can host the storage for the virtual disks locally, or it can be connected to the
virtual disks via a SAN (Storage Area Network). In high-availability environments
there can be multiple OS Virtualization servers to provide redundancy and load
balancing. The server also ensures that each client is unique within the
infrastructure.
Secondly, there is a client, which contacts the server to get connected to the virtual
disk and asks for the components stored on the virtual disk that are needed to run the
operating system.
The available supporting components are a database for storing the configuration and
settings of the server, a streaming service for the virtual disk content, an (optional)
TFTP service, and an (also optional) PXE boot service for connecting the client to the
OS Virtualization servers.
As already mentioned, the virtual disk contains an image of a physical disk
from the system, reflecting the configuration and settings of the systems
that will use the virtual disk. When the virtual disk is created, that disk
needs to be assigned to the client that will use it for starting. The
connection between the client and the disk is made through the administrative tool
and saved in the database. When a client has an assigned disk, the machine can be
started with the virtual disk using the following process:
1) Connecting to the OS Virtualization server:
First, the machine starts and sets up a connection with the OS Virtualization server.
Most products offer several possible methods to connect to the server. One
of the most popular methods is a PXE service, but a bootstrap program is also
used a lot (because of the disadvantages of the PXE service). Either way, the method
initializes the network interface card (NIC), obtains a (DHCP-based) IP address, and
establishes a connection to the server.
2) Looking up the client and its assigned virtual disk:
When the connection is established between the client and the server, the server
checks its database to determine whether the client is known and which virtual
disk is assigned to it. When more than one virtual disk is assigned, a
boot menu is displayed on the client side. If only one disk is assigned, that disk
is connected to the client, as described in step 3.
3) Connecting the virtual disk:
After the desired virtual disk is selected by the client, that virtual disk is connected
through the OS Virtualization server. At the back end, the OS Virtualization server
makes sure that the client is unique (for example, in computer name and identifier)
within the infrastructure.
4) Streaming the virtual disk:
As soon as the disk is connected, the server starts streaming the content of the virtual
disk. The software knows which parts are necessary for starting the operating system
smoothly, so those parts are streamed first. The streamed information should be
stored somewhere (i.e., cached); most products offer several ways to cache it,
for example on the client's hard disk or on a disk of the OS Virtualization server.
5) Additional streaming:
Once the first part has been streamed, the operating system starts and runs as
expected. Additional virtual disk data is streamed when required for running or
starting a function called by the user (for example, starting an application available
within the virtual disk).
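The five steps above can be summarized in a small conceptual sketch (all data structures here are invented for illustration; a real product would stream disk blocks over the network):

# Server-side "database" mapping clients to their assigned virtual disks.
assignments = {"client-42": "win10-image"}

virtual_disks = {
    "win10-image": {
        "boot_blocks": [b"bootloader", b"kernel"],    # needed to start the OS
        "other_blocks": [b"apps", b"user-settings"],  # streamed on demand (step 5)
    }
}

def boot(client_id):
    disk_name = assignments.get(client_id)    # step 2: database lookup
    if disk_name is None:
        raise LookupError("unknown client " + client_id)
    disk = virtual_disks[disk_name]           # step 3: connect the assigned disk
    cache = list(disk["boot_blocks"])         # step 4: boot parts streamed and cached first
    print(client_id, "started its OS from", disk_name)
    return cache, disk["other_blocks"]

cache, pending = boot("client-42")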
APPLICATION VIRTUALIZATION
The main goal of application virtualization is to ensure that cloud users have remote
access to applications from a server. The server contains all the information and features
needed for the application to run and can be accessed over the internet. As a result, you
do not need to install the application on your native device to gain access. Application
virtualization offers end-users the flexibility to access two different versions of one
application through a hosted application or packaged software.
If we need to use a computer application, we first install it on our device and then
launch it. But what if we never had to install that application, or for that matter, any
application again? What if we could simply access applications on the cloud as and
when required that would work exactly as their local counterparts? This idea is what
application virtualization proposes.
Using this, users can access a plethora of applications in real-time without having to
allocate too much storage to all of them.
Users can also run applications not supported by their devices’ operating systems.
And let us not forget how it eliminates the need for managing and updating several
applications across different operating systems for IT teams.
How does application virtualization work?
The most common way to virtualize applications is the server-based approach. This
means an IT administrator implements remote applications on a server inside an
organization’s datacenter or via a hosting service. The IT admin then uses application
virtualization software to deliver the applications to a user’s desktop or other
connected device. The user can then access and use the application as though it were
locally installed on their machine, and the user’s actions are conveyed back to the
server to be executed.
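A toy sketch of this round trip, with the "application" as a plain function (everything here is invented for illustration; real products use remote display protocols such as RDP or ICA):

def server_side_app(state, action):
    # The "application" runs entirely on the server.
    if action == "increment":
        state["count"] += 1
    return "count = " + str(state["count"])   # the screen the client shows

state = {"count": 0}                            # application state lives on the server
for user_action in ["increment", "increment"]:  # actions captured on the client
    screen = server_side_app(state, user_action)  # conveyed to the server for execution
    print("client displays:", screen)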
Application virtualization is an important part of digital workspaces and desktop virtualization.
Virtual Clusters
• Virtual cluster management involves
o virtual cluster deployment,
o monitoring and management over large- scale clusters,
o resource scheduling
o load balancing
o server consolidation
o fault tolerance
• Apart from this, there are common installations needed by most users or applications,
such as the OS or user-level programming libraries.
Resource management
The term resource management refers to the operations used to control how
capabilities provided by Cloud resources and services are made available to other
entities, whether users, applications, or services.
Types of Resources
Physical Resource: Computer, disk, database, network, etc.
Logical Resource: execution, monitoring, and communication for applications
• HA (High Availability): virtual machines can be restarted on another host if the host
where the virtual machine is running fails.
Deployment
• There are four steps to deploy a group of VMs onto a target cluster (a toy sketch of
these steps follows):
– preparing the disk image,
– configuring the VMs,
– choosing the destination nodes, and
– executing the VM deployment command on every host.
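The toy sketch referenced above, with hypothetical helper functions standing in for a real VM toolchain:

def prepare_disk_image():
    # Step 1: prepare the disk image the VMs will boot from.
    return "base-image-v1"

def configure_vm(image, spec):
    # Step 2: configure a VM from the image and its resource spec.
    return {"image": image, "cpus": spec["cpus"]}

def choose_destination_node(nodes, vm):
    # Step 3: naive placement -- pick the node with the fewest VMs.
    return min(nodes, key=lambda n: len(n["vms"]))

def deploy(node, vm):
    # Step 4: execute the deployment "command" on the chosen host.
    node["vms"].append(vm)
    print("deployed a", vm["cpus"], "-cpu VM on", node["name"])

nodes = [{"name": "host-1", "vms": []}, {"name": "host-2", "vms": []}]
image = prepare_disk_image()
for spec in [{"cpus": 2}, {"cpus": 4}, {"cpus": 2}]:
    vm = configure_vm(image, spec)
    deploy(choose_destination_node(nodes, vm), vm)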
• There are four ways to manage a virtual cluster. The first way is to use a guest-based
manager, by which the cluster manager resides on a guest system; in this case,
multiple VMs form a virtual cluster.
• Example: openMosix, an open-source Linux cluster running different guest systems on
top of the Xen hypervisor.
• The second way is to build a cluster manager on the host systems. The host-based
manager supervises the guest systems and can restart a guest system on another
physical machine.
• Example: the VMware HA system, which can restart a guest
system after failure.
• The third way to manage a virtual cluster is to use an independent cluster manager on both
the host and guest systems. This makes infrastructure management more complex.
• Finally, one can use an integrated cluster manager on the guest and host systems. This
means the manager must be designed to distinguish between virtualized resources and
physical resources.
Various cluster management schemes can be greatly enhanced when VM live migration is
enabled with minimal overhead.
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level
virtualization to deliver software in packages called containers. Containers are isolated from one
another and bundle their own software, libraries, and configuration files; they can communicate
with each other through well-defined channels. All containers are run by a single operating
system kernel and therefore use fewer resources than virtual machines.
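As a quick illustration of how lightweight this is, the following sketch uses the Docker SDK for Python (it assumes a local Docker daemon and that the alpine image can be pulled from Docker Hub); there is no guest OS to boot, so it completes in about a second:

import docker

client = docker.from_env()
# Starts a container, runs one command, returns its output, and removes
# the container -- no guest OS to boot.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode())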
Difference between Docker Containers and Virtual Machines
1. Docker Containers
• Docker Containers contain binaries, libraries, and configuration files along with the
application itself.
• They don’t contain a guest OS for each container and rely on the underlying OS
kernel, which makes the containers lightweight.
• Containers share resources with other containers in the same host OS and provide
OS-level process isolation.
2. Virtual Machines
• Virtual Machines (VMs) run on hypervisors, which allow multiple virtual machines
to run on a single physical machine, each with its own operating system.
• Each VM has its own copy of an operating system along with the application and
necessary binaries, which makes it significantly larger and requires more resources.
• They provide hardware-level process isolation and are slow to boot.
Docker Components
1. Docker Image
• It is a file, comprised of multiple layers, used to execute code in a Docker container.
• It is a set of instructions used to create Docker containers.
2. Docker Container
• It is a runtime instance of an image.
• Allows developers to package applications with all parts needed such as libraries and
other dependencies.
3. Dockerfile
• It is a text document that contains the commands which, on execution, assemble
a Docker image (see the build sketch after this components list).
• A Docker image is created using a Dockerfile.
4. Docker Engine
• The software that hosts the containers is named Docker Engine.
• Docker Engine is a client-server application.
• The Docker Engine has three main components:
• Server: It is responsible for creating and managing Docker images,
containers, networks, and volumes on the Docker host. It is referred to
as the daemon process.
• REST API: It specifies how applications can interact with the server
and instructs it what to do.
• Client: The client is the Docker command-line interface (CLI), which allows
us to interact with Docker using docker commands.
5. Docker Hub
• Docker Hub is the official online repository where you can find other Docker Images
that are available for use.
• It makes it easy to find, manage, and share container images with others.
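Here is the build sketch referred to under the Dockerfile component: a minimal two-line Dockerfile, embedded as a string and built with the Docker SDK for Python (this assumes a local Docker daemon; the tag demo:latest is an arbitrary choice for the example):

import io
import docker

dockerfile = b"""
FROM alpine
CMD ["echo", "built from a Dockerfile"]
"""

client = docker.from_env()
# images.build() sends the Dockerfile to the daemon, which executes its
# instructions layer by layer and returns the resulting image.
image, logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="demo:latest")
print(image.tags)   # ['demo:latest']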
Docker Container
A Docker container is a running instance of an image. You can use Command Line Interface (CLI)
commands to run, start, stop, move, or delete a container. You can also provide configuration for
the network and environment variables. A Docker container is an isolated and secure application
platform, but it can share and access resources running in a different host or container.
An image is a read-only template with instructions for creating a Docker container. A Docker
image is described in a text file called a Dockerfile, which has a simple, well-defined syntax. An
image does not have state and never changes. Docker Engine provides the core Docker
technology that enables images and containers.
You can understand the relationship between a container and an image with the help of the following example.
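This sketch uses the Docker SDK for Python (the CLI equivalent would be docker pull hello-world followed by docker run); it pulls one read-only image and starts two separate containers from it:

import docker

client = docker.from_env()
image = client.images.pull("hello-world")   # the read-only template
print("image:", image.tags)

# Each run creates a new container -- a separate running instance --
# from the same unchanged image.
for _ in range(2):
    print(client.containers.run("hello-world", remove=True).decode()[:40])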
Docker Images:
Definition:
A Docker image is a read-only template that defines your container. It contains the code, libraries,
dependencies, and other files needed for an application to run.
Purpose:
Docker images act as a blueprint for creating containers, which are self-contained packages of applications
and related files.
Structure:
Docker images are built in layers, which allows for efficient storage and reuse of common components.
Example:
Think of a Docker image as a recipe for building a container, and the container as the dish that results from
following that recipe.
Docker Repositories:
Definition:
A Docker repository is a collection of container images, enabling you to store, manage, and share Docker
images publicly or privately. Docker Hub is a public registry, while cloud providers like Amazon Web
Services and Google Cloud offer their own private registries.
Purpose:
Repositories facilitate the distribution and reuse of Docker images across different environments, including
cloud deployments.
Example:
Imagine a repository as a library where you can find and download pre-built Docker images for various
applications or services.
Examples:
Google Cloud: Google Cloud provides services like Google Kubernetes Engine (GKE) for running
containerized applications.
Amazon Web Services: Amazon Web Services offers services like Amazon ECS (Elastic
Container Service) and Amazon EKS (Elastic Kubernetes Service) for managing containers.
Azure: Microsoft Azure provides services like Azure Container Registry (ACR) for storing and
managing container images.