
UNIT III VIRTUALIZATION INFRASTRUCTURE AND DOCKER 7

Desktop Virtualization – Network Virtualization – Storage Virtualization – Operating System-level Virtualization – Application Virtualization – Virtual Clusters and Resource Management – Containers vs. Virtual Machines – Introduction to Docker – Docker Components – Docker Container – Docker Images and Repositories.

Types of Virtualization in Cloud Computing

1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization

1. Application Virtualization: Application virtualization gives a user remote access to an application hosted on a server. The server stores all of the application's personal information and other characteristics, yet the application still appears to run on the local workstation over the internet. An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization include hosted applications and packaged applications.

2. Network Virtualization: The ability to run multiple virtual networks, each with its own separate control plane and data plane, coexisting on top of one physical network. Each virtual network can be managed by a different party, and the networks remain isolated from one another. Network virtualization makes it possible to create and provision virtual networks (logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security) in a matter of days rather than weeks.
3. Desktop Virtualization: Desktop virtualization stores the user's OS remotely on a server in the data center, allowing the user to access their desktop virtually, from any location, on any machine. Users who need a specific operating system other than the one on their local device can be served a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization presents an array of servers managed by a virtual storage system. The servers are not aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which server resources are masked. The central (physical) server is divided into multiple virtual servers, each with its own identity and share of the processors, so that each virtual server can run its own operating system in isolation, while each sub-server knows the identity of the central server. This increases performance and reduces operating cost by distributing the main server's resources among the sub-servers. It is beneficial for virtual migration and for reducing energy consumption, infrastructure costs, and so on.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without the consumer needing to know technical details such as how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested stakeholders and users through various cloud services. Many major companies provide data virtualization services, such as Oracle, IBM, AtScale, and CData.

Server Virtualization

Server virtualization is the partitioning of a physical server into smaller virtual servers to help maximize our server resources. In server virtualization the resources of the server itself are hidden, or masked, from users, and software is used to divide the physical server into multiple virtual environments, called virtual or private servers.

Server virtualization is one of the most important parts of cloud computing. "Cloud computing" is composed of two words: cloud, meaning the internet, and computing, meaning solving problems with the help of computers. In the digital world, computing relates to CPU and RAM. Now consider a situation: you are using macOS on your machine, but a particular application for your project can run only on Windows. You can either buy a new machine running Windows or create a virtual environment in which Windows can be installed and used. The second option is better because of its lower cost and easy implementation. This scenario is called virtualization. In it, a virtual CPU, RAM, NIC, and other resources are provided to the OS, which it needs in order to run. These resources are provided virtually and controlled by an application called a hypervisor. The new OS running on these virtual hardware resources is collectively called a virtual machine (VM).

Figure – Virtualization on local machine


Now migrate this concept to data centers, where many servers (machines with fast CPUs, large RAM, and enormous storage) are available. The enterprise that owns the data center provides the resources customers request, as per their needs. Data centers have all the resources, and on a user's request a particular amount of CPU, RAM, NIC, and storage with a preferred OS is provided to the user. This concept of virtualization, in which services are requested and provided over the internet, is called server virtualization.

To implement server virtualization, a hypervisor is installed on the server; it manages and allocates the host's hardware to each virtual machine. This hypervisor sits above the server hardware and regulates the resources of each VM. A user can increase or decrease resources, or delete an entire VM, as needed. Servers with VMs created on them constitute server virtualization, and the concept of users controlling these VMs over the internet is called cloud computing.
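The allocation role of the hypervisor described above can be sketched in a few lines. This is a minimal simulation, not a real hypervisor API; the class and VM names are hypothetical:

```python
# Minimal sketch (hypothetical names): a hypervisor-style allocator that
# carves a host's CPU and RAM into isolated virtual machines, and returns
# the resources to the pool when a VM is deleted.

class Hypervisor:
    def __init__(self, total_vcpus, total_ram_gb):
        self.free_vcpus = total_vcpus
        self.free_ram_gb = total_ram_gb
        self.vms = {}

    def create_vm(self, name, vcpus, ram_gb):
        # Refuse requests that exceed the remaining host capacity.
        if vcpus > self.free_vcpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient host resources")
        self.free_vcpus -= vcpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"vcpus": vcpus, "ram_gb": ram_gb}

    def delete_vm(self, name):
        # Return the VM's resources to the host pool.
        vm = self.vms.pop(name)
        self.free_vcpus += vm["vcpus"]
        self.free_ram_gb += vm["ram_gb"]

hv = Hypervisor(total_vcpus=16, total_ram_gb=64)
hv.create_vm("web", vcpus=4, ram_gb=8)
hv.create_vm("db", vcpus=8, ram_gb=32)
print(hv.free_vcpus, hv.free_ram_gb)  # 4 24
hv.delete_vm("web")
print(hv.free_vcpus, hv.free_ram_gb)  # 8 32
```

Deleting the "web" VM returns its 4 vCPUs and 8 GB to the pool, mirroring how a user can grow, shrink, or remove VMs on demand.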

Figure – Server virtualization features

Advantages of Server Virtualization:

• Each server in server virtualization can be restarted separately without affecting the operation of other virtual servers.
• Server virtualization lowers the cost of hardware by dividing a single server into several virtual private servers.
• One of the major benefits of server virtualization is disaster recovery. In server virtualization, data may be stored and retrieved from any location and moved rapidly and simply from one server to another.
• It enables users to keep their private information in the data centers.

Disadvantages of Server Virtualization:

• The major drawback of server virtualization is that all websites hosted by the server will cease to exist if the server goes offline.
• The effectiveness of virtualized environments is difficult to measure.
• It consumes a significant amount of RAM.
• It is challenging to set up and maintain.
• Many essential databases and applications do not support virtualization.

Desktop Virtualization
Desktop virtualization is a technology that lets users simulate a workstation load in order to access a desktop from a connected device. It separates the desktop environment and its applications from the physical client device used to access it. Desktop virtualization is a key element of digital workspaces and depends on application virtualization.

Desktop virtualization is a method of simulating a user workstation so it can be accessed from a remotely connected device. By abstracting the user desktop in this way, organizations can allow users to work from virtually anywhere with a network connection, using any desktop, laptop, tablet, or smartphone to access enterprise resources, regardless of the device or operating system employed by the remote user.

Remote desktop virtualization is also a key component of digital workspaces. Virtual desktop workloads run on desktop virtualization servers, which typically execute on virtual machines (VMs) either in on-premises data centers or in the public cloud.

Since the user device is basically just a display, keyboard, and mouse, a lost or stolen device presents a reduced risk to the organization. All user data and programs reside on the desktop virtualization server, not on client devices.

How does desktop virtualization work?


Desktop virtualization can be achieved in a variety of ways, but the two most
important types are based on whether the operating system instance is local or remote.

Local desktop virtualization means the operating system runs on a client device using hardware virtualization, and all processing and workloads occur on local hardware. This type of desktop virtualization works well when users do not need a continuous network connection and can meet application computing requirements with local system resources. However, because processing is done locally, you cannot use local desktop virtualization to share VMs or resources across a network with thin clients or mobile devices.

Remote desktop virtualization is a common use of virtualization that operates in a server computing environment. This allows users to run operating systems and applications from a server inside a datacenter while all user interactions take place on a client device such as a laptop, thin client, or smartphone. This type of virtualization gives IT more centralized control over applications and desktops, and can maximize an organization's investment in hardware through remote access to shared computing resources.

What are the types of Desktop Virtualization?


The three most popular types of desktop virtualization are virtual desktop infrastructure (VDI), remote desktop services (RDS), and Desktop-as-a-Service (DaaS).

VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs, either in an on-premises data center or in the cloud. Organizations that adopt this model manage the desktop virtualization server as they would any other application server on-premises. Since all end-user computing is moved from users back into the data center, the initial deployment of servers to run VDI sessions can be a considerable investment, tempered by eliminating the need to constantly refresh end-user devices.

RDS is often used where a limited number of applications need to be virtualized, rather than a full Windows, Mac, or Linux desktop. In this model applications are streamed to the local device, which runs its own OS. Because only applications are virtualized, RDS systems can offer a higher density of users per VM.

DaaS shifts the burden of providing desktop virtualization to service providers, which
greatly alleviates the IT burden in providing virtual desktops. Organizations that wish to
move IT expenses from capital expense to operational expenses will appreciate the
predictable monthly costs that DaaS providers base their business model on.

Desktop Virtualization vs. Server Virtualization

In server virtualization, a server OS and its applications are abstracted into a VM from the
underlying hardware by a hypervisor. Multiple VMs can run on a single server, each with
its own server OS, applications, and all the application dependencies required to execute
as if it were running on bare metal.

Desktop virtualization abstracts the client software (OS and applications) from a physical thin client, which connects to applications and data remotely, typically via the internet. This abstraction enables users to utilize any number of devices to access their virtual desktop. Desktop virtualization can greatly increase an organization's need for bandwidth, depending on the number of concurrent users during peak periods.
Benefits of desktop virtualization

Virtualizing desktops provides many potential benefits that can vary depending upon
the deployment model you choose.
Simpler administration. Desktop virtualization can make it easier for IT teams to
manage employee computing needs. Your business can maintain a single VM
template for employees within similar roles or functions instead of maintaining
individual computers that must be reconfigured, updated, or patched whenever
software changes need to be made. This saves time and IT resources.
Cost savings. Many virtual desktop solutions allow you to shift more of your IT budget from capital expenditures to operating expenditures. Because compute-intensive applications are executed on data center servers, end-user devices require less processing power, so desktop virtualization can extend the life of older or less powerful end-user devices. On-premises virtual desktop solutions may require a significant initial investment in server hardware, hypervisor software, and other infrastructure, making cloud-based DaaS (wherein you simply pay a regular usage-based charge) a more attractive option.

Improved productivity. Desktop virtualization makes it easier for employees to access enterprise computing resources. They can work anytime, anywhere, from any supported device with an Internet connection.

Support for a broad variety of device types. Virtual desktops can support remote
desktop access from a wide variety of devices, including laptop and desktop
computers, thin clients, zero clients, tablets, and even some mobile phones. You can
use virtual desktops to deliver workstation-like experiences and access to the full
desktop anywhere, anytime, regardless of the operating system native to the end user
device.

Stronger security. In desktop virtualization, the desktop image is abstracted and separated from the physical hardware used to access it, and the VM used to deliver the desktop image can be a tightly controlled environment managed by the enterprise IT department.

Agility and scalability. It’s quick and easy to deploy new VMs or serve new
applications whenever necessary, and it is just as easy to delete them when they’re no
longer needed.

Better end-user experiences. When you implement desktop virtualization, your end users will enjoy a feature-rich experience without sacrificing functionality they've come to rely on, like printing or access to USB ports.

Network Virtualization

Network virtualization is a process of logically grouping physical networks and making them operate as single or multiple independent networks called virtual networks.
Tools for Network Virtualization :
1. Physical switch OS –
The operating system of the physical switch, which must provide network virtualization functionality.
2. Hypervisor –
The hypervisor, which uses either built-in networking or third-party software to provide the functionalities of network virtualization.
The basic role of the OS is to give the application or executing process a simple set of instructions. System calls generated by the OS and executed through the libc library are comparable to the service primitives provided at the interface between the application and the network through the SAP (Service Access Point).
The hypervisor is used to create a virtual switch and to configure virtual networks on it. Third-party software can be installed onto the hypervisor, replacing the hypervisor's native networking functionality. A hypervisor allows various VMs to work optimally on a single piece of computer hardware.
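The virtual switch mentioned above forwards Ethernet frames between VM ports much like a physical learning switch. A minimal sketch (the class and MAC values are hypothetical, for illustration only):

```python
# Minimal sketch (hypothetical names): a learning virtual switch that
# forwards frames between VM ports, as a hypervisor's vSwitch does.

class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}   # source MAC -> port it was learned on

    def forward(self, in_port, src_mac, dst_mac):
        # Learn which port the source MAC lives on.
        self.mac_table[src_mac] = in_port
        # Known destination: deliver to that port; unknown: flood all ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return ["flood"]

sw = VirtualSwitch()
print(sw.forward(1, "aa:aa", "bb:bb"))  # ['flood'] - bb:bb not yet learned
print(sw.forward(2, "bb:bb", "aa:aa"))  # [1] - aa:aa was learned on port 1
```

The first unknown destination is flooded; once both VMs have sent a frame, traffic between them is delivered point-to-point.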
Functions of Network Virtualization :
• It enables the functional grouping of nodes in a virtual network.
• It enables the virtual network to share network resources.
• It allows communication between nodes in a virtual network without routing of
frames.
• It restricts management traffic.
• It enforces routing for communication between virtual networks.

Network Virtualization in Virtual Data Center :


1. Physical Network
• Physical components: Network adapters, switches, bridges, repeaters, routers and
hubs.
• Grants connectivity among physical servers running a hypervisor, between
physical servers and storage systems and between physical servers and clients.

2. VM Network
• Consists of virtual switches.
• Provides connectivity to hypervisor kernel.
• Connects to the physical network.
• Resides inside the physical server.

Figure – Network virtualization in a VDC

Advantages of Network Virtualization :
Improves manageability –
• Grouping and regrouping of nodes are eased.
• Configuration of VM is allowed from a centralized management workstation
using management software.
Reduces CAPEX –
• The requirement to set up separate physical networks for different node groups is
reduced.
Improves utilization –
• Multiple VMs are enabled to share the same physical network, which enhances the utilization of network resources.
Enhances performance –
• Network broadcast is restricted and VM performance is improved.
Enhances security –
• Sensitive data is isolated from one VM to another VM.
• Access to nodes is restricted in a VM from another VM.
Disadvantages of Network Virtualization :
• IT must be managed in the abstract rather than through physical devices.
• Virtual networks need to coexist with physical devices in a cloud-integrated hybrid environment.
• Increased complexity.
• Upfront cost.
• Possible learning curve.
Examples of Network Virtualization :
Virtual LAN (VLAN) –
• The performance and speed of busy networks can be improved by VLAN.
• VLAN can simplify additions or any changes to the network.
Network Overlays –
• VXLAN, an encapsulation protocol, provides a framework for overlaying virtualized layer 2 networks over layer 3 networks.
• The Generic Network Virtualization Encapsulation protocol (GENEVE) provides a new approach to encapsulation, designed to provide control-plane independence between the endpoints of the tunnel.
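To make the VXLAN overlay concrete: encapsulation prepends an 8-byte VXLAN header (plus outer UDP/IP headers, omitted here) carrying a 24-bit VXLAN Network Identifier (VNI) that names the virtual layer 2 network. A sketch of just the header, following the layout in RFC 7348:

```python
import struct

# Sketch of the 8-byte VXLAN header from RFC 7348: a flags byte
# (0x08 = "VNI present"), 3 reserved bytes, a 24-bit VNI, 1 reserved byte.
# Real encapsulation also adds outer UDP/IP headers, omitted here.

def vxlan_header(vni):
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    # "!B3xI": network byte order; flags byte, 3 pad bytes, 32-bit word
    # whose upper 24 bits hold the VNI.
    return struct.pack("!B3xI", 0x08, vni << 8)

def vxlan_vni(header):
    flags, word = struct.unpack("!B3xI", header)
    assert flags & 0x08, "VNI-present flag not set"
    return word >> 8

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))  # 8 5000
```

Two VMs see the same virtual layer 2 segment exactly when their frames carry the same VNI, which is how thousands of isolated virtual networks share one physical layer 3 fabric.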

Network Virtualization Platform: VMware NSX –


• VMware NSX Data Center delivers networking and security components such as switching, firewalling, and routing that are defined and consumed entirely in software.
• It brings the operational model of a virtual machine (VM) to the network.

Applications of Network Virtualization :


• Network virtualization may be used in the development of application testing to
mimic real-world hardware and system software.
• It helps us to integrate several physical networks into a single network, or separate a single physical network into multiple logical networks.
• In the field of application performance engineering, network virtualization allows
the simulation of connections between applications, services, dependencies, and
end-users for software testing.
• It helps us to deploy applications in a quicker time frame, thereby supporting a
faster go-to-market.
• Network virtualization helps the software testing teams to derive actual results
with expected instances and congestion issues in a networked environment.

STORAGE VIRTUALIZATION
Storage virtualization is the pooling of physical storage from multiple storage devices
into what appears to be a single storage device -- or pool of available storage
capacity. A central console manages the storage.
The technology relies on software to identify available storage capacity from physical
devices and to then aggregate that capacity as a pool of storage that can be used by
traditional architecture servers or in a virtual environment by virtual machines (VMs).

The virtual storage software intercepts input/output (I/O) requests from physical or
virtual machines and sends those requests to the appropriate physical location of the
storage devices that are part of the overall pool of storage in the virtualized
environment. To the user, the various storage resources that make up the pool are
unseen, so the virtual storage appears like a single physical drive, share or logical unit
number (LUN) that can accept standard reads and writes.
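The I/O redirection described above can be pictured as an extent map: each range of blocks on the virtual LUN points at a physical device and offset in the pool. This is a simplified sketch; the device names and block layout are hypothetical:

```python
# Minimal sketch (hypothetical layout): the virtualization layer maps a
# block address on a virtual LUN to (physical device, physical block),
# the way intercepted I/O requests are redirected to the right device.

# Each extent: (virtual start block, length, device, physical start block)
EXTENT_MAP = [
    (0,    1000, "diskA", 5000),
    (1000, 1000, "diskB", 0),
]

def resolve(virtual_block):
    for vstart, length, device, pstart in EXTENT_MAP:
        if vstart <= virtual_block < vstart + length:
            return device, pstart + (virtual_block - vstart)
    raise ValueError("block outside virtual LUN")

print(resolve(250))   # ('diskA', 5250)
print(resolve(1500))  # ('diskB', 500)
```

The host issues reads and writes against one contiguous 2000-block LUN; the software silently splits them across the two physical disks.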

A basic form of storage virtualization is represented by a software virtualization layer between the hardware of a storage resource and a host -- a PC, a server or any device accessing the storage -- that makes it possible for operating systems (OSes) and applications to access and use the storage.

Even a redundant array of independent disks, or RAID, array can sometimes be considered a type of storage virtualization. Multiple physical drives in the array are presented to the user as a single storage device that, in the background, stripes and replicates data to multiple disks to improve I/O performance and to protect data in case a single drive fails.

Storage virtualization is the technique of abstracting physical storage resources, such as SSDs and HDDs, to create virtual storage resources. Storage virtualization software can pool and abstract physical storage resources and present them as logical storage resources, such as virtual volumes, virtual disk files, and virtual storage systems. It is the concept of virtualizing enterprise storage at the disk level, creating a dynamic pool of shared storage resources available to all servers, all the time. With read/write operations spread across all drives, multiple requests can be processed in parallel, boosting system performance. This allows users to create hundreds of virtual volumes in seconds to support any virtual server platform. It is a consolidation of sorts for data and files, stored in a centralized system that can be accessed from more than one location.

Types of storage virtualization: Block vs. file


There are two basic methods of virtualizing storage: file-based or block-based. File-
based storage virtualization is a specific use, applied to network-attached storage
(NAS) systems.

Block-based or block access storage -- storage resources typically accessed via a Fibre
Channel (FC) or Internet Small Computer System Interface (iSCSI) storage area
network (SAN) -- is more frequently virtualized than file-based storage systems.
Block-based systems abstract the logical storage, such as a drive partition, from the
actual physical memory blocks in a storage device, such as a hard disk drive (HDD)
or solid-state memory device. Because it operates in a similar fashion to the native
drive software, there's less overhead for read and write processes, so block storage
systems will perform better than file-based systems.

The block-based operation enables the virtualization management software to collect the capacity of the available blocks of storage space across all virtualized arrays. It pools them into a shared resource that can be assigned to any number of VMs, bare-metal servers or containers. Storage virtualization is particularly beneficial for block storage.

Unlike NAS systems, managing SANs can be a time-consuming process. Consolidating a number of block storage systems under a single management interface that shields users from the tedious steps of LUN configuration, for example, can be a significant timesaver.

Storage virtualization is becoming more and more important in various other forms:

File servers: The operating system writes the data to a remote location with no need
to understand how to write to the physical media.

WAN Accelerators: Instead of sending multiple copies of the same data over the
WAN environment, WAN accelerators will cache the data locally and present the re-
requested blocks at LAN speed, while not impacting the WAN performance.

SAN and NAS: Storage is presented over the Ethernet network to the operating system. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive operating instructions as if the storage were a locally attached device.

Storage Tiering: Using the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the highest-performing storage pool, while the least-used data is placed on the weakest-performing storage pool.

This operation is done automatically without any interruption of service to the data
consumer.
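The tiering decision above reduces to a placement rule driven by access frequency. A minimal sketch; the tier names and thresholds here are hypothetical, chosen only for illustration:

```python
# Minimal sketch (hypothetical thresholds): automatic tiering places the
# most frequently accessed data on the fastest pool and cold data on the
# slowest, with no interruption of service to the data consumer.

def assign_tier(access_count, hot_threshold=100, warm_threshold=10):
    if access_count >= hot_threshold:
        return "ssd"       # highest-performing pool
    if access_count >= warm_threshold:
        return "sas"       # mid tier
    return "nearline"      # weakest-performing pool

counts = {"invoices.db": 500, "logs-2023.tar": 3, "reports.xlsx": 40}
print({name: assign_tier(c) for name, c in counts.items()})
# {'invoices.db': 'ssd', 'logs-2023.tar': 'nearline', 'reports.xlsx': 'sas'}
```

A real tiering engine re-evaluates these counts periodically and migrates data between pools in the background.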

Advantages of Storage Virtualization

1. Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and disaster recovery.
3. By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and protected.

Operating System-level Virtualization

With OS virtualization, nothing is pre-installed or permanently loaded on the local device and no hard disk is needed. Everything runs from the network using a kind of virtual disk. This virtual disk is actually a disk image file stored on a remote server, SAN (Storage Area Network) or NAS (Network-Attached Storage). The client is connected over the network to this virtual disk and boots with the operating system installed on the virtual disk.
How does OS Virtualization work?

Components needed for using OS Virtualization in the infrastructure are given below:

The first component is the OS virtualization server. This server is the central point in the OS virtualization infrastructure. The server manages the streaming of the information on the virtual disks to the clients and also determines which client will be connected to which virtual disk (this information is stored in a database). The server can host the storage for the virtual disks locally, or it can be connected to the virtual disks via a SAN (Storage Area Network). In high-availability environments there can be multiple OS virtualization servers to provide redundancy and load balancing. The server also ensures that each client is unique within the infrastructure.

Secondly, there is a client, which contacts the server to get connected to the virtual disk and requests the components stored on the virtual disk that are needed to run the operating system.

The available supporting components are a database for storing the configuration and settings for the server, a streaming service for the virtual disk content, an (optional) TFTP service, and an (also optional) PXE boot service for connecting the client to the OS virtualization servers.

As already mentioned, the virtual disk contains an image of a physical disk, reflecting the configuration and settings of the systems that will be using the virtual disk. When the virtual disk is created, it needs to be assigned to the client that will use it to start. The connection between the client and the disk is made through the administrative tool and saved in the database. When a client has an assigned disk, the machine can be started with the virtual disk using the following process, as displayed in the figure below:
1) Connecting to the OS Virtualization server:

First we start the machine and set up the connection with the OS virtualization server. Most products offer several possible methods to connect with the server. One of the most popular methods is a PXE service, but a boot strap is also used a lot (because of the disadvantages of the PXE service). Whichever method is used, it initializes the network interface card (NIC), obtains a (DHCP-based) IP address, and establishes a connection to the server.

2) Connecting the Virtual Disk:

When the connection is established between the client and the server, the server looks in its database to check whether the client is known and which virtual disk(s) are assigned to it. When more than one virtual disk is assigned, a boot menu is displayed on the client side. If only one disk is assigned, that disk is connected to the client, as described in step 3.
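Steps 1 and 2 amount to a database lookup keyed by client identity. A minimal sketch; the client IDs, disk names, and return values are hypothetical:

```python
# Minimal sketch (hypothetical data): the OS virtualization server looks
# the connecting client up in its database and either connects the single
# assigned virtual disk or offers a boot menu when several are assigned.

ASSIGNMENTS = {
    "client-01": ["win10.vhd"],
    "client-02": ["win10.vhd", "ubuntu.vhd"],
}

def connect(client_id):
    disks = ASSIGNMENTS.get(client_id)
    if disks is None:
        return ("unknown-client", None)
    if len(disks) > 1:
        return ("boot-menu", disks)    # client must choose a disk
    return ("connected", disks[0])     # single disk: connect directly

print(connect("client-01"))  # ('connected', 'win10.vhd')
print(connect("client-02"))  # ('boot-menu', ['win10.vhd', 'ubuntu.vhd'])
```

In a real product the administrative tool writes these assignments into the database, and the server performs this lookup on every boot.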

3) VDisk connected to the client:

After the desired virtual disk is selected by the client, that virtual disk is connected through the OS virtualization server. At the back end, the OS virtualization server makes sure that the client is unique (for example, by computer name and identifier) within the infrastructure.

4) OS is "streamed" to the client:

As soon as the disk is connected, the server starts streaming the content of the virtual disk. The software knows which parts are necessary for starting the operating system smoothly, so these parts are streamed first. The information streamed to the system should be stored somewhere (i.e. cached). Most products offer several ways to cache that information, for example on the client hard disk or on a disk of the OS virtualization server.

5) Additional Streaming:

After the first part has been streamed, the operating system starts to run as expected. Additional virtual disk data is streamed when required for running or starting a function called by the user (for example, starting an application available within the virtual disk).
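Steps 4 and 5 can be sketched as prioritized, cached streaming. The block names and disk contents below are hypothetical placeholders; a real product streams disk sectors, not named blocks:

```python
# Minimal sketch (hypothetical block names): boot-critical parts of the
# virtual disk are streamed first and cached locally; later requests are
# served from the cache instead of going back over the network.

BOOT_CRITICAL = ["kernel", "drivers", "init"]
cache = {}

def stream_block(name, server_disk):
    # Serve from the local cache if this block was already streamed.
    if name not in cache:
        cache[name] = server_disk[name]  # fetch over the network once
    return cache[name]

server_disk = {"kernel": b"K", "drivers": b"D", "init": b"I", "app": b"A"}
# Boot: stream the critical parts first...
for block in BOOT_CRITICAL:
    stream_block(block, server_disk)
# ...then stream additional data on demand, e.g. launching an application.
stream_block("app", server_disk)
print(sorted(cache))  # ['app', 'drivers', 'init', 'kernel']
```

The cache corresponds to the client-side or server-side caching options mentioned in step 4.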
APPLICATION VIRTUALIZATION

The main goal of application virtualization is to ensure that cloud users have remote
access to applications from a server. The server contains all the information and features
needed for the application to run and can be accessed over the internet. As a result, you
do not need to install the application on your native device to gain access. Application
virtualization offers end-users the flexibility to access two different versions of one
application through a hosted application or packaged software.

If we need to use a computer application, we first install it on our device and then
launch it. But what if we never had to install that application, or for that matter, any
application again? What if we could simply access applications on the cloud as and
when required that would work exactly as their local counterparts? This idea is what
application virtualization proposes.

Application virtualization refers to the process of deploying a computer application over a network (the cloud). The deployed application is installed on a server, and when a user requests it, an instance of the application is displayed to them. The user can then engage with that application as if it were installed on their system.

Application virtualization is a powerful concept that takes away most of the drawbacks of installing applications locally.

Using this, users can access a plethora of applications in real-time without having to
allocate too much storage to all of them.

Users can also run applications not supported by their devices’ operating systems.

And let us not forget that it eliminates the need for IT teams to manage and update several applications across different operating systems.
How does application virtualization work?

The most common way to virtualize applications is the server-based approach. An IT administrator implements remote applications on a server inside an organization’s datacenter or via a hosting service. The IT admin then uses application virtualization software to deliver the applications to a user’s desktop or other connected device. The user can access and use the application as though it were locally installed on their machine, and the user’s actions are conveyed back to the server to be executed. Application virtualization is an important part of digital workspaces and desktop virtualization.

Application virtualization software

The top benefits of virtualized applications are:
Simplified management
Application virtualization makes it much easier for IT to manage and maintain
applications across an organization. Rather than manually installing applications to every
user’s machine, app virtualization lets IT admins install an app once on a central server
and then deploy the app as needed on user devices. In addition to saving installation
time, this also makes it simpler to update or patch applications because IT only has to do
so on a single server.
Scalability
Application virtualization lets IT admins deploy virtual applications to all kinds of
connected devices, regardless of those devices’ operating systems or storage space. This
allows thin client provisioning, where users access an application on a low-cost machine
while centralized servers handle all the computing power necessary to run that
application. As a result, the organization spends less on computing hardware because
employees only require basic machines to access the apps they need for work.
Application virtualization solutions also allow users to access applications that normally
would not work on their machines’ operating system, because the app is actually running
on the centralized server. This is commonly used to virtually run a Windows application
on a Linux operating system.
Security
Application virtualization software gives IT admins central control over which users can
access what applications. If a user’s app permissions within an organization change, the
IT admin can simply remove that user’s access to an application. Without app
virtualization, the IT admin would have to physically uninstall the app from the user’s
device. This central control over app access is especially important if a user’s device is
lost or stolen, because the IT admin can revoke remote access to sensitive data without
having to track down the missing device.
Virtual clusters and Resource Management

• A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN. Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters.
• As with traditional physical servers, virtual machines (VMs) can also be clustered. A VM cluster starts with two or more physical servers; we’ll call them Server A and Server B. In simple deployments, if Server A fails, its workloads restart on Server B.
• The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks.
• The figure below illustrates the concepts of virtual clusters and physical clusters. Each virtual cluster is formed with physical machines or VMs hosted by multiple physical clusters, and the virtual cluster boundaries are shown as distinct boundaries.

Provisioning of VMs to a virtual cluster
• The provisioning of VMs to a virtual cluster is done dynamically and has the following properties:
o The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running different OSes can be deployed on the same physical node.
o A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical machine where the VM is implemented.
o The purpose of using VMs is to consolidate multiple functionalities on the same server. This greatly enhances server utilization and application flexibility.
o VMs can be colonized (replicated) on multiple servers to promote distributed parallelism, fault tolerance, and disaster recovery.
o The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way an overlay network varies in size in a peer-to-peer (P2P) network.
o The failure of a physical node may disable some VMs installed on the failing node, but the failure of a VM will not pull down the host system.
• Since system virtualization has been widely used, it is necessary to
o effectively manage VMs running on a mass of physical computing nodes (also called virtual clusters), and
o build a high-performance virtualized computing environment.
• This involves
o virtual cluster deployment,
o monitoring and management of large-scale clusters,
o resource scheduling,
o load balancing,
o server consolidation, and
o fault tolerance.

• The figure below shows the concept of a virtual cluster based on application partitioning or customization.
• Since a large number of VM images might be present, the most important issue is how to store those images in the system efficiently.
• In addition, there are installations common to most users or applications, such as the OS or user-level programming libraries.
• These software packages can be preinstalled as templates (called template VMs).
Resource management

The term resource management refers to the operations used to control how capabilities provided by cloud resources and services are made available to other entities, whether users, applications, or services.

Types of Resources
Physical Resource: computer, disk, database, network, etc.
Logical Resource: execution, monitoring, and communication facilities used by applications.
Virtual Cluster features

• HA (High Availability): virtual machines can be restarted on another host if the host where a virtual machine is running fails.

• DRS (Distributed Resource Scheduler): virtual machines can be load balanced so that no host in the cluster is overloaded or left nearly empty.

• Live migration: virtual machines can be moved from one host to another while running.

• Three critical design issues of virtual clusters:
o live migration of VMs,
o memory and file migrations, and
o dynamic deployment of virtual clusters.
Deployment

• There are four steps to deploy a group of VMs onto a target cluster:
– preparing the disk image,
– configuring the VMs,
– choosing the destination nodes, and
– executing the VM deployment command on every host.

• The system should have the capability of fast deployment. Here, deployment means two things:
o to construct and distribute software stacks (OS, libraries, applications) to a physical node inside clusters as fast as possible, and
o to quickly switch runtime environments from one user’s virtual cluster to another user’s virtual cluster.

• If one user finishes using his system, the corresponding virtual cluster should shut down or suspend quickly to free the resources to run other VMs for other users.

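As a hedged illustration of the define/start/suspend cycle described above, the following sketch uses libvirt’s virsh CLI. It assumes a libvirt-managed host and a prepared domain definition file; vm01 and vm01.xml are hypothetical names, not from the original text.

```shell
# Sketch only: assumes a libvirt-managed host; vm01.xml is a
# hypothetical domain definition file (disk image path inside it).

# 1. Register the VM with the host
virsh define vm01.xml

# 2. Start the VM on this host
virsh start vm01

# 3. When the user finishes, suspend or shut down quickly to free resources
virsh suspend vm01      # pause in memory
virsh shutdown vm01     # graceful guest shutdown
```

The same commands, driven by a cluster manager across many hosts, are what "executing the VM deployment command on every host" amounts to in practice.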
Live VM Migration Steps and Performance Effects

• When a VM fails, its role can be taken over by another VM on a different node, as long as both run the same guest OS. However, a VM must stop playing its role if its residing host node fails. This problem can be mitigated with VM live migration. The migration copies the VM state file from the storage area to the destination host machine.
• There are four ways to manage a virtual cluster. The first way is to use a guest-based manager, by which the cluster manager resides on a guest system. In this case, multiple VMs form a virtual cluster.
• Example: openMosix, an open source Linux cluster running different guest systems on top of the Xen hypervisor.
• The second way is to build a cluster manager on the host systems. The host-based manager supervises the guest systems and can restart a guest system on another physical machine.
• Example: the VMware HA system, which can restart a guest system after failure.
• The third way to manage a virtual cluster is to use independent cluster managers on both the host and guest systems. This makes infrastructure management more complex.
• Finally, one can use an integrated cluster manager on the guest and host systems. This means the manager must be designed to distinguish between virtualized resources and physical resources.

Various cluster management schemes can be greatly enhanced when VM live migration is enabled with minimal overhead.
• A VM can be in one of the following four states:
– An inactive state is defined by the virtualization platform, under which the VM is not enabled.
– An active state refers to a VM that has been instantiated at the virtualization platform to perform a real task.
– A paused state corresponds to a VM that has been instantiated but disabled to process a task, or paused in a waiting state.
– A VM enters the suspended state if its machine file and virtual resources are stored back to disk.
Live migration process of a VM from one host to another

• When a VM migrates from one physical node to another, we should consider the following issues:
o Memory Migration
o File System Migration
o Network Migration
o Live Migration of VM Using Xen
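For concreteness, live migration is exposed directly by common hypervisor toolchains. A minimal sketch with libvirt’s virsh follows; vm01 and dest-host are hypothetical names, and shared storage between the two hosts is assumed so that only memory state needs to be copied.

```shell
# Sketch only: vm01 and dest-host are hypothetical names; assumes
# shared storage, so the disk stays put and memory pages are copied
# iteratively while the VM keeps running, with a brief final pause.
virsh migrate --live vm01 qemu+ssh://dest-host/system
```

If storage is not shared, file system migration becomes part of the problem, which is why it appears as a separate issue in the list above.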
Introduction to Docker
Docker is a platform-as-a-service (PaaS) product that uses operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines.
Difference between Docker Containers and Virtual Machines
1. Docker Containers
• Docker Containers contain binaries, libraries, and configuration files along with the
application itself.
• They don’t contain a guest OS for each container and rely on the underlying OS
kernel, which makes the containers lightweight.
• Containers share resources with other containers in the same host OS and provide
OS-level process isolation.
2. Virtual Machines
• Virtual Machines (VMs) run on hypervisors, which allow multiple virtual machines, each with its own operating system, to run on a single physical machine.
• Each VM has its own copy of an operating system along with the application and necessary binaries, which makes it significantly larger and requires more resources.
• They provide hardware-level process isolation and are slow to boot.
Docker Components

1. Docker Image
• It is a file, comprised of multiple layers, used to execute code in a Docker container.
• They are a set of instructions used to create docker containers.
2. Docker Container
• It is a runtime instance of an image.
• Allows developers to package applications with all parts needed such as libraries and
other dependencies.
3. Dockerfile
• It is a text document that contains necessary commands which on execution helps
assemble a Docker Image.
• Docker image is created using a Docker file.
4. Docker Engine
• The software that hosts the containers is named Docker Engine.
• Docker Engine is a client-server-based application
• The docker engine has 3 main components:
• Server: It is responsible for creating and managing Docker images,
containers, networks, and volumes on the Docker. It is referred to as a
daemon process.
• REST API: It specifies how the applications can interact with the Server
and instructs it what to do.
• Client: The Client is a docker command-line interface (CLI), that allows
us to interact with Docker using the docker commands.
5. Docker Hub
• Docker Hub is the official online repository where you can find other Docker Images
that are available for use.
• It makes it easy to find, manage, and share container images with others.
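As a minimal illustration of the Dockerfile and image components above, the following sketch builds an image for a hypothetical Python script; app.py and the myapp tag are assumed names, not from the original text. Each instruction produces one layer of the image.

```dockerfile
# Minimal Dockerfile sketch (app.py is a hypothetical application file).

# Base image layer, pulled from Docker Hub
FROM python:3.12-slim

# Working directory inside the container
WORKDIR /app

# Copy the application code in as a new layer
COPY app.py .

# Default command executed when a container starts from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in a directory containing this Dockerfile and app.py would produce an image, and `docker run myapp` would then create a container from it.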
Docker Container

A Docker container is a running instance of an image. You can use Command Line Interface (CLI) commands to run, start, stop, move, or delete a container. You can also provide configuration for the network and environment variables. A Docker container is an isolated and secure application platform, but it can share and access resources running on a different host or in another container.
An image is a read-only template with instructions for creating a Docker container. A Docker image is described in a text file called a Dockerfile, which has a simple, well-defined syntax. An image does not have state and never changes. Docker Engine provides the core Docker technology that enables images and containers.

You can understand containers and images with the help of the following command.
$ docker run hello-world

1) docker: the Docker engine command; it tells the operating system that you are running a Docker program.
2) run: this subcommand is used to create and run a Docker container.
3) hello-world: the name of an image. You need to specify the name of the image to load into the container.
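The CLI operations mentioned above (run, start, stop, delete) form the basic container lifecycle. A hedged sketch of a typical session using standard Docker CLI commands (web1 is a hypothetical container name; requires a running Docker daemon):

```shell
# Sketch of the container lifecycle with standard Docker CLI commands.
docker run -d --name web1 nginx   # create and start a container from the nginx image
docker ps                         # list running containers
docker stop web1                  # stop the container (it still exists)
docker start web1                 # start the same container again
docker rm -f web1                 # force-remove the container
```

Note that stopping a container does not delete it; its writable layer persists until the container is removed.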
Docker Images and Docker Repositories
Docker Images:

Definition:
A Docker image is a read-only template that defines your container. It contains the code, libraries, dependencies, and other files needed for an application to run.

Purpose:
Docker images act as a blueprint for creating containers, which are self-contained packages of applications and related files.

Structure:
Docker images are built in layers, which allows for efficient storage and reuse of common components.

Example:
Think of a Docker image as a recipe for building a container, and the container as the dish that results from following that recipe.
Docker Repositories:

Definition:
A Docker repository is a collection of container images, enabling you to store, manage, and share Docker images publicly or privately. Docker Hub is a public registry, while cloud providers like Amazon Web Services and Google Cloud offer their own private registries.

Purpose:
Repositories facilitate the distribution and reuse of Docker images across different environments, including cloud deployments.

Example:
Imagine a repository as a library where you can find and download pre-built Docker images for various applications or services.
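Working against a repository follows the registry/repository:tag naming scheme. A hedged sketch with standard Docker CLI commands (myuser is a hypothetical Docker Hub account name; pushing requires `docker login` and a daemon):

```shell
# Sketch: myuser is a hypothetical Docker Hub account name.
docker pull hello-world                    # download an image from Docker Hub
docker tag hello-world myuser/hello:v1     # retag it under your own repository
docker push myuser/hello:v1                # upload it to the repository
docker images                              # list images stored locally
```

Because images are layered, pushing and pulling transfer only the layers the other side does not already have.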
How they relate to Cloud Computing:

• Containerization:
Docker enables containerization, a key practice in cloud computing that allows applications to be packaged and deployed consistently across different environments.
• Cloud Deployments:
Docker images and repositories are essential for deploying applications to cloud platforms, as they provide a standardized way to package and distribute applications.
• Infrastructure as Code:
Dockerfiles, which define how to build Docker images, can be used as part of infrastructure-as-code practices, allowing for automated and reproducible deployments.
• Microservices:
Docker and containers are well-suited for building and deploying microservices, which are small, independent applications that can be deployed and scaled independently.
Examples:
• Google Cloud: Google Cloud provides services like Google Kubernetes Engine (GKE) for running containerized applications.
• Amazon Web Services: Amazon Web Services offers services like Amazon ECS (Elastic Container Service) and Amazon EKS (Elastic Kubernetes Service) for managing containers.
• Azure: Microsoft Azure provides services like Azure Container Registry (ACR) for storing and managing container images.