
UNIT-4

BCA613: Cloud Computing


Virtualization Technology: Definition, Understanding and Benefits of Virtualization.
Implementation Level of Virtualization, Virtualization Structure/Tools and
Mechanisms, Hypervisor VMware, KVM, Xen. Virtualization of CPU, Memory, I/O
Devices, Virtual Cluster and Resources Management, Virtualization of Server,
Desktop, Network, and Virtualization of data-center
Virtualization Technology: -
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a storage
device, an operating system or network resources".
In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded.
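To make the idea of mapping a logical name to a physical resource concrete, the following minimal Python sketch (the class and the names in it are purely illustrative, not part of any real product) registers logical names for a physical resource and returns a pointer to it on demand:

class ResourceMapper:
    """Toy illustration: map logical names to physical resources."""
    def __init__(self):
        self._mapping = {}                      # logical name -> physical resource

    def register(self, logical_name, physical_resource):
        # Assign a logical name to a physical resource.
        self._mapping[logical_name] = physical_resource

    def resolve(self, logical_name):
        # Provide a pointer (reference) to the physical resource when demanded.
        return self._mapping[logical_name]

# Two tenants share one physical disk under different logical names.
mapper = ResourceMapper()
mapper.register("tenant-a/data", "/dev/sdb1")
mapper.register("tenant-b/data", "/dev/sdb1")
print(mapper.resolve("tenant-a/data"))          # -> /dev/sdb1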
Concept behind the Virtualization: -
Creation of a virtual machine over existing operating system and hardware is known as Hardware Virtualization. A Virtual
machine provides an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the Host Machine, and the virtual machine itself is referred to as the Guest Machine.

Characteristics of Virtualization
• Increased Security: The ability to control the execution of a guest program in a completely transparent manner
opens new possibilities for delivering a secure, controlled execution environment. All the operations of the guest
programs are generally performed against the virtual machine, which then translates and applies them to the host.
• Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the most relevant features.
• Sharing: Virtualization allows the creation of a separate computing environment within the same host.
• Aggregation: It is possible to share physical resources among several guests, but virtualization also allows
aggregation, which is the opposite process: a group of separate hosts can be tied together and represented to the guests as a single virtual resource.
Types of Virtualizations
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

1. Application Virtualization: Application virtualization helps a user to have remote access to an application from a server.
The server stores all personal information and other characteristics of the application, yet the application can still be run on a local workstation through the internet. An example of this would be a user who needs to run two different versions of the same software.
Technologies that use application virtualization are hosted applications and packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control plane and data plane, co-existing on top of one physical network. The virtual networks can be managed by individual parties that are kept isolated from each other. Network virtualization provides a facility to create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPN), and workload security within days or even weeks.

3. Desktop Virtualization: Desktop virtualization allows the user's OS to be remotely stored on a server in the data center. It allows the user to access their desktop virtually, from any location, using a different machine. Users who want a specific operating system other than Windows Server will need to have a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. Here, the central server (physical server) is divided into multiple different virtual servers by changing the identity number and processors, so each virtual server can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by deploying main server resources into sub-server resources. It is beneficial in virtual migration, reducing energy consumption, reducing infrastructural costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without needing to know technical details such as how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested stakeholders and users through various cloud services. Many large companies provide data virtualization services, such as Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
• Data-integration
• Business-integration
• Service-oriented architecture data-services
• Searching organizational data
Benefits of Virtualization
• More flexible and efficient allocation of resources.
• Enhance development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure on demand.
• Enables running multiple operating systems.

IMPLEMENTATION LEVELS OF VIRTUALIZATION: -


Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are multiplexed in the same
hardware machine. The idea of VMs dates back to the 1960s.

1.) Instruction Set Architecture Level (ISA)


ISA virtualization works through ISA emulation. Many older programs that were designed to operate on a different set of hardware can still be run this way: any program written for one ISA can execute on a virtual machine that emulates that ISA. For example, an x86 host can run binary code originally compiled for another processor family, and with a few modifications the same approach works on an x64 machine. With the help of ISA emulation, it is feasible to construct a virtual machine that is independent of the physical hardware.
Basic emulation requires an interpreter, which interprets the source instructions into target instructions the host hardware can execute, one instruction at a time. This is one of the five implementation levels of virtualization in cloud computing.
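As a rough illustration of emulation by interpretation, the following Python sketch interprets a made-up toy instruction set one instruction at a time on the host; the ISA, the register names, and the program are invented for illustration only:

def interpret(program):
    # Decode and execute each "guest" instruction of a toy ISA in software.
    registers = {"R0": 0, "R1": 0}
    memory = {}
    for op, *args in program:
        if op == "LOAD":                        # LOAD Rdst, immediate
            registers[args[0]] = args[1]
        elif op == "ADD":                       # ADD Rdst, Rsrc
            registers[args[0]] += registers[args[1]]
        elif op == "STORE":                     # STORE Rsrc, address
            memory[args[1]] = registers[args[0]]
        elif op == "HALT":
            break
    return memory

# A tiny guest program, shown already decoded for readability.
print(interpret([("LOAD", "R0", 2), ("LOAD", "R1", 3),
                 ("ADD", "R0", "R1"), ("STORE", "R0", 0x10), ("HALT",)]))
# -> {16: 5}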
2.) Hardware Abstraction Level (HAL)
HAL, as the name implies, allows virtualization to operate at the hardware level. A hypervisor running on the host is required for this to work. This is the level at which virtual machines are created, and the hypervisor controls access to the hardware, allowing each physical component, such as an input-output device, memory or processor, to be virtualized. More than one hypervisor cannot run on the same piece of hardware at the same time, although a single hypervisor can host many virtual machines. Typically, this level is employed in a cloud-based environment.
3.) Operating System Level
At this level, virtualization provides an abstraction layer between the operating system and the applications. It creates software containers on top of the operating system and the physical server, so each of these containers becomes a server in its own right.
This virtualization level is utilized when there are multiple users and no one wants to share the hardware. A separate virtual hardware resource is presented to each user's virtual environment, which eliminates any potential for conflict.
4.) Library Level
Programming the operating system directly through system calls is cumbersome, so most applications use user-level library APIs instead. Because these APIs are usually well documented, the library level is a convenient place to introduce virtualization. API hooks enable this, since they regulate the communication link between the application and the system.
Since the vast majority of applications use user-level APIs rather than OS-level system calls, a system interface with a well-documented API is a good candidate for virtualization. Virtualization at this level is possible because API hooks intercept and control how an application communicates with the system.
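Real library-level virtualization systems (for example, WINE) hook native library calls; as a loose analogy only, the Python sketch below replaces a user-level API with a hook that controls what the application sees, without changing the OS or the application's logic:

import os

_real_getcwd = os.getcwd                        # keep a reference to the real API

def hooked_getcwd():
    # The hook intercepts the API call and presents a virtualized view.
    return "/virtual-root" + _real_getcwd()

os.getcwd = hooked_getcwd                       # install the hook
print(os.getcwd())                              # the application now sees the virtual path
os.getcwd = _real_getcwd                        # remove the hook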
5.) Application Level
Application-level virtualization is the final implementation level of virtualization in cloud computing and is utilized when only one application is to be virtualized; the complete platform does not need to be virtualized. It is commonly used for virtual machines that run programs written in high-level languages. The virtualization layer resides on top of the application program, and the application-level virtual machine allows high-level language programs to be compiled for and run on it easily.

Virtualization Structure/Tools and Mechanisms: -


In general, there are three typical classes of VM architecture. Figure 3.1 shows the architectures of a machine before and
after virtualization. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization
layer is inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible for
converting portions of the real hardware into virtual hardware. Therefore, different operating systems such as Linux and
Windows can run on the same physical machine, simultaneously. Depending on the position of the virtualization layer, there
are several classes of VM architectures, namely the hypervisor architecture, paravirtualization, and host-based virtualization.
The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization operations.
Hypervisor and Xen Architecture: -
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like CPU, memory, disk and
network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer
is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications.
Depending on the functionality, a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V, or a monolithic hypervisor architecture like VMware ESX for server virtualization.

A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and
processor scheduling). The device drivers and other changeable components are outside the hypervisor. A monolithic
hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore, the size of the
hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor. Essentially, a hypervisor must
be able to convert physical devices into virtual resources dedicated for the deployed VM to use.

The Xen Architecture: -


Xen is an open-source hypervisor program developed by Cambridge University. Xen is a micro-kernel hypervisor, which
separates the policy from the mechanism. The Xen hypervisor implements all the mechanisms, leaving the policy to be
handled by Domain 0, as shown in Figure 3.5. Xen does not include any device drivers natively. It just provides a mechanism
by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept rather
small. Xen provides a virtual environment located between the hardware and the OS.
The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three components
is important. Like other virtualization systems, many guest OSes can run on top of the hypervisor.

For example, Xen is based on Linux and its security level is C2. Its management VM is named Domain 0, which has the privilege
to manage other VMs implemented on the same host. If Domain 0 is compromised, the hacker can control the entire system.
So, in the VM system, security policies are needed to improve the security of Domain 0. Domain 0, behaving as a VMM, allows
users to create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a file, which flexibly
provides tremendous benefits for users. Unfortunately, it also brings a series of security problems during the software life
cycle and data lifetime.

Hypervisor VMware: -
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A
hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and
processing.
Benefits of hypervisors
There are several benefits to using a hypervisor that hosts multiple virtual machines:
• Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This makes it easier to
provision resources as needed for dynamic workloads.
• Efficiency: Hypervisors that run several virtual machines on one physical machine’s resources also allow for more
efficient utilization of one physical server. It is more cost- and energy-efficient to run several virtual machines on
one physical machine than to run multiple underutilized physical machines for the same task.
• Flexibility: Bare-metal hypervisors allow operating systems and their associated applications to run on a variety of
hardware types because the hypervisor separates the OS from the underlying hardware, so the software no longer
relies on specific hardware devices or drivers.
• Portability: Hypervisors allow multiple operating systems to reside on the same physical server (host machine).
Because the virtual machines that the hypervisor runs are independent from the physical machine, they are portable.

KVM: -
Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux®. Specifically, KVM lets you
turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual
machines (VMs).
KVM is part of Linux. If you’ve got Linux 2.6.20 or newer, you’ve got KVM. KVM was first announced in 2006 and merged into
the mainline Linux kernel version a year later. Because KVM is part of existing Linux code, it immediately benefits from every
new Linux feature, fix, and advancement without additional engineering.

Working of KVM: -
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some operating system-level components—
such as a memory manager, process scheduler, input/output (I/O) stack, device drivers, security manager, a network stack,
and more—to run VMs. KVM has all these components because it’s part of the Linux kernel. Every VM is implemented as a
regular Linux process, scheduled by the standard Linux scheduler, with dedicated virtual hardware like a network card,
graphics adapter, CPU(s), memory, and disks.
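Because KVM guests are ordinary Linux processes, they are commonly managed through libvirt. The sketch below (assuming the libvirt-python bindings are installed and a local qemu:///system KVM host is reachable; this is only one of several ways to manage KVM) lists the defined guests and their resources:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # connect to the local KVM/QEMU driver
try:
    for dom in conn.listAllDomains():
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: active={bool(dom.isActive())}, "
              f"vCPUs={vcpus}, memory={mem // 1024} MiB")
finally:
    conn.close()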

Xen: -
Xen is an open source hypervisor; support for it is built into the Linux kernel, and it is available in most Linux distributions. The Xen Project is one of the many open source projects managed by the Linux Foundation.

Paravirtualization And Full Virtualization: -


Xen offers two types of virtualization: paravirtualization and full virtualization. In paravirtualization, the guest runs a modified version of the OS, so the OS is aware that it is being virtualized. This enables much more efficient communication between the OS and the physical hardware, as the hardware devices can be addressed directly. The only drawback of paravirtualization is that a modified guest OS needs to be used, which isn't provided by many vendors.
The counterpart of paravirtualization is full virtualization. This is a virtualization mode where the CPU needs to provide
support for virtualization extensions. In full virtualization, unmodified virtualized OSes can efficiently address the hardware
because of this support.

Virtualization of CPU: -
CPU virtualization is a technology that allows multiple virtual machines to run on a single physical server. It is a key component
of cloud computing, allowing for the efficient use of computing resources and the ability to quickly scale up or down as
needed. Here are some types of CPU virtualization.
• Software-Based CPU Virtualization
With software-based CPU virtualization, the guest application code runs directly on the processor, while the guest
privileged code is translated and the translated code runs on the processor.
• Hardware-Assisted CPU Virtualization
Certain processors provide hardware assistance for CPU virtualization (for example, Intel VT-x and AMD-V); a simple check for this support is sketched after this list.
• Virtualization and Processor-Specific Behavior
Although VMware software virtualizes the CPU, the virtual machine detects the specific model of the processor on
which it is running.
• Performance Implications of CPU Virtualization
CPU virtualization adds varying amounts of overhead depending on the workload and the type of virtualization used.
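As referenced above, a quick (Linux-specific) way to check whether a processor offers hardware-assisted CPU virtualization is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; the sketch below assumes a readable /proc/cpuinfo:

def hardware_virtualization_support():
    # Inspect the CPU flags exposed by the Linux kernel.
    with open("/proc/cpuinfo") as f:
        flags = f.read().split()
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print(hardware_virtualization_support() or "no hardware assistance detected")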

Virtualization of Memory: -
Memory virtualization is an abstraction technique that virtualizes physical memory resources to provide flexible and effective
memory architecture for computing systems, enabling several virtual machines (VMs) to run concurrently on a single physical
machine, each with its own virtual address space. A computer’s physical memory is divided up into several logical partitions,
each of which shares a physical area while retaining privacy and security from the others.
The MMU (memory management unit), which supports the guest OS, must be virtualized in order to handle numerous virtual machines on a single system. The MMU is mainly used for translating virtual addresses into physical addresses in the processor. It consults the page table, which is a data structure maintained by the operating system that maps virtual addresses to physical addresses. However, accessing the page table can be slow, and doing so frequently affects system performance.
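The following minimal sketch (the page size and page-table contents are made up for illustration) shows the kind of virtual-to-physical translation and page-fault behaviour described above:

PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 4: 12}                # virtual page number -> physical frame

def translate(virtual_address):
    # Split the address into page number and offset, then look up the frame.
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError(f"page fault: virtual page {vpn} is not resident")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1010)))                   # page 1 -> frame 3 -> 0x3010
try:
    translate(0x2000)                           # page 2 is unmapped
except LookupError as fault:
    print(fault)                                # the OS would now load the page from disk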
1. Virtual Address Space: Creating a virtual address space for each program that corresponds to physical memory addresses is the first stage in memory virtualization. Although virtual address spaces are often bigger than the available physical memory, numerous applications can run simultaneously.
2. Page Tables: The operating system keeps track of the memory pages used by each application and their matching physical memory addresses in order to manage the mapping between virtual and physical memory addresses. This data structure is known as a page table.
3. Memory Paging: A page fault occurs when an application tries to access a memory page that is not already in physical memory. The OS reacts to this by loading the requested page from disk into physical memory, swapping out another page from physical memory to disk if necessary.
4. Memory Compression: Memory compression algorithms, which analyze the contents of memory pages and compress them to conserve space, are used to make better use of physical memory. A compressed page is decompressed on the fly when a program wants to access it.
5. Memory Overcommitment: Memory overcommitment, in which applications are given access to more virtual memory than is physically available, is made possible by virtualization. Because not all memory pages are actively being used at once, the system can employ memory paging and compression to release physical memory as needed.
6. Memory Ballooning: Several virtualization technologies utilize a method called ballooning to further minimize
memory use. This entails dynamically modifying the memory allotted to each virtual machine in accordance with its
usage trends. The hypervisor can reclaim some of a virtual machine’s allocated memory if it is not being fully utilized
and make it accessible to other virtual machines.

Virtualization of I/O Devices


I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware.
At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization,
and direct I/O. Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates well-
known, real-world devices.
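As a toy model of full device emulation (the device, its registers, and their behaviour are invented for illustration), the sketch below shows a software-only serial port whose register reads and writes would be trapped by the VMM and served entirely in software:

class EmulatedSerialPort:
    DATA_REG, STATUS_REG = 0x0, 0x1             # register offsets of the modeled device

    def __init__(self):
        self.output = []

    def write(self, offset, value):
        # Invoked by the VMM when the guest writes to a trapped I/O address.
        if offset == self.DATA_REG:
            self.output.append(chr(value))

    def read(self, offset):
        # Invoked by the VMM when the guest reads a trapped I/O address.
        if offset == self.STATUS_REG:
            return 0x1                          # "transmitter ready" in this toy model
        return 0

dev = EmulatedSerialPort()
for ch in "ok":
    dev.write(EmulatedSerialPort.DATA_REG, ord(ch))
print("".join(dev.output))                      # -> ok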
Virtual Cluster and Resources Management: -
A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN. Virtual
clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster
are interconnected logically by a virtual network across several physical networks. Figure 3.18 illustrates the concepts of
virtual clusters and physical clusters. Each virtual cluster is formed with physical machines or a VM hosted by multiple physical
clusters. The virtual cluster boundaries are shown as distinct boundaries.

The provisioning of VMs to a virtual cluster is done dynamically and has the following interesting properties:
• The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different OSes can be
deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical machine on which the VM is implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the same server. This will greatly enhance server
utilization and application flexibility.
• VMs can be colonized (replicated) in multiple servers for the purpose of promoting distributed parallelism, fault tolerance, and disaster recovery.
• The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way an overlay network
varies in size in a peer-to-peer (P2P) network.
• The failure of any physical nodes may disable some VMs installed on the failing nodes. But the failure of VMs will not pull
down the host system.
Since system virtualization has been widely used, it is necessary to effectively manage VMs running on a mass of physical
computing nodes (also called virtual clusters) and consequently build a high-performance virtualized computing
environment. This involves virtual cluster deployment, monitoring and management over large-scale clusters, as well as
resource scheduling, load balancing, server consolidation, fault tolerance, and other techniques. The different node colors in
Figure 3.18 refer to different virtual clusters. In a virtual cluster system, it is quite important to store the large number of VM
images efficiently. Each VM can be installed on a remote server or replicated on multiple servers belonging to the same or
different physical clusters. The boundary of a virtual cluster can change as VM nodes are added, removed, or migrated
dynamically over time.
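To illustrate one small piece of virtual cluster resource management, the sketch below places each new VM on the physical node currently hosting the fewest VMs; the node names and the least-loaded policy are illustrative only, not a description of any particular scheduler:

physical_nodes = {"node1": [], "node2": [], "node3": []}

def place_vm(vm_name):
    # Pick the physical node with the fewest VMs (toy load-balancing rule).
    target = min(physical_nodes, key=lambda n: len(physical_nodes[n]))
    physical_nodes[target].append(vm_name)
    return target

for vm in ["web-1", "web-2", "db-1", "cache-1"]:
    print(vm, "->", place_vm(vm))

# Growing or shrinking the virtual cluster is just adding or removing VM entries.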

Virtualization of Server: -
Server Virtualization is the process of dividing a physical server into several virtual servers, called virtual private servers. Each
virtual private server can run independently.
The concept of Server Virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.
Types of Server Virtualization
1. Hypervisor
In Server Virtualization, the hypervisor plays an important role. It is a layer between the operating system (OS) and the hardware.
There are two types of hypervisors.
o Type 1 hypervisor (also known as bare-metal or native hypervisors)
o Type 2 hypervisor (also known as hosted or embedded hypervisors)
The hypervisor is mainly used to perform tasks such as allocating physical hardware resources (CPU, RAM, etc.) to several smaller independent virtual machines, called "guests", on the host machine.
2. Full Virtualization
Full Virtualization uses a hypervisor to directly communicate with the CPU and physical server. It provides the best isolation
and security mechanism to the virtual machines.
The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its own processing needs, so it can
slow down the application and server performance.
VMWare ESX server is the best example of full virtualization.
3. Para Virtualization
Para Virtualization is quite similar to Full Virtualization. The advantages of using this virtualization are that it is easier to use, offers enhanced performance, and does not require emulation overhead. Xen and UML (User Mode Linux) primarily use Para Virtualization.
The difference between full and para virtualization is that, in para virtualization, the hypervisor does not need as much processing power to manage the guest OS.
4. Operating System Virtualization
Operating system virtualization is also called system-level virtualization. It is a server virtualization technology that divides one operating system into multiple isolated user-space instances, called virtual environments. The biggest advantage of using this form of server virtualization is that it reduces the use of physical space, so it saves money.
Linux OS Virtualization and Windows OS Virtualization are the types of Operating System virtualization.
FreeVPS, OpenVZ, and Linux Vserver are some examples of System-Level Virtualization.
Hardware Assisted Virtualization was introduced by AMD and Intel. It is also known as hardware virtualization, AMD virtualization (AMD-V), and Intel virtualization (Intel VT). It provides processor-level support for virtualization, which improves its performance. The advantage of using Hardware Assisted Virtualization is that it requires less hypervisor overhead.
6. Kernel-Level Virtualization
Kernel-level virtualization is one of the most important types of server virtualization. It is an open-source virtualization approach that uses the Linux kernel as a hypervisor. The advantage of using kernel-level virtualization is that it does not require any special administrative software and has very little overhead.
User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are some examples of kernel-level virtualization.
Advantages of Server Virtualization
There are the following advantages of Server Virtualization -
1. Independent Restart
In Server Virtualization, each virtual server can be restarted independently without affecting the working of the other virtual servers.
2. Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so it reduces the cost of hardware
components.
3. Disaster Recovery
Disaster Recovery is one of the best advantages of Server Virtualization. In Server Virtualization, data can easily and quickly be moved from one server to another, and this data can be stored and retrieved from anywhere.
4. Faster deployment of resources
Server virtualization allows us to deploy our resources in a simpler and faster way.
5. Security
It allows users to store their sensitive data inside the data centers.
Disadvantages of Server Virtualization
There are the following disadvantages of Server Virtualization -
1. The biggest disadvantage of server virtualization is that when the server goes offline, all the websites that are hosted
by the server will also go down.
2. It is difficult to accurately measure the performance of virtualized environments.
3. It requires a huge amount of RAM consumption.
4. It is difficult to set up and maintain.
5. Some core applications and databases do not support virtualization.
6. It requires extra hardware resources.
Uses of Server Virtualization
A list of uses of server virtualization is given below -
o Server Virtualization is used in the testing and development environment.
o It improves the availability of servers.
o It allows organizations to make efficient use of resources.
o It reduces redundancy without purchasing additional hardware components.

Virtualization of Desktop: -
Desktop virtualization is the concept of isolating a logical operating system (OS) instance from the client that is used to access
it.
There are several different conceptual models of desktop virtualization, which can broadly be divided into two categories
based on whether the technology executes the OS instance locally or remotely. It is important to note that not all forms of
desktop virtualization technology involve the use of virtual machines (VMs).
Working: -
Desktop virtualization works by employing hardware virtualization technology. Virtual desktops exist as VMs, running on a
virtualization host. These VMs share the host server's processing power, memory and other resources.
Users typically run a remote desktop protocol (RDP) client to access the virtual desktop environment. This client attaches to
a connection broker that links the user's session to a virtual desktop. Typically, virtual desktops are nonpersistent, meaning
the connection broker assigns the user a random virtual desktop from a virtual desktop pool. When the user logs out, this
virtual desktop resets to a pristine, unchanged state and returns to the pool. However, some vendors offer an option to
create persistent virtual desktops, in which users receive their own writable virtual desktop.
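The sketch below models the connection-broker behaviour described above for nonpersistent desktops (the pool names and the reset logic are simplified and illustrative):

import random

class ConnectionBroker:
    def __init__(self, pool):
        self.available = list(pool)
        self.assigned = {}                      # user -> virtual desktop

    def login(self, user):
        # Hand the user a random desktop from the pool.
        desktop = random.choice(self.available)
        self.available.remove(desktop)
        self.assigned[user] = desktop
        return desktop

    def logout(self, user):
        # Nonpersistent model: reset the desktop and return it to the pool.
        desktop = self.assigned.pop(user)
        self.available.append(desktop)

broker = ConnectionBroker(["vd-01", "vd-02", "vd-03"])
print(broker.login("alice"))                    # alice gets a random virtual desktop
broker.logout("alice")                          # her desktop goes back to the pool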
Desktop virtualization deployment types
There are three main types of desktop virtualization: virtual desktop infrastructure (VDI), Remote Desktop Services (RDS) --
formerly, Terminal Services -- and desktop as a service (DaaS).
Virtualization of Network: -
Network Virtualization is a process of logically grouping physical networks and making them operate as single or multiple
independent networks called Virtual Networks.

Tools for Network Virtualization:


1. Physical switch OS –
The operating system of the physical switch must itself provide network virtualization functionality.
2. Hypervisor –
The hypervisor uses either built-in networking features or third-party software to provide network virtualization functionality.
Functions of Network Virtualization:
• It enables the functional grouping of nodes in a virtual network (a toy sketch of such grouping appears after this list).
• It enables the virtual network to share network resources.
• It allows communication between nodes in a virtual network without routing of frames.
• It restricts management traffic.
• It enforces routing for communication between virtual networks.
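As referenced in the list above, a toy model of functional grouping is sketched below: nodes belong to virtual network IDs, and two nodes can communicate only if they share an ID (the node names and IDs are illustrative):

virtual_networks = {
    "vm-a": 10, "vm-b": 10,                     # virtual network 10
    "vm-c": 20,                                 # virtual network 20
}

def can_communicate(src, dst):
    # Nodes talk only within their own virtual network.
    return virtual_networks[src] == virtual_networks[dst]

print(can_communicate("vm-a", "vm-b"))          # True  - same virtual network
print(can_communicate("vm-a", "vm-c"))          # False - isolated virtual networks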

Virtualization of Data center: -


Data center virtualization is the process of creating a modern data center that is highly scalable, available and secure. With
data center virtualization products, you can increase IT agility and create a seamless foundation to manage private and public
cloud services alongside traditional on-premises infrastructure.
Network Virtualization in Virtual Data Center :
1. Physical Network
• Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
• Grants connectivity among physical servers running a hypervisor, between physical servers and storage systems and
between physical servers and clients.
2. VM Network
• Consists of virtual switches.
• Provides connectivity to hypervisor kernel.
• Connects to the physical network.
• Resides inside the physical server.
Like a regular data center, a VDC provides computing capabilities that enable workloads of business apps and activities, such
as:
• File sharing.
• Email operations.
• Productivity apps.
• CRM and ERP platforms.
• Database operations.
Virtualization of physical components offers a lot of advantages, and companies opt to deploy a VDC in pursuit of:
• Flexible and scalable infrastructure.
• Shorter time-to-market and idea-to-cash cycles.
• High availability.
• Higher levels of IT setup customization.
• Cost reductions (lower rental, power, cooling, maintenance, and hardware costs).
