Cs8791 Cloud Computing Unit2 Notes
Service-oriented computing
What is a service?
A service encapsulates a software component that provides a set of coherent and
related functionalities that can be reused and integrated into bigger and more complex
applications. The term service is a general abstraction that encompasses several
different implementations using different technologies and protocols.
Whereas in distributed object programming the remote method invocation is transparent, in a service-oriented computing environment the interaction with a service is explicit, and the interface of a service is kept minimal to foster its reuse and simplify the interaction.
At the same time, service orientation requires that contracts and schemas remain stable over time, since it would not be possible to propagate changes to all of their possible clients.
To address this issue, contracts and schema are defined in a way that allows
services to evolve without breaking already deployed code.
Technologies such as XML and SOAP provide the appropriate tools to support such features, rather than a class definition or an interface declaration.
Service Provider
The service provider is the maintainer of the service and the organization that
makes available one or more services for others to use.
To advertise services, the provider can publish them in a registry, together with
a service contract that specifies the nature of the service, how to use it, the
requirements for the service, and the fees charged.
Service Consumer
The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
Service providers and consumers can belong to different organizations or business domains.
It is very common in SOA-based computing systems that components play the
roles of both service provider and service consumer.
Service Orchestration
Services might aggregate information and data retrieved from other services or
create workflows of services to satisfy the request of a given service consumer.
This practice is known as service orchestration, which more generally describes
the automated arrangement, coordination, and management of complex
computer systems, middleware, and services.
Service Choreography
Another important interaction pattern is service choreography, which is the
coordinated interaction of services without a single point of control.
SOA provides a reference model for architecting several software systems, especially enterprise business applications and systems.
The following guiding principles, which characterize SOA platforms, are winning features within an enterprise context:
• Standardized service contract.
Services adhere to a given communication agreement, which is specified through one
or more service description documents.
• Loose coupling.
Services are designed as self-contained components, maintain relationships
that minimize dependencies on other services, and only require being aware of
each other.
Service contracts will enforce the required interaction among services. This
simplifies the flexible aggregation of services and enables a more agile design
strategy that supports the evolution of the enterprise business.
• Abstraction.
A service is completely defined by service contracts and description documents.
They hide their logic, which is encapsulated within their implementation.
The use of service description documents and contracts removes the need to
consider the technical implementation details and provides a more intuitive
framework to define software systems within a business context.
• Reusability.
Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
Reusability allows for a more agile design and cost-effective system
implementation and deployment.
Therefore, it is possible to leverage third-party services to deliver required functionality by paying an appropriate fee rather than developing the same capability in-house.
• Autonomy.
Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
• Lack of state.
By providing a stateless interaction pattern (at least in principle), services
increase the chance of being reused and aggregated, especially in a scenario
in which a single service is used by multiple consumers that belong to different
administrative and business domains.
• Discoverability.
Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered.
Service discovery provides an effective means for utilizing third-party resources.
• Composability.
Using services as building blocks, sophisticated and complex operations can be
implemented.
Service orchestration and choreography provide a solid support for composing
services and achieving business goals.
SOA Technologies –
Web Services
The first implementations of SOA have leveraged distributed object
programming technologies such as CORBA and DCOM.
Later, Web services emerged as the prominent technology for implementing SOA systems and applications.
They leverage Internet technologies and standards for building distributed
systems.
Several aspects make Web Services the technology of choice for SOA.
First, they allow for interoperability across different platforms and programming
languages.
Second, they are based on well-known and vendor-independent standards such
as HTTP, SOAP, and WSDL.
Third, they provide an intuitive and simple way to connect heterogeneous software systems, enabling quick composition of services in a distributed environment.
WS technology Stack
Beyond interoperability and ease of integration, Web services provide the features required by enterprise business applications to be used in an industrial environment.
They define facilities for enabling service discovery, which allows system
architects to more efficiently compose SOA applications, and service metering
to assess whether a specific service complies with the contract between the
service provider and the service consumer.
The concept behind a Web service is very simple. Using the object-oriented abstraction as a basis, a Web service exposes a set of operations that can be invoked by leveraging Internet-based protocols. Method operations support parameters and return values in the form of complex and simple types.
The semantics for invoking Web service methods is expressed through
interoperable standards such as XML and WSDL, which also provide a complete
framework for expressing simple and complex types in a platform-independent
manner.
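To make this concrete, the following minimal sketch (not from the source) posts a hand-written SOAP envelope to a hypothetical calculator service using only the Python standard library; the endpoint example.org, the namespace, and the operation name Add are illustrative assumptions, and in practice a client would usually be generated from the service's WSDL.

```python
# Minimal sketch: invoking a hypothetical SOAP operation over HTTP using only
# the Python standard library. Endpoint, namespace, and operation are invented.
import http.client

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Add xmlns="http://example.org/calc">
      <a>2</a>
      <b>3</b>
    </Add>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("example.org")        # host of the Web service
conn.request("POST", "/calc", body=SOAP_BODY.encode(),  # POST the SOAP envelope
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "http://example.org/calc/Add"})
response = conn.getresponse()
print(response.status, response.read().decode())        # XML response with the result
```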
Web services are made accessible by being hosted in a Web server; therefore,
HTTP is the most popular transport protocol used for interacting with Web
services. The figure describes the common use-case scenarios for Web services.
System architects develop a Web service with their technology of choice and
deploy it in compatible Web or application servers.
The service description document, expressed by means of Web Service
Definition Language (WSDL), can be either uploaded to a global registry or
attached as metadata to the service itself.
Service consumers can look up and discover services in global catalogs using
Universal Description Discovery and Integration (UDDI) or, most likely, directly
retrieve the service metadata by interrogating the Web service first.
The Web service description document allows service consumers to
automatically generate clients for the given service and embed them in their
existing application.
Web services are now extremely popular, so bindings exist for any mainstream
programming language in the form of libraries or development support tools.
This makes the use of Web services seamless and straightforward with respect to technologies such as CORBA, which require much more integration effort.
Moreover, being interoperable, Web services constitute a better solution for SOA than several distributed object frameworks, such as .NET Remoting, Java RMI, and DCOM/COM+, which limit their applicability to a single platform or environment.
Besides the main function of enabling remote method invocation by using Web-
based and interoperable standards, Web services encompass several
technologies that put together and facilitate the integration of heterogeneous
applications and enable service-oriented computing.
Figure shows the Web service technologies stack that lists all the components
of the conceptual framework describing and enabling the Web services
abstraction.
Figure: Web services technology stack, showing service flow (WSFL), service discovery (static UDDI), and service publication (direct UDDI), with quality of service and management as cross-cutting concerns.
These technologies cover all the aspects that allow Web services to operate in
a distributed environment, from the specific requirements for the networking
to the discovery of services.
The backbone of all these technologies is XML, which is also one of the causes
of Web services’ popularity and ease of use.
XML-based languages are used to manage the low-level interaction for Web service method calls (SOAP), to provide metadata about the services (WSDL), for discovering services (UDDI), and for other core operations.
In practice, the core components that enable Web services are SOAP and
WSDL.
REST and Systems of Systems
Representational State Transfer (REST) maps the operations on a resource onto the basic HTTP methods (GET, POST, PUT, and DELETE).
Together with an appropriate URI organization to identify resources, all the atomic operations required by a Web service are implemented.
The content of data is still transmitted using XML as part of the HTTP content, but the additional markup required by SOAP is removed.
For this reason, REST represents a lightweight alternative to SOAP, which works effectively in contexts where additional aspects beyond those manageable through HTTP are absent.
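As a hedged illustration of this idea, the sketch below maps the usual create/read/update/delete operations onto HTTP methods using only the Python standard library; the resource URI http://example.org/api/books is an illustrative assumption, not a real service.

```python
# Minimal sketch: the atomic REST operations expressed as HTTP methods on
# resources identified by URIs, using only the Python standard library.
import json
import urllib.request

BASE = "http://example.org/api/books"   # hypothetical resource collection

def call(method, url, payload=None):
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read().decode()

# CRUD operations mapped onto HTTP verbs
print(call("POST",   BASE,        {"title": "Cloud Computing"}))      # create
print(call("GET",    BASE + "/1"))                                    # read
print(call("PUT",    BASE + "/1", {"title": "Cloud Computing, 2e"}))  # update
print(call("DELETE", BASE + "/1"))                                    # delete
```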
One such aspect is security: RESTful Web services operate in an environment where no additional security beyond that supported by HTTP is required.
This is not a great limitation, and RESTful Web services are quite popular and used to deliver functionality at enterprise scale: Twitter, Yahoo! (search APIs, maps, photos, etc.), Flickr, and Amazon.com all leverage REST.
Besides those directly supporting Web services, other technologies that characterize Web 2.0 provide and contribute to enrich and empower Web applications and, in turn, SOA-based systems.
These fall under the names of Asynchronous JavaScript and XML (AJAX), JavaScript Object Notation (JSON), and others.
AJAX is a conceptual framework based on JavaScript and XML that enables asynchronous behavior in Web applications by leveraging the computing capabilities of modern Web browsers.
AJAX uses XML to exchange data with Web services and applications; an alternative to XML is JSON, which allows representing objects and collections of objects in a platform-independent manner and is often the preferred format for transmitting data in an AJAX context.
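A small sketch of the JSON alternative mentioned above, using Python's standard json module; the object fields are made up for illustration.

```python
# Minimal sketch: representing an object and a collection of objects in a
# platform-independent way with JSON, as commonly done in AJAX-style exchanges.
import json

book = {"title": "Cloud Computing", "year": 2013, "topics": ["SOA", "REST"]}
catalog = [book, {"title": "Distributed Systems", "year": 2017}]

text = json.dumps(catalog, indent=2)   # serialize to a JSON string for transmission
print(text)

restored = json.loads(text)            # any platform/language can parse it back
print(restored[0]["title"])            # -> Cloud Computing
```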
Basics of Virtualization
Virtualization technology is one of the fundamental components of cloud
computing, especially in regard to infrastructure-based services.
Virtualization allows the creation of a secure, customizable, and isolated
execution environment for running applications, even if they are untrusted,
without affecting other users’ applications.
The basis of this technology is the ability of a computer program, or a combination of software and hardware, to emulate an execution environment separate from the one that hosts such programs.
For example, we can run Windows OS on top of a virtual machine, which itself
is running on Linux OS.
Virtualization provides a great opportunity to build elastically scalable systems
that can provision additional capability with minimum costs.
Therefore, virtualization is widely used to deliver customizable computing
environments on demand.
Virtualization is a large umbrella of technologies and concepts that are meant
to provide an abstract environment—whether virtual hardware or an operating
system—to run applications.
The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-
Service (IaaS) solutions for cloud computing.
In fact, virtualization technologies have a long trail in the history of computer
science and have been available in many flavors by providing virtual
environments at the operating system level, the programming language level,
and the application level.
Virtualization technologies provide a virtual environment for not only executing
applications but also for storage, memory, and networking.
Since its inception, virtualization has been sporadically explored and adopted,
but in the last few years there has been a consistent and growing trend to
leverage this technology.
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena:
Increased performance and computing capacity.
Nowadays, the average end-user desktop PC is powerful enough to meet
almost all the needs of everyday computing, with extra capacity that is rarely
used.
Almost all these PCs have enough resources to host a virtual machine manager and execute a virtual machine with acceptable performance.
The same consideration applies to the high-end side of the PC market, where
supercomputers can provide immense compute power that can accommodate
the execution of hundreds or thousands of virtual machines.
Underutilized hardware and software resources. Hardware and software
underutilization is occurring due to (1) increased performance and computing
capacity, and (2) the effect of limited or sporadic use of resources.
Computers today are so powerful that in most cases only a fraction of their
capacity is used by an application or the system.
Moreover, if we consider the IT infrastructure of an enterprise, many computers
are only partially utilized whereas they could be used without interruption on a
24/7/365 basis.
For example, desktop PCs mostly devoted to office automation tasks and used
by administrative staff are only used during work hours, remaining completely
unused overnight.
Using these resources for other purposes after hours could improve the
efficiency of the IT infrastructure. To transparently provide such a service, it
would be necessary to deploy a completely separate environment, which can
be achieved through virtualization.
Lack of space.
The continuous need for additional capacity, whether storage or compute
power, makes data centers grow quickly.
Companies such as Google and Microsoft expand their infrastructures by
building data centers as large as football fields that are able to host thousands
of nodes.
Although this is viable for IT giants, in most cases enterprises cannot afford to
build another data center to accommodate additional resource capacity.
This condition, along with hardware underutilization, has led to the diffusion of a technique called server consolidation, for which virtualization technologies are fundamental.
Greening initiatives.
Recently, companies are increasingly looking for ways to reduce the amount of
energy they consume and to reduce their carbon footprint.
Data centers are one of the major power consumers; they contribute
consistently to the impact that a company has on the environment.
Maintaining a data center operation not only involves keeping servers on, but
a great deal of energy is also consumed in keeping them cool.
Infrastructures for cooling have a significant impact on the carbon footprint of
a data center.
Hence, reducing the number of servers through server consolidation will
definitely reduce the impact of cooling and power consumption of a data center.
Virtualization technologies can provide an efficient way of consolidating servers.
Rise of administrative costs.
Power consumption and cooling costs have now become higher than the cost
of IT equipment.
Moreover, the increased demand for additional capacity, which translates into more servers in a data center, is also responsible for a significant increment in administrative costs.
Computers, and servers in particular, do not operate all on their own; they require care and feeding from system administrators.
Common system administration tasks include hardware monitoring, defective
hardware replacement, server setup and updates, server resources monitoring,
and backups.
These are labour-intensive operations, and the higher the number of servers
that have to be managed, the higher the administrative costs.
Virtualization can help reduce the number of required servers for a given
workload, thus reducing the cost of the administrative personnel.
Implementation levels of virtualization
The main function of the software layer for virtualization is to
virtualize the physical hardware of a host machine into virtual
resources to be used by the VMs.
Virtualization software creates the abstraction of VMs by interposing a virtualization layer at various levels of a computer system.
The operational levels at which a virtualization layer can be included are:
1. Instruction set architecture (ISA) level
2. Hardware abstraction level(HAL)
3. Operating system level/Server Virtualization
4. Library support level/Middleware Support level
5. Application level
Instruction Set Architecture (ISA) Level
At the ISA level, virtualization is performed by emulating a given ISA with the ISA of the host machine.
i) Code Interpretation
An interpreter program interprets the source instructions into target instructions one by one; this process is relatively slow.
ii) Dynamic Binary Translation
translates basic blocks of dynamic source instructions to target instructions
Instruction set emulation requires binary translation and optimization.
A virtual instruction set architecture (V-ISA) requires adding a processor-specific
software translation layer to the compiler.
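The following toy sketch (an assumption-laden illustration, not a real translator) contrasts one-by-one interpretation with dynamic binary translation by translating whole basic blocks once and caching the result; the tiny instruction set is invented for the example.

```python
# Toy sketch of dynamic binary translation: basic blocks of "source"
# instructions are translated into host-level Python functions and cached,
# so they are re-executed without re-decoding each instruction.

def translate_block(block):
    """Translate a basic block of source instructions into one host function."""
    def run(regs, mem):
        for op, *args in block:
            if op == "LOAD":
                regs[args[0]] = mem[args[1]]
            elif op == "ADD":
                regs[args[0]] = regs[args[1]] + regs[args[2]]
            elif op == "STORE":
                mem[args[1]] = regs[args[0]]
        return regs, mem
    return run

translation_cache = {}                    # block address -> translated host code

def execute(block_addr, blocks, regs, mem):
    if block_addr not in translation_cache:              # translate on first use
        translation_cache[block_addr] = translate_block(blocks[block_addr])
    return translation_cache[block_addr](regs, mem)      # fast path afterwards

blocks = {0: [("LOAD", "r1", 0), ("LOAD", "r2", 1),
              ("ADD", "r0", "r1", "r2"), ("STORE", "r0", 2)]}
print(execute(0, blocks, {}, {0: 2, 1: 3, 2: 0}))        # r0 = 5, mem[2] = 5
```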
Hardware Abstraction Level
Hardware-level virtualization generates a virtual hardware environment for a VM.
The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices.
The intention is to improve the hardware utilization rate by allowing multiple users to share the hardware concurrently.
The idea was implemented in the IBM VM/370 in the 1960s.
Recently, the Xen hypervisor has been applied to virtualize x86-based machines to
run Linux or other guest OS applications
Operating System Level/Container Based Virtualization/Server
Virtualization/ Virtualization Support at the OS Level
Defn: refers to an abstraction layer between traditional OS and user applications
Also called a virtual execution environment (VE), virtual private system (VPS), container, or single-OS image virtualization.
VEs look like real servers
most OS-level virtualization systems are Linux-based
(1) All OS-level VMs on the same physical machine share a single operating
system kernel
(2) the virtualization layer can be designed in a way that allows processes in VMs
to access as many resources of the host machine as possible, but never to modify
them
Disadvantages of OS-Level Virtualization
i. all the VMs at operating system level on a single container must have the same
kind of guest operating system.
Library Level Virtualization/Middleware Support for Virtualization
also known as user-level Application Binary Interface (ABI) or API
emulation
can create execution environments for running alien programs on a
platform rather than creating a VM to run the entire operating system.
API call interception and remapping are the key functions performed.
Different library-level virtualization systems are
a. Windows Application Binary Interface (WABI): middleware to convert
Windows system calls to Solaris system calls
b. Lxrun: a system call emulator that enables Linux applications written for x86
hosts to run on UNIX systems.
c. WINE (Windows Emulator): Offers library support for virtualizing x86 processors to run Windows applications on UNIX hosts.
d. Visual MainWin: Offers a compiler support system to develop Windows
applications using Visual Studio to run on some UNIX hosts.
e. vCUDA
CUDA is a programming model and library for general-purpose GPUs
vCUDA virtualizes the CUDA library and can be installed on guest OSes.
CUDA applications are difficult to run on hardware-level VMs directly.
When a CUDA application running on a guest OS issues a call to the CUDA API, vCUDA intercepts the call and redirects it to the CUDA API running on the host OS.
vCUDA employs a client-server model to implement CUDA virtualization
It consists of three user space components
1. the vCUDA library
resides in the guest OS as a substitute for the standard CUDA library.
It is responsible for intercepting and redirecting API calls from the
client to the stub.
vCUDA also creates vGPUs and manages them.
2. a virtual GPU (vGPU) in the guest OS, which acts as a client. The functionality of a vGPU:
It abstracts the GPU structure and gives applications a uniform view of
the underlying hardware
when a CUDA application in the guest OS allocates a device’s memory
the vGPU can return a local virtual address to the application and notify
the remote stub to allocate the real device memory
the vGPU is responsible for storing the CUDA API flow
3. the vCUDA stub in the host OS, which acts as a server.
It receives and interprets remote requests, creates a corresponding execution context for the API calls from the guest OS, and then returns the results to the guest OS.
It also manages actual physical resource allocation.
A minimal sketch of this client-to-stub interception flow is given below.
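The sketch referenced above is written in Python with invented names (malloc, launch) and an in-process call standing in for the guest-to-host transport; it only illustrates the flow described in the notes, not the real vCUDA implementation.

```python
# Minimal sketch of the vCUDA-style client/server interception pattern: a
# guest-side library substitutes for the real API, forwards calls to a stub on
# the host, and the stub executes them against the real resource.

class HostStub:                       # runs in the host OS (the "server")
    def __init__(self):
        self.device_memory = {}       # stands in for real GPU memory
    def handle(self, call, *args):
        if call == "malloc":
            handle = len(self.device_memory)
            self.device_memory[handle] = bytearray(args[0])
            return handle             # real device allocation happens here
        if call == "launch":
            return f"kernel {args[0]} executed on the host GPU"

class VCudaLibrary:                   # substitutes the CUDA library in the guest
    def __init__(self, stub):
        self.stub = stub              # transport to the host stub
        self.vgpu_view = {}           # vGPU: local view of allocated memory
    def malloc(self, size):
        real = self.stub.handle("malloc", size)     # redirect to host
        local = f"gva-{real}"                       # local virtual address
        self.vgpu_view[local] = real
        return local
    def launch(self, kernel):
        return self.stub.handle("launch", kernel)   # redirect to host

lib = VCudaLibrary(HostStub())
buf = lib.malloc(1024)                # guest application calls the usual API
print(buf, lib.launch("vectorAdd"))   # calls are served by the host
```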
The table below shows the relative merits of virtualization at various levels (more "X"s means higher merit, with a maximum of 5 X's). In this table, implementation complexity refers to the cost to implement that particular virtualization level, and application isolation refers to the effort required to isolate resources committed to different VMs.
Virtualization Structures/Tools and Mechanisms
Depending on the position of the virtualization layer, VM architectures are classified into:
1. Hypervisor architecture
2. Para-virtualization
3. Host-based virtualization
Domain 0, the privileged control domain of Xen, is first loaded when Xen boots, without any file system drivers being available.
Domain 0 is designed to access hardware directly and manage devices.
one of the responsibilities of Domain 0 is to allocate and map hardware
resources for the guest domains (the Domain U domains).
Xen is based on Linux and its security level is C2. Therefore, security policies are needed to improve the security of Domain 0.
Pictorial representation of XEN architecture
Fig-3.14: Xen Architecture
Noncritical instructions do not control hardware or threaten the security of the
system, but critical instructions do.
Running noncritical instructions on hardware not only can promote efficiency,
but also can ensure system security.
Note:
o The traditional x86 processor offers four instruction execution rings: Rings 0,1,
2, and 3.
o The lower the ring number, the higher the privilege of instruction being
executed.
o The OS is responsible for managing the hardware and the privileged
instructions to execute at Ring 0, while user-level applications run at Ring 3.
Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware
Pictorial representation of this implementation
Drawbacks => The performance of full virtualization may not be ideal, because binary translation is time-consuming.
Host-Based Virtualization
The guest OS does not need to be modified and need not be aware that it is virtualized.
virtualization layer is installed on top of the host OS
host OS is still responsible for managing the hardware
The guest OSes are installed and run on top of the virtualization layer. Dedicated
applications may run on the VMs
Advantages
the user can install this VM architecture without modifying the host OS.
the host-based approach appeals to many host machine configurations
Disadvantage
Performance is low (when an application requests hardware access, it involves four layers of mapping, which downgrades performance significantly).
When the ISA of a guest OS is different from the ISA of the underlying
hardware, binary translation must be adopted.
Para-Virtualization with Compiler Support
Modifies the guest operating systems.
Para-virtualization reduces the virtualization overhead and improves performance by modifying only the guest OS kernel.
concept of a para-virtualized VM architecture
Fig-3.16: Para-virtualized VM Architecture
Fig-3.17: Para-virtualized guest OS assisted by an intelligent compiler to replace nonvirtualizable OS instructions with hypercalls
The guest OS kernel is modified to replace the privileged and sensitive
instructions with hypercalls to the hypervisor or VMM.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0.
This implies that the guest OS may not be able to execute some privileged and
sensitive instructions.
The privileged instructions are implemented by hypercalls to the hypervisor.
After replacing the instructions with hypercalls, the modified guest OS emulates
the behavior of the original guest OS
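The toy sketch below illustrates the idea of replacing privileged instructions with hypercalls at "compile time" so that the hypervisor services them directly at run time; the instruction names and the hypervisor interface are illustrative assumptions.

```python
# Toy sketch of para-virtualization: privileged instructions in the guest
# kernel are rewritten as explicit hypercalls, which the hypervisor services
# on behalf of the guest instead of trapping at run time.

PRIVILEGED = {"write_cr3", "disable_interrupts"}

class Hypervisor:
    def hypercall(self, name, *args):
        # validate and perform the privileged operation on behalf of the guest
        return f"hypervisor performed {name}{args}"

def paravirtualize(guest_kernel_code):
    """Replace privileged instructions with hypercalls (the 'compiler' step)."""
    return [("hypercall", op, args) if op in PRIVILEGED else ("native", op, args)
            for op, args in guest_kernel_code]

def run(code, hv):
    for kind, op, args in code:
        if kind == "hypercall":
            print(hv.hypercall(op, *args))   # serviced by the hypervisor
        else:
            print(f"guest executed {op}{args} directly")

guest_code = [("add", (1, 2)), ("write_cr3", (0x1000,)), ("load", ("r1",))]
run(paravirtualize(guest_code), Hypervisor())
```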
E.g., KVM (Kernel-Based Virtual Machine)
KVM is a Linux para-virtualization system and has been part of the Linux kernel since version 2.6.20.
Memory management and scheduling activities are carried out by the existing Linux kernel, and KVM does the rest.
KVM is a hardware-assisted para-virtualization tool, which improves
performance and supports unmodified guest OSes such as Windows, Linux,
Solaris, and other UNIX variants.
Pictorial representation of KVM
Fig-3.18: KVM Architecture
Para-Virtualization Architecture
Advantage
Performance is high
Disadvantages
Compatibility and portability may be in doubt, because the system must also support the unmodified OS.
The cost of maintaining para-virtualized OSes is high, because they may require deep OS kernel modifications.
Table-3.1: Difference between full and para virtualization
1. Full virtualization intercepts and emulates privileged and sensitive instructions at runtime; para-virtualization replaces privileged and sensitive instructions with hypercalls at compile time.
2. In full virtualization the guest OS is not aware that it is virtualized; in para-virtualization the guest OS is aware that it is virtualized.
3. The performance of full virtualization is low; the performance of para-virtualization is high.
CPU instructions can execute in one of two modes:
o User mode
o Supervisor mode
Instructions running in supervisor mode are called privileged instructions; the others are non-privileged instructions.
There are many hardware virtualization products available.
Examples:
o VMware Workstation is VM software that allows users to set up multiple x86 and x86-64 virtual computers and to run one or more VMs simultaneously with the host OS.
o Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
o KVM can support hardware-assisted virtualization using Intel VT-x or AMD-V, together with the VirtIO framework for para-virtualized I/O.
CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions are executed on the host processor.
Unprivileged instructions of VMs run directly on the host machine.
Critical instructions should be handled carefully for correctness and stability.
The critical instructions are divided into three categories:
o Privileged instructions
o Control-sensitive instructions
o Behavior-sensitive instructions
Privileged instructions execute in a privileged mode and will be trapped if
executed outside this mode.
Control-sensitive instructions attempt to change the configuration of resources
used.
Behavior-sensitive instructions have different behaviors depending on the
configuration of resources, including the load and store operations over the
virtual memory.
A CPU architecture is virtualizable if:
o the VM's privileged and unprivileged instructions run in the CPU's user mode, and
o the VMM runs in supervisor mode.
When privileged instructions are executed by a VM, they are trapped by the VMM (i.e., the VMM acts as a unified mediator).
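A minimal trap-and-emulate sketch of this behavior, with an invented instruction set: unprivileged instructions run directly, while privileged ones trap into a VMM object that emulates them.

```python
# Toy sketch of trap-and-emulate CPU virtualization: the VM runs in user mode,
# so privileged instructions trap to the VMM, which emulates them; everything
# else executes directly on the host CPU.

PRIVILEGED = {"hlt", "set_page_table", "io_out"}

class VMM:
    """Runs in supervisor mode and acts as the unified mediator."""
    def trap(self, instr, vm_id):
        return f"VMM emulated '{instr}' on behalf of VM {vm_id}"

def run_vm(instructions, vm_id, vmm):
    for instr in instructions:
        if instr in PRIVILEGED:
            print(vmm.trap(instr, vm_id))   # privileged -> trap into the VMM
        else:
            print(f"VM {vm_id} executed '{instr}' directly on the host CPU")

run_vm(["add", "load", "set_page_table", "hlt"], vm_id=1, vmm=VMM())
```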
Hardware-Assisted CPU Virtualization
An additional privilege level (some people call it Ring -1) is added to x86 processors.
Therefore, operating systems can still run at Ring 0 and the hypervisor runs at Ring -1.
All the privileged and sensitive instructions are trapped in the hypervisor
automatically. This technique removes the difficulty of implementing binary
translation of full virtualization.
E.g.: Intel Hardware-Assisted CPU Virtualization
Fig-3.20: Intel hardware-assisted CPU virtualization
Intel calls the privilege level of x86 processors the VMX Root Mode
Transitions from the hypervisor to the guest OS occur through VM entry, and transitions from the guest OS to the hypervisor occur through VM exit.
Memory Virtualization
In traditional execution environment, OS maps virtual memory to machine memory
using page tables (called as one stage mapping) and also makes use of MMU and
TLB.
In a virtual execution environment, virtual memory virtualization involves sharing the physical system memory (RAM) and dynamically allocating it to the physical memory of the VMs.
A two-stage mapping process is maintained by the guest OS and the VMM:
i. Stage I: guest virtual memory -> guest physical memory
ii. Stage II: guest physical memory -> host machine memory
Each page table of the guest OS has a corresponding page table in the VMM; the VMM page table is called a shadow page table.
Nested page tables add another layer of indirection to virtual memory.
VMware uses shadow page tables to perform virtual-memory-to-machine-memory address translation.
The processor uses TLB hardware to map virtual memory directly to machine memory, avoiding the two levels of translation on every access.
Example: Extended Page Table by Intel for Memory Virtualization
Intel offers a Virtual Processor ID (VPID) to improve use of the TLB.
The page table in the guest is used for converting a guest virtual address (GVA) to a guest physical address (GPA).
The guest physical address (GPA) is then converted to the host physical address (HPA) using the EPT.
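A small worked sketch of the two-stage translation, using Python dictionaries as stand-ins for the guest page table and the EPT; page numbers and the 4 KB page size are illustrative.

```python
# Toy sketch of two-stage memory address translation: guest virtual address
# (GVA) -> guest physical address (GPA) via the guest page table, then
# GPA -> host physical address (HPA) via an EPT-style table.

PAGE = 4096

guest_page_table = {0: 5, 1: 7}       # guest virtual page -> guest physical page
ept               = {5: 42, 7: 13}    # guest physical page -> host physical page

def translate(gva):
    gvpn, offset = divmod(gva, PAGE)
    gppn = guest_page_table[gvpn]     # stage I: GVA -> GPA (guest OS page table)
    hppn = ept[gppn]                  # stage II: GPA -> HPA (VMM / EPT)
    return hppn * PAGE + offset

gva = 1 * PAGE + 0x10
print(hex(translate(gva)))            # host physical address for the guest access
```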
I/O Virtualization
There are three ways to implement I/O virtualization:
o Full device emulation
o Para-virtualization
o Direct I/O virtualization
Full Device Emulation
Direct I/O Virtualization
The VM accesses the I/O device directly.
Self-Virtualized I/O (SV-IO)
Goal: harness the rich resources of a multicore processor.
All tasks associated with virtualizing an I/O device are encapsulated in SV-IO.
Provides Virtual devices and an associated access API to VMs and a management
API to the VMM.
SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as a virtual network interface, a virtual block device (disk), a virtual camera device, and so on.
The guest OS interacts with the VIFs through VIF device drivers.
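The sketch below models a VIF in the spirit of the description above: a unique ID plus two message queues, one for outgoing requests and one for incoming responses; the device handling logic is an invented stub.

```python
# Minimal sketch of the SV-IO idea: each virtualized I/O device is exposed to a
# VM as a virtual interface (VIF) with a unique ID and two message queues.
from collections import deque

class VIF:
    def __init__(self, vif_id, kind):
        self.vif_id = vif_id          # unique ID of the virtual interface
        self.kind = kind              # e.g. "net", "block", "camera"
        self.outgoing = deque()       # guest -> device requests
        self.incoming = deque()       # device -> guest responses

    def guest_request(self, payload):          # called via the guest VIF driver
        self.outgoing.append(payload)

    def service(self):                          # performed by SV-IO on a core
        while self.outgoing:
            req = self.outgoing.popleft()
            self.incoming.append(f"{self.kind}[{self.vif_id}] handled {req!r}")

vif = VIF(vif_id=0, kind="block")
vif.guest_request("read sector 12")
vif.service()
print(vif.incoming.popleft())
```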
Disaster Recovery
• Disaster recovery team: This plan should define each team member's role and responsibilities. In the event of a disaster, the recovery team should know how to communicate with each other, employees, vendors, and customers.
• Risk evaluation: Assess potential hazards that put your organization at risk.
Depending on the type of event, strategize what measures and resources will
be needed to resume business.
• For example, in the event of a cyber attack, what data protection measures will
the recovery team have in place to respond?
• Business-critical asset identification: A good disaster recovery plan
includes documentation of which systems, applications, data, and other
resources are most critical for business continuity, as well as the necessary
steps to recover data.
• Backups: Determine what needs backup (or to be relocated), who should
perform backups, and how backups will be implemented.
• Include a recovery point objective (RPO) that states the frequency of backups
and a recovery time objective (RTO) that defines the maximum amount of
downtime allowable after a disaster.
• These metrics create limits to guide the choice of IT strategy, processes and
procedures that make up an organization’s disaster recovery plan.
• The amount of downtime an organization can handle and how frequently the organization backs up its data will inform the disaster recovery strategy; a small worked example is given after this list.
• Testing and optimization: The recovery team should continually test and
update its strategy to address ever-evolving threats and business needs.
• By continually ensuring that a company is ready to face the worst-case
scenarios in disaster situations, it can successfully navigate such challenges.
• In planning how to respond to a cyber attack, for example, it’s important that
organizations continually test and optimize their security and data protection
strategies and have protective measures in place to detect potential security
breaches.
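The worked example referenced above uses illustrative numbers only: it checks whether a backup interval satisfies the RPO and whether an estimated recovery time satisfies the RTO.

```python
# Small worked example (illustrative numbers): checking a backup schedule and
# an estimated recovery procedure against the RPO and RTO defined in the plan.

rpo_hours = 4                  # at most 4 hours of data may be lost
rto_hours = 8                  # services must be restored within 8 hours

backup_interval_hours = 6      # how often backups are taken
estimated_recovery_hours = 5   # estimated time to restore from the last backup

# Worst-case data loss equals the backup interval, so it must not exceed the RPO.
print("RPO met:", backup_interval_hours <= rpo_hours)     # False -> back up more often
print("RTO met:", estimated_recovery_hours <= rto_hours)  # True
```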
How to build a disaster recovery team
• Crisis management: This leadership role commences recovery plans,
coordinates efforts throughout the recovery process, and resolves problems or
delays that emerge.
• Business continuity: The expert overseeing this ensures that the recovery
plan aligns with the company’s business needs, based on the business impact
analysis.
• Impact assessment and recovery: The team responsible for this area of
recovery has technical expertise in IT infrastructure including servers, storage,
databases and networks.
• IT applications:
This role monitors which application activities should be implemented based on a
restorative plan. Tasks include application integrations, application settings and
configuration, and data consistency.
What are the types of disaster recovery
• Back-up: This is the simplest type of disaster recovery and entails storing data
off site or on a removable drive. However, just backing up data provides only
minimal business continuity help, as the IT infrastructure itself is not backed
up.
• For instance, fire suppression tools will help data and computer equipment
survive a fire.
• A backup power source will help businesses sail through power outages without
grinding operations to a halt. Of course, none of these physical disaster
recovery tools will help in the event of a cyber attack.
• Point-in-time copies: Point-in-time copies, also known as point-in-time
snapshots, make a copy of the entire database at a given time.
• Data can be restored from this back-up, but only if the copy is stored off site
or on a virtual machine that is unaffected by the disaster.
Access virtualization: Allows applications to work with remote client devices without change, even though those remote devices were never thought of or available when the application was written. XenDesktop from Citrix is an example of products that work in this layer of virtualization.
Application virtualization: Allows applications written for one version of an OS to execute in another environment; that environment may be a brand-new OS version or a completely different OS. This type of software makes it possible for an application written for Windows XP to work just fine on Windows 7 or Windows 8. AppZero fits into this layer of virtualization, as do XenApp from Citrix, App-V from Microsoft, and VMware ThinApp.
Storage virtualization: Allows systems to use storage without having to worry about where the data actually resides or how it is stored. One of the most talked-about catch phrases, software-defined storage (SDS), is an example of this technology. Open-E DSS and VMware VSAN are examples of storage virtualization technology.
Network virtualization: Allows systems to work with other systems safely and securely, without having to worry too much about the details of the underlying network. Yet another current catchphrase, software-defined networking (SDN), is an implementation of network virtualization. Products that offer network virtualization include the Cisco Extensible Network Controller (XNC) and Juniper cloud.
Management of virtualized environments: Allows IT administrators and operators to easily monitor and manage virtual environments across boundaries. The boundaries can include the physical location of systems; the OSes in use; the applications or workloads in use; the network topology; the storage implementation; and the way client systems connect to the applications. This is an important part of SDN, SDS and SDDC, and a number of companies offer management and monitoring software.
Security for virtualized environments: Monitors and protects all of the other layers of virtualization so that only approved uses can be made of the resources. Like management of virtualized environments, this layer is a crucial part of SDN, SDS and SDDC. Bitdefender, Kaspersky, TrendMicro, McAfee and many others play in this space of the virtualization market.