CCS335 Cloud Computing-Notes

The document provides an overview of cloud architecture, detailing its components, deployment models, and service models such as IaaS, PaaS, and SaaS. It explains the roles of various actors in cloud computing, including cloud consumers, providers, carriers, brokers, and auditors, and discusses the advantages and disadvantages of public, private, community, and hybrid cloud models. Additionally, it highlights the importance of security, management, and scalability in cloud computing systems.

UNIT I CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE 6

Cloud Architecture: System Models for Distributed and Cloud Computing – NIST
Cloud Computing Reference Architecture – Cloud deployment models – Cloud service
models; Cloud Infrastructure: Architectural Design of Compute and Storage Clouds –
Design Challenges

Cloud Architecture

Cloud computing technology is used by both small and large organizations to store information in the cloud and access it from anywhere, at any time, over an internet connection. Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture.

Cloud computing architecture is divided into the following two parts -

o Front End
o Back End

The below diagram shows the architecture of cloud computing -

Front End

The front end is used by the client. It contains the client-side interfaces and applications that are required to access cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.
Back End

The back end is used by the service provider. It manages all the resources that are required to provide cloud computing services. It includes a huge amount of data storage, security mechanisms, virtual machines, deployment models, servers, traffic control mechanisms, etc.

Components of Cloud Computing Architecture

There are the following components of cloud computing architecture -

1. Client Infrastructure

Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.

2. Application

The application may be any software or platform that a client wants to access.

3. Service

The service component manages which type of cloud service you access, according to the client's requirement.

Cloud computing offers the following three types of service:

i. Software as a Service (SaaS) – It is also known as cloud application services. Most SaaS applications run directly in the web browser, which means we do not need to download and install them. Some important examples of SaaS are given below –

Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.

ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas with SaaS we access software over the internet without needing any platform.

Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.

iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It provides virtualized computing resources such as servers, storage, and networking; the consumer remains responsible for managing applications, data, middleware, and runtime environments.

Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco
Metapod.
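
To make the IaaS idea concrete, here is a minimal Python sketch that provisions a virtual server on AWS EC2 (one of the IaaS examples above) using the boto3 library. The AMI ID, region, and instance type are placeholders, and configured AWS credentials are assumed; this is an illustration, not a production script.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an example choice

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t2.micro",           # small general-purpose instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned IaaS instance:", instance_id)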

4. Runtime Cloud

Runtime Cloud provides the execution and runtime environment to the virtual machines.

5. Storage
Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.

6. Infrastructure

It provides services on the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage, network
devices, virtualization software, and other storage resources that are needed to support the
cloud computing model.

7. Management

Management is used to manage back-end components such as the application, service, runtime cloud, storage, infrastructure, and security, and to establish coordination between them.

8. Security

Security is an in-built back-end component of cloud computing. It implements a security mechanism in the back end.

9. Internet

The Internet is the medium through which the front end and back end interact and communicate with each other.

System Models for Distributed and Cloud Computing


Distributed and cloud computing systems are built over a large number of autonomous computer nodes. These node machines are interconnected by SANs, LANs, or WANs in a hierarchical manner. With today's networking technology, a few LAN switches can easily connect hundreds of machines as a working cluster. A WAN can connect many local clusters to form a very large cluster of clusters. Massive systems are considered highly scalable and can reach web-scale connectivity, either physically or logically.

Massive systems are classified into four groups:

1. Clusters: A distributed systems cluster is a group of machines, virtually or geographically separated, that work together to provide the same service or application to clients. Many of the services you run in your network today may be part of a distributed systems cluster. Examples of such distributed services include:

 Domain Name System (DNS)

 Windows Internet Naming Service (WINS)

 Active Directory

2. P2P Networks : In a P2P system, every node acts as both a client and a server, providing

part of the system resources. Peer machines are simply client computers connected to the

Internet. All client machines act autonomously to join or leave the system freely. This implies

that no master-slave relationship exists among the peers. No central coordination or central

database is needed. The system is self-organizing with distributed control.

3. Computing Grids :This is the use of widely distributed computer resources to reach a
common goal. A computing grid can be thought of as a distributed system with non-interactive

workloads that involve many files. Grid computing is distinguished from conventional high-

performance computing systems such as cluster computing in that grid computers have each

node set to perform a different task/application. Grid computers also tend to be more

heterogeneous and geographically dispersed than cluster computers.

4. Internet clouds: The idea is to move desktop computing to a service-oriented platform using server clusters and huge databases at data centers. Cloud computing leverages its low cost and simplicity to benefit both users and providers. Machine virtualization has enabled such cost-effectiveness. Cloud computing intends to satisfy many users by virtualizing resources from data centers to form an Internet cloud, provisioned with hardware, software, storage, network, and services for paying users to run their applications.

NIST Cloud Computing Reference Architecture


The NIST cloud computing reference architecture defines five major actors: cloud
consumer, cloud provider, cloud carrier, cloud auditor and cloud broker. Each actor is an
entity (a person or an organization) that participates in a transaction or process and/or
performs tasks in cloud computing.
1. Cloud Service Provider: A person, organization, or entity that delivers cloud services to cloud consumers or end-users. It offers various components of cloud computing. Cloud computing consumers purchase a growing variety of cloud services from cloud service providers. There are various categories of cloud-based services, mentioned below:
 IaaS Providers: In this model, the cloud service providers offer infrastructure components that would exist in an on-premises data center. These components consist of servers, networking, and storage, as well as the virtualization layer.
 SaaS Providers: In Software as a Service (SaaS), vendors provide a wide sequence of
business technologies, such as Human resources management (HRM) software,
customer relationship management (CRM) software, all of which the SaaS vendor hosts
and provides services through the internet.
 PaaS Providers: In Platform as a Service (PaaS), vendors offer cloud infrastructure and services that customers can access to perform many functions. In PaaS, services and products are mostly utilized in software development. PaaS providers offer more services than IaaS providers; they provide the operating system and middleware, along with the application stack, on top of the underlying infrastructure.

2. Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between cloud service providers and cloud consumers. It allows access to cloud services through networks, telecommunication, and other access devices. Network and telecom carriers or a transport agent can provide distribution. A consistent level of service is provided when cloud providers set up Service Level Agreements (SLAs) with a cloud carrier. In general, the carrier may be required to offer dedicated and encrypted connections.

3. Cloud Broker: An organization or a unit that manages the performance, use, and delivery of cloud services by enhancing specific capabilities and offering value-added services to cloud consumers. It combines and integrates various services into one or more new services. Brokers provide service arbitrage, which allows flexibility and opportunistic choices. There are three major services offered by a cloud broker:
 Service Intermediation.
 Service Aggregation.
 Service Arbitrage.
4. Cloud Auditor: An entity that can conduct independent assessments of cloud services, security, performance, and information system operations of cloud implementations. The services provided by Cloud Service Providers (CSPs) can be evaluated by auditors in terms of privacy impact, security controls, performance, etc. A cloud auditor can assess the security controls in the information system to determine the extent to which the controls are implemented correctly, operating as planned, and producing the desired outcome with respect to meeting the security requirements for the system. There are three major roles of a Cloud Auditor, mentioned below:
 Security Audit.
 Privacy Impact Audit.
 Performance Audit.
5. Cloud Consumer: A cloud consumer is the end-user who browses or utilizes the services provided by Cloud Service Providers (CSPs) and sets up service contracts with the cloud provider. The cloud consumer pays per use of the service provisioned, and the services consumed are metered. A set of organizations with mutual regulatory constraints may perform a security and risk assessment for each use case of cloud migrations and deployments.
Cloud consumers use Service-Level Agreement (SLAs) to specify the technical
performance requirements to be fulfilled by a cloud provider. SLAs can cover terms
concerning the quality of service, security, and remedies for performance failures. A cloud
provider may also list in the SLAs a set of limitations or boundaries, and obligations that
cloud consumers must accept. In a mature market environment, a cloud consumer can
freely pick a cloud provider with better pricing and more favourable terms. Typically, a cloud provider's public pricing policy and SLAs are non-negotiable, although a cloud consumer who expects to have substantial usage might be able to negotiate better contracts.
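
To make the SLA idea concrete, the short Python sketch below converts an availability target into the downtime a provider may accrue per 30-day month while still meeting the SLA. The 99.9% and 99.99% figures are hypothetical examples, not values from these notes.

def allowed_downtime_minutes(availability_pct, period_hours=30 * 24):
    # Downtime budget = (1 - availability) * length of the billing period
    return (1 - availability_pct / 100.0) * period_hours * 60

print(allowed_downtime_minutes(99.9))    # ~43.2 minutes per 30-day month
print(allowed_downtime_minutes(99.99))   # ~4.3 minutes per 30-day month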

Cloud deployment models


It is the process of deploying an application through one or more hosting models – SaaS, PaaS, or IaaS. Implementing, architecting, planning, and operating workloads in the cloud is called cloud deployment.
Cloud Deployment Model functions as a virtual computing environment with a
deployment architecture that varies depending on the amount of data you want to store and
who has access to the infrastructure.

Different Types Of Cloud Computing Deployment Models

Most cloud hubs have tens of thousands of servers and storage devices to enable fast loading.
It is often possible to choose a geographic area to put the data "closer" to users. Thus,
deployment models for cloud computing are categorized based on their location. To know
which model would best fit the requirements of your organization, let us first learn about the
various types.
Public Cloud

The name says it all. It is accessible to the public. Public deployment models in the cloud are perfect for organizations with growing and fluctuating demands. They also make a great choice for companies with low security concerns: you pay a cloud service provider for networking services, compute virtualization, and storage available over the public internet. The public cloud is also a great delivery model for teams with development and testing workloads; its configuration and deployment are quick and easy, making it an ideal choice for test environments.
There are many benefits of deploying cloud as public cloud model. The following diagram
shows some of those benefits:

Cost Effective

Since the public cloud shares the same resources with a large number of customers, it turns out to be inexpensive.

Reliability

The public cloud employs a large number of resources from different locations. If any of the resources fails, the public cloud can employ another one.

Flexibility

The public cloud can smoothly integrate with a private cloud, which gives customers a flexible approach.

Location Independence

Public cloud services are delivered over the Internet, ensuring location independence.

Utility Style Costing

The public cloud is also based on a pay-per-use model, and resources are accessible whenever the customer needs them.

High Scalability
Cloud resources are made available on demand from a pool of resources, i.e., they can be scaled up or down according to the requirement.

Disadvantages

Here are some disadvantages of public cloud model:

Low Security

In the public cloud model, data is hosted off-site and resources are shared publicly, so it does not ensure a high level of security.

Less Customizable

It is comparatively less customizable than private cloud.

Limitation of Public Cloud:

1. Low visibility and control - Public cloud infrastructure is owned by the cloud service
provider. You don't have much visibility and control over it.
2. Compliance and legal risks - Since you don't have much visibility and control over public cloud infrastructure, you are relying on the cloud service provider to protect data and adhere to local and international regulations. Your company may still be liable if the cloud service provider fails to live up to the task and there is a data breach. So a public cloud may not be the most viable solution for security-sensitive or mission-critical applications.
3. Cost concerns - The cloud in general reduces upfront infrastructure costs, and its pay-as-you-go model provides more flexibility. However, the traffic, the amount of cloud resources you consume, the plan you have chosen, and the way you scale resources up and down together determine the overall price you pay. Sometimes this overall price tag may be higher than what you anticipated.

Private Cloud

Private Cloud allows systems and services to be accessible within an organization.


The private cloud is operated only within a single organization. However, it may be managed internally by the organization itself or by a third party. The private cloud model is shown in the diagram below.
Benefits

There are many benefits of deploying cloud as private cloud model. The following diagram
shows some of those benefits:

High Security and Privacy

Private cloud operations are not available to the general public, and resources are shared from a distinct pool of resources. Therefore, it ensures high security and privacy.

More Control
The private cloud has more control over its resources and hardware than the public cloud because it is accessed only within an organization.

Cost and Energy Efficiency

Private cloud resources are not as cost-effective as public cloud resources, but they offer greater efficiency than public cloud resources.

Disadvantages

Here are the disadvantages of using private cloud model:

Restricted Area of Operation

The private cloud is only accessible locally and is very difficult to deploy globally.

High Priced

Purchasing new hardware in order to fulfill the demand is a costly transaction.

Limited Scalability

The private cloud can be scaled only within the capacity of internally hosted resources.

Additional Skills

In order to maintain the cloud deployment, the organization requires skilled staff.

Limitations of Private Cloud

o Higher Cost - With the benefits you get, the investment will also be larger than the
public cloud. Here, you will pay for software, hardware, and resources for staff and
training.
o Fixed Scalability - The hardware you choose will accordingly help you scale in a certain
direction
o High Maintenance - Since it is managed in-house, the maintenance costs also increase.

Community Cloud

Community Cloud allows systems and services to be accessible by a group of organizations. It shares the infrastructure between several organizations from a specific community. It may be managed internally by the organizations or by a third party. The Community Cloud model is shown in the diagram below.
Benefits

There are many benefits of deploying cloud as community cloud model.

Cost Effective

The community cloud offers the same advantages as a private cloud at a lower cost.
Sharing Among Organizations

Community cloud provides an infrastructure to share cloud resources and capabilities among
several organizations.

Security

The community cloud is comparatively more secure than the public cloud but less secure than the private cloud.

Limitations of Community Cloud

o Shared Resources - Due to restricted bandwidth and storage capacity, community resources often pose challenges.
o Not as Popular - Since this is a recently introduced model, it is not that popular or
available across industries

Hybrid Cloud

o Hybrid cloud is a combination of public and private clouds.


Hybrid cloud = public cloud + private cloud
o The main aim of combining these clouds (public and private) is to create a unified, automated, and well-managed computing environment.
o In the hybrid cloud, non-critical activities are performed by the public cloud and critical activities are performed by the private cloud.
o Mainly, a hybrid cloud is used in finance, healthcare, and universities.
o The best-known hybrid cloud providers are Amazon, Microsoft, Google, Cisco, and NetApp.

Benefits

There are many benefits of deploying cloud as hybrid cloud model. The following diagram
shows some of those benefits:
Scalability

It offers the features of both public cloud scalability and private cloud scalability.

Flexibility

It offers secure resources and scalable public resources.

Cost Efficiency

Public clouds are more cost effective than private ones. Therefore, hybrid clouds can be cost
saving.

Security

The private cloud part of a hybrid cloud ensures a higher degree of security.

Disadvantages

Networking Issues

Networking becomes complex due to the presence of both private and public clouds.

Security Compliance

It is necessary to ensure that cloud services are compliant with security policies of the
organization.

Infrastructure Dependency

The hybrid cloud model is dependent on internal IT infrastructure, therefore it is necessary to ensure redundancy across data centers.

Limitations of Hybrid Cloud


o Complexity - It is complex to set up a hybrid cloud since it needs to integrate two or more cloud architectures.
o Specific Use Case - This model makes more sense for organizations that have multiple use cases or need to separate critical and sensitive data.

A Comparative Analysis of Cloud Deployment Models

With the below table, we have attempted to analyze the key models with an overview of what
each one can do for you:

Important Factors to Consider | Public | Private | Community | Hybrid
Setup and ease of use | Easy | Requires professional IT team | Requires professional IT team | Requires professional IT team
Data security and privacy | Low | High | Very high | High
Scalability and flexibility | High | High | Fixed requirements | High
Cost-effectiveness | Most affordable | Most expensive | Cost is distributed among members | Cheaper than private but more expensive than public
Reliability | Low | High | Higher | High

Making the Right Choice for Cloud Deployment Models

There is no one-size-fits-all approach to picking a cloud deployment model. Instead, organizations must select a model on a workload-by-workload basis. Start by assessing your needs and consider what type of support your application requires. Here are a few factors you can consider before making the call:

o Ease of Use - How savvy and trained are your resources? Do you have the time and the
money to put them through training?
o Cost - How much are you willing to spend on a deployment model? How much can you
pay upfront on subscription, maintenance, updates, and more?
o Scalability - What is your current activity status? Does your system run into high
demand?
o Compliance - Are there any specific laws or regulations in your country that can impact
the implementation? What are the industry standards that you must adhere to?
o Privacy - Have you set strict privacy rules for the data you gather?

Each cloud deployment model has a unique offering and can immensely add value to your business. For small to medium-sized businesses, a public cloud is an ideal model to start with. As your requirements change, you can switch over to a different deployment model. An effective strategy can be designed depending on your needs using the cloud deployment models mentioned above.

3 Service Models of Cloud Computing

Cloud computing makes it possible to render several services, defined according to the roles,
service providers, and user companies. Cloud computing models and services are broadly
classified as below:

IAAS: Changing Its Hardware Infrastructure on Demand

Infrastructure as a Service (IaaS) means hiring and utilizing physical IT infrastructure (network, storage, and servers) from a third-party provider. The IT resources are hosted on external servers, and users can access them via an internet connection.

The Benefits

o Time and cost savings: no installation and maintenance of IT hardware in-house.
o Better flexibility: on-demand hardware resources that can be tailored to your needs.
o Remote access and resource management.

PAAS: Providing a Flexible Environment for Your Software Applications

Platform as a Service (PaaS) allows outsourcing of the hardware infrastructure and software environment, including databases, integration layers, runtimes, and more.

The Benefits
o Focus on development: Mastering the installation and development of software
applications.
o Time saving and flexibility: no need to manage the implementation of the platform,
instant production.
o Data security: You control the distribution, protection, and backup of your business
data.

SAAS: Releasing the User Experience of Management Constraints

Software as a Service (SaaS) is provided over the internet and requires no prior installation.
The services can be availed from any part of the world at a minimal per-month fee.

The Benefits

o You are entirely free from infrastructure management and software environment maintenance: no installation or software maintenance is required.
o You benefit from automatic updates with the guarantee that all users have the same
software version.
o It enables easy and quicker testing of new software solutions.

Cloud Infrastructure

Cloud infrastructure consists of servers, storage devices, networks, cloud management software, deployment software, and platform virtualization.

Hypervisor

A hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager. It allows a single physical instance of cloud resources to be shared between several tenants.

Management Software

It helps to maintain and configure the infrastructure.

Deployment Software
It helps to deploy and integrate the application on the cloud.

Network

It is the key component of cloud infrastructure. It allows cloud services to be connected over the Internet. It is also possible to deliver the network as a utility over the Internet, which means the customer can customize the network route and protocol.

Server

The server helps to compute resource sharing and offers other services such as resource allocation and de-allocation, resource monitoring, and security.

Storage

The cloud keeps multiple replicas of stored data. If one of the storage resources fails, the data can be retrieved from another replica, which makes cloud computing more reliable.
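
As an illustration of the replication idea just described, here is a minimal Python sketch (not any specific provider's implementation): every object is written to several replica stores, and a read falls back to another replica if one store has failed.

class ReplicatedStore:
    def __init__(self, replicas):
        self.replicas = replicas              # e.g. three dict-backed stores

    def put(self, key, value):
        for store in self.replicas:           # write the object to every replica
            store[key] = value

    def get(self, key):
        for store in self.replicas:           # read from the first replica holding it
            if key in store:
                return store[key]
        raise KeyError(key)

store = ReplicatedStore([{}, {}, {}])
store.put("report.pdf", b"...")
store.replicas[0].clear()                     # simulate one storage resource failing
print(store.get("report.pdf"))                # still served from another replica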

Infrastructural Constraints

Fundamental constraints that cloud infrastructure should implement are shown in the
following diagram:

Transparency

Virtualization is the key to sharing resources in a cloud environment. But it is not possible to satisfy the demand with a single resource or server. Therefore, there must be transparency in resources, load balancing, and applications, so that we can scale them on demand.

Scalability

Scaling up an application delivery solution is not as easy as scaling up an application, because it involves configuration overhead or even re-architecting the network. So the application delivery solution needs to be scalable, which requires a virtual infrastructure such that resources can be provisioned and de-provisioned easily.
Intelligent Monitoring

To achieve transparency and scalability, application solution delivery will need to be capable
of intelligent monitoring.

Security

The mega data center in the cloud should be securely architected. The control node, an entry point into the mega data center, also needs to be secure.

Design Challenges in Cloud Computing

Cloud computing, an emergent technology, has posed many challenges in different aspects of data and information handling. Some of these are discussed below:

Security and Privacy

Security and Privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be overcome by employing encryption, security hardware and security
applications.

Portability

Another challenge in cloud computing is that applications should be easily migratable from one cloud provider to another; there must not be vendor lock-in. However, this is not yet possible because each cloud provider uses different standards and languages for its platform.

Interoperability

It means the application on one platform should be able to incorporate services from the other
platforms. It is made possible via web services, but developing such web services is very
complex.
Computing Performance

Data-intensive applications on the cloud require high network bandwidth, which results in high cost. Low bandwidth does not meet the desired computing performance of cloud applications.

Reliability and Availability

It is necessary for cloud systems to be reliable and robust because most businesses are now becoming dependent on services provided by third parties.

UNIT II VIRTUALIZATION BASICS


Virtual Machine Basics – Taxonomy of Virtual Machines – Hypervisor – Key Concepts
– Virtualization structure – Implementation levels of virtualization – Virtualization
Types: Full Virtualization – Para Virtualization – Hardware Virtualization –
Virtualization of CPU, Memory and I/O devices.

Virtual Machine Basics


A virtual machine (VM) is a digital version of a physical computer. Virtual machine
software can run programs and operating systems, store data, connect to networks, and do
other computing functions, and requires maintenance such as updates and system monitoring.
Virtual Machine

A virtual machine can be defined as an emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. A VM implementation may involve specialized software, hardware, or a combination of both.

History of Virtual Machine


o System virtual machines grew out of time-sharing, as notably implemented in the CTSS (Compatible Time-Sharing System). Time-sharing permitted more than one user to use the computer concurrently: each program appeared to have complete access to the machine, but only one program ran at a time. This evolved into virtual machines, notably through IBM's research systems: the M44/44X, which used partial virtualization, and SIMMON and CP-40, which used full virtualization. These are early examples of hypervisors.
o The first widely available virtual machine architecture was CP-67/CMS. An important distinction was between using multiple virtual machines on one host system for time-sharing, as in CP-40 and the M44/44X.
o Emulators date back to the IBM System/360 in 1963, with hardware emulation of earlier systems for compatibility.
o Originally, process virtual machines were developed as abstract environments for an intermediate language used as a program's intermediate representation by compilers. An early example (1966) was the O-code machine, a VM that executes object code (O-code) emitted by the front end of the BCPL compiler. This abstraction allowed the compiler to be easily ported to new architectures.
o The Euler language applied the same design, with an intermediate language called P (portable). This was popularized around 1970 by Pascal, notably in the Pascal-P system and the Pascal-S compiler, where the intermediate language became known as p-code and the resulting machine as the p-code machine.
o This design has been influential, and VMs in this sense have often been known generally as p-code machines. Pascal p-code was run directly by an interpreter implementing the VM.
o Another example is SNOBOL (1967), which was specified in SIL (SNOBOL Implementation Language), an assembly language for a virtual machine. It was then targeted to physical machines by transpiling to their native assembler via a macro assembler.
o Process VMs were a popular approach to implementing early microcomputer software, including Tiny BASIC and adventure games, from single-purpose implementations such as Pyramid 2000 to general-purpose engines such as Infocom's z-machine.
o Significant advances occurred in the implementation of Smalltalk-80 (particularly the Deutsch/Schiffmann implementation), which pushed JIT (Just-In-Time) compilation forward as an implementation approach for process VMs. Notable later Smalltalk virtual machines were Strongtalk, the Squeak Virtual Machine, and VisualWorks.
o A related language generated many VM innovations, pioneering adaptive optimization and generational garbage collection. These methods proved commercially successful in the HotSpot Java virtual machine in 1999.
o Other innovations include the register-based virtual machine, which better matches the underlying hardware, as opposed to a stack-based virtual machine, which is a closer match for programming languages; this was pioneered in 1995 by the Dis VM for the Limbo language. OpenJ9 is an alternative to the HotSpot Java virtual machine in OpenJDK; it is an open-source project claiming faster startup and lower resource consumption compared to HotSpot.

Types of Virtual Machine

There are distinct types of VM available all with distinct functionalities:

Types of Virtual Machines : You can classify virtual machines into two types:
1. System Virtual Machine: These types of virtual machines give us a complete system platform and support the execution of a complete operating system. Just like VirtualBox, a system virtual machine provides an environment in which an OS can be installed completely. The hardware of the real machine is shared between two or more simulated operating systems by the virtual machine monitor, and programs and processes then run separately on the distributed hardware of those simulated machines.
2. Process Virtual Machine: A process virtual machine, unlike a system virtual machine, does not provide the facility to install a complete virtual operating system. Rather, it creates a virtual environment of that OS while some app or program is being used, and this environment is destroyed as soon as we exit the app. Some apps run directly on the main OS, while other apps that require a different OS run inside virtual environments created for as long as those programs are running. Example – the Wine software on Linux helps to run Windows applications.

Virtual Machine Language: This is a type of language which can be understood by different operating systems; it is platform-independent. Just as running a program written in a language such as C, Python, or Java needs a compiler that converts the code into a system-understandable form (byte code), a virtual machine language works the same way. If we want code that can be executed on different types of operating systems (Windows, Linux, etc.), then a virtual machine language is helpful.
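
CPython itself illustrates this idea: the compiler translates Python source into platform-independent byte code that the CPython virtual machine interprets. The short sketch below just uses the standard dis module to show that byte code; it adds nothing beyond what the notes describe.

import dis

def add(a, b):
    return a + b

# Disassemble the function to show the platform-independent byte code
# executed by the CPython VM (exact opcode names vary between Python versions).
dis.dis(add)
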
What is System Virtual Machines?

Originally, a virtual machine was described by Popek and Goldberg as "an efficient, isolated duplicate of a real computer machine." Current usage also includes virtual machines that have no direct correspondence to any real hardware. The physical, real-world hardware running the VM is generally referred to as the "host", and the virtual machine emulated on that machine is generally referred to as the "guest."

Working of System Virtual Machines

A single host can emulate several guests, each of which can emulate different hardware platforms and operating systems.

The desire to run multiple operating systems was the initial motivation for virtual machines, as it allows time-sharing among several single-tasking operating systems. A system VM can be considered a generalization of the concept of virtual memory that historically preceded it.

IBM's CP/CMS, the first system to permit full virtualization, implemented time-sharing by providing each user with a single-user operating system. The system VM allowed the user to write privileged instructions in their code; this approach has certain advantages, such as adding input/output devices not allowed by the standard system.

As technology extends virtual machines to new virtualization purposes, new memory over-commitment systems may be used to manage memory sharing among several VMs on one physical machine: memory pages that have identical contents across multiple VMs running on the same physical machine can be shared by mapping them to the same physical page, using a technique called KSM (kernel same-page merging).

This is especially useful for read-only pages, such as those containing code segments; it is the case, for example, when multiple VMs run the same or similar software, software libraries, web servers, or middleware components. A guest OS does not need to be compliant with the host hardware, which makes it feasible to run different operating systems on the same computer (e.g., Windows, Linux, or a prior version of an operating system) to support future software.
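
On a Linux host with KSM enabled, the kernel exposes the page-merging statistics mentioned above under /sys/kernel/mm/ksm. The Python snippet below is a small illustrative sketch that reads those counters; it assumes a Linux system with KSM compiled in and simply skips files that are absent.

from pathlib import Path

KSM_DIR = Path("/sys/kernel/mm/ksm")

# pages_shared is the number of shared pages in use;
# pages_sharing is how many guest/process pages map onto them.
for name in ("run", "pages_shared", "pages_sharing", "pages_unshared"):
    f = KSM_DIR / name
    if f.exists():
        print(f"{name}: {f.read_text().strip()}")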

Uses of System Virtual Machines

Virtual machines can be used to support isolated guest operating systems, which is popular in embedded systems. A common use is to run a real-time operating system simultaneously with a preferred complex operating system such as Windows or Linux.

Another use is for novel and unproven software that is still under development, so that it runs inside a sandbox. Virtual machines have other advantages for operating system development, including faster reboots and improved debugging access.
Multiple virtual machines, each running its own guest OS, are frequently used for server consolidation.
What is Process Virtual Machines?
A process virtual machine is sometimes known as an MRE (Managed Runtime Environment) or application virtual machine. It runs as a normal application inside the host operating system and supports a single process. It is created when that process starts and destroyed when it exits.

The purpose of the process VM is to provide a platform-independent programming environment. It abstracts away the details of the underlying operating system and hardware and allows a program to execute in the same way on any platform.

A process virtual machine provides the high-level abstraction of a high-level programming language. Process VMs are implemented using an interpreter; performance comparable to a compiled programming language can be achieved by using just-in-time (JIT) compilation.

This type of VM became popular with the Java programming language, which is implemented using the Java Virtual Machine. Other examples include the Parrot virtual machine and the .NET Framework, which runs on a VM called the Common Language Runtime. All of them serve as an abstraction layer for a computer language.

A special case of process VMs are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but of one process per physical machine in the cluster.

They are designed to ease the task of programming concurrent applications by letting the programmer focus on algorithms rather than on the communication mechanisms provided by the interconnect and the OS.

They do not hide the fact that communication takes place, and as such they do not attempt to present the cluster as a single machine.

Unlike other process virtual machines, these systems do not provide a specific programming language but are embedded in an existing language; typically such a system provides bindings for several languages (e.g., C and Fortran).

Examples are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). They are not strictly virtual machines because the applications running on top still have access to all OS services and are therefore not confined to the system model.

Full Virtualization

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified guest operating system to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

Some of the examples outside the field of mainframe include Egenera vBlade technology,
Win4Lin Pro, Win4BSD, Mac-on Linux, Adeos, QEMU, VMware ESXi, VMware Server (also
known as GSX Server), VMware Workstation, Hyper-V, Virtual Server, Virtual PC, Oracle
VM, Virtual Iron, VirtualBox, Parallels Desktop for Mac, and Parallels Workstation.

Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest operating systems to be run in isolation.

This type of virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system offered by IBM as an official product.

In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. In 2005, Sun Microsystems (now Oracle Corporation) included similar features in the UltraSPARC T-Series processors. Examples of virtualization platforms adapted to such hardware include Parallels Workstation, VirtualBox, Oracle VM Server for SPARC, Parallels Desktop for Mac, Xen, Windows Virtual PC, Hyper-V, VMware Fusion, VMware Workstation, and KVM.

In 2006, first-generation 32-bit and 64-bit x86 hardware virtualization support was found to rarely offer performance advantages over software virtualization.
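
Whether a given x86 host offers this hardware support can be checked from the CPU feature flags: on Linux, Intel VT-x is advertised as vmx and AMD-V as svm in /proc/cpuinfo. The following is a small, Linux-only illustrative sketch:

def hw_virtualization_flag():
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither is advertised."""
    try:
        cpuinfo = open("/proc/cpuinfo").read()
    except OSError:
        return None
    flags = set()
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "vmx"
    if "svm" in flags:
        return "svm"
    return None

print(hw_virtualization_flag())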

Operating-system-level virtualization

In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server.

The guest operating system environments share the same running instance of the operating system as the host system; that is, the same OS kernel is used to implement the guest environments, and applications running in a given guest environment view it as a stand-alone system.

The pioneering implementation was FreeBSD jails. Other examples include Docker, Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.

Full virtualization is possible only with the right combination of hardware and software elements. For example, it was not possible with most of IBM's System/360 series or with IBM's early System/370 systems.

In 1972, IBM added virtual memory hardware to the System/370 series. (This is distinct from the additional privilege ring provided by Intel VT-x, which gives the hypervisor a higher privilege level so it can properly manage virtual machines.)

Challenges for full virtualization

Full virtualization's primary challenge is the interception and simulation of privileged operations such as I/O instructions. The effects of every operation performed within a given VM must be kept within that VM: virtual operations cannot be allowed to alter the state of any other VM, the control program, or the hardware.

Some machine instructions can be executed directly by the hardware, since their effects are entirely contained within the elements managed by the control program, such as memory locations and arithmetic registers. However, other instructions that would "pierce" the VM cannot be allowed to execute directly; they must instead be trapped and simulated. Such instructions either access or affect state information that is outside the VM.
Full virtualization has proven highly successful for:

o Sharing a single computer system between more than one user
o Isolating users from each other (and from the control program)
o Emulating new hardware to achieve improved reliability, security, and productivity.

Advantages of VM
o A virtual machine provides software compatibility: software written for the virtualized host will also execute on the VM.
o It offers isolation between different processors and operating systems: software running in one virtual machine cannot affect the host or other virtual machines.
o A virtual machine provides encapsulation: the software running in the VM can be controlled and modified as a unit.
o Virtual machines offer several conveniences, such as easily adding a new operating system. An error in one operating system does not affect the other operating systems on the host. VMs also allow files to be transferred between them and avoid dual booting on a multi-OS host.
o VMs provide better software management, because a VM can run a complete software stack, including a legacy operating system, alongside the host machine.
o Hardware resources can be assigned to software stacks independently, and a VM can be migrated to different computers to balance the load.

Taxonomy of Virtual Machines

The first classification discriminates by the service or entity that is being emulated.
•Virtualization is mainly used to emulate execution environments, storage, and networks
•Execution virtualization is the oldest and most popular
•Two major categories: Process level, System level
•Process-level techniques run on top of an existing OS, which has full control of the hardware
•System-level techniques run directly on hardware and require minimal support from an existing OS

[Taxonomy diagram: virtualization of execution environments, storage, and networks; process-level versus system-level techniques, including emulation, high-level VMs, multiprogramming, hardware-assisted virtualization, paravirtualization, full virtualization, and partial virtualization, applied at the application, programming language, operating system, and hardware levels.]

1) Execution Virtualization
•Includes all techniques whose aim is to emulate an execution environment that is separate from the one hosting the virtualization layer
1. Machine Reference Model - Virtualizing an execution environment at different levels of the computing stack

ENVIRONMENT VIRTUALIZATION:
NETWORK VIRTUALIZATION
PROCESS LEVEL VIRTUALIZATION
WHAT IS EMULATION?

HIGH LEVEL VIRTUALIZATION:

MULTIPROGRAMMING
Hypervisor

A hypervisor, also known as a virtual machine monitor (VMM), is a piece of software that allows us to build and run virtual machines (VMs).

A hypervisor allows a single host computer to support multiple virtual machines (VMs) by
sharing resources including memory and processing.

What is the use of a hypervisor?


Hypervisors allow more of a system's available resources to be used and provide greater IT versatility, because the guest VMs are independent of the host hardware; this is one of the major benefits of the hypervisor. In other words, VMs can be quickly moved between servers. Because a hypervisor allows several virtual machines to operate on a single physical server, it helps to reduce:

o The space used by servers
o The energy consumed
o The maintenance requirements of the servers.

Kinds of hypervisors

There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also
known as "hosted"). A type 1 hypervisor functions as a light operating system that operates
directly on the host's hardware, while a type 2 hypervisor functions as a software layer on top
of an operating system, similar to other computer programs.

Since they are isolated from the attack-prone operating system, bare-metal hypervisors are
extremely stable.

Furthermore, they are usually faster and more powerful than hosted hypervisors. For these
purposes, the majority of enterprise businesses opt for bare-metal hypervisors for their data
center computing requirements.

While hosted hypervisors run inside the OS, they can be topped with additional (and different)
operating systems.
Hosted hypervisors have higher latency than bare-metal hypervisors, which is a major disadvantage. This is because communication between the hardware and the hypervisor must pass through the extra layer of the OS.

The Type 1 hypervisor

 The Type 1 hypervisor is also known as a native or bare-metal hypervisor.
 It replaces the host operating system, and the hypervisor schedules VM services directly on the hardware.
 The Type 1 hypervisor is very commonly used in enterprise data centers and other server-based environments.
 Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was integrated into the Linux kernel in 2007, so recent Linux versions already include it.

The Type 2 hypervisor

 The Type 2 hypervisor, also known as a hosted hypervisor, is a software layer or framework that runs on a traditional operating system.
 It operates by separating the guest and host operating systems. The host operating system schedules VM services, which are then executed on the hardware.
 Individual users who wish to run multiple operating systems on a personal computer should use a Type 2 hypervisor.
 Examples of this type of hypervisor include VirtualBox and VMware Workstation.
 Hardware acceleration technology improves the processing speed of both bare-metal and hosted hypervisors, allowing them to build and handle virtual resources more quickly.
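
As a small illustration of how a hypervisor such as KVM can be driven programmatically, the sketch below uses the libvirt Python bindings to list the virtual machines known to a local KVM/QEMU host. It assumes the libvirt-python package is installed and a hypervisor is reachable at qemu:///system; it is illustrative, not part of any particular hypervisor's required workflow.

import libvirt

# Open a read-only connection to the local KVM/QEMU hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

conn.close()
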
DIFFERENCE BETWEEN TYPE 1 & TYPE 2 HYPERVISOR

Benefits of hypervisors

Using a hypervisor to host several virtual machines has many advantages:

o Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This makes provisioning resources for complex workloads much simpler.
o Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical machine allow for more effective use of that physical server.
o Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software no longer relies on particular hardware devices or drivers; bare-metal hypervisors enable operating systems and their related applications to run on a variety of hardware types.
o Portability: Hypervisors allow multiple operating systems to run on the same physical server (host machine). The hypervisor's virtual machines are portable because they are independent of the physical computer.

As an application requires more computing power, virtualization software allows it to access additional machines without interruption.

Container vs hypervisor
Containers and hypervisors both help systems run faster and more efficiently, but they do so in very different ways, which is why they are distinct from each other.

The Hypervisors:

o Using virtual machines, an operating system can operate independently from the
underlying hardware.
o Make virtual computing, storage, and memory services available to all.

Containers:

o Allow a program to run without depending on a specific operating system environment.
o Need only a container engine to run on any platform or operating system.
o Are incredibly versatile, since an application has everything it requires to operate inside the container.

Containers and hypervisors have various functions. Containers, unlike virtual machines,
contain only an app and its associated services.

Since they are lighter and more compact than virtual machines, they are often used for rapid
and versatile application creation and movement.
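
One practical consequence of this difference is that a process can often detect whether it is running inside a container rather than a full VM, because container runtimes leave traces in kernel interfaces shared with the host. The heuristic below is a rough, illustrative Python sketch; paths such as /.dockerenv and the cgroup names are runtime-specific conventions, not guarantees.

from pathlib import Path

def likely_in_container():
    # Docker creates /.dockerenv inside its containers.
    if Path("/.dockerenv").exists():
        return True
    try:
        cgroup = Path("/proc/1/cgroup").read_text()
    except OSError:
        return False
    # Container runtimes typically name cgroups after themselves.
    return any(word in cgroup for word in ("docker", "containerd", "kubepods"))

print(likely_in_container())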

Security considerations for hypervisors

A virtual machine (VM) creates an environment separate from the rest of the device, so whatever runs inside it won't interfere with anything else running on the host hardware.

Since virtual machines are isolated, even if one is compromised, the rest of the system should be unaffected.

However, if the hypervisor is compromised, it may trigger issues with all of the VMs that it
handles, putting the data in each one at risk.

Five Essential Characteristics

The essential characteristics of cloud computing define the important features required for successful cloud computing. If any of these defining features is missing, then it is not cloud computing. Let us now discuss what these essential features are:

1. On-demand Service

Customers can self-provision computing resources like server time, storage, network, and applications as per their demands, without requiring human interaction with the cloud service provider.

2. Broad Network Access

Computing resources are available over the network and can be accessed using heterogeneous
client platforms like mobiles, laptops, desktops, PDAs, etc.

3. Resource Pooling
Computing resources such as storage, processing, network, etc., are pooled to serve multiple
clients. For this, cloud computing adopts a multitenant model where the computing resources
of service providers are dynamically assigned to the customer on their demand.

The customer is not even aware of the physical location of these resources. However, at a
higher level of abstraction, the location of resources can be specified.

4. Rapid Elasticity

Computing resources for a cloud customer often appear limitless, because cloud resources can be rapidly and elastically provisioned and released, at any scale, to match customer demand.

Computing resources can be purchased at any time and in any quantity depending on the
customers' demand.

5. Measured Service

Monitoring and control of computing resources used by clients can be done by implementing
meters at some level of abstraction depending on the type of Service.

The resources used can be reported with metering capability, thereby providing transparency
between the provider and the customer.
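
A toy Python sketch of the metering idea: usage is recorded per resource and multiplied by unit prices to produce a transparent bill. The resource names and prices here are hypothetical examples, not real provider rates.

usage = {"compute_hours": 120, "storage_gb_month": 50, "egress_gb": 10}             # metered usage
unit_price = {"compute_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}   # dollars per unit

bill = sum(quantity * unit_price[resource] for resource, quantity in usage.items())
print(f"Metered charge for the period: ${bill:.2f}")   # $7.90 with these example numbers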

Virtualization structure :

Virtualization is a technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. Virtualization software mimics the functions of physical hardware to run multiple virtual machines simultaneously on a single physical machine.

The term virtualization is often synonymous with hardware virtualization, which plays a
fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions
for cloud computing. Moreover, virtualization technologies provide a virtual environment for
not only executing applications but also for storage, memory, and networking.
Virtualization
 Host Machine: The machine on which the virtual machine is going to be built is known as
Host Machine.
 Guest Machine: The virtual machine is referred to as a Guest Machine.
Work of Virtualization in Cloud Computing
Virtualization has a prominent impact on cloud computing. In cloud computing, users store data in the cloud, but with the help of virtualization they gain the extra benefit of sharing the underlying infrastructure. Cloud vendors take care of the required physical resources, but they charge a significant amount for these services, which affects every user and organization. Virtualization helps users and organizations maintain the services a company requires through external (third-party) providers, which helps reduce costs to the company. This is how virtualization works in cloud computing.
Benefits of Virtualization
 More flexible and efficient allocation of resources.
 Enhance development productivity.
 It lowers the cost of IT infrastructure.
 Remote access and rapid scalability.
 High availability and disaster recovery.
 Pay-per-use of the IT infrastructure on demand.
 Enables running multiple operating systems.
Drawback of Virtualization
 High Initial Investment: Virtualization requires a high initial investment, although it is also true that it helps reduce costs for companies over time.
 Learning New Infrastructure: As companies shift from servers to the cloud, they require highly skilled staff who can work with the cloud easily; for this, you have to hire new staff or train current staff.
 Risk of Data: Hosting data on third-party resources can put the data at risk, since it has a greater chance of being attacked by a hacker or cracker.
Characteristics of Virtualization
 Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host programs.
 Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
 Sharing: Virtualization allows the creation of a separate computing environment within
the same host.
 Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

1. Application Virtualization: Application virtualization helps a user to have remote access
to an application from a server. The server stores all personal information and other
characteristics of the application but can still run on a local workstation through the internet.
An example of this would be a user who needs to run two different versions of the same
software. Technologies that use application virtualization are hosted applications and
packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control and data plane, co-existing on top of one physical network. They can be managed by individual parties that are potentially confidential to each other. Network virtualization provides a facility to create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPN), and workload security within days or even weeks.
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored
on a server in the data center. It allows the user to access their desktop virtually, from any
location by a different machine. Users who want specific operating systems other than
Windows Server will need to have a virtual desktop. The main benefits of desktop
virtualization are user mobility, portability, and easy management of software installation,
updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. Here, the central server (physical server) is divided into multiple different virtual servers by changing the identity number and processors, so each virtual server can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by deploying the main server's resources as sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructural costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without exposing technical details such as how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested stakeholders and users through various cloud services. Many large companies provide such services, for example Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
 Data integration
 Business integration
 Service-oriented architecture data services
 Searching organizational data

Implementation levels of virtualization

1) Instruction Set Architecture Level (ISA)

ISA virtualization can work through ISA emulation. This is used to run many legacy codes
written for a different hardware configuration. These codes run on any virtual machine using
the ISA. With this, a binary code that originally needed some additional layers to run is now
capable of running on the x86 machines. It can also be tweaked to run on the x64 machine.
With ISA, it is possible to make the virtual machine hardware agnostic.

For the basic emulation, an interpreter is needed, which interprets the source code and then converts it into a hardware-readable format so that it can be processed. This is the first of the five implementation levels of virtualization in cloud computing.

2) Hardware Abstraction Level (HAL)

True to its name HAL lets the virtualization perform at the level of the hardware. This makes
use of a hypervisor which is used for functioning. The virtual machine is formed at this level,
which manages the hardware using the virtualization process. It allows the virtualization of
each of the hardware components, which could be the input-output device, the memory, the
processor, etc.

This level allows multiple users to share the same hardware and to run multiple virtualization instances at the very same time. It is mostly used in cloud-based infrastructure.

3) Operating System Level

At the level of the operating system, the virtualization model creates an abstraction layer between the operating system and the application. The result is an isolated container on the operating system and the physical server that uses its software and hardware, and each such container then functions as a separate server.

When there are several users and no one wants to share the hardware, then this is where the
virtualization level is used. Every user will get his virtual environment using a dedicated virtual
hardware resource. In this way, there is no question of any conflict.

4) Library Level

Working directly with the operating system can be cumbersome, so applications often use the APIs exported by user-level libraries instead. These APIs are well documented, which is why the library virtualization level is preferred in such scenarios. API hooks make this possible, as they control the communication link from the application to the system.

5) Application Level

Application-level virtualization is used when there is a desire to virtualize only a single application, and it is the last of the implementation levels of virtualization in cloud computing. One does not need to virtualize the entire environment of the platform.

This is generally used for virtual machines that execute high-level language programs. The application sits above the virtualization layer, which in turn sits on the application program.

It lets programs written in high-level languages and compiled for the application-level virtual machine run seamlessly.

Full Virtualization

1. Full Virtualization: Full Virtualization was introduced by IBM in the year 1966. It is the
first software solution for server virtualization and uses binary translation and direct approach
techniques. In full virtualization, guest OS is completely isolated by the virtual machine from
the virtualization layer and hardware. Microsoft and Parallels systems are examples of full
virtualization.
2. Para virtualization: Paravirtualization is the category of CPU virtualization which uses
hypercalls for operations to handle instructions at compile time. In paravirtualization, guest
OS is not completely isolated but it is partially isolated by the virtual machine from the
virtualization layer and hardware. VMware and Xen are some examples of
paravirtualization.

The difference between Full Virtualization and Paravirtualization are as follows:

S.No. | Full Virtualization | Paravirtualization
1. | In full virtualization, virtual machines permit the execution of instructions with the unmodified OS running in an entirely isolated way. | In paravirtualization, a virtual machine does not implement full isolation of the OS but rather provides a different API, which is utilized when the OS is subjected to alteration.
2. | Full virtualization is less secure. | Paravirtualization is more secure than full virtualization.
3. | Full virtualization uses binary translation and a direct approach as a technique for operations. | Paravirtualization uses hypercalls at compile time for operations.
4. | Full virtualization is slower than paravirtualization in operation. | Paravirtualization is faster in operation as compared to full virtualization.
5. | Full virtualization is more portable and compatible. | Paravirtualization is less portable and compatible.
6. | Examples of full virtualization are Microsoft and Parallels systems. | Examples of paravirtualization are Microsoft Hyper-V, Citrix Xen, etc.
7. | It supports all guest operating systems without modification. | The guest operating system has to be modified, and only a few operating systems support it.
8. | The guest operating system will issue hardware calls. | Using drivers, the guest operating system communicates directly with the hypervisor.
9. | It is less streamlined compared to paravirtualization. | It is more streamlined.
10. | It provides the best isolation. | It provides less isolation compared to full virtualization.

Hardware Virtualization

Hardware virtualization, sometimes called platform or server virtualization, is executed on a particular hardware platform by host software. Essentially, it hides the physical hardware. The host software, which is actually a control program, is called a hypervisor.

Usage of Hardware Virtualization

Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.

Advantages of Hardware Virtualization


The main benefits of hardware virtualization are more efficient resource utilization, lower
overall costs as well as increased uptime and IT flexibility.

1) More Efficient Resource Utilization:

Physical resources can be shared among virtual machines. Also, unused resources allocated to one virtual machine can be used by other virtual machines if the need exists.

2) Lower Overall Costs Because Of Server Consolidation:

It is now possible for multiple operating systems to co-exist on a single hardware platform, so that the number of servers, the rack space, and the power consumption drop significantly.

3) Increased Uptime Because Of Advanced Hardware Virtualization Features:

Modern hypervisors provide highly orchestrated operations that maximize the abstraction of the hardware and help ensure maximum uptime. These functions help migrate a running virtual machine from one host to another dynamically, as well as maintain a running copy of a virtual machine on another physical host in case the primary host fails.

4) Increased IT Flexibility:

Hardware virtualization enables quick deployment of server resources in a managed and consistent way. As a result, IT can adapt quickly and provide the business with the resources it needs in good time.

Virtualization of CPU, Memory and I/O devices.


Virtualization of CPU, memory, and I/O devices allows for multiple virtual
environments to exist on a single physical server. This enables better utilization of hardware,
as multiple virtual machines can run different operating systems and applications
simultaneously.
CPU Virtualization
• A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor in native mode.
• Unprivileged instructions of VMs run directly on the host machine for higher efficiency.
• Other critical instructions should be handled carefully for correctness and stability.
• The critical instructions are divided into three categories: privileged instructions, control-
sensitive instructions, and behaviour-sensitive instructions.

Memory and I/O Interfacing

Several memory chips and I/O devices are connected to a microprocessor.


The following figure shows a schematic diagram to interface memory chips and I/O devices to
a microprocessor.

Memory Interfacing

When executing any instruction, the address of a memory location or an I/O device is sent out by the microprocessor. The corresponding memory chip or I/O device is selected by a decoding circuit.

Memory requires certain signals to read from and write to its registers, and the microprocessor transmits corresponding signals for reading or writing data.

The interfacing process includes matching the memory requirements with the microprocessor
signals. Therefore, the interfacing circuit should be designed in such a way that it matches the
memory signal requirements with the microprocessor's signals.

I/O interfacing

As we know, keyboard and displays are used as communication channel with outside world.
Therefore, it is necessary that we interface keyboard and displays with the microprocessor.
This is called I/O interfacing. For this type of interfacing, we use latches and buffers for
interfacing the keyboards and displays with the microprocessor.

But the main drawback of this interfacing is that the microprocessor can perform only one
function.
UNIT III VIRTUALIZATION INFRASTRUCTURE AND DOCKER 7

Desktop Virtualization – Network Virtualization – Storage Virtualization – System-level of Operating Virtualization – Application Virtualization – Virtual clusters and Resource Management – Containers vs. Virtual Machines – Introduction to Docker – Docker Components – Docker Container – Docker Images and Repositories.

Desktop Virtualization
Desktop virtualization creates a software-based (or virtual) version of an end user’s
desktop environment and operating system (OS) that is decoupled from the end user’s
computing device or client. This enables the user to access his or her desktop from any
computing device.
Desktop virtualization deployment models
Virtual desktop infrastructure (VDI)
In VDI deployment model, the operating system runs on a virtual machine (VM)
hosted on a server in a data center. The desktop image travels over the network to the end
user’s device, where the end user can interact with the desktop (and the underlying
applications and operating system) as if they were local.
VDI gives each user his or her own dedicated VM running its own operating system. The
operating system resources—drivers, CPUs, memory, etc.—operate from a software layer
called a hypervisor that mimics their output, manages the resource allocation to multiple
VMs, and allows them to run side by side on the same server.
A key benefit of VDI is that it can deliver the Windows 10 desktop and operating system to
the end user’s devices. However, because VDI supports only one user per Windows 10
instance, it requires a separate VM for each Windows 10 user.
Remote desktop services (RDS)
In RDS—also known as Remote Desktop Session Host (RDSH)—users remotely access
desktops and Windows applications through the Microsoft Windows Server operating
system. Applications and desktop images are served via Microsoft Remote Desktop Protocol
(RDP). Formerly known as Microsoft Terminal Server, this product has remained largely
unchanged since its initial release.
From the end user’s perspective, RDS and VDI are identical. But because one instance of
Windows Server can support as many simultaneous users as the server hardware can handle,
RDS can be a more cost-effective desktop virtualization option. It’s also worth noting
applications tested or certified to run on Windows 10 may not be tested or certified to run on
the Windows Server OS.
Desktop-as-a-Service (DaaS)
In DaaS, VMs are hosted on a cloud-based backend by a third-party provider. DaaS is readily
scalable, can be more flexible than on-premise solutions, and generally deploys faster than
many other desktop virtualization options.
Like other types of cloud desktop virtualization, DaaS shares many of the general benefits of
cloud computing, including support for fluctuating workloads and changing storage demands,
usage-based pricing, and the ability to make applications and data accessible from almost any
internet-connected device. The chief drawback to DaaS is that features and configurations are
not always as customizable as required.
Benefits of desktop virtualization
Virtualizing desktops provides many potential benefits that can vary depending upon
the deployment model you choose.
Simpler administration. Desktop virtualization can make it easier for IT teams to manage
employee computing needs. Your business can maintain a single VM template for employees
within similar roles or functions instead of maintaining individual computers that must be
reconfigured, updated, or patched whenever software changes need to be made. This saves
time and IT resources.
Cost savings. Many virtual desktop solutions allow you to shift more of your IT budget from
capital expenditures to operating expenditures. Because compute-intensive applications
require less processing power when they’re delivered via VMs hosted on a data center server,
desktop virtualization can extend the life of older or less powerful end-user devices. On-
premise virtual desktop solutions may require a significant initial investment in server
hardware, hypervisor software, and other infrastructure, making cloud-based DaaS—wherein
you simply pay a regular usage-based charge—a more attractive option.
Improved productivity.
Desktop virtualization makes it easier for employees to access enterprise computing
resources. They can work anytime, anywhere, from any supported device with an Internet
connection.
Support for a broad variety of device types.
Virtual desktops can support remote desktop access from a wide variety of devices,
including laptop and desktop computers, thin clients, zero clients, tablets, and even some
mobile phones. You can use virtual desktops to deliver workstation-like experiences and
access to the full desktop anywhere, anytime, regardless of the operating system native to the
end user device.
Stronger security.
In desktop virtualization, the desktop image is abstracted and separated from the
physical hardware used to access it, and the VM used to deliver the desktop image can be a
tightly controlled environment managed by the enterprise IT department.
Agility and scalability.
It’s quick and easy to deploy new VMs or serve new applications whenever
necessary, and it is just as easy to delete them when they’re no longer needed.
Better end-user experiences.
When you implement desktop virtualization, your end users will enjoy a feature-rich
experience without sacrificing functionality they’ve come to rely on, like printing or access to
USB ports.

Network Virtualization

Network virtualization represents the administration and monitoring of an entire computer network as a single administrative entity from a single software-based administrator's console.

Network virtualization can include storage virtualization, which contains managing all storage
as an individual resource. Network virtualization is created to enable network optimization of
data transfer rates, flexibility, scalability, reliability, and security. It automates many network
management functions, which disguise a network's true complexity. All network servers and
services are considered as one pool of resources, which can be used independently of the
physical elements.

Virtualization can be defined as making a computer that runs within another computer. The
virtual computer, or guest device, is a fully functional computer that can manage the same
processes your physical device can. The processes performed by the guest device are separated
from the basic processes of your host device. You can run several guest devices on your host
device and each one will identify the others as an independent computer.
Advantages of Network Virtualization

The advantages of network virtualization are as follows −

Lower hardware costs − With network virtualization, overall hardware costs are reduced, while bandwidth is used more efficiently.

Dynamic network control − Network virtualization provides centralized control over


network resources, and allows for dynamic provisions and reconfiguration. Also,
computer resources and applications can connect with virtual network resources
precisely. This also enables for optimization of application support and resource
utilization.

Rapid scalability − Network virtualization provides the ability to scale the network up or down rapidly and to create new networks on demand. This is valuable as enterprises move their IT resources to the cloud and shift their model to 'as a service'.

Types of Network Virtualization

The types of network virtualization are as follows −

Network Virtualization − Network virtualization is a technique of combining the


available resources in a network by splitting up the available bandwidth into different
channels, each being separate and distinguished.

Server Virtualization − This technique is the masking of server resources. It simulates


physical servers by transforming their identity, numbers, processors, and operating
frameworks. This spares the user from continuously managing complex server
resources. It also makes a lot of resources available for sharing and utilizing, while
maintaining the capacity to expand them when needed.

Data Virtualization − This type of cloud computing virtualization technique is


abstracting the technical details generally used in data management, including location,
performance, or format, in favor of broader access and more resiliency that are directly
related to business required.

Application Virtualization − Software virtualization in cloud computing abstracts the


application layer, separating it from the operating framework.

Storage Virtualization
Storage virtualization is the pooling of physical storage from multiple storage devices
into what appears to be a single storage device -- or pool of available storage capacity. A
central console manages the storage.

Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and applications with device access can read from and write to the disks directly by themselves. The controllers configure the local storage in RAID groups and present the storage to the operating system depending upon the configuration. However, the storage is abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system.

Storage virtualization is becoming more and more important in various other forms:

File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.

WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested blocks
at LAN speed, while not impacting the WAN performance.

SAN and NAS: Storage is presented over the Ethernet network to the operating system. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive the operating instructions just as if the storage were a locally attached device.

Storage Tiering: Utilizing the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the highest-performing storage pool, while the least-used data is placed on the lowest-performing storage pool.

This operation is done automatically without any interruption of service to the data consumer.

Advantages of Storage Virtualization


1. Data is stored in more convenient locations away from the specific host. In the case of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and disaster recovery functionality.
3. By doing abstraction of the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected.

System-level of Operating Virtualization

OS-level virtualization is a technology that partitions the operating system to create


multiple isolated Virtual Machines (VM). An OS-level VM is a virtual execution environment
that can be forked instantly from the base operating environment.

Virtualization software is able to convert hardware IT resources that require unique


software for operation into virtualized IT resources. As the host OS is a complete operating system in itself, many OS-based services are available, and organizational management and administration tools can be utilized for virtualization host management.
Some major operating system-based services are mentioned below:
1. Backup and Recovery.
2. Security Management.
3. Integration to Directory Services.
Various major operations of Operating System Based Virtualization are described below:
1. Hardware capabilities can be employed, such as the network connection and CPU.
2. Connected peripherals with which it can interact, such as a webcam, printer, keyboard, or
Scanners.
3. Data that can be read or written, such as files, folders, and network shares.
The operating system may have the capability to allow or deny access to such resources based on which program requests them and the user account in whose context it runs. The OS may also hide these resources, so that when a computer program enumerates them, they do not appear in the enumeration results. Nevertheless, from a programming perspective, the computer program has interacted with those resources, and the operating system has managed that act of interaction.
With operating-system virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program that expects to perceive the whole computer, once run inside a container, can only see the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, and a subset of the computer's resources is allocated to each of them. Each container may include many computer programs. These programs may run in parallel or separately, and may even interact with each other.

features of operating system-based virtualization are:

 Resource isolation: Operating system-based virtualization provides a high level of


resource isolation, which allows each container to have its own set of resources,
including CPU, memory, and I/O bandwidth.
 Lightweight: Containers are lightweight compared to traditional virtual machines as
they share the same host operating system, resulting in faster startup and lower resource
usage.
 Portability: Containers are highly portable, making it easy to move them from one
environment to another without needing to modify the underlying application.
 Scalability: Containers can be easily scaled up or down based on the application
requirements, allowing applications to be highly responsive to changes in demand.
 Security: Containers provide a high level of security by isolating the containerized
application from the host operating system and other containers running on the same
system.
 Reduced Overhead: Containers incur less overhead than traditional virtual machines, as
they do not need to emulate a full hardware environment.
 Easy Management: Containers are easy to manage, as they can be started, stopped, and
monitored using simple commands.
Operating system-based virtualization can raise demands and problems related to
performance overhead, such as:
1. The host operating system employs CPU, memory, and other hardware IT resources.
2. Hardware-related calls from guest operating systems need to navigate numerous layers to and from the hardware, which shrinks overall performance.
3. Licenses are frequently essential for host operating systems, in addition to individual
licenses for each of their guest operating systems.
Advantages of Operating System-Based Virtualization:
 Resource Efficiency: Operating system-based virtualization allows for greater resource
efficiency as containers do not need to emulate a complete hardware environment, which
reduces resource overhead.
 High Scalability: Containers can be quickly and easily scaled up or down depending on the demand, which makes it easy to respond to changes in the workload.
 Easy Management: Containers are easy to manage, as they can be handled through simple commands, which makes it easy to deploy and maintain large numbers of containers.
 Reduced Costs: Operating system-based virtualization can significantly reduce costs, as it requires fewer resources and infrastructure than traditional virtual machines.
 Faster Deployment: Containers can be deployed quickly, reducing the time required to
launch new applications or update existing ones.
 Portability: Containers are highly portable, making it easy to move them from one
environment to another without requiring changes to the underlying application.
Disadvantages of Operating System-Based Virtualization:
 Security: Operating system-based virtualization may pose security risks as containers
share the same host operating system, which means that a security breach in one
container could potentially affect all other containers running on the same system.
 Limited Isolation: Containers may not provide complete isolation between applications,
which can lead to performance degradation or resource contention.
 Complexity: Operating system-based virtualization can be complex to set up and
manage, requiring specialized skills and knowledge.
 Dependency Issues: Containers may have dependency issues with other containers or
the host operating system, which can lead to compatibility issues and hinder deployment.
 Limited Hardware Access: Containers may have limited access to hardware resources,
which can limit their ability to perform certain tasks or applications that require direct
hardware access.

Application Virtualization
Application virtualization separates an application from the underlying operating system and hardware, so the application runs in its own virtual environment and can be delivered to end-user devices without a traditional local installation.
Advantages of App Virtualization
The advantages of virtualized application environments are numerous and include the following, which are related to the proliferation of mobile and mixed working environments:

 Simple Installation: The configuration process is straightforward, and once it completes, you can easily virtualize an app to execute on several endpoints. It is no longer necessary to install the program on every terminal.
 Simple deployment: The apps are also simple to install for customers or suppliers.
The deployment of these programs is much simpler if you only provide them with the
executables that have already been set up.
 Programs are easy to remove: All you have to do is eliminate virtualized apps.
There is no need to remove the software from each machine.
 Easy firmware upgrades: Instead of updating each desktop separately, you can
upgrade the virtual programs once from a centralized location.
 Improved Support: Help desk employees may observe and address problems with
the functioning of virtualized apps from a centralized location if there are any.
 Liberation from the OSs: Virtualized programs may be utilized on any terminal, whether it runs Microsoft Windows, iOS, or Android, because they are separate from the host platform.

Virtual clusters and Resource Management


Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters.
• The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks.
A physical cluster is a collection of servers (physical machines) interconnected by a physical
network such as a LAN. In Chapter 2, we studied various clustering techniques on physical
machines. Here, we introduce virtual clusters and study its properties as well as explore their
potential applications. In this section, we will study three critical design issues of virtual
clusters: live migration of VMs, memory and file migrations, and dynamic deployment of
virtual clusters.
When a traditional VM is initialized, the administrator needs to manually write
configuration information or specify the configuration sources. When more VMs join a
network, an inefficient configuration always causes problems with overloading or under
utilization. Amazon’s Elastic Compute Cloud (EC2) is a good example of a web service that
provides elastic computing power in a cloud. EC2 permits customers to create VMs and to
manage user accounts over the time of their use. Most virtualization platforms, including
XenServer and VMware ESX Server, support a bridging mode which allows all domains to
appear on the network as individual hosts. By using this mode, VMs can communicate with
one another freely through the virtual network interface card and configure the network
automatically.

Physical versus Virtual Clusters

Properties of Virtual Clusters

Fast Deployment and Effective Scheduling

High-Performance Virtual Storage
1. Physical versus Virtual Clusters


Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across
several physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical
clusters. Each virtual cluster is formed with physical machines or a VM hosted by multiple
physical clusters. The virtual cluster boundaries are shown as distinct boundaries.

The provisioning of VMs to a virtual cluster is done dynamically to have the following interesting properties:
The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running
with different OSes can be deployed on the same physical node.

• A VM runs with a guest OS, which is often different from the host OS, that manages the
resources in the physical machine, where the VM is implemented.

• The purpose of using VMs is to consolidate multiple functionalities on the same server.
This will greatly enhance server utilization and application flexibility.
VMs can be colonized (replicated) in multiple servers for the purpose of promoting
distributed parallelism, fault tolerance, and disaster recovery.

• The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to
the way an overlay network varies in size in a peer-to-peer (P2P) network.
The failure of any physical nodes may disable some VMs installed on the failing nodes. But
the failure of VMs will not pull down the host system.

Since system virtualization has been widely used, it is necessary to effectively manage VMs
running on a mass of physical computing nodes (also called virtual clusters) and consequently
build a high-performance virtualized computing environment. This involves virtual cluster
deployment, monitoring and management over large-scale clusters, as well as resource
scheduling, load balancing, server consolidation, fault tolerance, and other techniques. The
different node colors in Figure 3.18 refer to different virtual clusters. In a virtual cluster system,
it is quite important to store the large number of VM images efficiently.

2. Properties of Virtual Clusters


Fast Deployment and Effective Scheduling
High-Performance Virtual Storage
Live Migration of a VM from One Host to Another
Memory Migration
File System Migration
Network Migration
Virtual Cluster Management
Container Vs Virtual Machine
Virtual Machine
It runs on top of an emulating software called the hypervisor which sits between the
hardware and the virtual machine. The hypervisor is the key to enabling virtualization. It
manages the sharing of physical resources into virtual machines. Each virtual machine runs
its guest operating system. They are less agile and have lower portability than containers.
Container:
It sits on the top of a physical server and its host operating system. They share a
common operating system that requires care and feeding for bug fixes and patches. They
are more agile and have higher portability than virtual machines.
S.No. | Virtual Machines (VM) | Containers
1. | A VM is a piece of software that allows you to install other software inside of it, so you control it virtually as opposed to installing the software directly on the computer. | A container is software that allows different functionalities of an application to run independently.
2. | Applications running on a VM system, or hypervisor, can run different OSes. | Applications running in a container environment share a single OS.
3. | A VM virtualizes the computer system, meaning its hardware. | Containers virtualize the operating system, or the software only.
4. | VM size is very large, generally in gigabytes. | The size of a container is very light, generally a few hundred megabytes, though it may vary as per use.
5. | A VM takes longer to start than a container, the exact time depending on the underlying hardware. | Containers take far less time to start.
6. | A VM uses a lot of system memory. | Containers require very little memory.
7. | A VM is more secure, as the underlying hardware isn't shared between processes. | Containers are less secure, as the virtualization is software-based and memory is shared.
8. | VMs are useful when we require all of the OS resources to run various applications. | Containers are useful when we need to maximize the number of running applications using minimal servers.
9. | Examples of Type 1 hypervisors are KVM, Xen, and VMware; VirtualBox is a Type 2 hypervisor. | Examples of containers are RancherOS, PhotonOS, and containers by Docker.

INTRODUCTION TO DOCKER:

Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated
from one another and bundle their own software, libraries, and configuration files; they can
communicate with each other through well-defined channels. All containers are run by a
single operating system kernel and therefore use fewer resources than a virtual machine.

Docker Containers

Docker containers are lightweight alternatives to virtual machines. They allow developers to package up an application with all its libraries and dependencies and ship it as a single package. The advantage of using a Docker container is that you don't need to allocate any RAM or disk space for the applications in advance; storage and space are allocated automatically according to the application's requirements.

o Docker allows us to easily install and run software without worrying about setup or
dependencies.
o Developers use Docker to eliminate machine-specific problems, i.e. "but the code worked on my laptop," when working on code together with co-workers.
o Operators use Docker to run and manage apps in isolated containers for better compute density.
o Enterprises use Docker to build secure, agile software delivery pipelines to ship new application features faster and more securely.
o Since docker is not only used for the deployment, but it is also a great platform for
development, that's why we can efficiently increase our customer's satisfaction.

Advantages of Docker
o It runs the container in seconds instead of minutes.
o It uses less memory.
o It provides lightweight virtualization.
o It does not require a full operating system to run applications.
o It packages applications with their dependencies, which reduces risk.
o Docker allows you to use a remote repository to share your container with others.
o It provides continuous deployment and testing environment.

Disadvantages of Docker

There are the following disadvantages of Docker -


o It increases complexity due to an additional layer.
o In Docker, it is difficult to manage a large number of containers.
o Some features, such as container self-registration, container self-inspection, copying files from the host to the container, and more, are missing in Docker.
o Docker is not a good solution for applications that require a rich graphical interface.
o Docker does not provide cross-platform compatibility, meaning that if an application is designed to run in a Docker container on Windows, it can't run on Linux, or vice versa.

Docker Engine

It is a client server application that contains the following major components.

o A server which is a type of long-running program called a daemon process.


o The REST API is used to specify interfaces that programs can use to talk to the daemon
and instruct it what to do.
o A command line interface client.
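As a quick, illustrative way to see these components working together (assuming Docker is already installed and the daemon is running), the CLI client can query the daemon through the REST API:

docker version   # prints the versions of both the client and the server (daemon)
docker info      # asks the daemon for host-wide details such as containers, images, and the storage driver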

Prerequisite

Before learning Docker, you must have the fundamental knowledge of Linux and programming
languages such as java, php, python, ruby, etc.

DOCKER ARCHITECTURE

Docker follows Client-Server architecture, which includes the three main components
that are Docker Client, Docker Host, and Docker Registry.
1. Docker Client

Docker client uses commands and REST APIs to communicate with the Docker
Daemon (Server). When a client runs any docker command on the docker client terminal, the
client terminal sends these docker commands to the Docker daemon. Docker daemon receives
these commands from the docker client in the form of command and REST API's request.

Docker Client uses Command Line Interface (CLI) to run the following commands -

docker build

docker pull

docker run
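For illustration, fuller forms of these commands might look as follows; the image name myapp, the tag, and the port mapping are only example values:

docker pull nginx:latest                     # download an image from a registry
docker build -t myapp:1.0 .                  # build an image from the Dockerfile in the current directory
docker run -d --name web -p 8080:80 nginx    # start a container and map host port 8080 to container port 80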

2. Docker Host

Docker Host is used to provide an environment to execute and run applications. It contains the
docker daemon, images, containers, networks, and storage.

3. Docker Registry

Docker Registry manages and stores the Docker images.

There are two types of registries in the Docker -

Public Registry - Public Registry is also called Docker Hub.

Private Registry - It is used to share images within the enterprise.

Docker Objects
There are the following Docker Objects -

Docker Images

Docker images are read-only binary templates used to create Docker containers. A private container registry can be used to share container images within the enterprise, and a public container registry can be used to share container images with the whole world. Metadata is also used by Docker images to describe the container's abilities.

Docker Containers

Containers are the structural units of Docker, used to hold the entire package that is needed to run the application. The advantage of containers is that they require very few resources.

In other words, we can say that the image is a template, and the container is a copy of that
template.

Docker Networking

Docker networking allows isolated containers to communicate with each other. Docker contains the following network drivers -

o Bridge - Bridge is the default network driver for a container. It is used when multiple containers communicate on the same Docker host.
o Host - It is used when we don't need network isolation between the container and the host.
o None - It disables all networking.
o Overlay - Overlay allows Swarm services to communicate with each other. It enables containers running on different Docker hosts to communicate.
o Macvlan - Macvlan is used when we want to assign MAC addresses to the containers.
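The following sketch shows how a user-defined bridge network might be created and used; the network and container names are only examples:

docker network create my-bridge                       # create a user-defined bridge network
docker run -d --name app --network my-bridge nginx    # attach a new container to that network
docker network ls                                     # list the available networks and their drivers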

Docker Storage

Docker Storage is used to store data on the container. Docker offers the following options for
the Storage -

o Data Volume - Data Volume provides the ability to create persistent storage. It also allows us to name volumes, list volumes, and list the containers associated with the volumes.
o Directory Mounts - It is one of the best options for docker storage. It mounts a host's
directory into a container.
o Storage Plugins - It provides an ability to connect to external storage platforms.
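As an illustrative sketch of the first two options (the volume name, paths, and images below are assumptions, not fixed values):

docker volume create app-data                                               # create a named data volume
docker run -d --name web1 -v app-data:/usr/share/nginx/html nginx          # mount the named volume into a container
docker run -d --name web2 -v /home/user/site:/usr/share/nginx/html nginx   # directory (bind) mount from the host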

Docker Container

Docker Container is a virtual environment that bundles application code with all the dependencies required to run the application. The application runs quickly and reliably from one computing environment to another.

Different Types of Containers


When you are looking for a Container platform, you have a lot to choose from. Even
though Docker is the most popular one in the market right now, there are a lot more
competitors who have their own benefits and use cases. You can read about them below
and choose the one you think fits the purpose of your organization.

Docker

Docker is currently one of the most widely used Container platforms on the market. You
can create and use Linux containers with Docker. You can easily create, deploy and run
applications using Docker. Canonical and Red Hat both use Docker and also companies
like Amazon, Oracle and Microsoft have embraced it.

LXC
LinuxContainers.org’s open-source project LXC is also a popular Container Platform on
the market whose goal is to provide app environments that are like the VMs but they do not
have the overhead. LXC does not have a central daemon because it follows the Unix
process model. This means that instead of having one central program that manages it, all
the containers behave like they are being managed by different, individual programs. LXC
is pretty different from Docker because, in LXC, you will be able to run multiple processes
using an LXC Container, on the other hand, it is better if you run one process in each
Container in Docker.

CRI-O
CRI-O is also an open-source tool. It is an implemented version of the Kubernetes CRI
(Container Runtime Interface). The goal of this tool is to replace Docker and become
the Kubernetes Container Engine.

rkt
Much like LXC, rkt also does not have a central daemon, and therefore it gives you the freedom to control individual containers more easily. Docker offers end-to-end solutions, which rkt does not, but rkt has a community and a set of tools that rival Docker's.

Podman
This Container Engine is also open-source. This has pretty much the same role as Docker
but they function a bit differently, because like LXC and rkt, Podman also lacks a central
daemon. This means that in Docker if the central daemon is out of service, all the
containers will stop functioning. But the Containers in Podman are self-sufficient and can
be managed individually.
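Because Podman deliberately mirrors the Docker CLI, a rough, illustrative session looks almost identical to Docker (the names below are examples only):

podman run -d --name web -p 8080:80 nginx   # start a container without any central daemon
podman ps                                   # list running containers managed by this user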

RunC
runC is a universal lightweight container runtime. Even though it began as a low-level
component of Docker, it is now a separate modular tool. It provides you with a more
portable container environment. This container runtime can work both with Docker and
without any other container system.

containerd
Windows and Linux both support containerd, which is technically a daemon. Its purpose is
to act as an interface between a container runtime and a container engine. It was also one of
the building blocks of Docker, much like runC. And also like runC, it is now an open-
source project.

Benefits of Containers in DevOps


Now that you know about some of the different types of Containers, let us talk about some
of the benefits of Containers in DevOps.

Speed and Efficiency


Without containers, developers will have to duplicate the environment in which they
developed the application. But when they use a Container, they can just run the code on
their local machine and there is no need to match the configuration requirements of the new
environment. Everything they need to run the application is already in the Container which
makes the process faster and more efficient. Containers are also more consistent, as development and operations teams do not have to provision separate environments when using a container.

Cost Reduction
Since they are more lightweight, Containers require a lot less memory than VMs or Virtual
Machines. If a company or organization wants to cut back on their cloud computing costs,
they can always opt for Containers instead of VMs as they have less expensive needs.

Security
There are no interactions that take place between different containers. So, if one of them
crashes or gets hacked for some reason, the others can run smoothly despite that hiccup.
Since the problem will be confined to one of the Containers, the whole development
process will not slow down too much.

Portable
As we have already mentioned, Containers are very light and agile. They can be run on
virtually any system, be it, Macs, Windows, Linux, or the Cloud. If a developer needs a
Container, it will be ready to run under any circumstances.


Best Practices for Containers and DevOps


Now that you know what is a Container DevOps, here are some of the most common ways
that organizations use Containers. You can also use them if you see that they can reduce
your expenses and make your development process more streamlined and efficient.

You can also see what are some of the most common ways to make sure you are taking full
advantage of the Containers. Here is how -

1. Containers are used by some organizations when they want to move applications to
more modern environments. This process has some of the benefits of OS
Virtualization. However, a modern, Container-based app architecture has more
benefits. This process is also known as lift and shift migration.
2. You can also refactor the applications that you already have for Containers. Though it
will be more comprehensive, you will also be able to use all the benefits of a
Container environment. And if you develop applications that are Container native,
you can also reap the benefits of a container environment.
3. If you use individual Containers, then you can distribute microservices and
applications alike to be easily located, deployed and scaled.
4. Jobs like Batch Processing and ETL functions which are repetitive and usually run in
the background can be easily supported with the help of Containers.
5. Continuous Integration and Continuous Deployment (CI/CD) can also be easily
pipelined with Containers as they can create, test and deploy simplified images. This
also unlocks the full potential of a Container environment much like refactoring.

Docker Images and Repositories.


Docker Image is an executable package of software that includes everything needed
to run an application. This image informs how a container should instantiate, determining
which software components will run and how. Docker Container is a virtual environment
that bundles application code with all the dependencies required to run the application. The
application runs quickly and reliably from one computing environment to another. A Docker image typically contains everything needed to run the application, including:
 Application code
 Runtime
 Libraries
 Environmental tools
A Docker image is very light in weight, so it can be ported to different platforms very easily.
Docker Image Prune
Docker image prune is a command used on the Docker host to remove images that are not used, i.e., unused Docker images.
docker image prune
Unused images that are not associated with any containers are also known as dangling images.
Docker Image Build
Following is the command which is used to build the docker image.
docker build -t your_image_name:tag -f path/to/Dockerfile .
 Docker build: Initiates the build process.
 -t your_image_name:tag: Gives the image you’re creating a name and, if desired, a tag.
 -f path/to/Dockerfile: Gives the location of the Dockerfile. Provide the right path if it is not in the current directory.
 . (dot): Represents the build context, i.e., the current working directory.
Docker Image Tag
Docker tags are labels for container images, used to differentiate versions and variants of an image during development and deployment. Docker tags help you identify the various versions of Docker images and distinguish between them, which in turn helps you build continuous deployment pipelines very quickly.
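A hedged sketch of tagging in practice (the image name, tag, and registry address are illustrative, not real endpoints):

docker tag myapp:1.0 registry.example.com/myteam/myapp:1.0   # give the local image an additional name for a remote registry
docker push registry.example.com/myteam/myapp:1.0            # upload the tagged image to that registry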
Uses of Docker Images
1. We can easily and effectively run the containers with the aid of docker images.
2. All the code, configuration settings, environmental variables, libraries, and runtime are
included in a Docker image.
3. Docker images are platform-independent.
4. Layers are the building blocks of an image.
5. When using the build command, the user has the option of starting completely from scratch or using an existing image for the first layer.
Difference between Docker Image VS Docker Container
Docker Image | Docker Container
The Docker image is the Docker container's source code. | The Docker container is an instance of the Docker image.
A Dockerfile is a prerequisite to a Docker image. | A Docker image is a prerequisite to a Docker container.
Docker images can be shared between users with the help of the Docker Registry. | Docker containers can't be shared between users.
To make changes in a Docker image, we need to make changes in the Dockerfile. | We can directly interact with the container and make the required changes.
Structure Of Docker Image
The layers of software that make up a Docker image make it easier to configure the
dependencies needed to execute the container.
 Base Image: The basic image will be the starting point for the majority of Dockerfiles,
and it can be made from scratch.
 Parent Image: The parent image is the image that our image is based on. We can refer to
the parent image in the Dockerfile using the FROM command, and each declaration after
that affects the parent image.
 Layers: Docker images have numerous layers. To create a sequence of intermediary
images, each layer is created on top of the one before it.
 Docker Registry: A Docker registry is where Docker images are stored and distributed (for example, Docker Hub or a private registry).
How To Create A Docker Image And Run It As Container?
Follow the below steps to create a Docker Image and run a Container:
Step 1: Create a Dockerfile.
Step 2: Run the following command in the terminal and it will create a docker image of the
application and download all the necessary dependencies needed for the application to run
successfully.
docker build -t <name>:<tag> .
This will start building the image.
Step 3: We have successfully created a Dockerfile and a respective Docker image for the
same.
Step 4: Run the following command in the terminal and it will create a running container
with all the needed dependencies and start the application.
docker run -p 9000:80 <image-name>:<tag>
The 9000 is the port we want to access our application on. 80 is the port the container is
exposing for the host to access.
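For reference, a minimal Dockerfile for Step 1 might look like the sketch below; it assumes a static website in a local ./site folder served by nginx, so the names and paths are illustrative only:

# Dockerfile (illustrative example)
FROM nginx:alpine                     # start from a small official nginx base image
COPY ./site /usr/share/nginx/html     # copy the local site content into the image
EXPOSE 80                             # document that the container listens on port 80

Building and running it would then follow Steps 2 and 4, for example: docker build -t mysite:1.0 . followed by docker run -p 9000:80 mysite:1.0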
Docker Image commands

List Docker Images

docker images
Example:
$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE


nginx latest 0d9c6c5575f5 4 days ago 126MB
ubuntu 18.04 47b199b0cb85 2 weeks ago 64.2MB

Pull an Docker Image From a Registry

docker image pull <image-name>


Example:
$ docker pull alpine:3.11

3.11: Pulling from library/alpine


Digest: sha256:9f11a34ef1c67e073069f13b09fb76dc8f1a16f7067eebafc68a5049bb0a072f
Status: Downloaded newer image for alpine:3.11

Remove an Image from Docker

docker rmi <id-of-image>


Example:
$ docker rmi <image_id>

Untagged: <image_id>
Deleted: sha256:<image_id>

Searching for a specific image on Docker Hub

docker search ubuntu


Example:
$ docker search ubuntu
NAME                          DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                        Ubuntu is a Debian-based Linux operating s…     4458    [OK]
ubuntu-upstart                Upstart is an event-based replacement for …     62      [OK]
tutum/ubuntu                  Simple Ubuntu docker images with ssh access     49                 [OK]
ansible/ubuntu14.04-ansible

DOCKER REPOSITORIES:
A repository potentially holds multiple variants of an image. This means a Docker image can belong to a repository, e.g., when it has been pushed to a Docker registry (with docker push my/repository:version1). On the other side, a repository contains multiple versions of an image (= different tags).
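For example, a single repository can hold several tagged versions of the same image; the repository name below is purely illustrative:

docker tag myapp:latest myuser/myapp:2.0    # add a version tag under the repository myuser/myapp
docker push myuser/myapp:2.0                # push this version to the registry
docker pull myuser/myapp:1.0                # pull an older version (tag) from the same repository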
UNIT IV CLOUD DEPLOYMENT ENVIRONMENT

Google App Engine – Amazon AWS – Microsoft Azure; Cloud Software Environments
– Eucalyptus – OpenStack.

Google App Engine:


Google App Engine (often referred to by the acronym GAE or simply App Engine) is a
cloud computing platform as a service for developing and hosting web applications in Google-
managed data centers. Applications are sandboxed and run across multiple servers.

Google provides GAE free up to a certain amount of use for the following resources:

 processor (CPU)

 storage

 application programming interface (API) calls

 concurrent requests

Users exceeding the per-day or per-minute rates can pay for more of these resources.

Core features of Google App Engine in Cloud Computing


GAE has many features that make it extremely popular with Developers worldwide. Some of
these features are as follows:

Multiple language support


Google App Engine is adept at embracing a variety of programming languages. Whether you're
fluent in Java, Python, PHP, Go, or numerous others, Google App Engine has got you covered.
This multifaceted support ensures that developers aren't constrained by language limitations.
Instead, they can pick and choose based on their comfort and expertise, making the
development process smooth and intuitive.

Automated management
Looking deeper into Google App Engine's automated management reveals a world where
manual intervention is minimised. Google App Engine takes the reins when it comes to
managing applications. From maintaining the core infrastructure to adeptly routing traffic,
overseeing software patches, and ensuring a robust failover system, this tool does it all. For
Developers and businesses, this translates to a significant reduction in operational intricacies
and the hours usually spent on infrastructure oversight.

Scalability
Google App Engine has a one of a kind scalability feature. Imagine an application that
intelligently scales up or down in response to the ebb and flow of user traffic, ensuring
consistent performance without manual tweaks. Google App Engine's automatic scaling
discerns the needs of the application based on traffic and usage patterns, empowering it to
handle even unexpected surges in demand effortlessly.

Integrated environment
The synergy between various Google Cloud Computing services is palpable when you use
Google App Engine. A harmonious integration with platforms like Cloud Datastore, Cloud
Storage, and Google Workspace paves the way for a holistic development environment. This
not only streamlines the development process but also offers a plethora of tools and services at
one's fingertips. Such an integrated approach fosters efficiency, making it simpler to both
develop and sustain applications over time.

Google App Engine benefits and challenges

GAE extends the benefits of cloud computing to application development, but it also has
drawbacks.

Benefits of GAE

 Ease of setup and use. GAE is fully managed, so users can write code without
considering IT operations and back-end infrastructure. The built-in APIs enable users to
build different types of applications. Access to application logs also
facilitates debugging and monitoring in production.

 Pay-per-use pricing. GAE's billing scheme only charges users daily for the resources they
use. Users can monitor their resource usage and bills on a dashboard.

 Scalability. Google App Engine automatically scales as workloads fluctuate, adding and
removing application instances or application resources as needed.

 Security. GAE supports the ability to specify a range of acceptable Internet Protocol (IP)
addresses. Users can allowlist specific networks and services and blocklist specific IP
addresses.

GAE challenges

 Lack of control. Although a managed infrastructure has advantages, if a problem occurs


in the back-end infrastructure, the user is dependent on Google to fix it.

 Performance limits. CPU-intensive operations are slow and expensive to perform using
GAE. This is because one physical server may be serving several separate, unrelated app
engine users at once who need to share the CPU.

 Limited access. Developers have limited, read-only access to the GAE filesystem.

 Java limits. Java apps cannot create new threads and can only use a subset of the Java
runtime environment standard edition classes.

How is the Google App Engine used?


Google App Engine is a serverless platform that hosts web applications and allows developers to build and deploy them. Developers or users can create an account in Google App Engine and set up a Software Development Kit (SDK) to write the source code of applications easily.

It is also used to build scalable back end mobile applications. These are then used to adapt
workloads as needed. Google App Engine can also be used for application testing where users
can route traffic to different application versions.
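As a rough, hedged sketch of the deployment workflow described above (the runtime, region, and file contents are only illustrative assumptions), an App Engine application is typically described by an app.yaml file and deployed with the gcloud CLI:

# app.yaml - minimal illustrative configuration
runtime: python39          # assumed runtime; GAE also supports Java, PHP, Go, and others

# commands run from the project directory
gcloud app create --region=us-central    # one-time step: create the App Engine application
gcloud app deploy                        # upload the code and create a new application version
gcloud app browse                        # open the deployed application in a browser

Traffic can later be routed between deployed versions from the console or CLI, which supports the testing scenario mentioned above.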

GAE ARCHITECTURE:
App Engine is created under a Google Cloud Platform project when an application resource is created. The Application part of GAE is a top-level container that includes the service, version, and instance resources that make up the app.

1) Datastore: Serving as the central data management system in Cloud Computing, Google
App Engine's Datastore is a NoSQL database renowned for its scalability. What sets it apart is
its dynamic nature, adapting in real-time to the demands of the application. Whether it's a minor
data retrieval or a massive data influx, the datastore scales on-the-fly, ensuring that data
remains consistently accessible and safeguarded against potential threats.

2) Task queues: In any application, there exist tasks that don’t necessitate immediate user
feedback. Google App Engine's Task queues are designed to manage such background
operations. By queuing these tasks, they're executed asynchronously, optimising application
performance and ensuring users aren't bogged down with processing delays.

3) Memcache: As a rapid-access in-memory caching system, Memcache plays a pivotal role


in enhancing data retrieval speeds. Especially beneficial for frequently queried data, it acts as
a buffer, reducing the datastore's workload. This not only ensures quicker response times but
also contributes to the longevity and efficiency of the main Datastore.

4) Blobstore: In today's digital age, applications often deal with voluminous data, be it high-
definition images, videos, or other large files. The Blobstore is Google App Engine's dedicated
solution for such requirements. By efficiently managing and storing these large objects, it
ensures that the primary datastore isn’t overwhelmed, maintaining a harmonious data
ecosystem.
5) Automatic scaling: One of Google App Engine’s crowning features, Automatic Scaling,
epitomises proactive resource management. By continually monitoring application traffic and
user requests, it dynamically scales resources. This ensures optimal performance even during
unexpected traffic surges, eliminating the need for manual adjustments and guaranteeing a
consistently smooth user experience.

6) Integrated services: Google App Engine isn't an isolated entity but a cog in the vast
machinery of Google Cloud Computing services. Its ability to seamlessly mesh with other
services, from Data Analytics platforms to state-of-the-art Machine Learning tools, transforms
it from a mere hosting platform to a comprehensive, integrated Cloud solution. This
interoperability enhances the capabilities of applications hosted on Google App Engine, giving
Developers a richer toolset to work with

Amazon AWS :
o AWS stands for Amazon Web Services.
o The AWS service is provided by Amazon, which uses distributed IT infrastructure to provide different IT resources on demand. It provides different services such as infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS).
o Amazon launched AWS, a cloud computing platform to allow the different
organizations to take advantage of reliable IT infrastructure.

Uses of AWS
o A small manufacturing organization uses their expertise to expand their business by
leaving their IT management to the AWS.
o A large enterprise spread across the globe can utilize the AWS to deliver the training to
the distributed workforce.
o An architecture consulting company can use AWS to get high-compute rendering of construction prototypes.
o A media company can use AWS to provide different types of content, such as e-books or audio files, to users worldwide.

Pay-As-You-Go

Based on the concept of Pay-As-You-Go, AWS provides the services to the customers.

AWS provides services to customers when required without any prior commitment or upfront investment. Pay-As-You-Go enables customers to procure services from AWS in areas such as:

o Computing
o Programming models
o Database storage
o Networking
Advantages of AWS

1) Flexibility

o We can get more time for core business tasks due to the instant availability of new
features and services in AWS.
o It provides effortless hosting of legacy applications. AWS does not require learning new technologies, and migrating applications to AWS provides advanced computing and efficient storage.
o AWS also offers a choice of whether we want to run the applications and services together or not. We can also choose to run a part of the IT infrastructure in AWS and the remaining part in our own data centres.

2) Cost-effectiveness

AWS requires no upfront investment, long-term commitment, and minimum expense when
compared to traditional IT infrastructure that requires a huge investment.

3) Scalability/Elasticity

Through AWS auto-scaling and elastic load balancing, resources are automatically scaled up or down when demand increases or decreases respectively. AWS techniques are ideal for handling unpredictable or very high loads. For this reason, organizations enjoy the benefits of reduced cost and increased user satisfaction.

4) Security

o AWS provides end-to-end security and privacy to customers.


o AWS has a virtual infrastructure that offers optimum availability while managing full
privacy and isolation of their operations.
o Customers can expect high-level of physical security because of Amazon's several
years of experience in designing, developing and maintaining large-scale IT operation
centers.
o AWS ensures the three aspects of security, i.e., Confidentiality, integrity, and
availability of user's data.

Disadvantages Of Amazon Web Services


 AWS can be complex, with a wide range of services and features that may be difficult to
understand and use, especially for new users.
 AWS can be expensive, especially if you have a high-traffic application or need to run
multiple services. Additionally, the cost of services can increase over time, so you need
to regularly monitor your spending.
 While AWS provides many security features and tools, securing your resources on AWS
can still be challenging, and you may need to implement additional security measures to
meet your specific requirements.
 AWS manages many aspects of the infrastructure, which can limit your control over
certain parts of your application and environment.
Applications Of AWS
The AWS services are used by both startup and MNC companies as per their use case. Startup companies use AWS to overcome hardware infrastructure costs and to deploy applications effectively in terms of cost and performance, whereas large-scale companies use AWS cloud services to manage their infrastructure so they can focus completely on product development. The following are real-world industrial use cases of AWS services:
 Netflix: The large streaming giant uses AWS for storage and scaling of its applications, ensuring seamless, low-latency content delivery without interruptions to millions of users globally.
 Airbnb: By utilizing AWS, Airbnb manages its various workloads and provides reliable and expandable infrastructure for its virtual marketplace and lodging offerings.
 NASA’s Jet Propulsion Laboratory: It takes the help of AWS services to handle and
analyze large-scale volumes of data related to vital scientific research missions and space
exploration.
 Capital One: A financial Company that is utilizing AWS for its security and compliance
while delivering innovative banking services to its customers.
AWS Global Infrastructure
The AWS global infrastructure is massive and is divided into geographical regions. The
geographical regions are then divided into separate availability zones. While selecting the
geographical regions for AWS, three factors come into play
 Optimizing Latency
 Reducing cost
 Government regulations (Some services are not available for some regions)
Each region is divided into at least two availability zones that are physically isolated from
each other, which provides business continuity for the infrastructure as in a distributed
system. If one zone fails to function, the infrastructure in other availability zones remains
operational. The largest region North Virginia (US-East), has six availability zones. These
availability zones are connected by high-speed fiber-optic networking.
There are over 100 edge locations distributed all over the globe that are used for the
CloudFront (content delivery network). CloudFront can cache frequently used content such
as images and videos(live streaming videos also) at edge locations and distribute it to edge
locations across the globe for high-speed delivery and low latency for end-users. It also
protects from DDOS attacks.
AWS Management Console
The AWS management console is a web-based interface to access AWS. It requires an AWS
account and also has a smartphone application for the same purpose. When you sign in for the first time, you see the console home page, which shows all the services provided by AWS. Cost monitoring is also done through the console.
AWS resources can also be accessed through various Software Development Kits (SDKs),
which allows the developers to create applications as AWS as its backend. There are SDKs
for all the major languages(e.g., JavaScript, Python, Node.js, .Net, PHP, Ruby, Go, C++).
There are mobile SDKs for Android, iOS, React Native, Unity, and Xamarin. AWS can also
be accessed by making HTTP calls using the AWS API. AWS also provides the AWS Command Line Interface (CLI) for accessing AWS remotely, which can be used in scripts to automate many processes. The console is also available as a mobile app for Android and iOS, which can simply be downloaded.
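As a small, hedged illustration of the CLI access described above (the bucket name, file name, AMI ID, and key name are placeholders), a user could provision storage and compute from the command line:

aws configure                                    # store the access key, secret key, and default region
aws s3 mb s3://my-example-bucket                 # create an S3 bucket
aws s3 cp report.pdf s3://my-example-bucket/     # upload a file to the bucket
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name my-key
                                                 # launch a virtual server in EC2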
AWS Cloud Computing Models
There are three cloud computing models available on AWS.
1. Infrastructure as a Service (IaaS): It is the basic building block of cloud IT. It
generally provides access to data storage space, networking features, and computer
hardware(virtual or dedicated hardware). It is highly flexible and gives management
controls over the IT resources to the developer. For example, VPC, EC2, EBS.
2. Platform as a Service (PaaS): This is a type of service where AWS manages the
underlying infrastructure (usually operating system and hardware). This helps the
developer to be more efficient as they do not have to worry about undifferentiated heavy
lifting required for running the applications such as capacity planning, software
maintenance, resource procurement, patching, etc., and focus more on deployment and
management of the applications. For example, RDS, EMR, ElasticSearch.
3. Software as a Service(SaaS): It is a complete product that usually runs on a browser. It
primarily refers to end-user applications. It is run and managed by the service provider.
The end-user only has to worry about the application of the software suitable to its needs.
For example, Salesforce.com, web-based email, Office 365.

Microsoft Azure:

Azure is Microsoft's cloud platform, just like Google has its Google Cloud and Amazon has its Amazon Web Services (AWS). Generally, it is a platform through which
we can use Microsoft’s resources. For example, to set up a huge server, we will require huge
investment, effort, physical space, and so on. In such situations, Microsoft Azure comes to
our rescue. It will provide us with virtual machines, fast processing of data, analytical and
monitoring tools, and so on to make our work simpler. The pricing of Azure is also simpler
and cost-effective.

How Does Microsoft Azure Work?


It is a private and public cloud platform that helps developers and IT professionals to
build deploy and manage the application. It uses the technology known as virtualization.
Virtualization separates the tight coupling between the hardware and the operating system
using an abstraction layer called a hypervisor. Hypervisor emulates all the functions of a
computer in the virtual machine, it can run multiple virtual machines at the same time and each
virtual machine can run any operating system such as Windows or Linux. Azure takes this
virtualization technique and repeats it on a massive scale in the data center owned by Microsoft.
Each data center has many racks filled with servers and each server includes a hypervisor to
run multiple virtual machines. The network switch provides connectivity to all those servers.
Microsoft Azure is a cloud computing platform which offers

 Infrastructure as a service (IaaS).


 Platform as a service (PaaS).
 Software as a service (SaaS).

Infrastructure as a service (IaaS)

Virtual machines, storage, and networking come under the category of infrastructure as a service, but the users have to build and deploy the applications manually. Azure supports a wide range of operating systems because of its Hyper-V hypervisor.

Platform as a service (PaaS)

Azure app service, Azure functions, and logic apps are some services that are offered by
Azure under the platform as a service. This service will provide autoscaling and load
balancing and also there will be a pre-configured environment for the application.
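As a hedged sketch of how the IaaS and PaaS offerings above are typically exercised from the Azure CLI (the resource group, application and VM names, location, image alias, and runtime string are placeholder assumptions):

az login                                                  # authenticate to Azure
az group create --name demo-rg --location eastus         # create a resource group
az webapp up --name demo-webapp --resource-group demo-rg --runtime "PYTHON:3.9"
                                                          # PaaS: build and deploy code to App Service
az vm create --resource-group demo-rg --name demo-vm --image Ubuntu2204 --generate-ssh-keys
                                                          # IaaS: provision a virtual machine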

Software as a service (SaaS)

Office 365, Dynamics 365, and Azure Active Directory are some of the services provided by Microsoft Azure under Software as a Service (SaaS); the complete application is managed by Microsoft Azure, including deployment, scaling, and load balancing.
Following are some of the use cases of Microsoft Azure.
 Deployment of applications: You can develop and deploy applications in the Azure cloud by using services such as Azure App Service and Azure Functions; after deployment, end users can access them.
 Identity and Access Management: The applications and data deployed and stored in Microsoft Azure can be secured with the help of Identity and Access Management. It's commonly used for single sign-on, multi-factor authentication, and identity governance.
 Data Storage and Databases: You can store data in Microsoft Azure in services like Blob Storage for unstructured data, Table Storage for NoSQL data, File Storage, and Azure SQL Database for relational databases. These services can be scaled depending on the amount of data we are getting.
 DevOps and Continuous Integration/Continuous Deployment (CI/CD): Azure DevOps provides tools including version control, build automation, release management, and application monitoring.
Azure for DR and Backup
A full range of disaster recovery (DR) and backup services are available from Microsoft
Azure to help shield your vital data and apps from interruptions. With the help of these
services, you may quickly restore your data and applications in the event of a disaster by
replicating them to a secondary cloud site. Azure backup services also protect your data from
ransomware attacks, unintentional deletion, and corruption.

Key Azure DR and Backup Services


 Azure Site Recovery: Your on-premises virtual machines (VMs) can be replicated to
Azure more easily with the help of this solution. You may easily failover your virtual
machines (VMs) to Azure in the event of a disaster and keep your business running.
Azure VM replication to an alternative Azure region is also supported by Azure Site
Recovery.
 Azure Backup: If you want to protect the data which is present in the cloud, then you need to use the Azure Backup service. It offers a single place to monitor backup jobs, manage backup policies, and recover data.
Azure competition
Following are the some of the competitors of Microsoft Azure:
 Amazon Web Services (AWS).
 Google Cloud Platform (GCP).
 IBM Cloud.
 Alibaba Cloud.
 Oracle Cloud Infrastructure (OCI).

Difference between AWS (Amazon Web Services), Google Cloud, and Azure
Compute technology:
o AWS: EC2 (Elastic Compute Cloud).
o Google Cloud: Google Compute Engine (GCE).
o Azure: VHD (Virtual Hard Disk).

Databases supported:
o AWS: fully supports relational and NoSQL databases and Big Data.
o Google Cloud: technologies pioneered by Google, like BigQuery, BigTable, and Hadoop, are naturally fully supported.
o Azure: supports both relational and NoSQL databases through Windows Azure Table and HDInsight.

Pricing:
o AWS: per hour, rounded up.
o Google Cloud: per minute, rounded up.
o Azure: per minute, rounded up.

Purchase models:
o AWS: on demand, reserved, spot.
o Google Cloud: on demand (sustained use).
o Azure: per-minute commitments (pre-paid or monthly).

Difficulties:
o AWS: many enterprises find it difficult to understand the cost structure.
o Google Cloud: fewer features and services.
o Azure: less "enterprise-ready".

Storage services:
o AWS: Simple Storage Service (S3), Elastic Block Storage, Elastic File Storage.
o Google Cloud: Cloud Storage, Persistent Disk, Transfer Appliance.
o Azure: Blob Storage, Queue Storage, File Storage, Disk Storage, Data Lake Store.

Machine learning:
o AWS: SageMaker, Lex, Polly, and many more.
o Google Cloud: Cloud Speech AI, Cloud Video Intelligence, Cloud Machine Learning Engine.
o Azure: Azure Machine Learning, Azure Bot Service, Cognitive Services.

Cloud Software Environments

In a cloud environment, consumers can deploy and run their software applications on a
sophisticated infrastructure that is owned and managed by a cloud provider (e.g., Amazon Web Services, Microsoft Azure, and Google Cloud Platform).

The Java Development Kit (JDK) is one example of a software environment. The JDK
contains tools for developing Java-based applications. The JDK also includes an integrated
development environment (IDE), which allows developers to write code in one window while
viewing output from another window

Microsoft Azure
Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud
computing platform. It provides a broad range of cloud services, including compute, analytics,
storage and networking.

Azure Services
o Compute services: It includes the Microsoft Azure Cloud Services, Azure Virtual
Machines, Azure Website, and Azure Mobile Services, which processes the data on the
cloud with the help of powerful processors.
o Data services: This service is used to store data over the cloud that can be scaled
according to the requirements. It includes Microsoft Azure Storage (Blob, Queue Table,
and Azure File services), Azure SQL Database, and the Redis Cache.
o Application services: It includes services, which help us to build and operate our
application, like the Azure Active Directory, Service Bus for connecting distributed
systems, HDInsight for processing big data, the Azure Scheduler, and the Azure Media
Services.
o Network services: It helps you to connect with the cloud and on-premises
infrastructure, which includes Virtual Networks, Azure Content Delivery Network, and
the Azure Traffic Manager.

How Azure works

It is essential to understand the internal workings of Azure so that we can design our
applications on Azure effectively with high availability, data residency, resilience, etc.

Microsoft Azure is completely based on the concept of virtualization. So, similar to other
virtualized data center, it also contains racks. Each rack has a separate power unit and network
switch, and also each rack is integrated with a software called Fabric-Controller. This Fabric-
controller is a distributed application, which is responsible for managing and monitoring
servers within the rack. In case of any server failure, the Fabric-controller recognizes it and
recovers it. And Each of these Fabric-Controller is, in turn, connected to a piece of software
called Orchestrator. This Orchestrator includes web-services, Rest API to create, update, and
delete resources.

When a request is made by the user, either using PowerShell or the Azure portal, it first goes to the Orchestrator, which fundamentally does three things:

1. Authenticate the User


2. It will Authorize the user, i.e., it will check whether the user is allowed to do the
requested task.
3. It will look into the database for the availability of space based on the resources and
pass the request to an appropriate Azure Fabric controller to execute the request.

Combinations of racks form a cluster. We have multiple clusters within a data center, and we
can have multiple Data Centers within an Availability zone, multiple Availability zones within
a Region, and multiple Regions within a Geography.

o Geographies: It is a discrete market, typically contains two or more regions, that


preserves data residency and compliance boundaries.
o Azure regions: A region is a collection of data centers deployed within a defined
perimeter and interconnected through a dedicated regional low-latency network.

Azure covers more global regions than any other cloud provider, which offers the scalability
needed to bring applications and users closer around the world. It is globally available in 50
regions around the world. Due to its availability over many regions, it helps in preserving data
residency and offers comprehensive compliance and flexible options to the customers.

How Azure can help in business?


Azure can help our business in the following ways-
 Capital less: We don’t have to worry about the capital as Azure cuts out the high cost of
hardware. You simply pay as you go and enjoy a subscription-based model that’s kind to
your cash flow. Also, setting up an Azure account is very easy. You simply register in
Azure Portal and select your required subscription and get going.
 Less Operational Cost: Azure has a low operational cost because it runs on its own servers, whose only job is to make the cloud functional and bug-free, so it's usually a whole lot more reliable than your own on-location server.
 Cost Effective: If we set up a server on our own, we need to hire a tech support team to monitor it and make sure things are working fine. Also, there might be a situation where the tech support team takes too much time to solve an issue in the server. So, in this regard, Azure is far more pocket-friendly.
 Easy Back-Up and Recovery options: Azure keeps backups of all your valuable data.
In disaster situations, you can recover all your data in a single click without your
business getting affected. Cloud-based backup and recovery solutions save time, avoid
large up-front investments and roll up third-party expertise as part of the deal.
 Easy to implement: It is very easy to implement your business models in Azure. With a
couple of on-click activities, you are good to go. Even there are several tutorials to make
you learn and deploy faster.
 Better Security: Azure provides more security than local servers. You can be carefree about your critical data and business applications, as they stay safe in the Azure Cloud. Even in natural disasters, where on-premises resources can be harmed, Azure comes to the rescue. The cloud is always on.
 Work from anywhere: Azure gives you the freedom to work from anywhere and
everywhere. It just requires a network connection and credentials. And with most serious
Azure cloud services offering mobile apps, you’re not restricted to which device you’ve
got to hand.
 Increased collaboration: With Azure, teams can access, edit and share documents
anytime, from anywhere. They can work and achieve future goals hand in hand. Another
advantage of Azure is that it preserves records of activity and data. Timestamps are one
example of Azure’s record-keeping. Timestamps improve team collaboration by
establishing transparency and increasing accountability.

Eucalyptus – OpenStack.

Eucalyptus and OpenStack are both open-source cloud computing platforms that enable
the creation and management of private and hybrid clouds. While they share similar goals,
there are several key differences between the two platforms.

Eucalyptus is a Linux-based open-source software architecture for cloud computing


and also a storage platform that implements Infrastructure as a Service (IaaS). It provides quick
and efficient computing services. Eucalyptus was designed to provide services compatible
with Amazon’s EC2 cloud and Simple Storage Service(S3).

Eucalyptus Architecture

Eucalyptus CLIs can manage both Amazon Web Services and their own private instances, so clients are free to move instances from Eucalyptus to Amazon Elastic Compute Cloud. The virtualization layer oversees the network, storage, and computing, and instances are isolated by hardware virtualization.
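The Eucalyptus CLIs mentioned above are the euca2ools, which mirror the Amazon EC2 command set. A hedged sketch (the image ID, key name, and instance ID are placeholders):

euca-describe-images                                     # list available Eucalyptus Machine Images (EMIs)
euca-run-instances emi-12345678 -k my-key -t m1.small    # launch an instance from an EMI
euca-describe-instances                                  # check the state of running instances
euca-terminate-instances i-87654321                      # shut the instance down when finished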
Important Features are:-
1. Images: A good example is the Eucalyptus Machine Image (EMI), which is a software module bundled and uploaded to the cloud.
2. Instances: When we run the image and utilize it, it becomes an instance.
3. Networking: It can be further subdivided into three modes: Static mode(allocates IP
address to instances), System mode (assigns a MAC address and imputes the instance’s
network interface to the physical network via NC), and Managed mode (achieves local
network of instances).
4. Access Control: It is utilized to give limitations to clients.
5. Elastic Block Storage: It gives block-level storage volumes to connect to an instance.
6. Auto-scaling and Load Balancing: It is utilized to create or destroy instances or services based on requirements.

Components of Architecture
 Node Controller manages the lifecycle of instances running on each node. It interacts with the operating system, the hypervisor, and the Cluster Controller, and controls the working of VM instances on the host machine.
 Cluster Controller manages one or more Node Controller and Cloud Controller
simultaneously. It gathers information and schedules VM execution.
 Storage Controller (Walrus) Allows the creation of snapshots of volumes. Persistent
block storage over VM instances. Walrus Storage Controller is a simple file storage
system. It stores images and snapshots. Stores and serves files using S3(Simple Storage
Service) APIs.
 Cloud Controller is the front-end for the entire architecture. It exposes compliant web services to client tools on one side and interacts with the rest of the components on the other side.

Operation Modes Of Eucalyptus

 Managed Mode: Numerous security groups to users as the network is large. Each security
group is assigned a set or a subset of IP addresses. Ingress rules are applied through the
security groups specified by the user. The network is isolated by VLAN between Cluster
Controller and Node Controller. Assigns two IP addresses on each virtual machine.
 Managed (No VLAN) Node: The root user on the virtual machine can snoop into other
virtual machines running on the same network layer. It does not provide VM network
isolation.
 System Mode: Simplest of all modes, least number of features. A MAC address is
assigned to a virtual machine instance and attached to Node Controller’s bridge Ethernet
device.
 Static Mode: Similar to system mode but has more control over the assignment of IP
address. MAC address/IP address pair is mapped to static entry within the DHCP server.
The next set of MAC/IP addresses is mapped.

Advantages Of The Eucalyptus Cloud

1. Eucalyptus can be utilized to benefit both the eucalyptus private cloud and the eucalyptus
public cloud.
2. Amazon or Eucalyptus machine images can be run on both clouds.
3. Its API is completely compatible with the Amazon Web Services APIs.
4. Eucalyptus can be utilized with DevOps tools like Chef and Puppet.
5. Although it isn't as popular yet, it has the potential to be an alternative to OpenStack and CloudStack.
6. It is used to build hybrid, public, and private clouds.
7. It allows users to deliver their own data centers into a private cloud and hence, extend the
services to other organizations.

OpenStack.
OpenStack is a cloud OS that is used to control the large pools of computing, storage,
and networking resources within a data center. OpenStack is an open-source and free
software platform. This is essentially used and implemented as an IaaS for cloud computing.

Basic Principles of OpenStack

Open Source: Under the Apache 2.0 license, OpenStack is coded and published. Apache
allows the community to use it for free.
Open Design: For the forthcoming update, the development group holds a Design Summit
every 6 months.

Open Development: The developers maintain a source code repository that is freely accessible and is distributed through projects like the Ubuntu Linux distribution.

Open Community: OpenStack allows open and transparent documentation for the
community.

Components of OpenStack

Major components of OpenStack are given below:

Compute (Nova): Compute is a controller that is used to manage resources in virtualized


environments. It handles several virtual machines and other instances that perform computing
tasks.

Object Storage (Swift): To store and retrieve arbitrary data in the cloud, object storage is used.
In Swift, it is possible to store the files, objects, backups, images, videos, virtual machines, and
other unstructured data. Developers may use a special identifier for referring the file and objects
in place of the path, which directly points to a file and allows the OpenStack to manage where
to store the files.

Block Storage (Cinder): This works in the traditional way of attaching and detaching an
external hard drive to the OS for its local use. Cinder manages to add, remove, create new disk
space in the server. This component provides the virtual storage for the virtual machines in the
system.

Networking (Neutron): This component is used for networking in OpenStack. Neutron


manages all the network-related queries, such as IP address management, routers, subnets,
firewalls, VPNs, etc. It confirms that all the other components are well connected with the
OpenStack.

Dashboard (Horizon): This is the first component that the user sees in the OpenStack. Horizon
is the web UI (user interface) component used to access the other back-end services. Through
individual API (Application programming interface), developers can access the OpenStack's
components, but through the dashboard, system administrators can look at what is going on in
the cloud and manage it as per their need.

Identity Service (Keystone): It is the central repository of all the users and their permissions
for the OpenStack services they use. This component is used to manage identity services like
authorization, authentication, AWS Styles (Amazon Web Services) logins, token-based
systems, and checking the other credentials (username & password).

Image Service (Glance): The glance component is used to provide the image services to
OpenStack. Here, image service means the images or virtual copies of hard disks. When we
plan to deploy a new virtual machine instance, then glance allows us to use these images as
templates. Glance allows virtual box (VDI), VMware (VMDK, OVF), Raw, Hyper-V (VHD)
and KVM (qcow2) virtual images.
Telemetry (Ceilometer): It is used to meter the usage and report it to OpenStack's individual users. So basically, Telemetry provides billing services to OpenStack's individual users.

Orchestration (Heat): It allows the developers to store the cloud application's necessities as a
file so that all-important resources are available in handy. This component organizes many
complex applications of the cloud through the templates, via both the local OpenStack REST
API and Query API.

Shared File System (Manila): It offers storage of the file to a virtual machine. This component
gives an infrastructure for managing and provisioning file shares.

Elastic Map-Reduce (Sahara): The Sahara component offers users a simple method to provision Hadoop clusters by specifying options such as the Hadoop version, cluster topology, hardware details of nodes, and more.

How does OpenStack Work?

Basically, OpenStack is a series of commands known as scripts. These scripts are packed into packages, called projects, that carry out the tasks that create cloud environments. OpenStack relies on two other forms of software in order to construct these environments:

o Virtualization means a layer of virtual resources basically abstracted from the hardware.
o A base OS that executes commands basically provided by OpenStack Scripts.

So, we can say all three technologies, i.e., virtualization, base operating system, and OpenStack
must work together.

Let's discuss how OpenStack works!

 The Horizon is an interface for the appliance environment. Anything that the user wants
to do should use the Horizon (Dashboard). The Dashboard is a simple graphical user
interface with multiple modules, where each module performs specific tasks.
 All the actions in OpenStack work through service API calls. So, if you are performing any task, it means you are calling a service API. Each API call is first validated by Keystone, so you will have to log in as a registered user with your username and password before you enter the OpenStack dashboard.

 Once you successfully log in to the OpenStack dashboard, you will get many options to create new instances and volumes (Cinder) and to configure the network.
 Instances are nothing but a virtual machine or environment. To generate a new VM, use
the 'instances' option from the OpenStack dashboard. In these instances, you can configure
your cloud. Instances can be RedHat, OpenSUSE, Ubuntu, etc.
 The formation of an instance is also an API call. You can configure network information
in the instances. You can connect these instances to the cinder instance or volume to add
more services.
 After the successful creation of an instance, you can configure it, access it through the CLI, and add whatever data you want. You can even set up an instance to manage and store snapshots for future reference or backup purposes; a CLI sketch of this workflow follows the list.
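A hedged CLI sketch of the instance-creation flow just described (the image, flavor, network, and server names are placeholders that depend on the deployment):

openstack image list                             # images served by Glance
openstack flavor list                            # available instance sizes
openstack network list                           # networks managed by Neutron
openstack server create --image cirros --flavor m1.tiny --network private demo-vm
                                                 # API call validated by Keystone, executed by Nova
openstack server list                            # confirm the new instance is ACTIVE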

Benefits of OpenStack

There are a lot of benefits of OpenStack in the cloud computing platform. Let's see one by one
:

1. Open Source

As we know, using the open-source environment, we can create a truly defined data center.
OpenStack is the largest open-source platform. It offers the networking, computing, and storage
subsystems in a single platform. Some vendors (such as RedHat) have developed and continue
to support their own OpenStack distributions.

OpenStack source code is available at github. The two main advantages of the open-source
OpenStack project is :

o OpenStack can be modified according to your rising demand - As per your requirement,
you can add the extra features in OpenStack.
o It can be used without any limitations - Since OpenStack is a freely available project,
so there are no limitations or restrictions to use it. You can use it as per your
requirement. There are no limits for what purpose you use it, where you use it, or how
long you use it.

2. Scalability

Scalability is the major key component of cloud computing. OpenStack offers better scalability
for businesses. Through this feature, it allows enterprises to spin up and spin down servers on-
demand.

3. Security

One of the significant features of OpenStack is security, and this is the key reason why
OpenStack is so popular in the cloud computing world.

o With OpenStack, your data is always secure - When company owners want to move
their IT infrastructure to the cloud, they always fear data loss. But there is no need to
think about data loss with OpenStack. It offers the best security feature.
o OpenStack provides security professionals who are responsive to OpenStack's strong
security.

4. Automation
Automation is one of the main selling points of OpenStack when compared to other options. The ease with which you can automate tasks makes OpenStack efficient. OpenStack
comes with a lot of inbuilt tools that make cloud management much faster and easier.
OpenStack provides its own API or Application Program Interface that helps other applications
to have full control over the cloud. This function makes it easier to build your own apps that
can communicate with OpenStack to perform tasks such as firing up VMs.

5. Easy to Access and Manage

We can easily access and manage OpenStack, which is the biggest benefit for you. OpenStack
is easy to access and manage because of the following features :

Command Line Tools - We can access the OpenStack using command-line tools.

Dashboard - OpenStack allows users and administrators to access and manage various aspects of OpenStack using the GUI (graphical user interface) based dashboard component. It is available as a web UI.

APIs - There are a lot of APIs (Application Program Interfaces) that are used to manage OpenStack.

6. Services

OpenStack provides many services required for several different tasks for your public, private,
and hybrid cloud.

List of services - OpenStack offers a list of services or components such as the Nova, Cinder,
Glance, Keystone, Neutron, Ceilometer, Sahara, Manila, Searchlight, Heat, Ironic, Swift,
Trove, Horizon, etc.

Each component is used for different tasks. Such as Nova provides computing services,
Neutron provides networking services, Horizon provides a dashboard interface, etc.

7. Strong Community

OpenStack has many experts, developers, and users who love to come together to work on the
product of OpenStack and enhance the feature of OpenStack.

8. Compatibility

Public cloud systems like AWS (Amazon Web Services) are compatible with OpenStack.

Compute (Nova)

Nova is one of the most common and important components of OpenStack. Compute is a
controller that is used to handle virtualized environments' resources. It handles several virtual
machines and other instances that perform computing tasks.

Nova is written in Python language. VMware, Xen, and KVM are the hypervisor technologies
used, and this choice is contingent on OpenStack's version.
OpenStack Services which communicate with Nova

To ensure that Nova operates at its most basic level, certain OpenStack services are required.
These services are:

Keystone: Firstly, Keystone authenticates and offers an identity for all OpenStack services.
The first feature built on OpenStack is Keystone, and all projects, like Nova, are responsible
for it.

Glance: It works to handle server images for your cloud. Therefore, it has the ability to upload
compatible images of OpenStack via the repository of compute images.

Neutron: The physical or virtual networks that compute instances within your OpenStack
cloud are given by Neutron.

Placement: Finally, Nova needs placement to track the inventory of resources to assist in
selecting which resource provider would be the right option when building a virtual machine
inside your OpenStack cloud.

To ensure optimum accessibility and performance, these additional OpenStack services closely
interact with Nova.

Nova Architecture
The Nova architecture can be summed up in these functionalities :

o The Nova-api processes the requests and responses to and from the end-user.
o When a request is submitted, the Nova generates and removes the instances.
o The Nova-scheduler schedules nova-compute jobs.
o The Glance Registry stores the image information along with its metadata.
o The image store holds predefined images for the user or admin.
o The Nova-network assures connectivity and routing of the network.

Block Storage (Cinder)

This works in the traditional way of attaching and detaching an external hard drive to the OS
for its local use. Cinder manages to add, remove, create new disk space in the server. This
component provides the virtual storage for the VMs in the system. Conceptually, Cinder is
similar in function to the EBS (Elastic Block Storage).

It is usually implemented in combination with other OpenStack services (e.g., Compute, Object
Storage, Image, etc.). Cinder and Nova logical architecture are:

Without needing to think about costly physical storage systems or servers, Cinder users are able to shrink and expand their storage space significantly. In addition, by allowing users to use one code path for each operation, Cinder simplifies code management. With reliability and ease of use, Cinder can handle all the provisioning and remove that burden from consumers.
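A hedged example of the attach/detach workflow Cinder provides (the sizes and names are illustrative):

openstack volume create --size 10 demo-volume        # create a 10 GB block-storage volume
openstack server add volume demo-vm demo-volume      # attach it to a running instance like a new disk
openstack server remove volume demo-vm demo-volume   # detach it when no longer needed
openstack volume delete demo-volume                  # release the storage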

Some of the goals of Cinder are :

o Highly Available
o Recoverable
o Fault-Tolerant
o Component-based architecture
o Open Standards

Cinder Components
Object Storage (Swift)

Object storage is used in order to store and recover arbitrary data in the cloud. In Swift, it is
possible to store the files, objects, backups, images, videos, virtual machines, and other
unstructured data. Developers may use a special identifier for referring the file and objects in
place of the path, which directly points to a file and allows OpenStack to manage where to store the files.

It is scalable and optimized for durability, availability, and concurrency. Swift is ideal for storing unconstrained, redundant data. Since this is an object storage service, Swift
enables an API-accessible storage option that can be used around the cluster for backups, data
retention, or archives that are redundant.
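A hedged sketch of this API-accessible storage using the OpenStack client (the container and object names are placeholders):

openstack container create backups                   # create a Swift container
openstack object create backups db-dump.tar.gz       # upload an object into the container
openstack object list backups                        # list the objects stored in it
openstack object save backups db-dump.tar.gz         # download the object back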

Object Storage components are divided into the following key groups :

o Proxy Services
o Auth Services
o Storage Services

o Account Service
o Container Service
o Objective Service

Let's see an example diagram for the OpenStack Object Storage :


Some Characteristics of OpenStack Object Storage are :

o There's a URL for all objects contained in Object Storage.


o All objects have their own metadata.
o It is possible to locate object data anywhere in the cluster.
o Via a RESTful HTTP API, developers communicate with the swift.
o Without downtime, new nodes can be connected to the cluster.
o It runs on industry-standard hardware, like HP, Dell, & Supermicro.
o Data should not be transferred to an entirely new storage system.
o For objects stored in the cluster, 'Storage Policies' can describe various durability levels.

Shared File Systems (Manila)


It offers file-based storage to a VM. This component gives an infrastructure for managing and
provisioning file shares. Manila uses a SQL based central database shared by all manila
services in the system. The Manila service can operate in the configuration of a single node or
multiple nodes.

Usually, Manila is deployed with other OpenStack resources, such as Compute, Image or
Object Storage.

Following are the goals of shared file system service :

o Highly Available
o Recoverable
o Open-Standards
o Fault-tolerant
o Component-based architecture

Manila offers the following set of services :

manila-api: A WSGI (Web Server Gateway Interface) application that authenticates and routes requests through the shared file system service and also supports the OpenStack API.

manila-data: This service receives the requests, processes the data operations with long
running times such as backup, copying, or share migration.

manila-scheduler: This service schedules and routes the requests to the shared file system
services. To route requests, the scheduler follows configurable filters and weighers. The Filter
Scheduler is the default and allows filters on items such as Availability Zones, Capacity,
Capabilities, and Share Types. Manila-scheduler also allows custom filters.

manila-share: This service manages back-end systems in which have a shared file system. A
manila-share service is capable of running in 1 of 2 modes, with or without the managing of
shared servers.
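As a hedged illustration of these services in use (the protocol, size, share name, and client network are placeholder assumptions), the manila CLI can create and expose a file share:

manila create NFS 1 --name demo-share            # request a 1 GB NFS share via manila-api
manila list                                      # show shares and their export locations
manila access-allow demo-share ip 10.0.0.0/24    # allow a client network to mount the share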

The shared file system (Manila) contains the following set of components :

o Back-end storage devices


o Users and tenants
o Basic resources

o Shares
o Snapshots
o Share networks

Networking (Neutron)

This component is used for networking in OpenStack. Neutron manages all the network-related
queries, such as IP address management, routers, subnets, firewalls, VPNs, etc. It confirms that
all the other components are connected properly with the OpenStack.

Neutron delivers NaaS (Networking-as-a-service) in a virtual computing environment. It has


replaced the original API (Application Program Interface), called Quantum, in OpenStack.
Neutron is managed by other OpenStack components such as Nova.

Networking has a service on the controller node, called the neutron server, including a lot of
agents and plugins that use a messaging queue to communicate with each other. You can select
the various agents you want to use, dependent on the type of operation.

Some features of Neutron:

o Sets up the virtual network infrastructure.


o Switching and routing.
o Specialized virtual network functions like VPNaaS, FWaaS, LBaaS.
o Flexibility through agents, plugins, and drivers.
o Neutron integrates with various OpenStack services, i.e., Keystone, Nova, glance, and
Horizon.

There are the following neutron plugins :

o VMware NSX
o Cisco switches (NX-OS)
o Ryu network OS
o NEC OpenFlow
o Open vSwitch
o PLUMgrid Director plugin
o Linux bridging
o OpenDaylight plugin
o Juniper OpenContrail
o Midokura Midonet plugin

Neutron Architecture

The neutron architecture is very simple. It is fully based on agents and plugins.
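A hedged sketch of setting up the virtual network infrastructure described above (the names and address range are placeholders):

openstack network create demo-net                                        # create a tenant network
openstack subnet create --network demo-net --subnet-range 192.168.10.0/24 demo-subnet
                                                                         # add a subnet with IP address management
openstack router create demo-router                                      # create a virtual router
openstack router add subnet demo-router demo-subnet                      # route the subnet through it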

Dashboard (Horizon)

This is the first component that the user sees in the OpenStack. Horizon is the web UI (user
interface) component used to access the other back-end services. Through individual API
(Application programming interface), developers can access the OpenStack's components, but
through the Dashboard, system administrators can look at what is going on in the cloud and
manage it as per their need.

At the core of its architecture and design, the Dashboard has many key values :
Core Support: Out-of-the-box provision for all core OpenStack projects.

Extensible: As a "first-class citizen", anyone can add a new component.

Manageable: The core codebase has to be easy to direct and should be simple.

Consistent: Throughout, visual and interaction paradigms are maintained.

Stable: A reliable Application program interface (API) with an emphasis on backward


compatibility.

Usable: Providing an amazing interface that individuals want to use.

Horizon is based on the Django web framework for both users and administrators of an
OpenStack cloud. It interacts with instances, images, volumes, and networks within an
OpenStack cloud. Through Horizon, we can manage Nova, Glance, Neutron, and Cinder
services within the OpenStack cloud.

The Dashboard is connected to all the OpenStack components; all seven core components can be reached through it.

Highlights of OpenStack

o OpenStack has made it possible for companies such as Bloomberg and Disney to handle
their private clouds at very manageable prices.
o OpenStack offers mixed hypervisor environments and bare metal server environments.
o RedHat, SUSE Linux, and Debian have all been active contributors and have been
supporting OpenStack since its inception.
o OpenStack is used by Walmart to manage more than one lakh (100,000) cores, which delivered 100% uptime during a recent Black Friday.

Difference between AWS and OpenStack

The difference between AWS and OpenStack usually depends on your company's specific
requirements. Let's see the difference between OpenStack and AWS:

1. OpenStack is categorized as a Cloud Management Platform and Infrastructure as a Service (IaaS); AWS Lambda is categorized as a Cloud Platform as a Service (PaaS).
2. In OpenStack, Glance handles the images; in AWS, AMI (Amazon Machine Image) handles the images.
3. OpenStack's LBaaS handles load-balancing traffic; AWS's ELB (Elastic Load Balancer) automatically distributes the incoming traffic from the services to the EC2 instances.
4. In OpenStack, each virtual instance is automatically allocated an IP address, handled by DHCP; AWS allocates a private IP address to every new instance using DHCP.
5. In OpenStack, identity authentication services are handled by Keystone; in AWS, they are handled by IAM (Identity and Access Management).
6. In OpenStack, Swift handles object storage; in AWS, object storage is managed by the S3 (Simple Storage Service) bucket.
7. In OpenStack, the Cinder component manages block storage; in AWS, block storage is managed by EBS (Elastic Block Storage).
8. OpenStack provides MySQL and PostgreSQL for relational databases; users of AWS use an instance of MySQL or Oracle 11g.
9. OpenStack uses MongoDB, Cassandra, or Couchbase for a non-relational database; for a non-relational database, AWS uses EMR (Elastic Map Reduce).
10. For networking, OpenStack uses Neutron; AWS uses VPC (Virtual Private Cloud).
11. In OpenStack, machine learning (ML) and NLP (Natural Language Processing) are not readily available; in AWS, they are possible.
12. OpenStack has no speech or voice recognition solution; in AWS, Lex is used for speech or voice recognition.
13. OpenStack has Mistral, the workflow service; AWS follows the Simple Workflow Service (SWF).
14. OpenStack has Ceilometer for telemetry-based billing, resource tracking, etc.; AWS has the AWS Usage and Billing Report.
15. OpenStack has no serverless framework; in AWS, Lambda is a serverless framework.


UNIT V CLOUD SECURITY

Virtualization System:

Virtualization is technology that you can use to create virtual representations of


servers, storage, networks, and other physical machines. Virtual software mimics the
functions of physical hardware to run multiple virtual machines simultaneously on a single
physical machine.

How virtualization works in cloud computing

Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the clouds, such as applications, but with the help of virtualization users actually share the infrastructure.

The main usage of virtualization technology is to provide standard versions of applications to cloud users; when the next version of an application is released, the cloud provider has to provide the latest version to its cloud users, which is practically not feasible because it is more expensive.

To overcome this problem, we basically use virtualization technology. By using virtualization, all servers and the software applications required by the cloud providers are maintained by third parties, and the cloud providers pay for them on a monthly or annual basis.

Mainly, virtualization means running multiple operating systems on a single machine while sharing all the hardware resources. It helps us provide a pool of IT resources that can be shared in order to get benefits in the business.

Specific Attacks:
Cloud attacks encompass malicious activities that target vulnerabilities in cloud
computing systems and services. Attackers use weak points in cloud infrastructure,
applications, or user accounts to gain access without authorization, jeopardize data integrity,
steal confidential data, or disrupt services.

Guest hopping :
In this type of attack, an attacker tries to get access to one virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation of a guest-hopping attack is to use forensics and VM debugging tools to observe the security of the cloud.

New Virtualization System-Specific Attacks:

Hypervisor Risks
• The hypervisor is the part of a virtual machine that allows host resource sharing
and enables VM/host isolation.
• Therefore, the ability of the hypervisor to provide the necessary isolation during
an intentional attack greatly determines how well the virtual machine can survive risk.
• One reason why the hypervisor is susceptible to risk is because it’s a software
program; risk increases as the volume and complexity of application code
increases.
• Ideally, software code operating within a defined VM would not be able to
communicate or affect code running either on the physical host itself or within a
different VM; but several issues, such as bugs in the software, or limitations to
the virtualization implementation, may put this isolation at risk.
• Major vulnerabilities inherent in the hypervisor consist of rogue hypervisor
rootkits, external modification to the hypervisor, and VM escape.
