Cloud Computing Unit-II Cloud Computing Architecture By. Dr. Samta Gajbhiye
Unit-II
Cloud Computing Architecture
Cloud Reference Model/Architecture
The NIST cloud computing reference architecture identifies the major actors
and their activities and functions in cloud computing by partitioning it into
abstraction layers and cross-layer functions.
This reference model groups the cloud computing functions and activities
into five logical layers and three cross-layer functions.
The five layers are: the physical layer, virtual layer, control layer, service
orchestration layer, and service layer.
Each of these layers specifies the various types of entities that may exist in a
cloud computing environment, such as compute systems, network devices,
storage devices, virtualization software, security mechanisms, control
software, orchestration software, and management software.
[A reference model is an abstract framework for understanding the significant
relationships among the entities of some environment, and for developing consistent
standards or specifications supporting that environment. It is based on a small number of
unifying concepts and may be used as a basis for education and for explaining standards. It is not
directly tied to any standards, technologies, or other concrete implementation details, but it
does seek to provide a common semantics that can be used unambiguously across and
between different implementations.]
Cloud computing abstraction layers
Physical Layer: The physical layer consists of the hardware resources that are
necessary to support the cloud services being provided, and typically includes
server, storage, and network components. The abstraction layer consists of the
software deployed across the physical layer, which manifests the essential
cloud characteristics.
It is the foundation layer of the cloud infrastructure.
Entities that operate at this layer: compute systems, network devices,
and storage devices.
Functions of the physical layer: executes requests generated by the virtualization
and control layers.
Cloud computing abstraction layers Cont….
Virtual Layer
Deployed on the physical layer. It is built by deploying virtualization software on
compute systems, network devices, and storage devices.
Specifies entities that operate at this layer : Virtualization software, resource
pools, virtual resources.
Virtual Layer Virtualization Process and Operations Module:
Step 1: Deploy virtualization software on: Compute systems, Network
devices, Storage devices
Step 2: Create resource pools: Processing power and memory, Network
bandwidth, Storage
Step 3: Create virtual resources: Virtual machines, Virtual networks, Virtual
resources are packaged and offered as services
Functions of the virtual layer:
Abstracts physical resources and makes them appear as virtual resources
(enables a multitenant environment).
Executes the requests generated by the control layer.
This layer enables two key characteristics of cloud infrastructure:
resource pooling and rapid elasticity.
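The pooling and allocation steps above can be sketched in a few lines of Python. This is an illustrative toy, not a real cloud API: the `ResourcePool` class and its methods are invented here to show how pooled physical capacity is carved into virtual resources.

```python
# Illustrative sketch (not a real cloud API): pooling physical resources
# and carving virtual machines out of the aggregated pool.

class ResourcePool:
    """Aggregates capacity of many physical devices into one view."""
    def __init__(self):
        self.total_cpu_mhz = 0
        self.total_mem_mb = 0

    def add_host(self, cpu_mhz, mem_mb):
        # Step 2: pool processing power and memory of a physical host.
        self.total_cpu_mhz += cpu_mhz
        self.total_mem_mb += mem_mb

    def allocate_vm(self, cpu_mhz, mem_mb):
        # Step 3: virtual resources draw from the pooled physical capacity.
        if cpu_mhz > self.total_cpu_mhz or mem_mb > self.total_mem_mb:
            raise RuntimeError("insufficient pooled capacity")
        self.total_cpu_mhz -= cpu_mhz
        self.total_mem_mb -= mem_mb
        return {"cpu_mhz": cpu_mhz, "mem_mb": mem_mb}

pool = ResourcePool()
pool.add_host(cpu_mhz=2400, mem_mb=8192)   # physical compute system 1
pool.add_host(cpu_mhz=3000, mem_mb=16384)  # physical compute system 2
vm = pool.allocate_vm(cpu_mhz=1000, mem_mb=4096)
print(pool.total_cpu_mhz, pool.total_mem_mb)  # remaining pooled capacity
```

Note how the consumer of the pool sees only aggregated totals, never the individual hosts; this is the "aggregated view" that the control layer works with.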
Cloud computing abstraction layers -Virtual Layer Cont….
[ A resource pool is an aggregation of computing resources, such as processing
power, memory, storage, and network bandwidth, which provides an
aggregated view of these resources to the control layer.]
The virtualization software, in collaboration with the control layer, creates
virtual resources. These virtual resources are created by allocating physical
resources from the resource pool, and they share the pooled physical resources.
Examples of virtual resources include virtual machines, LUNs, and virtual
networks
For example, storage virtualization software pools capacity of multiple storage
devices to appear as a single large storage capacity.
By using compute virtualization software, the processing power and memory
capacity of the pooled physical compute systems can be viewed as an aggregation of
the power of all processors (in megahertz) and all memory (in megabytes).
[A logical unit number (LUN) is a number used to identify a logical unit
of computer storage. A logical unit is a device addressed by storage protocols
such as Fibre Channel, Small Computer System Interface (SCSI), and Internet SCSI
(iSCSI). In other words, a LUN is a slice or portion of a configured set of disks
that can be presented to a host and mounted as a volume within the OS.]
Cloud computing abstraction layers Cont….
Control Layer: Deployed either on virtual layer or on physical layer
The control layer includes control software that is responsible for managing
and controlling the underlying cloud infrastructure resources, and it enables
provisioning of IT resources for creating cloud services.
Entities that operate at this layer: control software (hypervisor
management software).
The hypervisor, along with hypervisor management software (also known as
control software), is the fundamental component for deploying a software-defined
compute environment.
Functions of the control layer:
Enables resource configuration, resource pool configuration, and resource
provisioning. Executes requests generated by the service layer.
[Space and resources are pooled to serve multiple clients at one time.
Depending on a client's resource consumption, usage can be set to provide
more or less at any given time.]
Exposes resources to, and supports, the service layer.
Collaborates with the virtualization software (hypervisor) to enable
resource pooling, creation of virtual resources, dynamic allocation, and
optimized utilization of resources.
Cloud computing abstraction layers Cont….
Service Orchestration Layer: responsible for the governance, control, and
coordination of the service delivery process. (Governance is the process of
making and enforcing decisions within an organization or society.)
[Cloud governance is the process of defining, implementing, and monitoring a framework of policies
that guides an organization's cloud operations. This process regulates how users work in cloud
environments to facilitate consistent performance of cloud services and systems.]
Cloud computing abstraction layers - Orchestration layer Cont….
Simplified Optimization
Automation is a subset of orchestration: automation handles individual
tasks, while orchestration integrates those automated tasks.
Orchestration effectively turns individual tasks into a larger, optimized workflow.
Control and Visibility
The orchestration layer gives IT admins a unified dashboard, a single pane of
glass view of cloud resources.
It provides IT admins with tools that can help to monitor and modify virtual
machine instances with little manual input.
Reduce Errors
It automates small and straightforward operations without the intervention
of IT staff.
For example, if a disk runs out of storage space, it can create space by deleting
junk applications and files, or add storage tiers to the environment to help the
organization better manage storage for a specific workload.
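The disk example above can be sketched as an orchestrated workflow: each step is an individual automated task, and orchestration chains them into a remediation sequence. This is a hedged toy sketch; the function names and the disk dictionary are illustrative, not any real orchestration tool's API.

```python
# Hypothetical sketch: orchestration chains individual automated tasks
# (each of which runs without operator intervention) into one workflow.

def check_free_space(disk):
    return disk["capacity_gb"] - disk["used_gb"]

def delete_junk_files(disk):
    disk["used_gb"] -= disk.pop("junk_gb", 0)   # automated cleanup task

def add_storage_tier(disk, extra_gb):
    disk["capacity_gb"] += extra_gb             # automated provisioning task

def remediate_low_space(disk, min_free_gb=10):
    """Orchestrated workflow: try cleanup first, then expand capacity."""
    if check_free_space(disk) < min_free_gb:
        delete_junk_files(disk)
    if check_free_space(disk) < min_free_gb:
        add_storage_tier(disk, extra_gb=min_free_gb)

disk = {"capacity_gb": 100, "used_gb": 97, "junk_gb": 4}
remediate_low_space(disk)
print(check_free_space(disk))
```

The design point is that `remediate_low_space` is the orchestration: the individual tasks existed already, and the workflow decides their order and when each one is needed.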
Cost-Effective: The orchestration layer helps with automated cloud metering. It
allows organizations to improve cost governance and promotes efficient use of
resources. This allows organizations to reduce infrastructure costs and realize
long-term cost savings.
Cloud computing abstraction layers Cont….
Service Layer
Consumers interact and consume cloud resources via this layer.
Specifies the entities that operate at this layer : Service catalog and self-
service portal.
Functions of the service layer:
Stores information about cloud services in the service catalog and presents
them to consumers.
Enables consumers to access and manage cloud services via a self-service
portal.
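A minimal sketch of these two entities follows: a service catalog holding service descriptions, and a self-service portal through which a consumer lists and provisions services. All names, service entries, and prices are invented for illustration; no real provider's catalog is implied.

```python
# Toy sketch of the service layer: a catalog of offered services and a
# self-service portal interface for consumers (all entries illustrative).

SERVICE_CATALOG = {
    "small-vm": {"cpu": 1, "mem_gb": 2, "price_per_hr": 0.02},
    "large-vm": {"cpu": 8, "mem_gb": 32, "price_per_hr": 0.30},
    "object-storage": {"unit": "GB-month", "price": 0.01},
}

class SelfServicePortal:
    def __init__(self, catalog):
        self.catalog = catalog
        self.orders = []

    def list_services(self):
        # Presents the catalog entries to the consumer.
        return sorted(self.catalog)

    def request_service(self, name):
        # Consumer provisions a service without contacting provider staff.
        if name not in self.catalog:
            raise KeyError(f"{name} not in catalog")
        self.orders.append(name)
        return self.catalog[name]

portal = SelfServicePortal(SERVICE_CATALOG)
print(portal.list_services())
spec = portal.request_service("small-vm")
```

In a real cloud, `request_service` would hand the request down to the control layer for actual provisioning; here it only records the order, which is the service layer's own responsibility.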
Cloud Reference Model Cont….
The three cross-layer functions are: business continuity, security, and service
management.
Business continuity and security functions specify various activities, tasks, and
processes that are required to offer reliable and secure cloud services to the
consumers.
The service management function specifies various activities, tasks, and processes
that enable the administration of the cloud infrastructure and services so as to
meet the provider's business requirements and consumers' expectations.
Cross-layer function
1. Business continuity
Proactive measures: business impact analysis • risk assessment • technology
solutions deployment (backup and replication)
Reactive measures: disaster recovery • disaster restart
Cloud Services Models
Service Models (XaaS): XaaS (Anything as a Service) is the essence of cloud computing.
The combination of Service-Oriented Infrastructure (SOI) and cloud
computing gives rise to XaaS.
Cloud Services Models Cont…
Source: Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance by Tim Mather and Subra
Kumaraswamy
Cloud Services Models SAAS
1. Software as a Service (SaaS)
SaaS applications are designed for end users and are delivered over the web.
SaaS is defined as software that is deployed over the internet.
The capability provided to the consumer is to use the provider’s applications
running on a cloud infrastructure. The applications are accessible from
various client devices through either a thin client interface, such as a web
browser (e.g., web-based email), or a program interface.
Cloud Services Models SAAS
SaaS Characteristics
Web access to commercial software
Software is managed from a central location
Software is delivered in a ‘one to many’ model
Users are not required to handle software upgrades and patches
Application Programming Interfaces (APIs) allow for integration between
different pieces of software.
Applications where SaaS is Used
Applications where there is significant interplay between the organization and
the outside world, e.g. email newsletter campaign software. [Email marketing
software enables users to create, send, and track emails to their list of subscribers. Using such
software makes it easier to create well-designed emails, and also allows you to see key metrics
such as open rates and click-through rates.]
Applications that have need for web or mobile access. E.g. mobile sales
management software
Software that is only to be used for a short-term need.
Software where demand spikes significantly, e.g. tax/billing software.
Example of SaaS: Salesforce Customer Relationship Management (CRM) software
Cloud Services Models PaaS
Platform as a Service (PaaS) provides a platform that lets customers build and
deploy web applications quickly and easily, without the complexity of buying
and maintaining the software and infrastructure underneath them.
PaaS is the set of tools and services designed to make coding and deploying
applications quick and efficient.
Platform as a Service (PaaS) brings the benefits that SaaS brought for
applications over to the software development world.
Cloud Services Models PaaS Cont….
PaaS is analogous to SaaS except that, rather than being software delivered
over the web, it is a platform for the creation of software, delivered over
the web.
The capability provided to the consumer is to deploy onto the cloud
infrastructure consumer-created or acquired applications created using
programming languages, libraries, services, and tools supported by the
provider.
The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Cloud Services Models PaaS
PaaS Characteristics
Services to develop, test, deploy, host, and maintain applications in the
same integrated development environment, covering all the varying services
needed for the application development process.
Web based user interface creation tools help to create, modify, test and
deploy different UI scenarios.
Multi-tenant architecture where multiple concurrent users utilize
the same development application.
Built in scalability of deployed software including load balancing and
failover.
Integration with web services and databases via common standards.
Support for development team collaboration – some PaaS solutions
include project planning and communication tools.
Tools to handle billing and subscription management
Cloud Services Models IaaS
Infrastructure as a Service (IaaS) delivers fundamental computing resources as a
service on demand.
The capability provided is to provision processing, storage, networks, and
other fundamental computing resources.
The consumer can deploy and run arbitrary software.
e.g. Amazon Web Services and FlexiScale.
Cloud Services Models IaaS
Characteristics of IaaS
Allows for dynamic scaling
Has a variable-cost, utility pricing model
Generally includes multiple users on a single piece of hardware
Classic Model vs. XaaS
[Figure: comparison of the classic model with the SaaS, PaaS, and IaaS stacks,
indicating which layers (e.g. applications, data) are managed by the user and
which by the provider in each service model]
XaaS
Analytics-as-a-Service (AaaS)
It provides access to data analysis software and tools through the cloud, rather
than requiring investment in on-premises software.
AaaS services are complete and customizable solutions for organizing, analyzing
and visualizing data.
Business process as a service (BPaaS)
It is a term for a specific kind of web-delivered or cloud hosting service that
benefits an enterprise by assisting with business objectives.
In the general sense, a business process is simply a task that must be completed
to benefit business operations.
One example is transaction management. Credit card transactions may need to
be recorded in a central database, or otherwise handled or evaluated. If a vendor
can offer a company that same task performed and delivered through cloud-hosted
networks, that would be an example of BPaaS.
XaaS Cont….
Storage as a service (STaaS) is a business model in which a company leases or rents
its storage infrastructure to another company or individuals to store data
Security as a Service (SECaaS), offers security to IT companies on a subscription
basis.
The outsourced approach provides a superior security platform and lowers the
total cost of ownership compared with what the business could supply on its own.
With the use of cloud computing, security for the company is maintained by an
outside party.
SECaaS implements security services without needing on-premises hardware,
avoiding substantial capital outlays.
These security services typically include authentication, antivirus, anti-
malware/spyware, intrusion detection, penetration testing, and security event
management, among others.
XaaS [DkaaS] Cont….
Desktop as a Service (DaaS): a cloud computing offering where a service
provider delivers virtual desktops to end users over the Internet, licensed with
a per-user subscription. It is one type of desktop virtualization, provided by
third-party hosts.
Example – An organization lets its workforce work from home, which may soon be
the future of how IT work is done. In this situation, the organization has to provide
data on a centralized server through virtual desktop infrastructure (VDI) so that all
of the workers can access it. But setting up the virtual desktop infrastructure is
expensive and resource-consuming: it needs servers, hardware, software, and
skilled staff to set up and maintain the VDI. This is when we need DaaS.
DaaS helps people access data and applications remotely over the internet,
regardless of the device they use.
Desktop as a Service(DaaS) is cost-effective and ensures security and control.
XaaS [DkaaS] Cont….
Benefits from DaaS ?
It is less expensive than setting up and maintaining the virtual desktop
Can be easily administered: new user can be added or an existing user can be
removed rapidly
It delivers high-performing workspaces to any user on any device from
anywhere. These benefits become the reasons to choose DaaS over VDI.
Where can it be used ?
DaaS has a lot of use cases.
Software developers
Call-center, part-time work
University lab
Remote and mobile workers
Shift and contract work
Some DaaS Providers are :
Amazon Workspaces
Citrix Managed Desktops
Microsoft Windows Virtual Desktop
VMware Horizon Cloud
Evolve IP
Cloudalize Managed Desktops, etc.
XaaS [DkaaS] Cont….
The two types of desktops in DaaS are
1. Persistent desktop : In this type, the user can customize the look of the virtual
desktop, and whenever the user logs back in, the details remain the same. This
needs more storage, so it is more expensive.
2. Non-persistent desktop : The desktops are wiped off each time a user logs out
and it just acts as a portal for shared cloud services.
Advantages of DaaS :
Quick deployment and decommissioning of active end users: DaaS can quickly
give a service to an end user, and can revoke it just as quickly.
High cost savings: it costs less to set up and maintain than a virtual desktop infrastructure.
Easy user interface: the interface is simple enough that a normal IT employee with
basic usability skills can use it.
Increased device flexibility: the number of users can be increased or decreased easily.
Improved security: provides high security and reduces the risk of cyber attacks,
as it is a virtual service.
Disadvantages of DaaS :
Internet outage: In case of an internet outage, employees may not be able to
access their desktops as these desktops are hosted in the cloud and accessed
over the internet.
Poor virtual performance: sometimes end users may face poor virtual
performance, as the desktop is accessed virtually over the network.
NIST’s Four Cloud Deployment Models/ Cloud Environment
The final part of the NIST cloud computing definition includes four cloud
deployment models, representing four types of cloud environments. Users
can choose the model with features and capabilities that are best suited for
their needs.
1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
4. Community Cloud
Public Cloud
The cloud infrastructure is made available to the general public
Public Cloud Cont……
Workload locations are hidden from clients (public).
Private Cloud
A private cloud is a single-tenant environment provisioned for use by a single
organization. The cloud infrastructure is provisioned for exclusive use by a single
organization comprising multiple consumers (e.g., business units).
The cloud infrastructure is operated solely for an organization.
It may be owned, managed, and operated by the organization, a third party, or
some combination of them, and it may exist on or off premises.
Examples of Private Cloud:
Windows Server Hyper-V: The Hyper-V role in Windows Server lets you create a
virtualized computing environment where you can create and manage virtual machines.
Ubuntu Enterprise Cloud – UEC: Ubuntu Enterprise Cloud is integrated with the open
source Eucalyptus private cloud platform, making it possible to create a private cloud
with much less configuration than installing Linux first, then Eucalyptus.
Microsoft ECI (ElastiCLoud) data center
Eucalyptus: Eucalyptus is a Linux-based open-source software architecture for cloud
computing and also a storage platform that implements Infrastructure as a Service (IaaS).
Amazon VPC (Virtual Private Cloud)
VMware Cloud Infrastructure Suite
Private Cloud Cont……..
Contrary to popular belief, private cloud may exist off premises and can
be managed by a third party. Thus, two private cloud scenarios exist, as
follows:
1. On-site Private Cloud
Applies to private clouds implemented at a
customer’s premises.
2. Out-sourced Private Cloud
Applies to private clouds where the server side is outsourced
to a hosting company.
On-Site Private Cloud Cont……..
A security perimeter does not guarantee control over the private cloud’s
resources, but the subscriber can exercise control over them.
Out Sourced Private Cloud Cont……..
Organizations considering the use of an outsourced private cloud should
consider:
Network Dependency (outsourced-private): In the outsourced private scenario,
subscribers may have an option to provision unique protected and reliable
communication links with the provider.
Workload locations are hidden from clients (outsourced-private):
Risks from multi-tenancy (outsourced-private): The implications are the same as
those for an on-site private cloud.
Data import/export, and performance limitations (outsourced-private): On-
demand bulk data import/export is limited by the network capacity between a
provider and subscriber, and real-time or critical processing may be problematic
because of networking limitations. In the outsourced private cloud scenario,
however, these limits may be adjusted, although not eliminated, by provisioning
high-performance and/or high-reliability networking between the provider and
subscriber.
Potentially strong security from external threats (outsourced-private): As with the
on-site private cloud scenario, a variety of techniques exist to harden a security
perimeter. The main difference with the outsourced private cloud is that the
techniques need to be applied both to a subscriber's perimeter and provider's
69
perimeter, and that the communications link needs to be protected.
Community Cloud
On-Site Community Cloud Cont…..
1. Community cloud is made up of a set of participant organizations.
Each participant organization may provide cloud services, consume cloud
services, or both
2. At least one organization must provide cloud services
3. Each organization implements a security perimeter
On-Site Community Cloud Cont…..
4. The participant organizations are connected via links between the boundary
controllers that allow access through their security perimeters
5. Access policy of a community cloud may be complex
Ex.: if there are N community members, a decision must be made, either
implicitly or explicitly, on how to share each member's local cloud resources
with each of the other members.
Policy specification techniques such as role-based access control (RBAC) and
attribute-based access control (ABAC) can be used to express sharing policies.
[Role-based access control (RBAC) restricts network access based on a person's
role within an organization; a role carries a set of permissions that allow users
to, for example, read, edit, or delete articles in a writing application.]
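The RBAC idea above can be sketched in a few lines: permissions attach to roles, and users acquire permissions only through role membership. The role names, users, and actions below are invented for illustration; real community-cloud policies would be far richer.

```python
# Toy RBAC sketch for expressing sharing policies: access is decided by
# role membership, never by per-user rules (all names illustrative).

ROLE_PERMISSIONS = {
    "reader": {"read"},
    "editor": {"read", "edit"},
    "admin":  {"read", "edit", "delete"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"reader"},
}

def is_allowed(user, action):
    """Union the permissions of all of the user's roles, then check."""
    perms = set()
    for role in USER_ROLES.get(user, set()):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return action in perms

print(is_allowed("alice", "delete"), is_allowed("bob", "edit"))
```

Changing what a whole class of users may do then means editing one role, not N user entries, which is why RBAC scales well for multi-member community clouds.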
Out-Sourced Community Cloud Cont…..
Organizations considering the use of an Out-Sourced community cloud
should consider:
1. Network dependency (outsourced-community):
The network dependency of the outsourced community cloud is similar
to that of the outsourced private cloud. The primary difference is that
multiple protected communications links are likely from the community
members to the provider's facility.
2. Workload locations are hidden from clients (outsourced-
community).
Same as the outsourced private cloud
3. Risks from multi-tenancy (outsourced-community):
Same as the on-site community cloud
4. Data import/export, and performance limitations (outsourced- community):
Same as outsourced private cloud
Hybrid Cloud
The cloud infrastructure is a composition of two or more distinct cloud
infrastructures (private, community, or public) that remain unique entities,
but are bound together by standardized or proprietary technology that
enables data and application portability
It provides greater flexibility, portability, and scalability than the other
deployment models, e.g. cloud bursting for load balancing between clouds.
[Cloud bursting is a configuration method that uses cloud computing resources whenever
on-premises infrastructure reaches peak capacity. When organizations run out of computing
resources in their internal data center, they burst the extra workload to external third-party
cloud services.]
Hybrid clouds have significant variations in performance, reliability, and security
properties depending on the types of clouds chosen to build them.
A hybrid cloud can be extremely complex
A hybrid cloud may change over time with constituent clouds joining and
leaving.
e.g. Windows Azure (capable of hybrid cloud), VMware Cloud (hybrid
cloud services)
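The cloud-bursting configuration described above reduces to a simple placement rule, sketched here under an assumed fixed on-premises capacity (the capacity figure and workload units are illustrative, not from any real deployment).

```python
# Hedged sketch of cloud bursting: route work to the on-premises pool until
# it reaches capacity, then "burst" the overflow to a public cloud.

ON_PREM_CAPACITY = 100  # illustrative units of concurrent workload

def place_workloads(total_workload):
    on_prem = min(total_workload, ON_PREM_CAPACITY)
    burst_to_public = max(0, total_workload - ON_PREM_CAPACITY)
    return {"on_prem": on_prem, "public_cloud": burst_to_public}

print(place_workloads(80))   # fits entirely on premises
print(place_workloads(130))  # overflow bursts to the public cloud
```

The private side handles the steady baseline while the public side absorbs only the peaks, which is the economic argument for the hybrid model.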
Cloud Interoperability & Standards
Cloud Interoperability & Standards Cont…
• Data Portability – Also termed cloud portability, this refers to the transfer of data
from one source or service to another, i.e. from one application to another or
from one cloud service to another, with the aim of providing a better service to
the customer without affecting its usability.
• Application Portability – It enables re-use of various application components in
different cloud PaaS services.
Portability between development and operational environments is a specific
form of application portability that arises with cloud computing. Cloud computing
brings development and operations closer together and contributes to the
gradual convergence of the two.
Cloud PaaS is especially appealing for development environments because it
eliminates the need for investment in costly infrastructure that would sit
unused once development is complete.
However, if a different environment is to be used at runtime, whether on in-house
systems or on various cloud platforms, it is important that applications can be
transferred between the two environments.
This will only work if the same environment is used for development and
operation, or if portability exists between the environments used for
development and operation.
Cloud Interoperability & Standards Cont…
• Platform Portability – There are two types of platform portability: platform source
portability and machine image portability.
– Platform source portability is exemplified by the UNIX operating system. UNIX is
mostly written in the C programming language, and by re-compiling it and re-writing
a few small hardware-dependent parts that are not coded in C, it can be implemented
on different hardware. This is the conventional approach to platform portability. It
also allows portability of applications, because applications that use the standard
interface of the operating system can similarly be re-compiled and run on devices
with different hardware.
Cloud Interoperability & Standards Cont…
Application Interoperability – Interoperability of applications manages
application-to-application communication using external services (e.g.,
middleware services). Application interoperability translates the processing
functions of existing systems into new programs.
Management Interoperability – Here, cloud services like SaaS, PaaS, or IaaS
and applications related to self-service are assessed. This will become
predominant as cloud services allow enterprises to bring work in-house and
eradicate dependency on third parties.
Cloud Interoperability & Standards Cont…
The below figure represents an overview of Cloud interoperability and
portability :
Cloud Interoperability & Standards Cont…
Major scenarios where interoperability and portability are required: the Cloud
Standards Customer Council (CSCC) has identified some of the basic scenarios where
portability and interoperability are required.
Switching between cloud service providers –The customer wants to transfer data
or applications from Cloud 1 to Cloud 2.
Using multiple cloud service providers- The client may subscribe to the same or
different services e.g. Cloud 1 and 2.
Directly linked cloud services- The customer can use the service by linking to
Cloud 1 and Cloud 3.
Hybrid Cloud configuration- Here the customer connects with a legacy system
hosted not in a public but in a private cloud, i.e. Cloud 1, which is then
connected to public cloud services, i.e. Cloud 3.
Scaling
1. Vertical Scaling
To understand vertical scaling, imagine a 20-story hotel. There are innumerable
rooms inside this hotel from where the guests keep coming and going. Often
there are spaces available, as not all rooms are filled at once. People can move
easily as there is space for them. As long as the capacity of this hotel is not
exceeded, no problem. This is vertical scaling.
With computing, you can add or subtract resources, including memory or
storage, within the server, as long as the resources do not exceed the capacity
of the machine.
Although it has its limitations, it is a way to improve your server and avoid
latency and extra management. Like in the hotel example, resources can
come and go easily and quickly, as long as there is room for them.
Scaling Cont….
2. Horizontal Scaling
Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars
travel smoothly in each direction without major traffic problems. But then the
area around the highway develops - new buildings are built, and traffic increases.
Very soon, this two-lane highway is filled with cars, and accidents become
common. Two lanes are no longer enough. To avoid these issues, more lanes are
added, and an overpass is constructed. Although it takes a long time, it solves
the problem.
Horizontal scaling refers to adding more servers to your network, rather than
simply adding resources like with vertical scaling. This method tends to take
more time and is more complex, but it allows you to connect servers together,
handle traffic efficiently and execute concurrent workloads.
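The "more lanes" idea of horizontal scaling can be sketched with a round-robin load balancer distributing requests across added servers. This is a toy illustration with invented server names, not a production balancing algorithm.

```python
# Toy sketch: horizontal scaling spreads requests across multiple servers,
# here with simple round-robin load balancing (server names illustrative).

from itertools import cycle

servers = ["web-1", "web-2", "web-3"]   # added servers = added lanes
balancer = cycle(servers)

def handle_request(request_id):
    server = next(balancer)             # pick the next server in rotation
    return f"request {request_id} -> {server}"

routed = [handle_request(i) for i in range(5)]
print(routed)
```

Adding capacity then means appending a server to the list rather than upgrading one machine, which is exactly the contrast with vertical scaling drawn above.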
Scaling Cont….
3. Diagonal Scaling: It is a mixture of both Horizontal and Vertical scalability where
the resources are added both vertically and horizontally.
When you combine vertical and horizontal, you simply grow within your existing
server until you hit the capacity. Then, you can clone that server as necessary
and continue the process, allowing you to deal with a lot of requests and traffic
concurrently.
Scaling Cont….
Benefits of cloud scalability
Convenience: Often, with just a few clicks, IT administrators can easily add
more VMs that are available and customized to an organization's exact needs,
without delay. Teams can focus on other tasks instead of setting up physical
hardware for hours or days, saving valuable IT staff time.
Cost Savings: Businesses can avoid the upfront cost of purchasing expensive
equipment that can become obsolete in a few years. Through cloud providers,
they only pay for what they use and reduce waste.
Disaster recovery: With scalable cloud computing, you can reduce disaster
recovery costs by eliminating the need to build and maintain secondary data
centers.
Scaling Cont….
When to Use Cloud Scalability?
• Scalability is one of the driving reasons for migrating to the cloud. Whether traffic or workload demands increase suddenly or grow gradually over time, a scalable cloud solution enables organizations to respond appropriately and cost-effectively to increased storage and performance needs.
How to determine optimal cloud scalability?: Changing business needs or increasing demand often necessitate changes to your scalable cloud solution. But how much storage, memory, and processing power do you need? Will you scale up or out?
To determine the correct size solution, continuous performance testing is
essential. IT administrators must continuously measure response times,
number of requests, CPU load, and memory usage.
Automation can also help optimize cloud scalability. You can set a threshold
for usage that triggers automatic scaling so as not to affect performance.
You may also consider a third-party configuration management service or
tool to help you manage your scaling needs, goals, and implementation.
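The threshold-triggered automation described above can be sketched as a simple evaluation loop. The thresholds and VM limits below are hypothetical values chosen for illustration, not defaults from any particular provider.

```python
# Hypothetical threshold-based autoscaling rule: one evaluation cycle
# decides whether to add or remove a VM based on average CPU usage.

SCALE_OUT_CPU = 80.0   # % CPU above which we add a VM (assumed threshold)
SCALE_IN_CPU = 20.0    # % CPU below which we remove a VM (assumed threshold)
MIN_VMS, MAX_VMS = 1, 10

def autoscale(current_vms, avg_cpu_percent):
    """Return the new VM count for one monitoring cycle."""
    if avg_cpu_percent > SCALE_OUT_CPU and current_vms < MAX_VMS:
        return current_vms + 1          # scale out under high load
    if avg_cpu_percent < SCALE_IN_CPU and current_vms > MIN_VMS:
        return current_vms - 1          # scale in when idle, to cut cost
    return current_vms                  # within thresholds: no change

print(autoscale(3, 92.0))  # 4: high load triggers scale-out
print(autoscale(3, 10.0))  # 2: idle triggers scale-in
print(autoscale(3, 55.0))  # 3: steady load, no change
```

Real autoscalers add cooldown periods and average the metric over a window so brief spikes do not cause oscillation; the min/max bounds above play the same safety role described in the text.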
Virtual Desktop Infrastructure (VDI)
Virtual Desktop Infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops.
VDI hosts desktop environments on a centralized server and deploys them to end-
users on request.
How does VDI work? : In VDI, a hypervisor segments servers into virtual
machines that in turn host virtual desktops, which users access remotely from their
devices.
All processing is done on the host server.
Users connect to their desktop instances through a connection broker, a software-based gateway that acts as an intermediary between the user and the server.
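The broker's intermediary role can be sketched as follows. This is a toy model meant only to show the idea of brokering user sessions onto desktop VMs; the class and VM names are invented for the example.

```python
# Toy sketch of a VDI connection broker: it assigns users to desktop VMs
# and reuses an existing session when the same user reconnects.

class ConnectionBroker:
    def __init__(self, available_vms):
        self.available = list(available_vms)  # idle desktop VMs
        self.sessions = {}                    # user -> assigned VM

    def connect(self, user):
        """Assign the user a desktop VM, reusing an existing session if any."""
        if user in self.sessions:
            return self.sessions[user]
        if not self.available:
            raise RuntimeError("no desktop VMs available")
        vm = self.available.pop(0)
        self.sessions[user] = vm
        return vm

    def disconnect(self, user):
        """Return the user's VM to the idle pool."""
        self.available.append(self.sessions.pop(user))

broker = ConnectionBroker(["vm-01", "vm-02"])
print(broker.connect("alice"))  # vm-01
print(broker.connect("alice"))  # vm-01 (same session reused)
```

A real broker also authenticates the user and selects a remoting protocol, but the mapping of users to server-hosted desktops is the core function.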
Virtual Desktop Infrastructure Cont…..
Two approaches to deploying desktops
1.Persistent desktop. Each user is assigned a unique desktop instance, which they
can customize to their individual preferences.
With persistent VDI, a user connects to the same desktop each time, and users
can personalize the desktop for their needs since changes are saved even after
the connection is reset. In other words, desktops in a persistent VDI
environment act like personal physical desktops.
With persistent VDI, each user gets his or her own persistent virtual desktop --
also known as a one-to-one ratio.
2. Nonpersistent desktop. Users can access a pool of uniform desktop images as
needed to perform tasks. Nonpersistent desktops are many-to-one, meaning that
they are shared among end users.
These nonpersistent desktops revert to their original state after each use,
rather than being personalized for a unique user.
Non-persistent VDI, where users connect to generic desktops and no changes are saved, is usually simpler and cheaper, since there is no need to maintain customized desktops between sessions.
As a result, Nonpersistent VDI is often used in organizations with many task
workers or employees who perform a limited set of repetitive tasks and don’t
need a customized desktop.
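The difference between the two deployment models can be sketched in code. The "golden image" and its fields are invented for illustration; the point is only that persistent desktops keep per-user changes while nonpersistent ones revert after each session.

```python
# Illustrative contrast of persistent vs. nonpersistent desktops:
# nonpersistent desktops revert to a shared golden image on logoff.

GOLDEN_IMAGE = {"wallpaper": "default", "apps": ["browser"]}

class Desktop:
    def __init__(self, persistent):
        self.persistent = persistent
        self.state = dict(GOLDEN_IMAGE)   # start from the base image

    def customize(self, key, value):
        self.state[key] = value

    def end_session(self):
        """On logoff, nonpersistent desktops revert to the original image."""
        if not self.persistent:
            self.state = dict(GOLDEN_IMAGE)

p = Desktop(persistent=True)
n = Desktop(persistent=False)
for d in (p, n):
    d.customize("wallpaper", "beach")
    d.end_session()

print(p.state["wallpaper"])  # beach   (changes survive the session)
print(n.state["wallpaper"])  # default (reverted to the golden image)
```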
Virtual Desktop Infrastructure Cont…..
Why VDI? : VDI offers a number of advantages, such as user mobility, ease of
access, flexibility and greater security. In the past, high-performance requirements
made it costly and challenging to deploy on legacy systems, which posed a barrier
for many businesses. However, the rise in enterprise adoption of hyperconverged
infrastructure (HCI) offers a solution that provides scalability and high
performance at a lower cost.
Benefits of VDI: Although VDI’s complexity means that it isn’t necessarily the right choice for every organization, it offers a number of benefits for organizations:
1. Remote access: VDI users can connect to their virtual desktop from any location
or device, making it easy for employees to access all their files and applications
and work remotely from anywhere in the world.
2. Cost savings: Since processing is done on the server, the hardware
requirements for end devices are much lower. Users can access their virtual
desktops from older devices, thin clients, or even tablets, reducing the need for
IT to purchase new and expensive hardware.
3. Security: In a VDI environment, data lives on the server rather than the end
client device. This serves to protect data if an endpoint device is ever stolen or
compromised.
4. Centralized management: VDI’s centralized format allows IT to easily patch,
update or configure all the virtual desktops in a system.
Virtual Desktop Infrastructure Cont…..
What is VDI used for? There are a number of use cases that are uniquely suited for
VDI, including:
1.Remote work: Since VDI makes virtual desktops easy to deploy and update from
a centralized location, an increasing number of companies are implementing it for
remote workers.
2.Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices. Since processing is done on
a centralized server, VDI allows the use of a wider range of devices. It also offers
better security, since data lives on the server and is not retained on the end
client device.
3.Task or shift work: Nonpersistent VDI is particularly well suited to organizations
such as call centers that have a large number of employees who use the same
software to perform limited tasks.
Virtual Desktop Infrastructure Cont…..
How to implement VDI?: Larger enterprises should consider implementing it in an
HCI environment, as HCI’s scalability and high performance are a natural fit for
VDI’s resource needs. On the other hand, implementing HCI for VDI is probably not necessary (and would be overly expensive) for organizations that require fewer than 100 virtual desktops.
Best practices to follow when implementing VDI:
Prepare Your Network: Since VDI performance is so closely linked to network
performance, it’s important to know peak usage times and anticipate demand
spikes to ensure sufficient network capacity.
Avoid Underprovisioning: Perform capacity planning in advance using a
performance monitoring tool to understand the resources each virtual desktop
consumes and to make sure you know your overall resource consumption needs.
Understand Your End-Users’ Needs: Identify whether the organization is better suited to a persistent or non-persistent VDI setup, and what the users’ performance requirements are. Accordingly, provision the setup differently for users who run graphics-intensive applications versus those who just need access to the internet or to one or two simple applications.
Perform a Pilot Test: Most virtualization providers offer testing tools that you
can use to run a test VDI deployment beforehand; it’s important to do so to make
sure you’ve provisioned your resources correctly.
Virtual Desktop Infrastructure Cont…..
The typical call center is an excellent example of how this model directly supports the needs of a team of people. Each member of the team performs a specific set of tasks, none of which requires a nonstandard desktop.
3. Contract employees: When temporary contractors join a team, they need access
to some of the core assets and team members, but security is an important
consideration. By using a virtual desktop, it’s possible to control access to corporate
resources while delivering the connection point for the temporary workers.
Contractors are able to perform tasks that use organizational resources without having access to systems that are unrelated to the contract.
Fog Computing and Edge Computing
Nowadays, a massive amount of data is generated every second around the
globe.
Businesses collect and process that data from the people and get analytics to
scale their business.
When many organizations access their data simultaneously on remote servers in data centers, network congestion can occur, causing delays in data access, reduced bandwidth, and similar problems.
Cloud computing technology alone is not effective enough to store and
process massive amounts of data and respond quickly.
For example, in a Tesla self-driving car, sensors constantly monitor the regions around the car. If an obstacle or pedestrian is detected in its path, the car must stop or steer around it without a collision. The sensor data must therefore be processed quickly enough for the car to react before impact; even a small delay in detection could be a major issue. Edge computing and fog computing were introduced to overcome such challenges.
Fog Computing
Fog computing or fog networking, also known as fogging, is an architecture that uses edge devices to carry out a substantial amount of computation (edge computing), storage, and communication locally before routing it over the Internet backbone.
Fog computing is a decentralized computing infrastructure in which data, compute,
storage and applications are located somewhere between the data source and the
cloud.
The devices comprising the fog infrastructure are known as fog nodes.
Fog Computing Cont…..
In 2011, the need to extend cloud computing with fog computing emerged, in order
to cope with huge number of IoT devices and big data volumes for real-time low-
latency applications.
Fog computing, closely related to edge computing, is intended for distributed computing where numerous "peripheral" devices connect to a cloud.
The word "fog" refers to its cloud-like properties, but closer to the "ground", i.e.
IoT devices.
Many of these devices will generate voluminous raw data (e.g., from sensors),
and rather than forward all this data to cloud-based servers to be processed, the
idea behind fog computing is to do as much processing as possible using
computing units co-located with the data-generating devices, so that
processed rather than raw data is forwarded, and bandwidth requirements are
reduced.
An additional benefit is that the processed data is most likely to be needed by the
same devices that generated the data, so that by processing locally rather than
remotely, the latency between input and response is minimized.
This idea is not entirely new: in non-cloud-computing scenarios, special-purpose
hardware (e.g., signal-processing chips performing Fast Fourier Transforms) has
long been used to reduce latency and reduce the burden on a CPU.
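The "forward processed rather than raw data" idea can be sketched with a hypothetical fog node that summarizes sensor readings locally. The readings and summary fields are invented for the example; the point is the bandwidth saving from forwarding a compact summary instead of every raw sample.

```python
# Sketch of local processing at a fog node: reduce many raw sensor samples
# to a small summary before forwarding to the cloud.

def summarize(readings):
    """Condense raw samples into a compact summary for cloud-bound traffic."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.4, 22.1, 35.9, 21.2]  # e.g. one interval of temperature samples
summary = summarize(raw)

# Forwarding four numbers instead of every raw sample is where the bandwidth
# saving comes from; the ratio grows with the device's sampling rate.
print(summary["count"], round(summary["mean"], 2))  # 5 24.32
```

The local summary is also immediately available to the device that produced the data, which is the latency benefit described above.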
Fog Computing Cont…..
Fog computing emphasizes:
Proximity (closeness ) to end-users and client objectives (e.g. operational costs,
security policies, resource exploitation)
Dense geographical distribution and context-awareness (for what concerns
computational and IoT resources),
Latency reduction and backbone bandwidth savings to achieve better quality of
service (QoS)
Edge analytics/stream mining, resulting in superior user-experience and
redundancy in case of failure while it is also able to be used in Assisted
Living scenarios.
Fog networking supports the Internet of Things (IoT) concept, in which most of the
devices used by humans on a daily basis will be connected to each other.
Examples include phones, wearable health monitoring devices, connected
vehicle and augmented reality using devices such as the Google Glass.
IoT devices are often resource-constrained and have limited computational
abilities to perform cryptography computations.
A fog node can provide security for IoT devices by performing these cryptographic computations on their behalf.
Fog Computing Cont…..
Cisco coined the phrase "fog computing," which refers to extending cloud computing to the edge of an enterprise's network. As a result, it is also known as fogging or edge computing. It makes computation, storage, and networking services more accessible between end devices and computing data centers.
Fog computing is the computing, storage, and communication architecture that
employs EDGE devices to perform a significant portion of computation, storage,
and communication locally before routing it over the Internet backbone.
The goal of fog computing is to conduct as much processing as possible using
computing units that are co-located with data-generating devices so that processed
data rather than raw data is sent and bandwidth needs are decreased.
Another advantage of processing locally rather than remotely is that the processed data is most likely to be needed by the same devices that created it, and the latency between input and response is minimized.
Fog Computing Cont…..
History of fog computing: The term fog computing was coined by Cisco in January 2014. Just as fog refers to clouds that lie close to the ground, fog computing refers to nodes that sit close to the host, somewhere between the host and the cloud. It was intended to bring the computational capabilities of the system close to the host machine. After the idea gained popularity, IBM, in 2015, coined a similar term, “edge computing”.
When to use fog computing? Fog Computing can be used in the following
scenarios:
It is used when only selected data is required to send to the cloud. This
selected data is chosen for long-term storage and is less frequently accessed by
the host.
It is used when the data should be analyzed within a fraction of a second, i.e. latency should be low.
It is used whenever a large number of services need to be provided over a large
area at different geographical locations.
Devices that are subjected to rigorous computation and processing should use fog computing.
Real-world examples where fog computing is used include IoT devices (e.g. the Car-to-Car Consortium, Europe), devices with sensors, and cameras (IIoT, the Industrial Internet of Things).
Fog Computing Cont…..
Advantages of fog computing
This approach reduces the amount of data that needs to be sent to the cloud.
Since the distance to be traveled by the data is reduced, it results in saving
network bandwidth.
Reduces the response time of the system.
It improves the overall security of the system as the data resides close to the
host.
It provides better privacy as industries can perform analysis on their data locally.
Disadvantages of fog computing
Congestion may occur between the end devices and the fog node due to
increased traffic (heavy data flow).
Power consumption increases when another layer is placed between the end
devices and the cloud.
Scheduling tasks between end devices and fog nodes along with fog nodes and
the cloud is difficult.
Data management becomes tedious: in addition to the data being stored and computed, transmitting it also involves encryption and decryption, which adds further overhead.
Fog Computing Cont…..
Applications of fog computing
• It can be used to monitor and analyze the patients’ condition. In case of
emergency, doctors can be alerted.
• It can be used for real-time rail monitoring as for high-speed trains we want as
little latency as possible.
Mist (Edge) Computing
Cloud, fog and edge computing may appear similar, but they are different layers
of the IIoT.
Edge computing for the IIoT allows processing to be performed locally at multiple
decision points for the purpose of reducing network traffic.
As the name implies, edge computing occurs exactly at ‘the edge’ of the
application network.
In terms of topology, this means that an ‘edge computer’ is right next to or even
on top of the endpoints (such as controllers and sensors) connected to the
network. The data is then either partially or entirely processed and sent to the
cloud for further processing or storage.
Edge Computing Cont…..
The number of devices connected to enterprise networks and the volume of data
being generated by them are scaling at a pace that is too rapid for traditional
data centers to keep up with.
– Such a situation could lead to tremendous strain on both local networks and the
internet at large.
– To address this threat of congestion and help enhance the reliability of big data
processing systems, IT infrastructure has evolved to bring computing resources
to the point of data generation. Edge computing removes the reliance on a
single, centralized data processing center. Instead, it makes computing more
efficient by bringing data centers closer to where they are actually needed.
However, edge computing can lead to large volumes of data being transferred
directly to the cloud. This can affect system capacity, efficiency, and security.
Fog computing addresses this problem by inserting a processing layer between
the edge and the cloud. This way, the ‘fog computer’ receives the data gathered
at the edge and processes it before it reaches the cloud.
Edge Computing Cont…..
As such, edge computing and fog computing work in unison to minimize latency
and maximize the efficiency associated with cloud-enabled enterprise systems.
IT personnel commonly view the terms edge computing and fog computing as interchangeable. This is because both approaches bring processing and intelligence closer to the data source.
Edge computing defined: Edge computing brings processing and storage systems as
close as possible to the application, device, or component that generates and
collects data.
This helps minimize processing time by removing the need for transferring data to a
central processing system and back to the endpoint. As a result, data is processed
more efficiently, and the need for internet bandwidth is reduced.
This keeps operating costs low and enables the use of applications in remote
locations that have unreliable connectivity.
Security is also enhanced as the need for interaction with public cloud platforms
and networks is minimized.
Edge Computing Cont…..
Edge computing is useful for environments that require real-time data processing
and minimal latency.
This includes applications such as autonomous vehicles, the internet of things
(IoT), software as a service (SaaS), rich web content delivery, voice assistants,
predictive maintenance, and traffic management.
Autonomous vehicle edge computing devices collect data from cameras and
sensors on the vehicle, process it, and make decisions in milliseconds, such as
self-parking cars.
In healthcare, in order to accurately assess a patient’s condition and anticipate treatments, data is processed from a variety of edge devices connected to sensors and monitors.
Examples of edge devices are sensors, laptops, and smart phones.
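The latency argument for edge computing, in scenarios like the autonomous-vehicle example above, can be made concrete with a back-of-the-envelope check. All three numbers below are assumed, illustrative values, not measurements.

```python
# Hedged sketch of the edge-vs-cloud latency argument: a decision made
# on-device can meet a reaction deadline that a cloud round trip cannot.
# All timing values are illustrative assumptions.

EDGE_LATENCY_MS = 5       # assumed on-device processing time
CLOUD_ROUNDTRIP_MS = 120  # assumed network round trip to a remote data center
BRAKE_DEADLINE_MS = 50    # assumed time budget to react to an obstacle

def can_react_in_time(processing_ms):
    """Does the processing path fit within the reaction deadline?"""
    return processing_ms <= BRAKE_DEADLINE_MS

print(can_react_in_time(EDGE_LATENCY_MS))     # True: edge meets the deadline
print(can_react_in_time(CLOUD_ROUNDTRIP_MS))  # False: the round trip is too slow
```

However plausible the exact numbers, the structure of the argument is fixed: once the network round trip alone exceeds the application's deadline, no amount of cloud-side speed helps, which is why the decision must be made at the edge.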
Edge Computing Cont…..
The aim of edge computing is to move the computation away from data centers
towards the edge of the network, exploiting smart objects, mobile phones,
or network gateways to perform tasks and provide services on behalf of the cloud.
In edge computing, computation takes place at the edge of a device’s network: a computer connected to the device’s network processes the data and sends the results to the cloud in real time. That computer is known as an “edge computer” or “edge node”.
Fog Computing Vs Edge Computing
1. Concept:
Edge Computing: Edge computing is defined as a computing architecture that
brings data processing as close to the source of data as physically possible.
– In many cases, data collection and processing occur on the same device, such as on an
endpoint computer or IoT device. This minimizes bandwidth use and latency.
– i.e computing processes take place locally, thus reducing the need for long-distance
data transfers to cloud servers, which can be expensive and slow.
Fog Computing: Fog computing services are a slightly more customized version of edge
computing and may need to be set up either from scratch or using a combination
of ‘as a service’ deployments.
• While this is likely to mean a higher price tag than edge computing, the benefits of
fog computing are manifold. For instance, fog computing creates an economic
opportunity through massive savings in terms of bandwidth, latency, computing,
and storage.
• Depending on the use case, fog computing provides an economically viable
alternative to large data centers.