
Cloud Computing

Unit-II
Cloud Computing Architecture

By Dr. Samta Gajbhiye

Cloud Computing

• NIST’s Cloud Computing Definition and Model: NIST’s cloud model (definition) is composed of:
 Five essential characteristics
 Three service models
 Four deployment models

Cloud Reference Model/Architecture
The NIST cloud computing reference architecture identifies the major actors and their activities and functions in cloud computing by partitioning it into abstraction layers and cross-layer functions.
 This reference model groups the cloud computing functions and activities
into five logical layers and three cross-layer functions.
 The five layers are: physical layer, virtual layer, control layer, service
orchestration layer, and service layer.
 Each of these layers specifies various types of entities that may exist in a cloud computing environment, such as compute systems, network devices, storage devices, virtualization software, security mechanisms, control software, orchestration software, and management software.
[A reference model is an abstract framework for understanding significant relationships
among the entities of some environment, and for the development of consistent
standards or specifications supporting that environment. It is based on a small number of
unifying concepts and may be used as a basis for education and explaining standards. It is not
directly tied to any standards, technologies, or other concrete implementation details, but it
does seek to provide a common semantics that can be used unambiguously across and
between different implementations]
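To keep the five layers and their entities straight, the mapping described above can be written down as a small Python dictionary. This is only an illustrative summary of the text, not part of any NIST specification:

```python
# Illustrative summary of the five logical layers and representative entities
# named in this section; the structure itself is not defined by NIST.
REFERENCE_MODEL = {
    "physical": ["compute systems", "network devices", "storage devices"],
    "virtual": ["virtualization software", "resource pools", "virtual resources"],
    "control": ["control software (hypervisor management software)"],
    "service orchestration": ["orchestration software"],
    "service": ["service catalog", "self-service portal"],
}
CROSS_LAYER_FUNCTIONS = ["business continuity", "security", "service management"]

for layer, entities in REFERENCE_MODEL.items():
    print(f"{layer} layer: {', '.join(entities)}")
```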

Cloud computing abstraction layers

 Physical Layer: The physical layer consists of the hardware resources that are necessary to support the cloud services being provided, and typically includes server, storage, and network components. The abstraction layer consists of the software deployed across the physical layer, which manifests the essential cloud characteristics.
 Foundation layer of the cloud infrastructure.
 Specifies entities that operate at this layer : Compute systems, network devices
and storage devices.
 Functions of physical layer : Executes requests generated by the virtualization and control layers.

Cloud computing abstraction layers Cont….
 Virtual Layer
 Deployed on the physical layer. It is built by deploying virtualization software on
compute systems, network devices, and storage devices.
 Specifies entities that operate at this layer : Virtualization software, resource
pools, virtual resources.
 Virtualization process and operations:
 Step 1: Deploy virtualization software on: Compute systems, Network
devices, Storage devices
 Step 2: Create resource pools: Processing power and memory, Network
bandwidth, Storage
 Step 3: Create virtual resources: Virtual machines, Virtual networks, Virtual
resources are packaged and offered as services
 Functions of virtual layer :
 Abstracts physical resources and makes them appear as virtual resources
(enables multitenant environment).
 Executes the requests generated by control layer.
 This layer enables fulfilling two key characteristics of cloud infrastructure:
resource pooling and rapid elasticity (see the sketch below).
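The three-step virtualization process and the pooling behavior described above can be sketched in a few lines of Python. The ResourcePool class, its method names, and the capacity figures are hypothetical; real virtualization software exposes far richer interfaces:

```python
# Sketch of resource pooling: physical capacities are aggregated into a pool
# (Step 2) and virtual machines are carved out of the pooled capacity (Step 3).
class ResourcePool:
    def __init__(self):
        self.cpu_mhz = 0     # aggregated processing power, in megahertz
        self.memory_mb = 0   # aggregated memory, in megabytes

    def add_host(self, cpu_mhz: int, memory_mb: int) -> None:
        """Pool the capacity of one physical compute system."""
        self.cpu_mhz += cpu_mhz
        self.memory_mb += memory_mb

    def create_vm(self, cpu_mhz: int, memory_mb: int) -> dict:
        """Allocate a virtual resource from the pooled capacity."""
        if cpu_mhz > self.cpu_mhz or memory_mb > self.memory_mb:
            raise RuntimeError("insufficient pooled capacity")
        self.cpu_mhz -= cpu_mhz
        self.memory_mb -= memory_mb
        return {"cpu_mhz": cpu_mhz, "memory_mb": memory_mb}

pool = ResourcePool()
pool.add_host(cpu_mhz=2 * 2400, memory_mb=16384)   # host 1: 2 cores @ 2.4 GHz
pool.add_host(cpu_mhz=4 * 3000, memory_mb=32768)   # host 2: 4 cores @ 3.0 GHz
vm = pool.create_vm(cpu_mhz=2000, memory_mb=4096)  # VM shares pooled resources
print(pool.cpu_mhz, pool.memory_mb)                # remaining pooled capacity
```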
Cloud computing abstraction layers -Virtual Layer Cont….
[ A resource pool is an aggregation of computing resources, such as processing
power, memory, storage, and network bandwidth, which provides an
aggregated view of these resources to the control layer.]
 Virtualization software in collaboration with control layer creates virtual
resources. These virtual resources are created by allocating physical resources from
the resource pool. These virtual resources share pooled physical resources.
Examples of virtual resources include virtual machines, LUNs, and virtual
networks
 For example, storage virtualization software pools capacity of multiple storage
devices to appear as a single large storage capacity.
 By using compute virtualization software, the processing power and memory capacity of the pooled physical compute systems can be viewed as an aggregation of the power of all processors (in megahertz) and all memory (in megabytes).
[A logical unit number (LUN) is a number used for identifying a logical unit relating to computer storage. A logical unit is a device addressed by protocols such as Fibre Channel, Small Computer System Interface (SCSI), Internet SCSI (iSCSI), and other comparable interfaces. Alternatively, a LUN is a slice or portion of a configured set of disks that is presentable to a host and mounted as a volume within the OS.]
Cloud computing abstraction layers -Virtual Layer Cont….

 Hypervisor kernel: Designed to run multiple virtual machines concurrently.
 The hypervisor is software that is installed on a compute system and enables multiple operating systems to run concurrently on the same physical compute system. The software used for compute virtualization is known as the hypervisor.
 Virtual machine manager (VMM): Each virtual machine is assigned a VMM that gets a share of the processor, memory, I/O devices, and storage from the physical compute system to successfully run the virtual machine.
 The hypervisor abstracts the physical compute hardware to create multiple virtual machines, which to the operating systems look and behave like physical compute systems. The hypervisor provides standardized hardware resources, such as processor, memory, network, and disk, to all the virtual machines.

Cloud computing abstraction layers Cont….
 Control Layer: Deployed either on virtual layer or on physical layer
 The control layer includes control software that is responsible for managing and controlling the underlying cloud infrastructure resources and enables provisioning of IT resources for creating cloud services.
 Specifies entities that operate at this layer : control software (hypervisor
management software )
 The hypervisor, along with hypervisor management software (also known as control software), is the fundamental component for deploying a software-defined compute environment.
 Functions of control layer :
 Enables resource configuration, resource pool configuration and resource
provisioning. Executes requests generated by service layer.
[space and resources are pooled to serve multiple clients at one time.
Depending on a client's resource consumption, usage can be set to provide
more or less at any given time]
 Exposes resources to and supports the service layer.
 Collaborates with the virtualization software (hypervisor) and enables resource pooling, creation of virtual resources, dynamic allocation, and optimized utilization of resources.
Cloud computing abstraction layers Cont….
 Service Orchestration Layer: responsible for the governance (Governance is the
process of making and enforcing decisions within an organization or society) , control, and
coordination of the service delivery process.
[Cloud governance is the process of defining, implementing, and monitoring a framework of policies
that guides an organization's cloud operations. This process regulates how users work in cloud
environments to facilitate consistent performance of cloud services and systems.]

 Cloud orchestration can be defined as the coordination, arrangement, or end-to-end automation of the deployment of services in a cloud-based environment.
 It introduces and enforces a workflow for automated activities of various
processes to deliver the desired service to its client.
 Some orchestration tools are Terraform, Ansible, AWS CloudFormation, etc.
 Specifies the entities that operate at this layer : Orchestration software.
[Orchestration is the automated configuration, management, and coordination
of computer systems, applications, and services.]

Cloud computing abstraction layers -Orchestration layer Cont….

 Functions of orchestration layer :


 The orchestration layer in the cloud manages the interactions and
interconnections between cloud-based and on-premises components. These
components include servers, networking, virtual machines, security, and storage.
 The logic of this layer is highly abstract, so it is not always visible to cloud management platform users.
 The goal of the orchestration layer is to optimize and streamline frequent, repeatable processes. This ensures faster, more accurate deployment of services in the cloud.

Cloud computing abstraction layers - Orchestration layer Cont….
 Simplified Optimization
 Automation is a subset of orchestration.
 It means automation handles individual tasks while orchestration integrates
automation.
 Orchestration effectively turns individual tasks into a larger optimized workflow.
 Control and Visibility
 The orchestration layer gives IT admins a unified dashboard: a single-pane-of-glass view of cloud resources.
 It provides IT admins with tools that can help to monitor and modify virtual
machine instances with little manual input.
 Reduce Errors
 It automates small and straightforward operations even without the intervention
of IT staff.
 For example, if a disk runs out of storage space, orchestration can create space by deleting junk applications and files, or add storage tiers in the environment to help organizations better manage storage as specifically applicable to a workload (see the sketch after this list).
 Cost-Effective: The orchestration layer helps with automated cloud metering. It allows organizations to improve cost governance and promotes efficient use of resources. This allows organizations to reduce infrastructure costs and realize long-term cost savings.
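The automation-versus-orchestration distinction, and the disk-space example above, can be illustrated with a minimal sketch; the task functions and values are purely hypothetical:

```python
# Each function is a single automated task; the workflow list strings them
# together end to end, which is what orchestration adds on top of automation.
def check_disk_space(ctx):
    ctx["free_gb"] = 2          # pretend the disk is nearly full
    return ctx

def delete_junk_files(ctx):
    if ctx["free_gb"] < 5:      # automation: one remedial task
        ctx["free_gb"] += 10
    return ctx

def add_storage_tier(ctx):
    if ctx["free_gb"] < 5:      # fall back to provisioning more storage
        ctx["free_gb"] += 100
    return ctx

# Orchestration: individual tasks become one larger, repeatable workflow.
workflow = [check_disk_space, delete_junk_files, add_storage_tier]
ctx = {}
for task in workflow:
    ctx = task(ctx)
print(ctx)  # {'free_gb': 12}
```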
Cloud computing abstraction layers Cont….

 Service Layer
 Consumers interact and consume cloud resources via this layer.
 Specifies the entities that operate at this layer : Service catalog and self-
service portal.
 Functions of service layer :
 Stores information about cloud services in the service catalog and presents them to the consumers (see the sketch below).
 Enables consumers to access and manage cloud services via a self-service
portal.
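A minimal sketch of the two service-layer entities: a service catalog holding information about cloud services, and a self-service ordering function. The service names and attributes are invented for the example:

```python
# The catalog stores information about the services a consumer may order.
SERVICE_CATALOG = {
    "small-vm": {"cpu": 1, "memory_gb": 2, "price_per_hour": 0.02},
    "large-vm": {"cpu": 8, "memory_gb": 32, "price_per_hour": 0.40},
}

def self_service_order(service_name: str) -> dict:
    """A consumer orders a service via the self-service portal."""
    if service_name not in SERVICE_CATALOG:
        raise KeyError(f"{service_name!r} not in catalog")
    # In a real cloud this request would be passed down to the
    # orchestration and control layers for provisioning.
    return {"service": service_name, **SERVICE_CATALOG[service_name]}

print(self_service_order("small-vm"))
```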

Cloud Reference Model Cont….

 The three cross-layer functions are: business continuity, security, and service
management.
 Business continuity and security functions specify various activities, tasks, and
processes that are required to offer reliable and secure cloud services to the
consumers.
 Service management function specifies various activities, tasks, and processes that enable the administration of the cloud infrastructure and services to meet the provider's business requirements and consumers' expectations.

Cross-layer function
1. Business continuity
Measures and description:
 Proactive: Business impact analysis • Risk assessment • Technology solutions deployment (backup and replication)
 Reactive: Disaster recovery • Disaster restart

 Specifies adoption of proactive and reactive measures to mitigate the impact of planned and unplanned downtime.
[Proactive measures are preventive actions taken to decrease the
likelihood of an incident occurring, these measures also set in place techniques
or procedures meant to mitigate the damage caused by the workplace
accident] [reactive measures are usually spontaneous actions that respond to
an accident.]
 Enables ensuring the availability of services in line with the SLA (a worked example follows below).
[A service-level agreement (SLA) is a contract negotiated between a provider and a
consumer that specifies various parameters and metrics such as cost, service availability,
maintenance schedules, performance levels, service desk response time, and consumer’s and
provider’s responsibilities. ]
 Supports all the layers to provide uninterrupted services.
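As a quick worked example of what an SLA availability figure implies, the following computes the allowable downtime for a few common availability levels, assuming a 30-day month:

```python
# Downtime allowed per month at a given availability level (30-day month).
HOURS_PER_MONTH = 30 * 24

for availability in (0.99, 0.999, 0.9999):
    downtime_minutes = HOURS_PER_MONTH * (1 - availability) * 60
    print(f"{availability:.2%} availability -> "
          f"{downtime_minutes:.1f} minutes of downtime per month")
```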
Cross-layer function
2. Security: Specifies the adoption of administrative and technical mechanisms that can mitigate and minimize security threats and provide a secure cloud environment.
 Administrative mechanisms
 security and personnel policies
 standard procedures to direct safe execution of operations
 Technical mechanisms : Usually implemented through tools or devices
deployed on the IT infrastructure.
 Firewall
 intrusion detection and prevention systems
 antivirus
 Deploys security mechanisms to meet GRC requirements.
[Governance, risk, and compliance (GRC) specifies processes that help an organization ensure that its acts are ethically correct and in accordance with its risk appetite (the risk level an organization chooses to accept), internal policies, and external regulations. GRC also refers to an integrated suite of software capabilities for implementing and managing an enterprise GRC program.]
 Supports all the layers to provide secure service.
Cross-layer function
3. Service Management
 Specifies adoption of activities related to: service portfolio management and
service operation management. [portfolio’s meaning can be defined as a collection
of financial assets and investment tools that are held by an individual, a financial
institution or an investment firm]
 Adoption of these activities enables an organization to align the creation and
delivery of cloud services to meet their business objectives and to meet the
expectations of cloud service consumers.
 Service portfolio management encompasses the set of business-related services
that:
 Define the service roadmap, service features, and service levels
 Assess and prioritize where investments across the service portfolio are most
needed
 Establish budgeting and pricing
 Deal with consumers in supporting activities such as taking orders, processing
bills, and collecting payments
 Service portfolio management also performs market research, measures service
adoption, collects information about competitors, and analyzes feedback from
consumers in order to quickly modify and align services according to consumer
needs and market conditions.
Cross-layer function - Service Management Cont…

 Service operation management: Enables cloud administrators to manage cloud infrastructure and services.
 Enables infrastructure configuration and resource provisioning
 Enables problem resolution
 Enables capacity and availability management
 Enables compliance (act of obeying) conformance
 Enables monitoring cloud services and their constituent elements. This enables
the provider to gather information related to resource consumption and bill
generation.
 All of these tasks enable ensuring that services and service levels are delivered as
committed.
 This function supports all the layers to perform monitoring, management, and
reporting for the entities of the infrastructure.

Cloud Services Models
Service Models (XaaS): XaaS is the essence of cloud computing
 The combination of Service-Oriented Infrastructure (SOI) and cloud computing realizes XaaS.

 X as a Service (XaaS) is a generalization for cloud-related services

 XaaS stands for "anything as a service" or "everything as a service".

 XaaS refers to an increasing number of services that are delivered over the Internet rather than provided locally or on-site.

Other examples of XaaS include: Business Process as a Service (BPaaS), Storage as a Service (STaaS), Security as a Service (SECaaS), Database as a Service (DBaaS), Monitoring/Management as a Service (MaaS), Communications, Content and Computing as a Service (CaaS), Identity as a Service (IDaaS), Backup as a Service (BaaS), and Desktop as a Service (DKaaS).
Cloud Services Models Cont…
Cloud Computing 3 major service models:
 Software as a Service (SaaS)
 Platform as a Service (PaaS)
 Infrastructure as a Service (IaaS)

Cloud Services Models Cont…

[Figure omitted. Source: Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance by Tim Mather and Subra Kumaraswamy]
Cloud Services Models SaaS
1. Software as a Service (SaaS)
 SaaS applications are designed for end users and are delivered over the web
 SaaS is defined as software that is deployed over the internet.

 With SaaS, a provider licenses an application to customers either as a service on demand, through a subscription, in a "pay-as-you-go" model, or (increasingly) at no charge when there is opportunity to generate revenue from streams other than the user, such as from advertisement or user list sales.
The capability provided to the consumer is to use the provider’s applications
running on a cloud infrastructure. The applications are accessible from
various client devices through either a thin client interface, such as a web
browser (e.g., web-based email), or a program interface.
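A hedged sketch of the "program interface" access path: the SaaS application is consumed over HTTP rather than installed locally. The endpoint URL and token below are placeholders, not any real provider's API:

```python
# Consuming a SaaS application through its program interface (HTTP API).
import requests

response = requests.get(
    "https://saas-provider.example.com/api/v1/documents",  # hypothetical endpoint
    headers={"Authorization": "Bearer <api-token>"},       # placeholder credential
    timeout=10,
)
print(response.status_code, response.json())
```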
Cloud Services Models SaaS
 The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
 e.g., Google Sheets

SaaS Characteristics
 Web access to commercial software
 Software is managed from central location
 Software is delivered in a ‘one to many’ model
 Users not required to handle software upgrades and patches
 Application Programming Interfaces (API) allow for integration between
different pieces of software.
Applications where SaaS is Used
 Applications where there is significant interplay between organization and
outside world. E.g. email newsletter campaign software [Email marketing
software enables users to create, send, and track emails to their list of subscribers. Using software
makes it easier to create well-designed emails, and also allows you to see key metrics such as open rates
and click-through rates.]
 Applications that have need for web or mobile access. E.g. mobile sales
management software
 Software that is only to be used for a short-term need.
 Software where demand spikes significantly, e.g., tax/billing software.
 E.g. of SaaS: Salesforce Customer Relationship Management (CRM) software

Applications where SaaS may not be the best option
 Applications where extremely fast processing of real-time data is needed
 Applications where legislation or other regulation does not permit data being hosted externally
 Applications where an existing on-premise solution fulfills all of the organization's needs
Cloud Services Models SaaS Providers
[Figure: examples of SaaS providers omitted]
Cloud Services Models PaaS

2. Platform as a Service (PaaS)
 PaaS can be defined as a computing platform that allows the creation of web applications quickly and easily, without the complexity of buying and maintaining the software and infrastructure underneath it.
 PaaS is the set of tools and services designed to make coding and deploying applications quick and efficient.
 Platform as a Service (PaaS) brings the benefits that SaaS brought for applications over to the software development world.
Cloud Services Models PaaS Cont….
 PaaS is analogous to SaaS except that, rather than being software delivered
over the web, it is a platform for the creation of software, delivered over
the web.
 The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider.
 The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Cloud Services Models PaaS

PaaS Characteristics
 Services to develop, test, deploy, host and maintain applications in the
same integrated development environment. All the varying services
needed to fulfill the application development process.
 Web based user interface creation tools help to create, modify, test and
deploy different UI scenarios.
 Multi-tenant architecture where multiple concurrent users utilize
the same development application.
 Built in scalability of deployed software including load balancing and
failover.
 Integration with web services and databases via common standards.
 Support for development team collaboration – some PaaS solutions
include project planning and communication tools.
 Tools to handle billing and subscription management

Cloud Services Models Scenarios where PaaS is used

 PaaS is especially useful in any situation where multiple developers will be working on a development project or where other external parties need to interact with the development process.
 PaaS is useful where developers wish to automate testing and deployment services.

 The popularity of agile (able to move quickly and easily) software development, a group of software development methodologies based on iterative and incremental development, will also increase the uptake of PaaS, as it eases the difficulties around rapid development and iteration of software.
 PaaS Examples: Microsoft Azure, Google App Engine
Cloud Services Models Scenarios where PaaS is not ideal

 Where the application needs to be highly portable in terms of where it is hosted.
 Where proprietary languages or approaches would impact the development process.
 Where a proprietary language would hinder later moves to another provider, raising concerns about vendor lock-in.
 Where application performance requires customization of the underlying hardware and software.
Cloud Services Models IaaS

3. Infrastructure as a Service (IaaS)
 IaaS is the hardware and software that powers it all: servers, storage, network, operating systems.
 Infrastructure as a Service (IaaS) is a way of delivering cloud computing infrastructure (servers, storage, network, and operating systems) as an on-demand service.
 Rather than purchasing servers, software, datacenter space, or network equipment, clients instead buy those resources as a fully outsourced service on demand.
 The capability provided is to provision processing, storage, networks, and other fundamental computing resources.
 The consumer can deploy and run arbitrary software.
 e.g., Amazon Web Services and FlexiScale.
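As a hedged sketch of on-demand provisioning, the following uses AWS EC2 (one of the IaaS examples above) via the boto3 library; the AMI ID is a placeholder, and valid AWS credentials and permissions are assumed:

```python
# Provisioning a compute resource on demand, rather than purchasing a server.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(result["Instances"][0]["InstanceId"])
```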

Cloud Services Models IaaS
Characteristics of IaaS
 Allows for dynamic scaling
 Has a variable cost, utility pricing model
 Generally includes multiple users on a single piece of hardware

Scenarios where IaaS makes sense
 Where demand is very volatile: any time there are significant spikes and troughs in terms of demand on the infrastructure
 For new organizations without the capital to invest in hardware
 Where the organization is growing rapidly and scaling hardware would
be problematic
 Where there is pressure on the organization to limit capital
expenditure and to move to operating expenditure
 For specific line of business, trial or temporary infrastructural needs
Scenarios where IaaS may not be the best option
 Where regulatory compliance makes the offshoring or outsourcing of data storage and processing difficult
 Where the highest levels of performance are required, and on-premise or dedicated hosted infrastructure has the capacity to meet the organization's needs
Classic Model vs. XaaS
[Figure omitted]
XaaS

The figure compares who manages each layer of the stack under SaaS, PaaS, and IaaS:

Layer          | IaaS     | PaaS     | SaaS
Applications   | User     | User     | Provider
Data           | User     | User     | Provider
Runtime        | User     | Provider | Provider
Middleware     | User     | Provider | Provider
O/S            | User     | Provider | Provider
Virtualization | Provider | Provider | Provider
Servers        | Provider | Provider | Provider
Storage        | Provider | Provider | Provider
Network        | Provider | Provider | Provider
XaaS
 Analytics-as-a-Service (AaaS)
 It provides access to data analysis software and tools through the Cloud, rather
than having to invest in on-premise software.
 AaaS services are complete and customizable solutions for organizing, analyzing
and visualizing data.
 Business process as a service (BPaaS)
 It is a term for a specific kind of web-delivered or cloud hosting service that
benefits an enterprise by assisting with business objectives.
 In the general sense, a business process is simply a task that must be completed
to benefit business operations.
 One example is transaction management. Credit card transactions may need to be recorded in a central database, or otherwise handled or evaluated. If a vendor can offer a company that same task performed and delivered through cloud-hosted networks, that would be an example of BPaaS.
XaaS Cont….
 Storage as a service (STaaS) is a business model in which a company leases or rents
its storage infrastructure to another company or individuals to store data
 Security as a Service (SECaaS), offers security to IT companies on a subscription
basis.
 The outsourced approach provides a superior security platform at a lower total cost of ownership than the business could supply on its own.
 With the use of cloud computing, security for the company is maintained by an
outside party.
 Implements security services without needing on-premises hardware, avoiding substantial capital outlays.
 These security services typically embody authentication, antivirus, anti-malware/spyware, intrusion detection, penetration testing, and security event management, among others.
XaaS Cont….

 Installs virus protection software, spam filtering software, and other security tools on every computer, on the network, or on the server.
 Examples of SECaaS :
 Cyber security is handled by security analysts; threat intelligence reacts right away to any malfunctions that compromise security.
 To minimize the impact on the system, sophisticated techniques identify the infection.
 The automations respond to spam and viruses automatically and eliminate them.
XaaS [DBaaS] Cont….
 Database as a service (DBaaS)
 DBaaS and cloud databases come under Software as a Service (SaaS), and demand for them is growing fast.
 Database as a service (DBaaS) is a cloud computing managed service offering
that provides access to a database without requiring the setup of physical
hardware, the installation of software or the need to configure the database.
 Database administration and maintenance tasks are handled by the service
provider, enabling users to quickly benefit from the database service.
 As in on-premises deployments, NoSQL DBaaS technologies span multiple
database types, including graph databases, document databases, wide-column
stores and key-value stores.
 Multimodel databases that support more than one database type are also
available for DBaaS deployments
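A hedged sketch of the consumer's view of DBaaS: there is no hardware to set up or database software to install; the application simply connects to the provider's managed endpoint. The hostname, credentials, and database name are hypothetical, and a PostgreSQL-compatible service is assumed:

```python
# Connecting to a provider-managed database; no local installation needed.
import psycopg2  # assumes a PostgreSQL-compatible DBaaS offering

conn = psycopg2.connect(
    host="mydb.example-dbaas-provider.com",  # provider-managed endpoint
    port=5432,
    user="app_user",
    password="<password>",
    dbname="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```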
XaaS [DBaaS] Cont….
 DBaaS and on-premises database variations
 In an on-premises computing environment, the database server is part of the IT
infrastructure in an organization's data center and is installed, managed and run
by the organization's own IT staff. A database administrator is responsible for
configuring and managing the databases that run on the server.
 In contrast, under the DBaaS model, the provider maintains the system
infrastructure and database and delivers it as a fully managed cloud service. The
service covers high-level administrative functions, such as database installation,
configuration, maintenance and upgrades. Additional tasks, such as backups,
patching and performance management, typically are also handled by the
provider.
 DBaaS is a fee-based subscription service; customers pay for their use of system resources.
XaaS [DBaaS] Cont….
 Control of the data in a database remains the responsibility of the customer, but
the role of the DBA primarily involves monitoring the use of the database,
managing user access and coordinating with the DBaaS vendor on things like
provisioning, patching and maintenance.
 DBaaS: when to use and when not to use
 The DBaaS model is ideal for small and medium-sized businesses (SMBs) that
don't have well-staffed IT departments. Offloading the service and maintenance
of the database to the DBaaS provider enables SMBs to implement applications
and systems that they otherwise couldn't afford to build and support on premises.
 Workloads involving data with stringent regulatory requirements may not be
suitable for the DBaaS model because of data security and privacy concerns.
Furthermore, mission-critical applications that require optimal performance and
uptime may be better suited for an on-premises implementation.
XaaS [DBaaS] Cont….
 Advantages of DBaaS : The DBaaS model offers some specific advantages over
traditional on-premises database systems, including the following:
 Reduced management requirements. The DBaaS provider takes on many of
the routine database management and administration burdens.
 Elimination of physical infrastructure. The underlying IT infrastructure
required to run the database is provided by the DBaaS vendor or the provider
of the cloud platform that's hosting the DBaaS environment, if they're
different companies.
 Reduced IT equipment costs. Because the system infrastructure is no longer
on premises, users don't need to invest in database servers or plan for
hardware upgrades on an ongoing basis.
 Additional savings. In addition to lower capital expenditures, savings can
come from decreased electrical and HVAC operating costs and smaller space
needs in data centers, as well as possible IT staff reductions.
 More flexibility and easier scalability. The infrastructure supporting the
database can be elastically scaled up or down as database usage changes, as
opposed to the more complex and rigorous process required to scale on-
premises systems.
XaaS [DBaaS] Cont….
 Disadvantages of DBaaS :
 Lack of control over the IT infrastructure is usually the most significant issue with
DBaaS versus an in-house system. With managed databases, an organization's IT
team doesn't have direct access to the servers and storage devices used to run
them. As a result, it has to rely on the cloud provider to manage the infrastructure
effectively.
 Also, if a provider's internet connection goes down, or if the DBaaS provider experiences a system outage, the organization won't have access to its database until the problem is repaired.
 Security can also be a concern in some cases because it's controlled by the DBaaS
provider and an organization doesn't have direct influence over the safety of the
servers running its databases. Under the shared responsibility model for cloud
security, organizations are responsible for some aspects of data security and
things like identity and access management in DBaaS environments. But the
vendor is in charge of securing the database platform and underlying
infrastructure.
 Latency is another concern. The additional time required to access enterprise
data over the internet can cause performance issues. These performance issues
grow when loading large amounts of data, which tends to be slow and time-
consuming.
XaaS [MaaS] Cont….
Monitoring as a Service (MaaS) is the service that is concerned with monitoring
the status and proper functioning of the applications and infrastructure.
 It combines both cloud computing and on-premise IT infrastructure.
 It is mainly concerned with the online state monitoring of our applications, storage instances, network traffic, etc.
 E.g. ‘Monitoring as a Service’ tools
 Amazon CloudWatch: Amazon CloudWatch allows us to completely monitor the
tech stack of our application and infrastructure. It notifies us with alarms, logs, etc,
and helps us to take necessary actions.
 Azure Monitor: It collects data from various sources and stores it as logs. This data can later be used for analysis, security checks, notifications, etc. It not only reports the issue to the user but also provides the solution to solve the issue.
 AppDynamics: Used for monitoring every aspect of the application. It can monitor business transactions, transaction snapshots, tiers, nodes, etc.
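As a hedged sketch of MaaS in practice, the following uses Amazon CloudWatch (listed above) via boto3 to create an alarm on EC2 CPU utilization; the instance ID is a placeholder and AWS credentials are assumed:

```python
# Alarm when average EC2 CPU utilization exceeds 80% over two 5-minute windows.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```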
XaaS [CaaS] Cont….
 Communications, content and computing as a service:
 CaaS is a specialized variation of Software as a Service (SaaS) which is among
three basic services delivered by the cloud computing technology.
 CaaS providers manage the hardware and software that are important for
delivering Voice over IP (VoIP) for voice communication service, and other services
like Instant Messaging (IM) to provide text communication service and video
conferencing to provide video communication service.
 Features of CaaS
 Integrated and Unified Communication: Unified communication features
include Chat, Multimedia conferencing, Microsoft Outlook integration, Real-time
presence, “Soft” phones (software-based telephones), Video calls, Unified
messaging and mobility.
 No Investment Required: The customer only has to pay for the service he is getting from the CaaS vendor.
XaaS [CaaS] Cont….
 Flexibility & Scalability: The customer can extend their service requirement
according to their need. This brings flexibility and scalability in communication
services and even make the service economical.
 No Risk of Obsolescence: The CaaS vendors keep on updating their hardware and
software that provide communication services to meet the changing demands of
the market. So the customer using the services does not have to be worried about
the service obsolescence.
 No Maintenance Cost Incurred: The customer outsourcing the CaaS service does
not have to bear the cost of maintaining the equipment deployed for providing
communication services.
 Ensure Business Continuity: Companies distribute their data to geographically dispersed data centres, which maintain redundancy and help them recover soon after any catastrophic event.
XaaS [IDaaS] Cont….
 Identity as a service (IDaaS)
 Traditionally, identity management has been fully on-premises, provided via a set of software and hardware means; IDaaS delivers this function as a cloud service.
 An identity service stores the information linked with a digital entity in a form
which can be managed and queried for further utilization in electronic transactions.
 Services which provide digital identity management as a service are classification of
internetworked systems.
 Servers that run the numerous internet domains (.COM, .ORG, .EDU, .MIL, .RU, .TV
etc.) are IDaaS servers. DNS configures the identity of a domain as belonging to a
group of assigned networks, linked with an owner and his information, and so
forth.
 If the identity is configured in the form of IP number, then the metadata is another
property.
[Objects stored in Cloud Storage have metadata associated with them. Metadata identifies properties of the object, as well as specifies how the object should be handled when it's accessed.]
XaaS [IDaaS] Cont….
 To establish an identity, an individual might be asked to provide a name and password, which is termed a single-factor authentication method.
 More secure authentication requires the use of at least two factors (two-factor authentication).
 For multi-factor authentication, an individual might have a system which checks a biometric factor, such as a unique fingerprint pattern.
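A toy sketch of the factor counting just described; the stored credential and fingerprint value are stand-ins, and a real IDaaS would verify hashed secrets and genuine biometric templates:

```python
# Single-factor: something you know. Multi-factor: add something you are.
def single_factor(name: str, password: str) -> bool:
    return (name, password) == ("alice", "s3cret")      # something you know

def multi_factor(name: str, password: str, fingerprint: str) -> bool:
    knows = single_factor(name, password)
    has_biometric = fingerprint == "fp-alice-001"       # something you are
    return knows and has_biometric                      # at least two factors

print(multi_factor("alice", "s3cret", "fp-alice-001"))  # True
```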
 Things having digital identity.
 Machine accounts and user, devices, and many other objects configure their
identity in various methods. In this, identities are created and stored in the
database of security domains that are the basis of any domain of network.
 Network interfaces are recognized uniquely by Media Access Control (MAC) addresses. A network identity assigns a specific MAC address that enables a system to be uniquely found on the network.
XaaS [IDaaS] Cont….
 The manner in which the Windows provider validates the installation of Windows on the user's system is known as Windows Product Activation; it establishes an identification index or profile of the system. During activation, the following unique data components are fetched:
 PC manufacturer
 Network address and its MAC address
 CPU type and its serial number
 BIOS checksum
 SCSI and IDE adapters
 Display adapter
 Hard drive and volume serial number
 RAM amount
 A 25-character software product key and product ID
 Optical drive
 Region and language settings
 The uniquely assigned Globally Unique Identifier (GUID)
XaaS [BaaS] Cont….
Backup as a service (BaaS): Online backup service, also known as cloud backup or backup as a service (BaaS), is a method of offsite data storage in which files, folders, or the entire contents of a hard drive are regularly backed up by a service vendor to a remote, secure, cloud-based data repository over a network connection.
 Instead of performing backup with a centralized, on-premises IT department, BaaS connects systems to a private, public, or hybrid cloud managed by the outside provider.
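A hedged sketch of the BaaS idea: copy a local file to a remote repository over the network. Amazon S3 via boto3 stands in for any provider-managed backup store; the bucket name and paths are placeholders:

```python
# Back up a local file to a remote, provider-managed object store.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="/home/user/documents/report.docx",  # local file to protect
    Bucket="my-backup-bucket",                    # provider-side repository
    Key="backups/2024-01-01/report.docx",         # dated remote path
)
```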
 Benefits:
1. Convenience. BaaS is automated: once it's set up, information is saved automatically as it streams in. The convenience of BaaS allows you to concentrate on your work without worrying about data loss.
2. Safety. Because your data is stored in the cloud, you are not subject to the typical threats of hackers, natural disasters, and user error. In fact, data that is stored in the BaaS is encrypted.
XaaS [BaaS] Cont….

3. Ease of recovery. Due to multiple levels of redundancy, if data is lost or deleted (most frequently through individual user error or deletion), backups are available and easily located.
4. Affordability. BaaS can be less expensive than the cost of tape drives, servers, or other hardware and software elements necessary to perform backup; the media on which the backups are stored; the transportation of media to a remote location for safekeeping; and the IT labor required to manage and troubleshoot backup systems.
XaaS [DkaaS] Cont….
Desktop as a service (DKaaS): Desktop as a Service (DaaS) is a cloud computing offering where a service provider delivers virtual desktops to end users over the Internet, licensed with a per-user subscription. It is a type of desktop virtualization provided by third-party hosts.
 Example – An organization lets its workforce work from home and soon it may be
the future of IT work process. In this situation, the organization has to provide
data in a centralized server through virtual desktop infrastructure (VDI) so that all
of the workers can access the data. But setting up the virtual desktop
infrastructure is too expensive and resource-consuming such as the need for
servers, hardware, software, skilled staff to set up and maintain the VDI. This is
when we need DaaS.
 DaaS helps people to access data and applications remotely with the help of the
internet regardless of what devices they use to access.
 Desktop as a Service (DaaS) is cost-effective and ensures security and control.
XaaS [DkaaS] Cont….
 Benefits of DaaS:
 It is less expensive than setting up and maintaining the virtual desktop
 Can be easily administered: new user can be added or an existing user can be
removed rapidly
 It delivers high-performing workspaces to any user on any device from
anywhere. These benefits become the reasons to choose DaaS over VDI.
 Where can it be used ?
 DaaS has a lot of use cases.
 Software developers
 Call-center, part-time work
 University lab
 Remote and mobile workers
 Shift and contract work
 Some DaaS Providers are :
 Amazon Workspaces
 Citrix Managed Desktops
 Microsoft Windows Virtual Desktop
 VMware Horizon Cloud
 Evolve IP
 Cloudalize Managed Desktops, etc.
XaaS [DkaaS] Cont….
 The two types of desktops in DaaS are
1. Persistent desktop : In this type, the user can customize the looks of the virtual
desktop and whenever the user logs back the details will remain the same. This
needs more storage, so it’s expensive.
2. Non-persistent desktop : The desktops are wiped off each time a user logs out
and it just acts as a portal for shared cloud services.
 Advantages of DaaS :
 Quick deployment and decommissioning of active end users: DaaS can quickly
give a service to the end user as well as it can revoke it faster also.
 High cost savings: It costs less to set up and maintain than a full virtual desktop infrastructure.
 Easy user interface: The interface is easy, a normal IT employee with usability
skills can use it.
 Increased device flexibility: number of users can be increased or decreased easily
 Improved security: provides high security and reduces the fear of cyber attacks
and risks as it is a virtual service.
 Disadvantages of DaaS :
 Internet outage: In case of an internet outage, employees may not be able to
access their desktops as these desktops are hosted in the cloud and accessed
over the internet.
 Poor Virtual Performance: Sometimes end users may face poor virtual performance, as the desktop is accessed virtually over the network.
NIST’s Four Cloud Deployment Models/ Cloud Environment
The final part of the NIST cloud computing definition includes four cloud
deployment models, representing four types of cloud environments. Users
can choose the model with features and capabilities that are best suited for
their needs.
1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
4. Community Cloud
Public Cloud
 The cloud infrastructure is made available to the general public.
 It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them.
 The cloud is owned by the cloud service provider.
 It exists on the premises of the cloud provider.
 It is a multi-tenant deployment model.
 Examples of Public Cloud:
 Google Doc, Spreadsheet
 Google App Engine
 Microsoft Windows Azure
 IBM Smart Cloud
 Amazon EC2
Public Cloud Cont……
 In the public setting, the provider's computing and storage resources are potentially large.
 The communication links can be assumed to be implemented over the
public Internet
 The cloud serves a diverse pool of clients (and possibly attackers).

Public Cloud Cont……
 Workload locations are hidden from clients (public):

 In the public scenario, a provider may migrate a subscriber's workload, whether processing or data, at any time.
 Workload can be transferred to data centres where cost is low
 Workloads in a public cloud may be relocated anywhere at any time
unless the provider has offered (optional) location restriction policies
 Risks from multi-tenancy (public):

 A single machine may be shared by the workloads of any combination of subscribers (a subscriber's workload may be co-resident with the workloads of competitors or adversaries).
 Introduces both reliability and security risk
Public Cloud Cont……
Organizations considering the use of a public cloud should consider:
1. Network dependency (public):
 Subscribers connect to providers via the public Internet.
 Connection depends on Internet’s Infrastructure like: Domain Name
System (DNS) servers, Router infrastructure, Inter-router links
2. Limited visibility and control over data regarding security (public):
 The details of provider system operation are usually considered
proprietary information and are not divulged to subscribers.
 In many cases, the software employed by a provider is usually
proprietary and not available for examination by subscribers
 A subscriber cannot verify that data has been completely deleted.
Public Cloud Cont……

3. Elasticity: illusion of unlimited resource availability (public):
 Public clouds are generally unrestricted in their location or size.
 Public clouds potentially have a high degree of flexibility in the movement of subscriber workloads to correspond with available resources.

4. Low up-front costs to migrate into the cloud (public).
5. Restrictive default service level agreements (public):
 The default service level agreements of public clouds specify limited promises that providers make to subscribers.

Private Cloud
 A private cloud is a single-tenant environment provisioned for use by a single
organization. The cloud infrastructure is provisioned for exclusive use by a single
organization comprising multiple consumers (e.g., business units).
 The cloud infrastructure is operated solely for an organization.
 It may be owned, managed, and operated by the organization, a third party, or
some combination of them, and it may exist on or off premises.
 Examples of Private Cloud:
 Window Server 'Hyper-V‘: The Hyper-V role in Windows Server lets you create a
virtualized computing environment where you can create and manage virtual machines
 Ubuntu Enterprise Cloud – UEC: Ubuntu Enterprise Cloud is integrated with the open
source Eucalyptus private cloud platform, making it possible to create a private cloud
with much less configuration than installing Linux first, then Eucalyptus.
 Microsoft ECI (ElastiCloud) data center
 Eucalyptus: Eucalyptus is a Linux-based open-source software architecture for cloud computing and also a storage platform that implements Infrastructure as a Service (IaaS).
 Amazon VPC (Virtual Private Cloud)
 VMware Cloud Infrastructure Suite

Private Cloud Cont……..
 Contrary to popular belief, private cloud may exist off premises and can
be managed by a third party. Thus, two private cloud scenarios exist, as
follows:
1. On-site Private Cloud
 Applies to private clouds implemented at a
customer’s premises.
2. Out-sourced Private Cloud
 Applies to private clouds where the server side is outsourced
to a hosting company.

On-Site Private Cloud Cont……..

 The security perimeter extends around both the subscriber's on-site resources and the private cloud's resources.
 The security perimeter does not guarantee control over the private cloud's resources, but the subscriber can exercise control over them.

On-Site Private Cloud Cont……..

Organizations considering the use of an on-site private cloud should consider:
1. Network dependency (on-site-private): The availability and security of cloud services depend on the reliability and security of the network connecting user devices to the private cloud.
2. Subscribers still need IT skills (on-site-private): Subscriber organizations will need the traditional IT skills required to manage user devices that access the private cloud, and will require cloud IT skills as well.
3. Workload locations are hidden from clients (on-site-private): With an on-site private cloud, a subscriber organization chooses the physical infrastructure, but individual clients still may not know where their workloads physically exist within the subscriber organization's infrastructure.

On-Site Private Cloud Cont……..

4. Risks from multi-tenancy (on-site-private): Workloads of different clients
may reside concurrently on the same systems and local networks,
separated only by access policies implemented by a cloud provider's
software. A flaw in the software or the policies could compromise the
security of a subscriber organization by exposing client workloads to one
another
5. Data import/export, and performance limitations (on-site-private): On-
demand bulk data import/export is limited by the on-site private cloud's
network capacity, and real-time or critical processing may be problematic
because of networking limitations.
6. Potentially strong security from external threats (on-site-private): In an
on-site private cloud, a subscriber has the option of implementing an
appropriately strong security perimeter to protect private cloud resources
against external threats to the same level of security as can be achieved
for non-cloud resources.

On-Site Private Cloud Cont……..

7. Significant-to-high up-front costs to migrate into the cloud (on-site-private): An on-site private cloud requires that cloud management
software be installed on computer systems within a subscriber
organization. If the cloud is intended to support process-intensive or data-
intensive workloads, the software will need to be installed on numerous
commodity systems or on a more limited number of high-performance
systems. Installing cloud software and managing the installations will incur
significant up-front costs, even if the cloud software itself is free, and even
if much of the hardware already exists within a subscriber organization.

8. Limited resources (on-site-private): An on-site private cloud, at any specific time, has a fixed computing and storage capacity that has been sized to correspond to anticipated workloads and cost restrictions.
Out Sourced Private Cloud Cont……..

1. An outsourced private cloud has two security perimeters, one implemented by the cloud subscriber and one implemented by the provider.
2. The two security perimeters are joined by a protected communications link.
3. The security of data and processing conducted in the outsourced private cloud depends on the strength and availability of both security perimeters and of the protected communication link.

Out Sourced Private Cloud Cont……..
 Organizations considering the use of an outsourced private cloud should
consider:
 Network Dependency (outsourced-private): In the outsourced private scenario,
subscribers may have an option to provision unique protected and reliable
communication links with the provider.
 Workload locations are hidden from clients (outsourced-private): Same as the on-site private cloud.
 Risks from multi-tenancy (outsourced-private): The implications are the same as
those for an on-site private cloud.
 Data import/export, and performance limitations (outsourced-private): On-
demand bulk data import/export is limited by the network capacity between a
provider and subscriber, and real-time or critical processing may be problematic
because of networking limitations. In the outsourced private cloud scenario,
however, these limits may be adjusted, although not eliminated, by provisioning
high-performance and/or high-reliability networking between the provider and
subscriber.
 Potentially strong security from external threats (outsourced-private): As with the on-site private cloud scenario, a variety of techniques exist to harden a security perimeter. The main difference with the outsourced private cloud is that the techniques need to be applied both to a subscriber's perimeter and the provider's perimeter, and that the communications link needs to be protected.
Out Sourced Private Cloud Cont……..

 Modest-to-significant up-front costs to migrate into the cloud (outsourced-private):
 In the outsourced private cloud scenario, the resources are provisioned by the provider.
 Main start-up costs for the subscriber relate to:
Negotiating the terms of the service level agreement (SLA)
Possibly upgrading the subscriber's network to connect to the
outsourced private cloud
Switching from traditional applications to cloud-hosted applications,
Porting existing non-cloud operations to the cloud
Training
 Extensive resources available (outsourced-private): In the case of the
outsourced private cloud, a subscriber can rent resources in any quantity
offered by the provider. Provisioning and operating computing equipment
at scale is a core competency of providers.

Community Cloud

 A community cloud is used by a community of users from organizations with shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
 This multi-tenant platform allows multiple companies or special-interest user groups to collaborate securely on projects or research.
 It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

Community Cloud Cont…..

 Examples of Community Cloud: Google Apps for Government, Microsoft Government Community Cloud

On-Site Community Cloud Cont…..
1. Community cloud is made up of a set of participant organizations.
Each participant organization may provide cloud services, consume cloud
services, or both
2. At least one organization must provide cloud services
3. Each organization implements a security perimeter

On-Site Community Cloud Cont…..
4. The participant organizations are connected via links between the boundary
controllers that allow access through their security perimeters
5. Access policy of a community cloud may be complex
 E.g., if there are N community members, a decision must be made, either implicitly or explicitly, on how to share a member's local cloud resources with each of the other members.
 Policy specification techniques like role-based access control (RBAC) and attribute-based access control (ABAC) can be used to express sharing policies (see the sketch after these notes).
[Role-based access control (RBAC) restricts network access based on a person's
role within an organization, set of permissions that allow users to read, edit, or
delete articles in a writing application]

[An organization's access policies enforce access decisions based on the attributes of the subject (ID, job roles, group memberships, departmental and organizational memberships, management level, security clearance, and other identifying criteria), resource (file, application, server, or even API), action (common action attributes include "read," "write," "edit," "copy," and "delete"), and environment (time and location of an access attempt, the subject's device, communication protocol, and encryption strength) involved in an access event.]
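A minimal RBAC sketch of the kind of sharing policy described in the notes above; the roles and actions are illustrative only:

```python
# Each role maps to a set of permitted actions; access is granted only if
# the requester's role allows the requested action.
ROLE_PERMISSIONS = {
    "editor": {"read", "write", "edit"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "read"))    # True
print(is_allowed("viewer", "delete"))  # False
```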
On-Site Community Cloud Cont…..
Organizations considering the use of an on-site community cloud
should consider:
1. Network Dependency (on-site community):
 The subscribers in an on-site community cloud need to either provision
controlled inter-site communication links or use cryptography over a
less controlled communications media (such as the public Internet).
 The reliability and security of the community cloud depends on the
reliability and security of the communication links.
2. Subscribers still need IT skills (on-site-community).
 Organizations in the community that provide cloud resources require IT skills similar to those required for the on-site private cloud scenario, except that the overall cloud configuration may be more complex and hence requires a higher skill level.
 Identity and access control configurations among the
participant organizations may be complex
On-Site Community Cloud Cont…..
3. Workload locations are hidden from clients (on-site-community): Same as the on-site private cloud.
4. Data import/export, and performance limitations (on-site-community):
 The communication links between the various participant organizations
in a community cloud can be provisioned to various levels of
performance, security and reliability, based on the needs of the
participant organizations.
 The network-based limitations are thus similar to those of the
outsourced-private cloud scenario.
5. Potentially strong security from external threats (on-site-community):
 The security of a community cloud from external threats depends on
the security of all the security perimeters of the participant
organizations and the strength of the communications links.
 These dependencies are essentially similar to those of the outsourced
private cloud scenario, but with possibly more links and security
perimeters.

On-Site Community Cloud Cont…..
6. Highly variable up-front costs to migrate into the cloud (on-site-community):
 The up-front costs of an on-site community cloud for a participant
organization depend greatly on whether the organization plans to
consume cloud services only or also to provide cloud services.
 For a participant organization that intends to provide cloud services
within the community cloud, the costs appear to be similar to those
for the on-site private cloud scenario (i.e., significant-to-high).
77
Out-Sourced Community Cloud Cont…..
78
Out-Sourced Community Cloud Cont…..
Organizations considering the use of an Out-Sourced community cloud
should consider:
1. Network dependency (outsourced-community):
 The network dependency of the outsourced community cloud is similar
to that of the outsourced private cloud. The primary difference is that
multiple protected communications links are likely from the community
members to the provider's facility.
2. Workload locations are hidden from clients (outsourced-community):
 Same as the outsourced private cloud
3. Risks from multi-tenancy (outsourced-community):
 Same as the on-site community cloud
4. Data import/export, and performance limitations (outsourced-community):
 Same as outsourced private cloud
79
Out-Sourced Community Cloud Cont…..
5. Potentially strong security from external threats (outsourced-community):
 Same as the on-site community cloud
6. Modest-to-significant up-front costs to migrate into the cloud
(outsourced-community):
 Same as outsourced private cloud
7. Extensive resources available (outsourced-community).
 Same as outsourced private cloud
80
Hybrid Cloud
The cloud infrastructure is a composition of two or more distinct cloud
infrastructures (private, community, or public) that remain unique entities,
but are bound together by standardized or proprietary technology that
enables data and application portability
 It provides greater flexibility, portability, and scalability than the other
deployment models, e.g. cloud bursting for load balancing between clouds. [Cloud
bursting is a configuration method that uses cloud computing resources whenever
on-premises infrastructure reaches peak capacity. When organizations run out of
computing resources in their internal data center, they burst the extra workload
to external third-party cloud services. A small sketch of this decision follows
at the end of this slide.]
Hybrid clouds have significant variations in performance, reliability, and
security properties depending upon the types of clouds chosen to build them.
A hybrid cloud can be extremely complex
A hybrid cloud may change over time with constituent clouds joining and
leaving.
 e.g. Windows Azure (capable of Hybrid Cloud), VMware Cloud (Hybrid Cloud Services)
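To make the cloud-bursting idea above concrete, here is a minimal Python sketch: jobs run on the private cloud until its capacity is reached, and the overflow bursts to a public cloud. The capacity figure and routing logic are hypothetical, not any provider's API.

ON_PREM_CAPACITY = 100  # maximum concurrent jobs the private cloud can run (assumed)

def dispatch(active_on_prem):
    """Route a job to the private cloud if there is room, else burst it."""
    if active_on_prem < ON_PREM_CAPACITY:
        return "private-cloud"
    return "public-cloud"  # overflow work bursts to the external provider

if __name__ == "__main__":
    # Simulate 105 concurrent jobs: 100 stay on-premises, 5 burst.
    placements = [dispatch(min(i, ON_PREM_CAPACITY)) for i in range(105)]
    print(placements.count("private-cloud"), "on-prem /",
          placements.count("public-cloud"), "burst to public cloud")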
81
Hybrid Cloud Cont…..
82
Hybrid Cloud Cont…
83
Cloud Interoperability & Standards
 Every organization/business is increasingly moving towards cloud-based solutions.
 Suitable interoperability and portability are essential.
 Interoperability is defined as the capacity of at least two systems or applications to
exchange data and make use of it.
 On the other hand, cloud interoperability refers to the ability of customers to
use the same management tools, server images and other software with a
variety of cloud computing providers and platforms.
 Cloud interoperability is the ability in which one cloud service can communicate
with other cloud services by sharing information to achieve predictable results
according to a specified process.
 Cloud interoperability requires that public and private cloud providers agree on
APIs, their standard settings, data formats, authentication, and authorization.
Ideally, there is a standardized interface so that a customer can migrate from one
cloud service to another with as little effect as possible on their systems (a
small sketch of such an interface follows below).
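The following minimal Python sketch illustrates that standardized-interface idea: both providers implement one abstract storage API, so a customer-side migration is a single loop. The provider classes and method names are hypothetical, not a real SDK.

from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """One standard interface; many provider-specific implementations."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class ProviderA(CloudStorage):          # hypothetical provider no. 1
    def __init__(self): self._blobs = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

class ProviderB(CloudStorage):          # hypothetical provider no. 2
    def __init__(self): self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

def migrate(src, dst, keys):
    """Because both sides speak the same interface, migration is one loop."""
    for key in keys:
        dst.put(key, src.get(key))

if __name__ == "__main__":
    a, b = ProviderA(), ProviderB()
    a.put("report.txt", b"quarterly data")
    migrate(a, b, ["report.txt"])
    print(b.get("report.txt"))  # b'quarterly data'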
Cloud Interoperability & Standards
 The two crucial components in cloud interoperability, usability and
connectivity, have been separated into five layers: behaviour, policy,
semantic, syntactic, and transport.
 Portability can be separated into two types: cloud data portability and cloud
application portability.
– Cloud data portability – the capability of moving information from one cloud
service to another without needing to re-enter the data.
– Cloud application portability – the capability of moving an application from
one cloud service to another, or between a client's environment and a cloud
service.
Cloud Interoperability & Standards Cont…
 Categories of cloud computing interoperability and portability:
Cloud portability and interoperability can be divided into –
 Data Portability
 Platform Portability
 Application Portability
 Platform Interoperability
 Application Interoperability
 Management Interoperability

86
Cloud Interoperability & Standards Cont…
• Data Portability – Also termed cloud portability, this refers to the transfer of
data from one source to another or from one service to another, i.e. from one
application to another or from one cloud service to another, with the aim of
providing a better service to the customer without affecting usability.
• Application Portability – It enables re-use of various application components in
different cloud PaaS services.
 Portability between development and operational environments is a specific
form of application portability that arises with cloud computing. Cloud
computing brings development and operations closer together and, indeed,
contributes to the convergence of the two as DevOps gradually develops.
 Cloud PaaS is especially appealing for development environments because it
eliminates the need for investment in costly infrastructure that would be
unused until the development is complete.
 However, if a particular environment is to be used at runtime, whether on in-
house systems or on various cloud platforms, it is important that applications
can be transferred between the two environments.
 This will only work if the same environment is used for development
and operation, or if application portability exists between the development
and operational environments.
Cloud Interoperability & Standards Cont…
• Platform Portability – There are two types of platform portability: platform
source portability and machine image portability.
– Platform source portability: the classic example is the UNIX operating system.
It is mostly written in the C programming language, and by re-compiling it and
re-writing the few small hardware-dependent parts that are not coded in C, it
can be implemented on different hardware. This is the conventional approach to
platform portability. It also allows portability of applications, because
applications that use the standard interface of the operating system can
similarly be re-compiled and run on machines with distinct hardware.
– Machine image portability binds the application to its platform by porting the
resulting bundle, which requires a standard program representation. [The image
portability workflow begins when you use the cloud to start the migration of an
image from your on-premises location to your public cloud subscription.]
88
Cloud Interoperability & Standards Cont…
 Application Interoperability – Application interoperability manages
application-to-application communication using external services (e.g.,
middleware services), and covers re-using the processing functions of existing
systems in new programs.

 Platform Interoperability – Interoperability of platforms includes standard
protocols for the discovery of resources and the exchange of information. These
implicitly enable the interoperability of the applications that use the
platforms; application interoperability cannot be accomplished without
platform interoperability.
 Management Interoperability – Here, cloud services like SaaS, PaaS, or IaaS
and applications related to self-service are assessed and managed. This will
become more prominent as cloud services allow enterprises to keep management
in-house and reduce dependency on third parties.
Cloud Interoperability & Standards Cont…
The figure below represents an overview of cloud interoperability and
portability:
Cloud Interoperability & Standards Cont…
 Major scenarios where interoperability and portability are required: the Cloud
Standards Customer Council (CSCC) has identified some of the basic scenarios where
portability and interoperability are required.

 Switching between cloud service providers –The customer wants to transfer data
or applications from Cloud 1 to Cloud 2.

 Using multiple cloud service providers- The client may subscribe to the same or
different services e.g. Cloud 1 and 2.

 Directly linked cloud services- The customer can use the service by linking to
Cloud 1 and Cloud 3.

 Hybrid Cloud configuration- Here the customer connects a legacy system to a
private cloud (Cloud 1), which is in turn connected to public cloud services
(Cloud 3).

 Cloud Migration- Clients migrate one or more in-house applications to Cloud 1.
Cloud Interoperability & Standards Cont…
Challenges faced in Cloud Portability and Interoperability :
 If we move an application to another cloud, then naturally its data is also moved.
And for some businesses, data is very crucial. But unfortunately, most cloud service
providers charge a small amount of money to move the data into the cloud.
 The degree of mobility of data can also act as an obstacle. When moving data from
one cloud to another, the capability of moving workloads from one host to
another should also be assessed.
 Interoperability should not be left out, otherwise data migration can be highly
affected. So the functioning of all components and applications should be ensured.
 As data is highly important in business, the safety of customer’s data should be
ensured.
 Cloud interoperability removes much of this complexity by providing common
interfaces. Moving from one framework to another becomes feasible with container
services, which also improve scalability. Despite a few hurdles, adaptability to
changes in service providers and better assistance for cloud clients will drive
the improvement of cloud interoperability.
 Several standards organizations are: the European Telecommunications Standards
Institute, the National Institute of Standards and Technology, the Institute of
Electrical and Electronics Engineers, the International Organization for
Standardization, the International Telecommunications Union – Telecommunications
Sector, and the Cloud Standards Customer Council.
Fault Tolerance in Cloud Computing
 Fault tolerance means creating a blueprint for keeping work going whenever some
parts are down or unavailable. It helps enterprises evaluate their infrastructure
needs and requirements, and provides services in case the respective device
becomes unavailable for some reason.
 The concept is to keep the system usable and, most importantly, operating at a
reasonable level.
 Main Concepts behind Fault Tolerance in Cloud Computing System
1.Replication: Fault-tolerant systems work on running multiple replicas for each
service. Thus, if one part of the system goes wrong, other instances can be used
to keep it running instead. For example, take a database cluster that has 3
servers with the same information on each. All the actions like data entry,
update, and deletion are written on each. Redundant servers will remain idle
until a fault tolerance system demands their availability.
2.Redundancy: When a system part fails or goes into a down state, it is important
to have a backup system. The server works with emergency databases that
include many redundant services. For example, a website program with MS
SQL as its database may fail midway due to some hardware fault. The
redundancy concept then takes advantage of a new database when the original
is in offline mode (a small failover sketch follows below).
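The following minimal Python sketch illustrates the replication and redundancy ideas above: several replicas of a database service, with queries failing over to the next healthy replica. The hostnames and health flags are hypothetical.

replicas = [
    {"host": "db-1.example.internal", "healthy": True},
    {"host": "db-2.example.internal", "healthy": True},
    {"host": "db-3.example.internal", "healthy": True},
]

def query(sql):
    """Try each replica in turn; fail only if every replica is down."""
    for replica in replicas:
        if replica["healthy"]:
            return "ran %r on %s" % (sql, replica["host"])
    raise RuntimeError("all replicas unavailable: fault tolerance exhausted")

if __name__ == "__main__":
    replicas[0]["healthy"] = False  # simulate one server failing
    print(query("SELECT 1"))        # transparently served by db-2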
Fault Tolerance Cont…..
 Techniques for Fault Tolerance in Cloud Computing
 Priority should be given to all services while designing a fault tolerance system.
Special preference should be given to the database, as it powers many other
entities.
 After setting the priorities, the Enterprise has to work on mock tests.
 For example, Enterprise has a forums website that enables users to log in and
post comments. When authentication services fail due to a problem, users
will not be able to log in. Then, the forum becomes read-only and does not
serve the purpose. But with fault-tolerant systems, healing will be ensured,
and the user can search for information with minimal impact.
Fault Tolerance Cont…..
 Major Attributes of Fault Tolerance in Cloud Computing
 No single point of failure: the concepts of redundancy and replication mean
that faults can occur but with only minor effects. If there is a single
point of failure, then the system is not fault-tolerant.
 Accept the fault isolation concept: a fault occurrence is handled separately
from other systems. This helps to isolate the enterprise from an existing system
failure.
 Existence of Fault Tolerance in Cloud Computing
 System Failure: This can either be a software or hardware issue.
 A software failure results in a system crash or hang, which may be due to
a stack overflow or other reasons.
 Any improper maintenance of physical hardware machines will result in
hardware system failure.
 Incidents of Security Breach: Security failures are another reason fault
tolerance may be needed. The hacking of a server harms it and can result in
a data breach.
 Other security-breach reasons for requiring fault tolerance include
ransomware, phishing, virus attacks, etc.
Scaling in Cloud Computing
 Cloud scalability in cloud computing refers to increasing or decreasing IT resources
as needed to meet changing demand.
 Scalability is one of the hallmarks of the cloud and the primary driver of its explosive
popularity with businesses.
 Data storage capacity, processing power, and networking can all be increased by
using existing cloud computing infrastructure.
 Scaling can be done quickly and easily, usually without any disruption or
downtime.
 Third-party cloud providers already have the entire infrastructure in place.
 In the past, when scaling up with on-premises physical infrastructure, the process
could take weeks or months and require exorbitant expenses.
 There are a few main ways to scale in the cloud:
1. Vertical scalability (scale-up)
2. Horizontal scalability (scale-out)
3. Diagonal scalability
Cloud Elasticity Vs Scalability
1. Elasticity is used just to meet sudden rises and falls in the workload for a
short period of time; scalability is used to meet the static (lasting) increase
in the workload.
2. Elasticity is used to meet dynamic changes, where the resource needs can
increase or decrease; scalability is always used to address the increase in
workload in an organization.
3. Elasticity is commonly used by small companies whose workload and demand
increase only for a specific period of time; scalability is used by giant
companies whose customer circle persistently grows, in order to do their
operations efficiently.
4. Elasticity is short-term planning, adopted just to deal with an unexpected
increase in demand or seasonal demands; scalability is long-term planning,
adopted to deal with an expected increase in demand.
Scaling Cont….
1. Vertical Scaling
 To understand vertical scaling, imagine a 20-story hotel. There are innumerable
rooms inside this hotel from where the guests keep coming and going. Often
there are spaces available, as not all rooms are filled at once. People can move
easily as there is space for them. As long as the capacity of this hotel is not
exceeded, no problem. This is vertical scaling.
 With computing, you can add or subtract resources, including memory or
storage, within the server, as long as the resources do not exceed the capacity
of the machine.
 Although it has its limitations, it is a way to improve your server and avoid
latency and extra management. Like in the hotel example, resources can
come and go easily and quickly, as long as there is room for them.
Scaling Cont….
2. Horizontal Scaling
 Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars
travel smoothly in each direction without major traffic problems. But then the
area around the highway develops - new buildings are built, and traffic increases.
Very soon, this two-lane highway is filled with cars, and accidents become
common. Two lanes are no longer enough. To avoid these issues, more lanes are
added, and an overpass is constructed. Although it takes a long time, it solves
the problem.
 Horizontal scaling refers to adding more servers to your network, rather than
simply adding resources like with vertical scaling. This method tends to take
more time and is more complex, but it allows you to connect servers together,
handle traffic efficiently and execute concurrent workloads.
Scaling Cont….
3. Diagonal Scaling: It is a mixture of both Horizontal and Vertical scalability where
the resources are added both vertically and horizontally.
 When you combine vertical and horizontal, you simply grow within your existing
server until you hit the capacity. Then, you can clone that server as necessary
and continue the process, allowing you to deal with a lot of requests and traffic
concurrently.
Scaling Cont….
 Benefits of cloud scalability
 Convenience: Often, with just a few clicks, IT administrators can easily add
more VMs that are available, customized to an organization's exact needs,
without delay. Teams can focus on other tasks instead of setting up physical
hardware for hours and days. This saves the valuable time of the IT staff.
 Flexibility and speed: As business needs change and grow, including
unexpected demand spikes, cloud scalability allows IT to respond quickly.
Companies are no longer tied to obsolete equipment; they can update systems
and easily increase power and storage. Today, even small businesses have
access to high-powered resources that used to be cost-prohibitive.
 Cost Savings: Businesses can avoid the upfront cost of purchasing expensive
equipment that can become obsolete in a few years. Through cloud providers,
they only pay for what they use and reduce waste.
 Disaster recovery: With scalable cloud computing, you can reduce disaster
recovery costs by eliminating the need to build and maintain secondary data
centers.
Scaling Cont….
 When to Use Cloud Scalability?
• Scalability is one of the driving reasons for migrating to the cloud. Whether traffic
or workload demands increase suddenly or increase gradually over time, a
scalable cloud solution enables organizations to respond appropriately and cost-
effectively to increased storage and performance.
 How to determine optimal cloud scalability?: Changing business needs or
increasing demand often necessitate changes to your scalable cloud solution. But how
much storage, memory, and processing power do you need? Will you scale in or
out?
 To determine the correct size solution, continuous performance testing is
essential. IT administrators must continuously measure response times,
number of requests, CPU load, and memory usage.
 Automation can also help optimize cloud scalability. You can set a threshold
for usage that triggers automatic scaling so as not to affect performance (a
small sketch of this follows below).
 You may also consider a third-party configuration management service or
tool to help you manage your scaling needs, goals, and implementation.
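Here is a minimal Python sketch of such threshold-triggered automatic scaling: measure utilization, scale out past an upper threshold, scale in below a lower one. All thresholds, server limits, and load samples are hypothetical.

SCALE_OUT_AT = 0.80            # add a server when average CPU exceeds 80%
SCALE_IN_AT = 0.30             # remove one when it drops below 30%
MIN_SERVERS, MAX_SERVERS = 2, 10

def autoscale(servers, avg_cpu):
    """Return the new server count for the observed CPU load."""
    if avg_cpu > SCALE_OUT_AT and servers < MAX_SERVERS:
        return servers + 1     # horizontal scale-out
    if avg_cpu < SCALE_IN_AT and servers > MIN_SERVERS:
        return servers - 1     # horizontal scale-in to cut cost
    return servers

if __name__ == "__main__":
    servers = 2
    for cpu in [0.55, 0.85, 0.90, 0.70, 0.20]:  # simulated load samples
        servers = autoscale(servers, cpu)
        print("cpu=%.0f%% -> %d servers" % (cpu * 100, servers))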
Virtual Desktop Interface (VDI)
 Virtual Desktop Infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops.
 VDI hosts desktop environments on a centralized server and deploys them to end-
users on request.
 How does VDI work? : In VDI, a hypervisor segments servers into virtual
machines that in turn host virtual desktops, which users access remotely from their
devices.
 All processing is done on the host server.
 Users connect to their desktop instances through a connection broker, a
software-based gateway that acts as an intermediary between the user and the
server.
Virtual Desktop Interface Cont…..
 Two approaches to deploying desktops
1.Persistent desktop. Each user is assigned a unique desktop instance, which they
can customize to their individual preferences.
 With persistent VDI, a user connects to the same desktop each time, and users
can personalize the desktop for their needs since changes are saved even after
the connection is reset. In other words, desktops in a persistent VDI
environment act like personal physical desktops.
 With persistent VDI, each user gets his or her own persistent virtual desktop --
also known as a one-to-one ratio.
2. Nonpersistent desktop. Users can access a pool of uniform desktop images as
needed to perform tasks. Nonpersistent desktops are many-to-one, meaning that
they are shared among end users.
 These nonpersistent desktops revert to their original state after each use,
rather than being personalized for a unique user.
 In non-persistent VDI, where users connect to generic desktops and no changes
are saved, it is usually simpler and cheaper since there is no need to maintain
customized desktops between sessions.
 As a result, nonpersistent VDI is often used in organizations with many task
workers or employees who perform a limited set of repetitive tasks and don’t
need a customized desktop (a small sketch of the two pool types follows below).
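The contrast between the two approaches can be sketched in a few lines of Python: a persistent pool maps each user to a dedicated desktop (one-to-one), while a non-persistent pool hands out any free desktop and returns it to the pool afterwards. The pool classes and desktop names are hypothetical illustrations, not any vendor's API.

class PersistentPool:
    def __init__(self):
        self.assigned = {}  # user -> dedicated desktop (one-to-one)

    def connect(self, user):
        # The same user always gets the same, customizable desktop.
        return self.assigned.setdefault(user, "desktop-%d" % len(self.assigned))

class NonPersistentPool:
    def __init__(self, size):
        self.free = ["desktop-%d" % i for i in range(size)]  # many-to-one pool

    def connect(self, user):
        return self.free.pop()          # any generic desktop will do

    def disconnect(self, desktop):
        self.free.append(desktop)       # reverts to its original state

if __name__ == "__main__":
    p = PersistentPool()
    print(p.connect("alice"), p.connect("alice"))  # same desktop both times
    n = NonPersistentPool(2)
    d = n.connect("bob")
    n.disconnect(d)                     # desktop returns to the shared pool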
Virtual Desktop Interface Cont…..
 Why VDI? : VDI offers a number of advantages, such as user mobility, ease of
access, flexibility and greater security. In the past, high-performance requirements
made it costly and challenging to deploy on legacy systems, which posed a barrier
for many businesses. However, the rise in enterprise adoption of hyperconverged
infrastructure (HCI) offers a solution that provides scalability and high
performance at a lower cost.
 Benefits of VDI: Although VDI’s complexity means that it isn’t necessarily the right
choice for every organization, it offers a number of benefits for organizations:
1. Remote access: VDI users can connect to their virtual desktop from any location
or device, making it easy for employees to access all their files and applications
and work remotely from anywhere in the world.
2. Cost savings: Since processing is done on the server, the hardware
requirements for end devices are much lower. Users can access their virtual
desktops from older devices, thin clients, or even tablets, reducing the need for
IT to purchase new and expensive hardware.
3. Security: In a VDI environment, data lives on the server rather than the end
client device. This serves to protect data if an endpoint device is ever stolen or
compromised.
4. Centralized management: VDI’s centralized format allows IT to easily patch,
update or configure all the virtual desktops in a system.
Virtual Desktop Interface Cont…..
 What is VDI used for? There are a number of use cases that are uniquely suited for
VDI, including:
1.Remote work: Since VDI makes virtual desktops easy to deploy and update from
a centralized location, an increasing number of companies are implementing it for
remote workers.
2.Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices. Since processing is done on
a centralized server, VDI allows the use of a wider range of devices. It also offers
better security, since data lives on the server and is not retained on the end
client device.
3.Task or shift work: Nonpersistent VDI is particularly well suited to organizations
such as call centers that have a large number of employees who use the same
software to perform limited tasks.
Virtual Desktop Interface Cont…..
 How to implement VDI?: Larger enterprises should consider implementing it in an
HCI environment, as HCI’s scalability and high performance are a natural fit for
VDI’s resource needs. On the other hand, implementing HCI for VDI is probably not
necessary (and would be overly expensive) for organizations that require less than
100 virtual desktops.
 Best practices to follow when implementing VDI: 
 Prepare Your Network: Since VDI performance is so closely linked to network
performance, it’s important to know peak usage times and anticipate demand
spikes to ensure sufficient network capacity.
 Avoid Underprovisioning: Perform capacity planning in advance using a
performance monitoring tool to understand the resources each virtual desktop
consumes and to make sure you know your overall resource consumption needs.
 Understand Your End-Users’ Needs: Identify whether the organization is better
suited to a persistent or a non-persistent VDI setup, and what the users’
performance requirements are. Accordingly, provision the setup differently for
users who use graphics-intensive applications versus those who just need access
to the internet or to one or two simple applications.
 Perform a Pilot Test: Most virtualization providers offer testing tools that you
can use to run a test VDI deployment beforehand; it’s important to do so to make
sure you’ve provisioned your resources correctly.
Virtual Desktop Interface Cont…..
 Virtual desktop infrastructure (VDI) use cases
1.Remote workers: The use of VDI environments makes it much easier to provide
access for remote workers to organizational standard desktop environments
across a broad range of devices.
 With the virtual desktop, access to the core software systems can be
controlled and access can be granted to any remote worker at a remote site,
with minimal investment.
 Regardless of location, each team member has access to the same
organizational network and resources while maintaining central access and
application controls.
 Persistent desktops can be customized to suit each user, while the data
remains centralized in the core data center.
Virtual Desktop Interface Cont…..
2. Call centers: A major advantage of a non-persistent desktop is the ability to simply
consume a standard desktop from a pool of identical desktops.
 The typical call center is an excellent example of how this model directly
supports the needs of a team of people. Each member of the team is only
required to do a specific set of tasks, which do not require the desktop to be
nonstandard.
 The standard nonpersistent desktop instance can be easily patched and
deployed with just the requisite software installed, and can be deployed across
physical sites with minimal complications.
3. Contract employees: When temporary contractors join a team, they need access
to some of the core assets and team members, but security is an important
consideration. By using a virtual desktop, it’s possible to control access to corporate
resources while delivering the connection point for the temporary workers.
Contractors are able to perform tasks that use organizational resources without
having access to systems that are not related to the contract.
Fog Computing and Edge Computing
 Nowadays, a massive amount of data is generated every second around the
globe.
 Businesses collect and process that data from the people and get analytics to
scale their business.
 When lots of organizations access their data simultaneously on the remote
servers in data centers, data traffic might occur. Data traffic can cause some
delay in accessing the data, lower bandwidth, etc.
 Cloud computing technology alone is not effective enough to store and
process massive amounts of data and respond quickly.  
 For example, in the Tesla self-driving car, the sensor constantly monitors
certain regions around the car. If it detects an obstacle or pedestrian on its way,
then the car must be stopped or move around without hitting. When an
obstacle is on its way, the data sent through the sensor must be processed
quickly and help the car to detect before it hits. A little delay in detection could
be a major issue. To overcome such challenges, edge computing and fog
computing are introduced.  
Fog Computing
 Fog computing or fog networking, also known as fogging, is an architecture that
uses edge devices to carry out a substantial amount of computation (edge
computing), storage, and communication locally, routed over the Internet
backbone.
 Fog computing is a decentralized computing infrastructure in which data, compute,
storage and applications are located somewhere between the data source and the
cloud.
 The devices comprising the fog infrastructure are known as fog nodes.
Fog Computing Cont…..
 In 2011, the need to extend cloud computing with fog computing emerged, in order
to cope with huge number of IoT devices and big data volumes for real-time low-
latency applications.
  Fog computing, also called edge computing, is intended for distributed computing
where numerous "peripheral" devices connect to a cloud.
 The word "fog" refers to its cloud-like properties, but closer to the "ground", i.e.
IoT devices.
 Many of these devices will generate voluminous raw data (e.g., from sensors),
and rather than forward all this data to cloud-based servers to be processed, the
idea behind fog computing is to do as much processing as possible using
computing units co-located with the data-generating devices, so that
processed rather than raw data is forwarded and bandwidth requirements are
reduced (a small sketch follows at the end of this slide).
 An additional benefit is that the processed data is most likely to be needed by the
same devices that generated the data, so that by processing locally rather than
remotely, the latency between input and response is minimized.
 This idea is not entirely new: in non-cloud-computing scenarios, special-purpose
hardware (e.g., signal-processing chips performing Fast Fourier Transforms) has
long been used to reduce latency and reduce the burden on a CPU.
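A minimal Python sketch of that forward-processed-not-raw-data idea: a fog node co-located with hypothetical temperature sensors aggregates many raw samples and sends only a compact summary upstream. The data and summary format are assumptions for illustration.

from statistics import mean

def fog_aggregate(raw_readings):
    """Reduce many raw samples to one small record before the cloud uplink."""
    return {
        "count": len(raw_readings),
        "mean": round(mean(raw_readings), 2),
        "max": max(raw_readings),
    }

if __name__ == "__main__":
    # 1,000 raw temperature samples stay local; only 3 numbers go to the cloud.
    raw = [20.0 + (i % 7) * 0.5 for i in range(1000)]
    print(fog_aggregate(raw))  # e.g. {'count': 1000, 'mean': 21.5, 'max': 23.0}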
Fog Computing Cont…..
  Fog computing emphasizes:
 Proximity (closeness ) to end-users and client objectives (e.g. operational costs,
security policies, resource exploitation)
 Dense geographical distribution and context-awareness (for what concerns
computational and IoT resources),
 Latency reduction and backbone bandwidth savings to achieve better quality of
service (QoS) 
 Edge analytics/stream mining, resulting in superior user-experience and
redundancy in case of failure while it is also able to be used in Assisted
Living scenarios.
 Fog networking supports the Internet of Things (IoT) concept, in which most of the
devices used by humans on a daily basis will be connected to each other.
 Examples include phones, wearable health monitoring devices, connected
vehicle and augmented reality using devices such as the Google Glass.
 IoT devices are often resource-constrained and have limited computational
abilities to perform cryptography computations.
 A fog node can provide security for IoT devices by performing these
cryptographic computations instead
Fog Computing Cont…..
 Both cloud computing and fog computing provide storage, applications, and data
to end-users. However, fog computing is closer to end-users and has wider
geographical distribution.
 'Cloud computing' is the practice of using a network of remote servers hosted on
the Internet to store, manage, and process data, rather than a local server or a
personal computer.
 Also known as edge computing or fogging, fog computing facilitates the
operation of compute, storage, and networking services between end devices
and cloud computing data centers.
 The University of Melbourne is addressing the challenges of collecting and
processing data from cameras, ECG devices, laptops, smartphones, and IoT devices
with its project FogBus 2, which uses edge/fog and Oracle Cloud Infrastructure to
process data in real-time.
Fog Computing Cont…..
 Definition as per the National Institute of Standards and Technology's Fog
Computing Conceptual Model, which defines fog computing as a horizontal,
physical or virtual resource paradigm that resides between smart end-devices and
traditional cloud computing or data centers. This paradigm supports vertically-
isolated, latency-sensitive applications by providing ubiquitous, scalable,
layered, federated, distributed computing, storage, and network connectivity.
 Thus, fog computing is most distinguished by distance from the edge.
 In the theoretical model of fog computing, fog computing nodes are physically
and functionally operative between edge nodes and centralized cloud.
Fog Computing Cont…..
 Cisco invented the phrase "Fog Computing," which refers to extending cloud
computing to an enterprise's network's edge. As a result, it's also known
as Fogging or Edge Computing. It makes computation, storage, and networking
services more accessible between end devices and computing data centers.
 Fog computing is the computing, storage, and communication architecture that
employs EDGE devices to perform a significant portion of computation, storage,
and communication locally before routing it over the Internet backbone.
 The goal of fog computing is to conduct as much processing as possible using
computing units that are co-located with data-generating devices so that processed
data rather than raw data is sent and bandwidth needs are decreased.
 Another advantage of processing locally rather than remotely is that the
processed data is more needed by the same devices that created the data, and the
latency between input and response is minimized.
Fog Computing Cont…..
 History of fog computing: The term fog computing was coined by Cisco in January
2014. Just as fog refers to clouds that are close to the ground, fog computing
relates to nodes situated between the host and the cloud, near the host. The aim
was to bring the computational capabilities of the system close to the host
machine. After this gained popularity, IBM in 2015 coined the similar term
“Edge Computing”.
 When to use fog computing?  Fog Computing can be used in the following
scenarios: 
 It is used when only selected data needs to be sent to the cloud. This
selected data is chosen for long-term storage and is less frequently accessed by
the host.
 It is used when the data should be analyzed within a fraction of a second,
i.e. when latency should be low.
 It is used whenever a large number of services need to be provided over a large
area at different geographical locations.
 Devices that are subjected to rigorous computation and processing, such as
IoT devices, should use fog computing.
 Real-world examples where fog computing is used are IoT devices (e.g. the
Car-to-Car Consortium, Europe), devices with sensors, and cameras (IIoT -
Industrial Internet of Things).
Fog Computing Cont…..
 Advantages of fog computing 
 This approach reduces the amount of data that needs to be sent to the cloud. 
 Since the distance to be traveled by the data is reduced, it results in saving
network bandwidth. 
 Reduces the response time of the system. 
 It improves the overall security of the system as the data resides close to the
host.
 It provides better privacy as industries can perform analysis on their data locally.
 Disadvantages of fog computing 
 Congestion may occur between the end devices and the fog node due to
increased traffic (heavy data flow). 
 Power consumption increases when another layer is placed between the end
devices and the cloud. 
 Scheduling tasks between end devices and fog nodes along with fog nodes and
the cloud is difficult. 
 Data management becomes tedious because, along with the data stored and
computed, the transmission of data involves encryption and decryption too,
which in turn adds overhead.
Fog Computing Cont…..
 Applications of fog computing
• It can be used to monitor and analyze the patients’ condition. In case of
emergency, doctors can be alerted.

• It can be used for real-time rail monitoring as for high-speed trains we want as
little latency as possible.
Mist (Edge) Computing
 Cloud, fog and edge computing may appear similar, but they are different layers
of the IIoT.
 Edge computing for the IIoT allows processing to be performed locally at multiple
decision points for the purpose of reducing network traffic. 
 As the name implies, edge computing occurs exactly at ‘the edge’ of the
application network.
 In terms of topology, this means that an ‘edge computer’ is right next to or even
on top of the endpoints (such as controllers and sensors) connected to the
network. The data is then either partially or entirely processed and sent to the
cloud for further processing or storage.
Edge Computing Cont…..
 The number of devices connected to enterprise networks and the volume of data
being generated by them are scaling at a pace that is too rapid for traditional
data centers to keep up with.
– Such a situation could lead to tremendous strain on both local networks and the
internet at large.
– To address this threat of congestion and help enhance the reliability of big data
processing systems, IT infrastructure has evolved to bring computing resources
to the point of data generation. Edge computing removes the reliance on a
single, centralized data processing center. Instead, it makes computing more
efficient by bringing data centers closer to where they are actually needed.

 However, edge computing can lead to large volumes of data being transferred
directly to the cloud. This can affect system capacity, efficiency, and security.
 Fog computing addresses this problem by inserting a processing layer between
the edge and the cloud. This way, the ‘fog computer’ receives the data gathered
at the edge and processes it before it reaches the cloud.
Edge Computing Cont…..
 As such, edge computing and fog computing work in unison to minimize latency
and maximize the efficiency associated with cloud-enabled enterprise systems.
 IT personnel commonly view the terms edge computing and fog computing as
interchangeable. This is because both processes bring processing and intelligence
closer to the data source.
 Edge computing defined: Edge computing brings processing and storage systems as
close as possible to the application, device, or component that generates and
collects data.
 This helps minimize processing time by removing the need for transferring data to a
central processing system and back to the endpoint. As a result, data is processed
more efficiently, and the need for internet bandwidth is reduced.
 This keeps operating costs low and enables the use of applications in remote
locations that have unreliable connectivity.
 Security is also enhanced as the need for interaction with public cloud platforms
and networks is minimized.
Edge Computing Cont…..
Edge Computing Cont…..
 Edge computing is useful for environments that require real-time data processing
and minimal latency.
 This includes applications such as autonomous vehicles, the internet of things
(IoT), software as a service (SaaS), rich web content delivery, voice assistants,
predictive maintenance, and traffic management.
 Autonomous vehicle edge computing devices collect data from cameras and
sensors on the vehicle, process it, and make decisions in milliseconds, such as
self-parking cars.
 In order to accurately assess a patient’s condition and foresee treatments, data
is processed from a variety of edge devices connected to sensors and monitors.
 Examples of edge devices are sensors, laptops, and smart phones.
Edge Computing Cont…..
 The aim of edge computing is to move the computation away from data centers
towards the edge of the network, exploiting smart objects, mobile phones,
or network gateways to perform tasks and provide services on behalf of the cloud.
 By moving services to the edge, it is possible to provide content caching, service
delivery, persistent data storage, and IoT management, resulting in better response
times and transfer rates. At the same time, distributing the logic to different
network nodes introduces new issues and challenges.
 Computation takes place at the edge of a device’s network, which is known as
edge computing. That means a computer connected to the device’s network
processes the data and sends it to the cloud in real time. That computer is
known as an “edge computer” or “edge node” (a small sketch follows below).
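Here is a minimal Python sketch of an edge node making a time-critical decision locally and forwarding only notable events to the cloud, loosely echoing the self-driving example earlier in this unit. The threshold and event format are hypothetical.

OBSTACLE_THRESHOLD_M = 2.0  # react locally if an object is closer than this (assumed)

def edge_step(distance_m, cloud_log):
    """Decide in the local control loop; upload only significant events."""
    if distance_m < OBSTACLE_THRESHOLD_M:
        cloud_log.append({"event": "obstacle", "distance_m": distance_m})
        return "BRAKE"   # immediate local decision, no cloud round trip
    return "CRUISE"

if __name__ == "__main__":
    log = []
    for d in [10.0, 5.5, 1.4, 8.0]:  # simulated sensor distances in metres
        print(d, "->", edge_step(d, log))
    print("sent to cloud:", log)     # only the close-call event was uploaded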
Fog Computing Vs Edge Computing
1. Concept:
Edge Computing: Edge computing is defined as a computing architecture that
brings data processing as close to the source of data as physically possible.
– In many cases, data collection and processing occur on the same device, such as on an
endpoint computer or IoT device. This minimizes bandwidth use and latency.
– i.e computing processes take place locally, thus reducing the need for long-distance
data transfers to cloud servers, which can be expensive and slow.
Fog Computing: Fog computing is an alternative to in-cloud processing and data
storage. Like edge computing, fog computing reduces bandwidth requirements by
transmitting less data to and from remote, cloud-based data centers. Instead,
data is processed as close to the edge as physically possible.
– However, unlike edge computing, fog computing often does not take place on the same
device on which data is extracted or produced. In simpler terms, while ‘edge
computers’ are normally the same devices that generate or collect data, ‘fog computers’
are nodes that are physically close to but distinct from these edge computers.
Fog Computing Vs Edge Computing Cont…..
2. Scope
Edge Computing: Edge computing normally takes place on employee endpoints
(laptops or smartphones) or IoT devices (sensors).
• In some cases, the device that collects or generates data is not the same as the
‘edge computer’. Rather, the edge computer is a device that stores and computes
data and is connected to the data-generating device over a local area network.
Depending on the nature of the data being collected, this setup can be protected
from wear and tear by using air conditioning, hardened enclosures, or other
forms of security infrastructure.
• It can also transmit the results of its processes directly to the cloud. As such, edge
computing is possible without fog computing.
Fog Computing: Fog computing reduces the load on both edge and cloud computers
by undertaking processing tasks from both sides.
• A fog computer is physically close to the edge computer, and they can both be
connected using a LAN.
• Fog computing is adopted in environments where the cloud platform is located
too far away to allow efficient response times, and the edge devices are either
resource-limited or physically distributed.
• A fog computer, by definition, is not capable of data collection or generation. As
such, fog computing would not exist without edge computing.
Fog Computing Vs Edge Computing Cont…..
3. Applications
Edge Computing: Edge computing is normally used in less resource-intensive
applications due to the limited capabilities of the devices that collect data for
processing.
• Predictive maintenance is one such application. Here, edge computers in the form
of sensors help manufacturers analyze plant equipment and detect changes
before a failure occurs. IIoT sensors constantly monitor equipment health and use
analytics to warn of impending maintenance needs.
• Healthcare applications such as patient monitoring are also a popular use for edge
devices. Devices such as smart glucose monitors and heart monitors connect
directly to patients’ smart phones and relay relevant information to their
healthcare provider in real-time.
• Massive-scale multiplayer gaming continues to stay popular across the globe. This
is a prime example of edge computing, as all inputs and processing takes place on
the edge device, which can be a gaming console, personal computer, or
smartphone. As this form of gaming is highly sensitive to latency, only the
metadata from the game session is transmitted to the cloud for processing.
Provided the connections between the edge devices and the cloud server are
stable, the outcomes of the actions of all players are displayed in real-time.
Fog Computing Vs Edge Computing Cont…..
3. Applications
Fog Computing: Fog computing is often deployed in time-sensitive applications
that require high volume, resource-intensive processing of data collected from a
dispersed network of devices.
• Autonomous vehicles, especially cars and drones, are fast gaining popularity in the
US and around the world. These vehicles, used for civil as well as military
applications, produce high volumes of data. This information needs to be
processed in real-time, or lives can be put in danger.
• Smart grids also require the processing of large volumes of real-time data to allow
for efficient management. The sensors and other edge devices used in these
applications are numerous and greatly dispersed. Therefore, fog computing is
used to process data concurrently without compromising response time.
• Real-time analytics that leverage artificial intelligence and machine learning to
generate actionable business insights rely on the data collected from numerous
edge computers. While long-term analytics can rely directly on a centralized cloud
computer, rapidly changing short-term analytics may take place over fog
computers. This helps meet the requirements of time-sensitive data analytics
applications, such as those seen in the banking and finance industry
Fog Computing Vs Edge Computing Cont…..
4. Processing and Storage
Edge Computing: In the case of edge computing, data is processed and stored either
within the edge computer itself or very close to it.
• As edge devices have limited processing and storage capabilities, data can be
transmitted to the cloud for further operations.
• A smart phone connected to a cloud network is an example of an edge computer.
Fog Computing: Fog computing is more like a ‘gateway’ of intelligence and
processing power. A fog computer connects to a batch of edge computers
simultaneously, thus creating a localized network of devices for more efficient data
processing and storage.
• Fog computing enhances the efficiency of enterprise networks by providing a
‘bigger picture’ of the operations through the use of data from multiple endpoints.
However, it does so without the latency and congestion associated with a direct
edge to cloud connection.
• An IIoT environment in a manufacturing plant is an example of fog computing.
Fog Computing Vs Edge Computing Cont…..
5. Economic Considerations
Edge Computing: Edge computing services are provided by leading vendors such as
Microsoft, Amazon, and Dell. These providers have a fixed, recurring fee based on
configuration and usage.
• Companies can also set up their own edge infrastructure. However, depending on
the scale of operations and the quality of the components used, it is usually more
economical for edge computing requirements to be outsourced.

Fog Computing: Fog computing services are a slightly more customized version of edge
computing and may need to be set up either from scratch or using a combination
of ‘as a service’ deployments.
• While this is likely to mean a higher price tag than edge computing, the benefits of
fog computing are manifold. For instance, fog computing creates an economic
opportunity through massive savings in terms of bandwidth, latency, computing,
and storage.
• Depending on the use case, fog computing provides an economically viable
alternative to large data centers.