
Departmental Elective – V (Cloud Computing) (KCS 713)

Unit – 1

1. Definition:
NIST: National Institute of Standards and Technology
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction. This cloud model is composed of five essential characteristics, three
service models, and four deployment models.

1.1 What is the Cloud?


The term cloud has been used historically as a metaphor for the Internet. This usage was
originally derived from its common depiction in network diagrams as an outline of a cloud, used
to represent the transport of data across carrier backbones (which owned the cloud) to an
endpoint location on the other side of the cloud.

Figure 1: Backbone network represented as cloud

This concept dates back as early as 1961, when Professor John McCarthy suggested that
computer time-sharing technology might lead to a future where computing power and even
specific applications might be sold through a utility-type business model. This idea became very
popular in the late 1960s, but by the mid-1970s it had faded away, as it became clear that the
IT-related technologies of the day could not sustain such a futuristic computing model.
However, since the turn of the millennium, the concept has been revitalized. It was during this
time of revitalization that the term cloud computing began to emerge in technology circles.
1.2 More about cloud computing
1.2.1 Computing over the Internet
Gordon Bell, Jim Gray, and Alex Szalay have advocated: “Computational science is changing to
be data-intensive. Supercomputers must be balanced systems, not just CPU farms but also
petascale I/O and networking arrays.” In the future, working with large data sets will typically
mean sending the computations (programs) to the data, rather than copying the data to the
workstations. This reflects the trend in IT of moving computing and data from desktops to large
data centers, where there is on-demand provision of software, hardware, and data as a service.
This data explosion has promoted the idea of cloud computing.

Cloud computing has been defined differently by many users and designers. For example, IBM, a
major player in cloud computing, has defined it as follows: “A cloud is a pool of virtualized
computer resources. A cloud can host a variety of different workloads, including batch-style
backend jobs and interactive and user-facing applications.” Based on this definition, a cloud
allows workloads to be deployed and scaled out quickly through rapid provisioning of virtual or
physical machines. The cloud supports redundant, self-recovering, highly scalable programming
models that allow workloads to recover from many unavoidable hardware/software failures.
Finally, the cloud system should be able to monitor resource use in real time to enable
rebalancing of allocations when needed.

1.2.2 Internet Clouds


Cloud computing applies a virtualized platform with elastic resources on demand, provisioning
hardware, software, and data sets dynamically. The idea is to move desktop computing to a
service-oriented platform using server clusters and huge databases at data centers. Cloud
computing leverages its low cost and simplicity to benefit both users and providers; machine
virtualization has enabled this cost-effectiveness. Cloud computing is intended to satisfy many
user applications simultaneously, and the cloud ecosystem must be designed to be secure,
trustworthy, and dependable. Some computer users think of the cloud as a centralized resource
pool, while others consider it a server cluster that practices distributed computing over all the
servers used.

1.3 Top cloud service providers:


• Amazon Web Services (AWS)
• Microsoft Azure
• Google Cloud
• Alibaba Cloud
• IBM Cloud
• Oracle
• Salesforce
• SAP
2. Characteristics of Cloud Computing
There are five essential characteristics of Cloud Computing.
1. On-demand self-service:
Cloud computing services do not require human administrators; users themselves can
provision, monitor, and manage computing resources as needed.
2. Broad network access:
Computing services are generally provided over standard networks and accessed from
heterogeneous devices.
3. Rapid elasticity:
Computing services should have IT resources that can scale out and in quickly, on an
as-needed basis. Resources are provided whenever the user requires them and scaled in as
soon as the requirement is over.
4. Resource pooling:
The IT resources present (e.g., networks, servers, storage, applications, and services) are
shared across multiple applications and tenants. Multiple clients are served from the same
physical resources.
5. Measured service:
Resource utilization is tracked for each application and tenant, providing both the user and
the resource provider with an account of what has been used. This is done for various
reasons, such as monitoring, billing, and effective use of resources (a hypothetical metering
sketch follows this list).
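
As a simple illustration of measured service, the following sketch computes an itemized
pay-per-use bill from metered usage. It is hypothetical: the resource names, rates, and usage
figures are made up and do not reflect any provider's actual pricing.

```python
# Hypothetical illustration of "measured service": usage is metered per
# resource and billed pay-per-use. Rates and usage figures are made up.

RATES = {
    "vcpu_hours": 0.05,        # $ per vCPU-hour (hypothetical)
    "storage_gb_month": 0.02,  # $ per GB-month (hypothetical)
    "egress_gb": 0.09,         # $ per GB transferred out (hypothetical)
}

# Usage recorded for one tenant over the billing period.
usage = {"vcpu_hours": 720, "storage_gb_month": 100, "egress_gb": 50}

def metered_bill(usage, rates):
    """Return an itemized bill: each resource is charged only for what was used."""
    items = {res: qty * rates[res] for res, qty in usage.items()}
    return items, sum(items.values())

items, total = metered_bill(usage, RATES)
for res, cost in items.items():
    print(f"{res:>18}: ${cost:.2f}")
print(f"{'total':>18}: ${total:.2f}")
```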

Figure 2: Cloud computing framework


Other characteristics are-
Pay as you go

In cloud computing, the user pays only for the service or the space actually utilized. There are
no hidden or extra charges, the service is economical, and some space is often allotted for free.

Economical

For the host company, the infrastructure is largely a one-time investment: the host buys the
storage once and provides small parts of it to many companies, which saves the host from
recurring monthly or yearly costs. The only ongoing spending is on basic maintenance and a few
other minor expenses.

Security

Cloud security is one of the strongest features of cloud computing. Providers create snapshots
of the stored data so that data is not lost even if one of the servers gets damaged. The data is
kept on access-controlled storage devices that are difficult for unauthorized persons to reach,
and the storage service is quick and reliable.

3. Cloud Elasticity:

Elasticity refers to the ability of a cloud to automatically expand or shrink its infrastructural
resources in response to sudden rises and falls in demand, so that the workload can be managed
efficiently. This elasticity helps to minimize infrastructural cost. It is not applicable to every
kind of environment; it helps only in those scenarios where resource requirements fluctuate up
and down suddenly for a specific time interval. It is not practical where a persistent resource
infrastructure is required to handle a heavy workload. It is most commonly used in pay-per-use
public cloud services, where IT managers are willing to pay only for the duration for which they
consume the resources.
Example:
Consider an online shopping site whose transaction workload increases during a festive season
such as Christmas. For this specific period of time, resource needs spike. To handle this kind
of situation, we can use a cloud elasticity service; as soon as the season is over, the deployed
resources can be released (see the sketch below).
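
A minimal sketch of the scale-out/scale-in decision behind elasticity. The thresholds, limits,
and load samples are hypothetical, and a real autoscaler would call the cloud provider's API
rather than print.

```python
# Hypothetical threshold-based autoscaler: scale out when average load is
# high, scale in when it is low. Thresholds and limits are made-up values.

SCALE_OUT_ABOVE = 0.80   # add a server when utilization exceeds 80%
SCALE_IN_BELOW = 0.30    # remove a server when utilization falls below 30%
MIN_SERVERS, MAX_SERVERS = 1, 10

def autoscale(servers, utilization):
    """Return the new server count for the observed average utilization."""
    if utilization > SCALE_OUT_ABOVE and servers < MAX_SERVERS:
        return servers + 1   # festive-season spike: provision one more instance
    if utilization < SCALE_IN_BELOW and servers > MIN_SERVERS:
        return servers - 1   # demand has dropped: release an instance
    return servers

servers = 2
for load in [0.5, 0.85, 0.9, 0.95, 0.6, 0.2, 0.1]:  # simulated load samples
    servers = autoscale(servers, load)
    print(f"load={load:.2f} -> {servers} server(s)")
```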

4. On-demand provisioning

Cloud services are on-demand; that is, service consumers can automatically request the service
based on their needs, without human interaction with the service provider. Assigning resources is
done dynamically based on the consumers' needs. The on-demand model evolved to overcome
the challenge of being able to meet fluctuating resource demands efficiently. Because demand
for computing resources can vary drastically from one time to another, maintaining sufficient
resources to meet peak requirements can be costly. This is one of the key characteristics of
cloud computing that makes it appealing to cloud users (a minimal provisioning sketch follows).
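
As a concrete (hypothetical) illustration, with AWS this self-service provisioning reduces to a
couple of SDK calls. The sketch below uses boto3 and assumes configured AWS credentials; the
AMI ID is a placeholder, not a real image.

```python
# Minimal sketch of on-demand provisioning with the AWS SDK for Python
# (boto3). Assumes AWS credentials are configured; the AMI ID below is a
# placeholder, not a real image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request (provision) a virtual machine on demand, with no human
# interaction on the provider's side.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("provisioned:", instance_id)

# Release the resource when it is no longer needed (pay only for usage).
ec2.terminate_instances(InstanceIds=[instance_id])
```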
5. Evolution of cloud computing
5.1 From Mainframe to Personal Computers
In 1971 Intel introduced the 4004 microprocessor, followed by the 8008 in 1972. The Personal
Computer (PC) then came into existence in the 1970s. In the 1970s, IBM released an operating
system called VM that allowed administrators of its System/370 mainframe systems to run
multiple virtual systems, or Virtual Machines (VMs), on a single machine. The VM operating
system extended the shared access of a mainframe further, at large scale, by offering many
separate computing environments in the same physical machine. Although much more sophisticated
now than in the 1970s, most of the basic functions of the virtualization software available
today can be compared with this early VM OS: every VM had its own operating system, memory,
processor, storage, networking, and I/O devices. Virtualization became a key factor and led the
path to the biggest evolutions in ICT. In 1975 MITS developed the Altair 8800, sold as a
construction kit, one of the first home computers, for which Microsoft developed a BASIC
interpreter. More and more home computers from leading companies such as Apple were launched.
In 1981 IBM joined this market segment and floated the name Personal Computer (PC). Microsoft
developed the operating system for the IBM PC, which emerged as the standard platform with
which many PC manufacturers were compatible. Since then the development and penetration of PCs
into human life has gained pace: ever faster processors were introduced, graphical user
interfaces were established, human-computer interaction became graceful, and continuing
miniaturization eventually led to the development of laptops and mobile devices.
5.2 From Personal Computers to the Internet
Another important milestone was the development of the Internet, which took the world by storm.
The Internet has its roots in a research project at the Advanced Research Projects Agency
(ARPA): in 1969, on behalf of the US Department of Defense, a system was developed to
communicate among nodes that could keep working even if one of its nodes failed. Out of this
project, the ARPAnet was developed. In 1981 around 200 institutions were connected to this
network, and this network of networks was soon called the Internet. At the start its purpose was
to share information on ongoing scientific projects and military information between two
parties, but later on it became a boon for the whole world. Many applications were developed
that attracted common people; e-mail, newsgroups, and Telnet were common applications of general
interest. However, the Internet penetrated everyday life with the introduction of the World Wide
Web by Tim Berners-Lee in 1989. Tim Berners-Lee designed an information management system for
CERN: a hypertext-based network structure in which information entities are accessed through
logical references called hyperlinks. Until then, hypertextual semantics existed only in the
form of tables of contents or cross references; the modern hypertext model can be traced back to
Vannevar Bush (1945). With the introduction of graphical web browsers such as Mosaic and, later,
Internet Explorer, the World Wide Web finally gained great popularity. In the 1990s,
telecommunications companies took this revolution to new heights. As the number of users grew,
instead of building more physical infrastructure, they enhanced their services from dedicated
point-to-point data connections to virtual private network connections with the same service
value as before but at a reduced cost. This change allowed the telecommunication companies to
shift data load as required, maintaining better network balance and more control over bandwidth
usage. Meanwhile, virtualization for home computers developed intensively, and as the Internet
became more accessible, the next logical step was to take virtualization online. As the Internet
and online applications grew, resource demand fluctuated more and more with workload. The
solution adopted was virtualization. Virtualization led to shared hosting environments:
virtually dedicated environments for applications, using the same types of functionality
provided by the VM OS of the 1970s. This kind of environment saved on infrastructure costs and
minimized the amount of actual hardware organizations needed to achieve the desired performance.
With the introduction of new technologies such as Java, it became possible to develop more and
more elaborate, interactive websites. Due to this development, we now have online shopping
websites, reservation sites, blogs, and much more. Nowadays almost all organizations are going
online, accepting requests from customers online and replying online too. Examples of such
applications are railway reservation, electric fault reporting, and social interaction sites
such as Facebook, Twitter, and WhatsApp. As for software services, cloud servers also offer
software for common use, such as word processors. This deployment concept, usually referred to
as Software-as-a-Service (SaaS), gained popularity around the year 2000. Similar deployment
concepts were developed for providing high computing power and resources to academia, in the
form of grid computing, at the beginning of the 1990s.

5.3 Computing Paradox


As the costs of server hardware started coming down at the start of the 21st century, more
people were able to purchase their own computers. This led to a “computing paradox.” A question
arises in every mind: when electronic components are becoming more and more powerful and
cost-effective, why rent hardware at all? The answer is that although computers are becoming
more powerful and cost-effective, computing itself is becoming more and more expensive.
Computing has become pervasive within organizations, and managing large databases, information
systems, distributed databases, and other software is now no trivial matter. With the
penetration of IT into everyone's life, the growth of the Internet, the extensive offering of
online services by most organizations, and the advent of memory-hungry applications, large
numbers of servers were required and, consequently, large expenditure. With this increased
complexity, computing has become more expensive to an organization than ever before. Here cloud
computing took shape. Using software popularly called a virtual machine monitor, the resources
of a single physical machine are partitioned among many applications, and at the same time
resources can be scaled up or down across other physical machines. This is the ground for the
terms "ubiquitous computing" and "cloud computing": we can add resources to those already
installed when there is need, and turn them off when not needed. So cloud computing can provide
scalability in service.
5.4 From the Internet to Cloud Computing
By the year 2005 most businesses, whether IT organizations or others, were using computer
hardware, software, storage devices, and cooling equipment. Their operational costs were
exceeding their capital costs, resources were not used optimally, and electricity budgets were
being overrun. At the same time, IT giants such as Amazon and Google were confronting a
situation similar to that of the 1960s and 1970s: they too strove to utilize their immense
resources in a better way. Approaches such as virtualization became efficient means of allowing
third-party users to dynamically use their resources and exploit their computing power and
storage capacities. Cloud computing concepts were then shaped into practice in 2007, as a form
of utility computing based on a pay-as-you-use model. The first research initiatives were taken
by Google and IBM, in association with six American universities. Big organizations that had
huge infrastructures, high-end servers, enormous storage, and high-speed networks started
offering IaaS, PaaS, and SaaS in the form of utilities. Organizations whose nature of business
was not computing then started outsourcing their computational needs to cloud providers; in this
way cloud computing started its real journey. With the advancement of technology, large
enterprises started to create bigger environments to offer cloud benefits to users who did not
have sufficiently large resources to set up a cloud of their own. Those users, depending on
their needs, could order resource instances from cloud servers online. For cloud service
providers it was quite easy to "power up" a new instance or server: everything was managed by a
piece of software called a hypervisor, so scaling resources up or down was straightforward.

6. Underlying Principles of Parallel and Distributed Computing


6.1 The Age of Internet Computing
Billions of people use the Internet every day. As a result, supercomputer sites and large data
centers must provide high-performance computing services to huge numbers of Internet users
concurrently. Because of this high demand, the Linpack benchmark for high-performance computing
(HPC) applications is no longer optimal for measuring system performance. (LINPACK is a software
library for performing numerical linear algebra on digital computers. The LINPACK benchmarks
measure a system's floating-point computing power: introduced by Jack Dongarra, they measure how
fast a computer solves a dense n-by-n system of linear equations Ax = b, a common task in
engineering.) The emergence of computing clouds instead demands high-throughput computing (HTC)
systems built with parallel and distributed computing technologies.
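
For reference, the dense system Ax = b that Linpack times can be expressed in a few lines; this
toy NumPy version, with an assumed problem size of n = 1000, only illustrates the computation,
not the benchmark's tuned implementation.

```python
# Toy illustration of the LINPACK problem: solve a dense n-by-n linear
# system Ax = b. The real benchmark uses highly tuned code and measures
# floating-point throughput (flops); this only shows the computation.
import numpy as np

n = 1000  # assumed problem size for illustration
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)                      # LU factorization with partial pivoting
print("residual:", np.linalg.norm(A @ x - b))  # should be near zero
```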

Figure 3: HTC, HPC, Cloud computing


The general computing trend is to leverage shared web resources and massive amounts of data
over the Internet. Figure 3 illustrates the evolution of HPC and HTC systems. On the HPC side,
supercomputers (massively parallel processors, or MPPs) are being gradually replaced by clusters
of cooperative computers, out of a desire to share computing resources. A cluster is often a
collection of homogeneous compute nodes that are physically connected in close range to one
another. Advances in virtualization make it possible to see the growth of Internet clouds as a new
computing paradigm. The maturity of radio-frequency identification (RFID), Global Positioning
System (GPS), and sensor technologies has triggered the development of the Internet of Things
(IoT).

On the HTC side, peer-to-peer (P2P) networks are formed for distributed file sharing and content
delivery applications. A P2P system is built over many client machines. Peer machines are
globally distributed in nature. P2P, cloud computing, and web service platforms are more
focused on HTC applications than on HPC applications. Clustering and P2P technologies lead to
the development of computational grids or data grids.

6.1.1 High-Performance Computing


For many years, HPC systems have emphasized raw speed performance. The speed of HPC systems has
increased from Gflops in the early 1990s to Pflops now. This improvement was driven mainly by
demands from the scientific, engineering, and manufacturing communities. For example, the Top
500 most powerful computer systems in the world are ranked by their floating-point speed in
Linpack benchmark results. However, supercomputer users number less than 10% of all computer
users. Today, the majority of computer users use desktop computers or large servers when they
conduct Internet searches and market-driven computing tasks.

6.1.2 High-Throughput Computing
The development of market-oriented high-end computing systems is undergoing a strategic
change from an HPC paradigm to an HTC paradigm. This HTC paradigm pays more attention to
high-flux computing. The main application for high-flux computing is in Internet searches and
web services by millions or more users simultaneously. The performance goal thus shifts to
measure high throughput or the number of tasks completed per unit of time. HTC technology
needs to not only improve in terms of batch processing speed, but also address the acute
problems of cost, energy savings, security, and reliability at many data and enterprise computing
centers. When the Internet was introduced in 1969, Leonard Kleinrock of UCLA declared: “As
of now, computer networks are still in their infancy, but as they grow up and become
sophisticated, we will probably see the spread of computer utilities, which like present electric
and telephone utilities, will service individual homes and offices across the country.” Many
people have redefined the term “computer” since that time. In 1984, John Gage of Sun
Microsystems created the slogan, “The network is the computer.” In 2008, David Patterson of
UC Berkeley said, “The data center is the computer. There are dramatic differences between
developing software for millions to use as a service versus distributing software to run on their
PCs.” Recently, Rajkumar Buyya of Melbourne University simply said: “The cloud is the
computer.”
6.2 Computing Paradigm Distinctions

Figure 4 : Various computing paradigms

Comparing these six computing paradigms, it appears that cloud computing is a return to the
original mainframe computing paradigm. However, the two paradigms differ in several important
ways: mainframe computing offers finite computing power, while cloud computing provides almost
infinite power and capacity.
The high-technology community has argued for many years about the precise definitions of
centralized computing, parallel computing, distributed computing, and cloud computing. In
general, distributed computing is the opposite of centralized computing. The field of parallel
computing overlaps with distributed computing to a great extent, and cloud computing overlaps
with distributed, centralized, and parallel computing.
• Centralized computing -This is a computing paradigm by which all computer resources are
centralized in one physical system. All resources (processors, memory, and storage) are fully
shared and tightly coupled within one integrated OS. Many data centers and supercomputers are
centralized systems, but they are used in parallel, distributed, and cloud computing applications.
PC computing and Mainframe computing can be regarded as centralized computing.
• Parallel computing- In parallel computing, all processors are either tightly coupled with
centralized shared memory or loosely coupled with distributed memory. Some authors refer to
this discipline as parallel processing. Interprocessor communication is accomplished through
shared memory or via message passing. A computer system capable of parallel computing is
commonly known as a parallel computer. Programs running in a parallel computer are called
parallel programs. The process of writing parallel programs is often referred to as parallel
programming. Grid computing can be viewed as parallel computing.
• Distributed computing - This is a field of computer science/engineering that studies
distributed systems. A distributed system consists of multiple autonomous computers, each having
its own private memory, communicating through a computer network. Information exchange in a
distributed system is accomplished through message passing (a minimal message-passing sketch
appears at the end of this subsection). A computer program that runs in a distributed system is
known as a distributed program. The process of writing distributed programs is referred to as
distributed programming. Internet computing can be viewed as distributed computing.
• Cloud computing- An Internet cloud of resources can be either a centralized or a distributed
computing system. The cloud applies parallel or distributed computing, or both. Clouds can be
built with physical or virtualized resources over large data centers that are centralized or
distributed. Some authors consider cloud computing to be a form of utility computing or service
computing.
As an alternative to the preceding terms, some in the high-tech community prefer the term
concurrent computing or concurrent programming. These terms typically refer to the union of
parallel computing and distributed computing, although biased practitioners may interpret them
differently.
Ubiquitous computing refers to computing with pervasive devices at any place and time using
wired or wireless communication. The Internet of Things (IoT) is a networked connection of
everyday objects including computers, sensors, humans, etc. The IoT is supported by Internet
clouds to achieve ubiquitous computing with any object at any place and time. Finally, the term
Internet computing is even broader and covers all computing paradigms over the Internet.
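
To make the parallel/distributed distinction concrete, here is a minimal message-passing sketch
in Python: two processes with private memories exchange information only through explicit
messages, as in the distributed computing definition above. It is an illustration on one
machine, not a real multi-computer system.

```python
# Minimal illustration of message passing, the communication style of
# distributed computing: two processes with private memory exchange data
# only through explicit messages (here, a multiprocessing Pipe).
from multiprocessing import Process, Pipe

def worker(conn):
    """Runs in a separate process with its own private memory."""
    numbers = conn.recv()                   # receive a message (the task data)
    conn.send(sum(n * n for n in numbers))  # send back the result
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(list(range(10)))       # send the computation to the worker
    print("sum of squares:", parent_conn.recv())  # prints 285
    p.join()
```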

Note - Jim Gray’s paper, “Rules of Thumb in Data Engineering,” is an excellent example of how
technology affects applications and vice versa. In addition, Moore’s law indicates that
processor speed doubles every 18 months. Although Moore’s law has been proven valid over the
last 30 years, it is difficult to say whether it will continue to be true in the future.
Gilder’s law indicates that network bandwidth has doubled each year in the past.
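
To make these growth laws concrete, the following back-of-the-envelope computation (our
illustration, not from the note itself) compounds the two doubling periods over 30 years:

```python
# Compound growth implied by the doubling laws quoted in the note above.
years = 30
moore = 2 ** (years * 12 / 18)   # processor speed doubles every 18 months
gilder = 2 ** years              # network bandwidth doubles every year
print(f"Moore's law over {years} years:  x{moore:,.0f}")   # ~ x1,048,576
print(f"Gilder's law over {years} years: x{gilder:,.0f}")  # ~ x1,073,741,824
```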

6.3. Distributed System Families


Since the mid-1990s, technologies for building P2P networks and networks of clusters have been
consolidated into many national projects designed to establish wide area computing
infrastructures, known as computational grids or data grids. Recently, we have witnessed a surge
in interest in exploring Internet cloud resources for data-intensive applications. Internet
clouds are the result of moving desktop computing to service-oriented computing using server
clusters and huge databases at data centers. Grids and clouds are disparate systems that place
great emphasis on resource sharing in hardware, software, and data sets.
Massively distributed systems are intended to exploit a high degree of parallelism or
concurrency among many machines. In October 2010, the highest performing cluster machine was
built in China, with 86,016 CPU processor cores and 3,211,264 GPU cores, in the Tianhe-1A
system. The largest computational grid connects up to hundreds of server clusters. A typical P2P
network may involve millions of client machines working simultaneously. Experimental cloud
computing clusters have been built with thousands of processing nodes. In the future, both HPC
and HTC systems will demand multicore or many-core processors that can handle large numbers of
computing threads per core. Both HPC and HTC systems emphasize parallelism and distributed
computing. Future HPC and HTC systems must be able to satisfy this huge demand in computing
power in terms of throughput, efficiency, scalability, and reliability. System efficiency is
determined by speed, programming, and energy factors (i.e., throughput per watt of energy
consumed). Meeting these goals requires satisfying the following design objectives:

• Efficiency measures the utilization rate of resources in an execution model by exploiting
massive parallelism in HPC. For HTC, efficiency is more closely related to job throughput, data
access, storage, and power efficiency.
• Dependability measures the reliability and self-management from the chip to the system and
application levels. The purpose is to provide high-throughput service with Quality of Service
(QoS) assurance, even under failure conditions.
• Adaptation in the programming model measures the ability to support billions of job
requests over massive data sets and virtualized cloud resources under various workload and
service models.
• Flexibility in application deployment measures the ability of distributed systems to run well
in both HPC (science and engineering) and HTC (business) applications.

6.4 Degrees of Parallelism


Fifty years ago, when hardware was bulky and expensive, most computers were designed in a
bit-serial fashion. In this scenario, bit-level parallelism (BLP) converts bit-serial processing to
word-level processing gradually. Over the years, users graduated from 4-bit microprocessors to
8-, 16-, 32-, and 64-bit CPUs. This led us to the next wave of improvement, known as instruction-
level parallelism (ILP), in which the processor executes multiple instructions simultaneously
rather than only one instruction at a time. For the past 30 years, we have practiced ILP through
pipelining, superscalar computing, VLIW (very long instruction word) architectures, and
multithreading. ILP requires branch prediction, dynamic scheduling, speculation, and compiler
support to work efficiently.
Data-level parallelism (DLP) was made popular through SIMD (single instruction, multiple
data) and vector machines using vector or array types of instructions. DLP requires even more
hardware support and compiler assistance to work properly. Ever since the introduction of
multicore processors and chip multiprocessors (CMPs), we have been exploring task-level
parallelism (TLP).
A modern processor explores all of the aforementioned parallelism types. In fact, BLP, ILP, and
DLP are well supported by advances in hardware and compilers. However, TLP is far from being
very successful due to difficulty in programming and compilation of code for efficient execution
on multicore CMPs. As we move from parallel processing to distributed processing, we will see
an increase in computing granularity to job-level parallelism (JLP). It is fair to say that coarse-
grain parallelism is built on top of fine-grain parallelism.
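
As a small illustration of DLP, the sketch below contrasts element-at-a-time scalar processing
with a single vectorized array operation; NumPy is used here as a software stand-in for
SIMD/vector hardware, so it mirrors the idea rather than the hardware mechanism.

```python
# Data-level parallelism (DLP) in miniature: the same operation is applied
# to many data elements at once, as SIMD/vector machines do in hardware.
import numpy as np

n = 100_000
a = np.arange(n, dtype=np.float64)
b = np.ones(n, dtype=np.float64)

# Scalar style: one element per loop iteration.
c_scalar = [x + y for x, y in zip(a, b)]

# Vectorized (data-parallel) style: a single array operation; the library
# dispatches SIMD instructions over whole chunks of the arrays.
c_vector = a + b

assert np.allclose(c_scalar, c_vector)  # both compute the same result
```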

6.5 The Internet of Things and Cyber-Physical Systems


These evolutionary trends emphasize the extension of the Internet to everyday objects. The
traditional Internet connects machines to machines or web pages to web pages.
The Internet of Things- The concept of the IoT was introduced in 1999 at MIT. The IoT
refers to the networked interconnection of everyday objects, tools, devices, or computers. One
can view the IoT as a wireless network of sensors that interconnect all things in our daily life.
These things can be large or small and they vary with respect to time and place. The idea is to tag
every object using RFID or a related sensor or electronic technology such as GPS. With the
introduction of the IPv6 protocol, 2^128 IP addresses are available to distinguish all the
objects on Earth, including all computers and pervasive devices. IoT researchers have estimated
that every human being will be surrounded by 1,000 to 5,000 objects. The IoT needs to be
designed to track 100 trillion static or moving objects simultaneously, and it demands universal
addressability of all of these objects or things. To reduce the complexity of identification,
search, and storage, one can set a threshold to filter out fine-grain objects. The IoT obviously
extends the Internet and is most heavily developed in Asian and European countries. In the IoT
era, all objects and devices are instrumented, interconnected, and interact with each other
intelligently. This communication can take place between people and things or among the things
themselves. Three communication patterns co-exist: H2H (human-to-human), H2T (human-to-thing),
and T2T (thing-to-thing). Here things include machines such as PCs and mobile phones. The
idea here is to connect things (including human and machine objects) at any time and any place
intelligently with low cost. The IoT is still in its infancy stage of development. Many prototype
IoTs with restricted areas of coverage are under experimentation. Cloud computing researchers
expect to use the cloud and future Internet technologies to support fast, efficient, and intelligent
interactions among humans, machines, and any objects on Earth. A smart Earth should have
intelligent cities, clean water, efficient power, convenient transportation, good food supplies,
responsible banks, fast telecommunications, green IT, better schools, good health care, abundant
resources, and so on. This dream living environment may take some time to reach fruition at
different parts of the world.
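
A quick back-of-the-envelope check of the addressability claim above: even tracking 100 trillion
objects would use only a vanishing fraction of the 2^128 IPv6 addresses.

```python
# Back-of-the-envelope check on IoT addressability under IPv6.
ipv6_addresses = 2 ** 128           # total IPv6 address space
tracked_objects = 100 * 10 ** 12    # 100 trillion objects, as cited above
print(f"IPv6 addresses:       {ipv6_addresses:.2e}")                    # ~3.40e+38
print(f"addresses per object: {ipv6_addresses / tracked_objects:.2e}")  # ~3.40e+24
```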
Cyber-Physical Systems-A cyber-physical system (CPS) is the result of interaction between
computational processes and the physical world. A CPS integrates “cyber” (heterogeneous,
asynchronous) with “physical” (concurrent and information-dense) objects. A CPS merges the
“3C” technologies of computation, communication, and control into an intelligent closed
feedback system between the physical world and the information world, a concept which is
actively explored in the United States. The IoT emphasizes various networking connections
among physical objects, while the CPS emphasizes exploration of virtual reality (VR)
applications in the physical world. We may transform how we interact with the physical world
just like the Internet transformed how we interact with the virtual world. Examples of CPS
include smart grid, autonomous automobile systems, medical monitoring, industrial control
systems, robotics systems, and automatic pilot avionics.

7. The Hype Cycle of New Technologies


Any new and emerging computing and information technology may go through a hype cycle, as
illustrated in Figure 5. This cycle shows the expectations for the technology at five different
stages. The expectations rise sharply from the trigger period to a high peak of inflated
expectations. Through a short period of disillusionment, the expectation may drop to a valley and
then increase steadily over a long enlightenment period to a plateau of productivity. The number
of years for an emerging technology to reach a certain stage is marked by special symbols. The
hollow circles indicate technologies that will reach mainstream adoption in two years. The gray
circles represent technologies that will reach mainstream adoption in two to five years. The solid
circles represent those that require five to 10 years to reach mainstream adoption, and the
triangles denote those that require more than 10 years. The crossed circles represent technologies
that will become obsolete before they reach the plateau.
The hype cycle in Figure 5 shows the technology status as of August 2010. For example, at that
time consumer-generated media was at the disillusionment stage, and it was predicted to take
less than two years to reach its plateau of adoption. Internet micropayment systems were forecast
to take two to five years to move from the enlightenment stage to maturity. It was believed that
3D printing would take five to 10 years to move from the rising expectation stage to mainstream
adoption, and mesh network sensors were expected to take more than 10 years to move from the
inflated expectation stage to a plateau of mainstream adoption.
Also as shown in Figure 5, the cloud technology had just crossed the peak of the expectation
stage in 2010, and it was expected to take two to five more years to reach the productivity stage.
However, broadband over power line technology was expected to become obsolete before
leaving the valley of disillusionment stage in 2010. Many additional technologies (denoted by
dark circles in Figure 5) were at their peak expectation stage in August 2010, and they were
expected to take five to 10 years to reach their plateau of success. Once a technology begins to
climb the slope of enlightenment, it may reach the productivity plateau within two to five years.
Among these promising technologies are the clouds, biometric authentication, interactive TV,
speech recognition, predictive analytics, and media tablets.

Figure 5: Life cycle of technologies

8. Centralized versus Distributed Computing


Some people argue that cloud computing is centralized computing at data centers. Others claim
that cloud computing is the practice of distributed parallel computing over data-center resources.
These represent two opposite views of cloud computing. All computations in cloud applications
are distributed to servers in a data center. These are mainly virtual machines (VMs) in virtual
clusters created out of data-center resources. In this sense, cloud platforms are systems
distributed through virtualization. Both public clouds and private clouds are deployed over the
Internet. As many clouds are generated by commercial providers or by enterprises in a
distributed manner, they will be interconnected over the Internet to achieve scalable and efficient
computing services. Commercial cloud providers such as Amazon, Google, and Microsoft
created their platforms to be distributed geographically. This distribution is partially
attributed to fault tolerance, response latency reduction, and even legal reasons. Intranet-
based private clouds are linked to public clouds to get additional resources. Nevertheless,
users in Europe may not feel comfortable using clouds in the United States, and vice versa, until
extensive service-level agreements (SLAs) are developed between the two user communities.

9. Cloud Ecosystem and Enabling Technologies


Cloud computing platforms differ from conventional computing platforms in many aspects. In this
section, we identify their differences in the computing paradigms and cost models applied. The
traditional computing model, shown on the left below, involves buying the hardware, acquiring
the necessary system software, installing the system, testing the configuration, executing the
application code, and managing resources. What is worse, this cycle repeats itself roughly every
18 months, meaning the machine we bought becomes obsolete every 18 months. The cloud computing
paradigm, shown on the right, follows a pay-as-you-go model. The cost is therefore significantly
reduced, because we simply rent computer resources without buying the computer in advance. All
hardware and software resources are leased from the cloud provider without capital investment on
the part of the users; only the execution phase costs money. Experts at IBM have estimated a
saving of 80 to 95 percent from cloud computing compared with the conventional computing
paradigm. This is very much desired, especially for small businesses, which require limited
computing power and can thus avoid purchasing expensive computers or servers every few years.

Classical Computing (repeat the following cycle every 18 months):
• Buy and own hardware, system software, and applications to meet peak needs
• Install, configure, test, verify, evaluate, and manage
• Use
• Pay $$$$$ (high cost)

Cloud Computing (pay as you go per each service provided):
• Subscribe
• Use (save about 80-95% of the total cost)
• Finally, pay $ for what you use, based on the QoS

10. Cloud Design Objectives


Despite the controversy surrounding the replacement of desktop or deskside computing by
centralized computing and storage services at data centers or big IT companies, the cloud
computing community has reached some consensus on what has to be done to make cloud
computing universally acceptable. The following list highlights six design objectives for cloud
computing:
• Shifting computing from desktops to data centers - Computer processing, storage, and software
delivery are shifted away from desktops and local servers and toward data centers over the
Internet.
• Service provisioning and cloud economics -Providers supply cloud services by signing SLAs
with consumers and end users. The services must be efficient in terms of computing, storage, and
power consumption. Pricing is based on a pay-as-you-go policy.
• Scalability in performance- The cloud platforms and software and infrastructure services must
be able to scale in performance as the number of users increases.
• Data privacy protection- Can you trust data centers to handle your private data and records?
This concern must be addressed to make clouds successful as trusted services.
• High quality of cloud services - The QoS of cloud computing must be standardized to make
clouds interoperable among multiple providers.
• New standards and interfaces - This refers to solving the data lock-in problem associated with
data centers or cloud providers. Universally accepted APIs and access protocols are needed to
provide high portability and flexibility of virtualized applications.

11. Cost Model


In traditional IT computing, users must acquire their own computer and peripheral equipment as
capital expenses. In addition, they have to face operational expenditures in operating and
maintaining the computer systems, including personnel and service costs. Figure 6(a) shows the
addition of variable operational costs on top of fixed capital investments in traditional IT. Note
that the fixed cost is the main cost, and that it could be reduced slightly as the number of users
increases. However, the operational costs may increase sharply with a larger number of users.
Therefore, the total cost escalates quickly with massive numbers of users. On the other hand,
cloud computing applies a pay-per-use business model, in which user jobs are outsourced to data
centers. To use the cloud, one has no up-front cost in hardware acquisitions. Only variable costs
are experienced by cloud users, as demonstrated in Figure 6(b). Overall, cloud computing will
reduce computing costs significantly for both small users and large enterprises. Computing
economics does show a big gap between traditional IT users and cloud users. Not having to
acquire expensive computers up front relieves a great deal of burden for startup companies. The fact
that cloud users only pay for operational expenses and do not have to invest in permanent
equipment is especially attractive to massive numbers of small users. This is a major driving
force for cloud computing to become appealing to most enterprises and heavy computer users. In
fact, any IT users whose capital expenses are under more pressure than their operational
expenses should consider sending their overflow work to utility computing or cloud service
providers.
Figure 6: Cost model
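
The following toy computation mirrors Figure 6: a fixed capital cost plus variable operating
cost for traditional IT, versus a purely variable pay-per-use cost for the cloud. All dollar
figures are hypothetical, chosen only to show the shape of the two cost curves.

```python
# Hypothetical cost-model comparison (cf. Figure 6): traditional IT has a
# large fixed capital cost plus variable operating cost; cloud computing
# has no up-front cost and only a variable pay-per-use cost.

CAPEX = 100_000          # traditional: up-front hardware purchase ($), made up
OPEX_PER_USER = 40       # traditional: operating cost per user ($/month), made up
CLOUD_PER_USER = 45      # cloud: pay-per-use cost per user ($/month), made up

def traditional_cost(users, months=12):
    return CAPEX + OPEX_PER_USER * users * months  # fixed + variable cost

def cloud_cost(users, months=12):
    return CLOUD_PER_USER * users * months         # variable cost only

for users in [10, 100, 500]:
    t, c = traditional_cost(users), cloud_cost(users)
    print(f"{users:>4} users: traditional ${t:,} vs cloud ${c:,}")
```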
