Unit 1 Notes
Cloud Computing provides a means of accessing applications as utilities over the Internet. It allows us to create, configure, and customize applications online. Before personal computers
took off in the early 1980s, if your company needed sales or payroll figures calculating in a hurry, you'd
most likely have bought in "data-processing" services from another company, with its own expensive
computer systems, that specialized in number crunching; these days, you can do the job just as easily on
your desktop with off-the-shelf software. Or can you? In a striking throwback to the 1970s, many
companies are finding, once again, that buying in computer services makes more business sense than
do-it-yourself. This new trend is called cloud computing and, not surprisingly, it's linked to the Internet's
inexorable rise.
What is Cloud?
The term Cloud refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. A cloud can provide services over public or private networks, i.e., WAN, LAN or VPN. Applications such as e-mail, web conferencing, and customer relationship management (CRM) all run in the cloud. The term "Cloud" came from the network diagrams used by network engineers to represent the location of various network devices and their interconnection; the shape of these diagrams looked like a cloud.
What is Cloud Computing?
Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale. You typically pay only for the cloud services you use, helping lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.
Cloud computing means that instead of all the computer hardware and software you're using sitting on
your desktop, or somewhere inside your company's network, it's provided for you as a service by another
company and accessed over the Internet, usually in a completely seamless way. Exactly where the
hardware and software is located and how it all works doesn't matter to you, the user—it's just somewhere
up in the nebulous "cloud" that the Internet represents.
Cloud computing is a buzzword that means different things to different people. For some, it's just another
way of describing IT (information technology) "outsourcing"; others use it to mean any computing service
provided over the Internet or a similar network; and some define it as any bought-in computer service you
use that sits outside your firewall. However we define cloud computing, there's no doubt it makes most
sense when we stop talking about abstract definitions and look at some simple, real examples—so let's do
just that.
Why Cloud Computing?
With the increase in computer and mobile users, data storage has become a priority in all fields. Large and small-scale businesses today thrive on their data, and they spend a huge amount of money to maintain it. Doing so requires strong IT support and a storage hub. Not all businesses can afford the high cost of in-house IT infrastructure and backup support services; for them, cloud computing is a cheaper solution. Its efficiency in storing data, its computation capabilities, and its lower maintenance costs have succeeded in attracting even bigger businesses as well.
Cloud computing decreases the hardware and software demands on the user's side. The only thing the user must be able to run is the cloud computing system's interface software, which can be as simple as a web browser; the cloud network takes care of the rest. We have all experienced cloud computing at some point; some of the popular cloud services we have used, or are still using, are mail services like Gmail, Hotmail or Yahoo.
While accessing an e-mail service, our data is stored on the cloud server and not on our computer. The technology and infrastructure behind the cloud are invisible. Whether the cloud services are based on HTTP, XML, Ruby, PHP or other specific technologies matters little, as long as the service is user-friendly and functional. An individual user can connect to the cloud system from his or her own devices, such as a desktop, laptop or mobile.
Cloud computing effectively serves small businesses with limited resources: it gives them access to technologies that were previously out of their reach. Cloud computing helps small businesses convert their maintenance costs into profit. Let's see how.
With an in-house IT server, you have to pay a lot of attention and ensure that there are no flaws in the system so that it runs smoothly. In case of any technical glitch, you are completely responsible; the repair will demand a lot of attention, time and money. In cloud computing, by contrast, the service provider takes complete responsibility for complications and technical faults.
Benefits of Cloud Computing
Cloud computing is a big shift from the traditional way businesses think about IT resources. Here are eight common reasons organizations are turning to cloud computing services:
Cost - Cloud computing eliminates the capital expense of buying hardware and software and
setting up and running on-site datacenters—the racks of servers, the round-the-clock electricity for
power and cooling, the IT experts for managing the infrastructure. It adds up fast.
Speed - Most cloud computing services are provided self-service and on demand, so even vast
amounts of computing resources can be provisioned in minutes, typically with just a few mouse
clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning.
Global Scale - The benefits of cloud computing services include the ability to scale elastically. In
cloud speak, that means delivering the right amount of IT resources—for example, more or less
computing power, storage, bandwidth—right when it is needed and from the right geographic
location.
Productivity - On-site datacenters typically require a lot of "racking and stacking"—hardware setup,
software patching, and other time-consuming IT management chores. Cloud computing removes
the need for many of these tasks, so IT teams can spend time on achieving more important
business goals.
Performance - The biggest cloud computing services run on a worldwide network of secure
datacenters, which are regularly upgraded to the latest generation of fast and efficient computing
hardware. This offers several benefits over a single corporate datacenter, including reduced
network latency for applications and greater economies of scale.
Reliability - Cloud computing makes data backup, disaster recovery and business continuity easier
and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s
network.
Security - Many cloud providers offer a broad set of policies, technologies and controls that
strengthen your security posture overall, helping protect your data, apps and infrastructure from
potential threats.
Agility - A customer can rapidly and inexpensively re-provision technological infrastructure resources.
Risks Related to Cloud Computing
Although cloud computing is a promising innovation with various benefits in the world of computing, it comes with risks. Some of them are discussed below:
Security and Privacy
Security and privacy are the biggest concerns about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
Although cloud computing vendors ensure highly secure, password-protected accounts, any sign of a security breach may result in loss of customers and business.
Lock In
It is very difficult for the customers to switch from one Cloud Service Provider (CSP) to another. It
results in dependency on a particular CSP for service.
Isolation Failure
This risk involves the failure of isolation mechanism that separates storage, memory, and routing
between the different tenants.
Management Interface Compromise
In the case of a public cloud provider, the customer management interfaces are accessible through the Internet, which increases the risk of compromise.
Insecure or Incomplete Data Deletion
It is possible that data requested for deletion does not actually get deleted, for either of the following reasons:
Extra copies of the data are stored but are not available at the time of deletion.
The disk to be destroyed also stores data from other tenants, so it cannot simply be wiped.
Characteristics of Cloud Computing
There are four key characteristics of cloud computing.
Uses of Cloud Computing
Cloud computing has been credited with increasing competitiveness through cost reduction, greater flexibility, elasticity and optimal resource utilization. Here are a few situations where cloud computing is used to enhance the ability to achieve business goals.
2. Private cloud and hybrid cloud
Among the many incentives for using the cloud, there are two situations where organizations are looking
into ways to assess some of the applications they intend to deploy into their environment through the
use of a cloud (specifically a public cloud). While in the case of test and development it may be limited in
time, adopting a hybrid cloud approach allows for testing application workloads, therefore providing the
comfort of an environment without the initial investment that might have been rendered useless should
the workload testing fail.
Another use of hybrid cloud is also the ability to expand during periods of limited peak usage, which is
often preferable to hosting a large infrastructure that might seldom be of use. An organization would
seek to have the additional capacity and availability of an environment when needed on a pay-as you-go
basis.
3. Test and development
Probably the best scenario for the use of a cloud is a test and development environment. Traditionally, this entails
securing a budget, setting up your environment through physical assets, significant manpower and time.
Then comes the installation and configuration of your platform. All this can often extend the time it takes
for a project to be completed and stretch your milestones.
With cloud computing, there are now readily available environments tailored for your needs at your
fingertips. This often combines, but is not limited to, automated provisioning of physical and virtualized
resources.
4. Big data analytics
One of the aspects offered by leveraging cloud computing is the ability to tap into vast quantities of both
structured and unstructured data to harness the benefit of extracting business value.
Retailers and suppliers are now extracting information derived from consumers’ buying patterns to target
their advertising and marketing campaigns to a particular segment of the population. Social networking
platforms are now providing the basis for analytics on behavioral patterns that organizations are using to
derive meaningful information.
5. File storage
The cloud offers the possibility of storing your files and accessing, storing and retrieving them from any web-enabled interface. The web service interfaces are usually simple, and at any time and place you have high availability, speed, scalability and security for your environment. In this scenario, organizations pay only for the amount of storage they actually consume, and they do so without the worry of overseeing the daily maintenance of the storage infrastructure.
There is also the possibility to store the data either on or off premises depending on the regulatory
compliance requirements. Data is stored in virtualized pools of storage hosted by a third party based on
the customer specification requirements.
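As a concrete illustration of the pay-for-what-you-use storage model described above, here is a minimal sketch in Python using the boto3 library against an S3-style web service interface. The bucket and file names are hypothetical, and credentials are assumed to be configured separately in the environment.

    import boto3

    # Create a client for an S3-compatible object storage service.
    # Credentials come from the environment or ~/.aws/credentials.
    s3 = boto3.client("s3")

    # Upload a local file; you pay only for the storage actually consumed.
    s3.upload_file("report.pdf", "example-bucket", "archive/report.pdf")

    # Later, retrieve the same object from any web-enabled client.
    s3.download_file("example-bucket", "archive/report.pdf", "report-copy.pdf")

The same two calls work from any machine with network access, which is exactly the location independence and simplicity of interface described above.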
6. Disaster recovery
This is yet another benefit of using the cloud: the cost effectiveness of a disaster recovery (DR) solution that provides faster recovery from a mesh of different physical locations, at a much lower cost than a traditional DR site with fixed assets, rigid procedures and much higher expense.
7. Backup
Backing up data has always been a complex and time-consuming operation. It used to include maintaining a set of tapes or drives, manually collecting them, and dispatching them to a backup facility, with all the inherent problems that might happen between the originating and the backup site. This way of ensuring a backup is not immune to problems such as running out of backup media, and loading the backup devices for a restore operation also takes time and is prone to malfunctions and human errors.
Cloud-based backup, while not a panacea, is certainly a far cry from what backup used to be. You can now automatically dispatch data to any location across the wire, with assurance that neither security, availability nor capacity is an issue.
While the list of cloud computing uses above is not exhaustive, it certainly gives an incentive to use the cloud, compared with more traditional alternatives, to increase IT infrastructure flexibility and to leverage big data analytics and mobile computing.
History of Cloud Computing
The evolution of cloud computing can be bifurcated into three basic phases:
1. The Idea Phase - This phase began in the early 1960s with the emergence of utility and grid computing and lasted until the pre-internet-bubble era. Joseph Carl Robnett Licklider is widely credited as the founding thinker of cloud computing.
2. The Pre-cloud Phase - The pre-cloud phase originated in 1999 and extended to 2006. In this phase, the internet served as the mechanism to provide Application as a Service.
3. The Cloud Phase- The much talked about real cloud phase started in the year 2007 when the
classification of IaaS, PaaS, and SaaS got formalized. The history of cloud computing has witnessed some
very interesting breakthroughs launched by some of the leading computer/web organizations of the world.
In the early era of the technology, the client-server architecture was popular, along with mainframe and terminal applications. At that time, storage and CPU time were very expensive, and hence the mainframe pooled both types of resources and served them to small client terminals. With the revolution in mass storage capacity, file servers gained popularity for storing vast amounts of information.
In the 1990s, the Internet finally got enough computers attached to it that connecting those machines together could create a massive, interconnected, shared pool of storage that no single organization or institution could afford on its own. From this came the concept of the "grid". The term 'grid' is sometimes misinterpreted as a synonym for 'cloud computing', as both technologies are formed from many connected computers. Grid computing uses application programs to divide one large processing job across several thousand machines. But therein lies the disadvantage: if a single software node fails its part of the processing, other pieces of the software may also fail. So this grid-based concept did not become very fruitful.
On the other hand, cloud computing involves the concept of 'grid', except that it provides on-demand
resource provisioning.
Salesforce.com engraved its name on the first milestone of cloud technology in 1999. It pioneered the technique of delivering enterprise applications via a simple website, paving the way for both specialist and mainstream software firms to deliver applications over the internet.
The next development came in 2002 from Amazon Web Services (AWS), which provided cloud-oriented services including storage, computing power, and human intelligence via Amazon Mechanical Turk. Then, in 2006, Amazon launched EC2 (Elastic Compute Cloud), a commercial web service that lets small organizations and sole proprietors rent computers on which to run their applications.
In 2009, another significant milestone arrived with Web 2.0, as Google and others started to offer browser-based applications via Google Apps. Then came Microsoft's Azure; both Microsoft and Google deliver services in a way that is reliable and easy to consume.
Cloud Computing Architecture
Cloud computing architecture comprises many loosely coupled cloud components. We can broadly divide the cloud architecture into two parts:
Front End
Back End
The two ends are connected through a network, usually the Internet.
Front End
The front end refers to the client part of the cloud computing system. It consists of the interfaces and applications required to access cloud computing platforms, for example a web browser.
Back End
The back end refers to the cloud itself. It consists of all the resources required to provide cloud computing services: huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.
Note
It is the responsibility of the back end to provide built-in security mechanisms, traffic control and protocols. The server employs certain protocols, known as middleware, which help the connected devices communicate with each other.
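To make the front-end/back-end split concrete, the sketch below shows a minimal front end in Python: a client calling a back-end cloud service over HTTP using only the standard library. The endpoint URL and the JSON reply are hypothetical placeholders, not part of any real provider's API.

    import json
    import urllib.request

    # Hypothetical back-end endpoint; a real cloud service would also
    # require authentication headers.
    BACKEND_URL = "https://api.cloud.example.com/v1/status"

    # The front end (this client) neither knows nor cares where the
    # hardware behind this URL lives; that is the back end's concern.
    with urllib.request.urlopen(BACKEND_URL) as response:
        status = json.loads(response.read())
    print(status)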
Cloud Infrastructure
Cloud infrastructure consists of servers, storage devices, network, cloud management software,
deployment software, and platform virtualization.
Hypervisor - A hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager. It allows a single physical instance of cloud resources to be shared between several tenants.
Deployment Software - It helps to deploy and integrate the application on the cloud.
Network - The network is the key component of cloud infrastructure. It allows cloud services to be connected over the Internet. It is also possible to deliver the network as a utility over the Internet, meaning the customer can customize the network route and protocol.
Server - The server computes the resource sharing and offers other services such as resource allocation and de-allocation, resource monitoring, security, etc.
Storage - The cloud keeps multiple replicas of each piece of storage. If one of the storage resources fails, the data can be retrieved from another replica (see the sketch below), which makes cloud computing more reliable.
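The storage-replica point can be sketched in a few lines of Python. The replica list and the read function below are hypothetical stand-ins for a provider's internal machinery; the idea is simply that a read falls through to the next copy when one node fails.

    # Hypothetical endpoints, each holding a replica of the same block.
    REPLICAS = ["store-a.example.com", "store-b.example.com", "store-c.example.com"]

    def read_block(host: str, block_id: str) -> bytes:
        """Placeholder for a real network read from one storage node."""
        raise NotImplementedError

    def reliable_read(block_id: str) -> bytes:
        # Try each replica in turn; one failed node does not lose the data.
        for host in REPLICAS:
            try:
                return read_block(host, block_id)
            except OSError:
                continue  # fall through to the next replica
        raise RuntimeError("all replicas unavailable")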
Infrastructural Constraints
Fundamental constraints that cloud infrastructure should satisfy are discussed below:
Transparency
Virtualization is the key to sharing resources in a cloud environment, but it is not possible to satisfy demand with a single resource or server. Therefore, there must be transparency in resources, load balancing and applications, so that we can scale them on demand.
Scalability
Scaling up an application delivery solution is not as easy as scaling up an application, because it involves configuration overhead or even re-architecting the network. So the application delivery solution needs to be scalable, which requires a virtual infrastructure in which resources can be provisioned and de-provisioned easily.
Intelligent Monitoring
To achieve transparency and scalability, application solution delivery will need to be capable of intelligent
monitoring.
Security
The mega data center in the cloud should be securely architected, and the control node, the entry point into the mega data center, also needs to be secure.
Types of Cloud (Cloud Deployment Models)
There are four different cloud models that you can subscribe to according to business needs:
1) Private Cloud: Here, computing resources are deployed for one particular organization. This method is mostly used for intra-business interactions, where the computing resources are governed, owned and operated by the same organization.
2) Public Cloud: This type of cloud is usually used for B2C (Business to Consumer) interactions. Here the computing resource is owned, governed and operated by a government, academic or business organization.
3) Hybrid Cloud: This type of cloud can be used for both types of interaction - B2B (Business to Business) and B2C (Business to Consumer). This deployment method is called hybrid cloud because the computing resources are bound together by different clouds.
4) Community Cloud: Here, computing resources are shared by a community of organizations.
Public Cloud
Public cloud allows systems and services to be easily accessible to the general public. IT giants such as Google, Amazon and Microsoft offer cloud services via the Internet.
Benefits
There are many benefits of deploying a cloud under the public cloud model:
Cost Effective
Since the public cloud shares the same resources among a large number of customers, it turns out to be inexpensive.
Reliability
The public cloud employs a large number of resources from different locations. If any of the resources fails, the public cloud can employ another one.
Flexibility
The public cloud can smoothly integrate with private cloud, which gives customers a flexible approach.
Location Independence
Public cloud services are delivered through the Internet, ensuring location independence.
Utility Style Costing
The public cloud is also based on a pay-per-use model, and resources are accessible whenever the customer needs them.
High Scalability
Cloud resources are made available on demand from a pool of resources, i.e., they can be scaled up or down according to requirements.
Disadvantages
Low Security
In the public cloud model, data is hosted off-site and resources are shared publicly; therefore it does not ensure a high level of security.
Less Customizable
The public cloud is less customizable than the private cloud.
Private Cloud
Private cloud allows systems and services to be accessible within an organization. The private cloud is operated only within a single organization; however, it may be managed internally by the organization itself or by a third party.
Benefits
There are many benefits of deploying a cloud under the private cloud model:
High Security and Privacy
Private cloud operations are not available to the general public, and resources are shared from a distinct pool of resources. Therefore, it ensures high security and privacy.
More Control
The private cloud offers more control over its resources and hardware than the public cloud because it is accessed only within an organization.
Cost and Energy Efficiency
Private cloud resources are not as cost effective as public cloud resources, but they offer more efficiency than public cloud resources.
Disadvantages
Restricted Area of Operation
The private cloud is only accessible locally and is very difficult to deploy globally.
High Priced
Purchasing new hardware to meet demand is a costly transaction.
Limited Scalability
The private cloud can be scaled only within the capacity of internally hosted resources.
Additional Skills
Maintaining a private cloud deployment requires more skilled and expert staff.
Hybrid Cloud
Hybrid cloud is a mixture of public and private clouds. Non-critical activities are performed using the public cloud while critical activities are performed using the private cloud.
Benefits
There are many benefits of deploying a cloud under the hybrid cloud model:
Scalability
It offers the scalability of both the public cloud and the private cloud.
Flexibility
It offers both secure private resources and scalable public resources.
Cost Efficiency
Public clouds are more cost effective than private ones; therefore, hybrid clouds can save costs.
Security
The private cloud portion of a hybrid cloud ensures a higher degree of security.
Disadvantages
Networking Issues
Networking becomes complex due to the presence of both private and public clouds.
Security Compliance
It is necessary to ensure that cloud services are compliant with security policies of the organization.
Infrastructure Dependency
The hybrid cloud model is dependent on internal IT infrastructure; therefore it is necessary to ensure
redundancy across data centers.
Community Cloud
Community cloud allows systems and services to be accessible to a group of organizations. It shares infrastructure between several organizations from a specific community and may be managed internally by the organizations or by a third party.
Benefits
Cost Effective
Community cloud offers the same advantages as a private cloud at lower cost.
Sharing Among Organizations
Community cloud provides an infrastructure for sharing cloud resources and capabilities among several organizations.
Security
The community cloud is comparatively more secure than the public cloud but less secure than the private cloud.
Issues
Since all data is located in one place, one must be careful when storing data in a community cloud, because it might be accessible to others.
It is also challenging to allocate responsibilities of governance, security and cost among organizations.
Major Players in Cloud Computing
Cloud computing companies continue to grow rapidly as organizations execute their digital transformation
strategies and grapple with more and more data. In fact, Gartner expects public cloud revenue to grow 17.3
percent in 2019 to $206.2 billion, up from $175.8 billion in 2018.
The fastest-growing cloud market segment is infrastructure as a service (IaaS), which is expected to grow by a stunning 27.6 percent in 2019. By 2022, Gartner predicts that 90 percent of organizations
purchasing public cloud IaaS will use integrated IaaS and platform as a service (PaaS) providers. Since the
cloud is central to digital transformation, Gartner expects cloud computing and services will reach nearly
$300 billion by 2021.
Multicloud is growing as well. According to IDC, 85% of enterprises were expected to have a multicloud
strategy in 2018. The choice of providers tends to hinge on what the company is attempting to accomplish
and how comfortable that company is with the service provider.
Whatever the current market-share figures show, the cloud landscape will continue to shift greatly in the years ahead. Clearly, the leading cloud computing companies (Alibaba, Amazon Web Services, Microsoft Azure, Google Cloud, IBM Cloud, and Oracle Cloud) each continue to invest heavily in offering new tools to customers. The cloud competition has only just begun.
Alibaba
Amazon Web Services
Google Cloud Platform
IBM Cloud
Microsoft Azure
Oracle Cloud
Alibaba Cloud
When Alibaba Cloud launched in 2009, it was narrowly focused on ecommerce, but today it offers
integrated IaaS and PaaS services.
Alibaba Cloud is the leading cloud provider in China, and the organization is expanding its global footprint. However, AWS remains the number one provider in the Asia Pacific (APAC) region, with Alibaba in second place.
Outside Asia, Alibaba trails behind AWS and Microsoft Azure because some organizations have concerns
about doing business with a Chinese cloud service provider. However, Alibaba’s global plans for its cloud
business are aggressive, as evidenced by continued expansion and revenue that doubled to about $2.1
billion in fiscal year 2018.
Alibaba Cloud offerings include:
Elastic computing
Storage
Content delivery network (CDN)
Networking
Database
Security
Big data & analytics
Application
Internet of Things (IoT)
Monitoring and management.
Alibaba now has data centers in the U.S., U.K., Germany, India, Middle East, China, Hong Kong SAR
China, Japan, Singapore, Malaysia, Indonesia, and Australia.
Amazon Web Services (AWS)
AWS remains the market leader and top innovator in the space. In fact, at the recent AWS re:Invent event in Las Vegas, AWS CEO Andy Jassy spent a three-hour keynote explaining new tool announcements. The general theme was making whatever customers want to do in the cloud easier, whether it's big data analytics, building robots, storing data, or taking advantage of machine learning, to name a few.
According to Gartner, AWS is the most popular choice for strategic, organization-wide adoption. Its
revenue grew 45% from $4.5 billion in Q3 2017 to $6.6 billion in Q3 2018.
AWS offerings include:
Analytics
Application Integration
AR/VR
AWS Cost management
Blockchain
Business applications
Compute
Customer engagement
Database
Desktop and app streaming
Developer tools
IoT
Machine learning
Management and governance
Media services
Migration and transfer
Mobile
Network and content delivery
Robotics
Satellite
Security, identity and compliance
Storage
AWS has a robust marketplace, a vibrant partner network, and data centers in the U.S., Canada, Brazil,
Great Britain, Ireland, France, Germany, Sweden, India, Singapore, South Korea, China, Japan and
Australia. New data centers are planned for Italy, the Kingdom of Bahrain, South Africa, and Hong Kong SAR, China.
Google Cloud Platform
Google Cloud Platform is not as popular as AWS or Microsoft Azure; historically, it has trailed significantly behind both, despite its deep technical expertise and compelling capabilities.
Google stated that it wants to make AI available to everyone. As part of that, non-data scientists can
experiment with machine learning and AI using preconfigured machine learning content. Google is also
taking a leadership position with AI ethics, which isn’t top of mind for most organizations yet, although
Gartner deemed it one of the Top 10 trends for 2019.
According to Synergy Research Group, Google continues to gain market share at a steady rate, though it still holds less than 10% overall, similar to IBM and Alibaba: a point or two behind IBM and a point or two ahead of Alibaba.
Google Cloud has data centers located in the U.S., Canada, Great Britain, The Netherlands, Belgium,
Finland, India, Indonesia, Taiwan, Japan, Australia, and Hong Kong SAR, China. Three others are planned
for Switzerland, Indonesia, and another location in Japan.
IBM Cloud
IBM Cloud is IBM’s one-stop shop for cloud services. It supersedes the Bluemix PaaS and SoftLayer IaaS
brands. The transition is ongoing, which its website reflects. (It changed even during the course of writing
this article).
All cloud service providers have a unique position in the market. IBM’s is its long history and offerings that
have evolved with the times, which makes the company a natural choice for enterprises with legacy
systems that want to migrate to the cloud. IBM's breadth and depth of capabilities reflect both its heritage
and continued commitment to innovation.
According to Gartner, IBM Cloud is working on a next-generation infrastructure project that will result in
new cloud IaaS offerings, including new compute infrastructure.
IBM Cloud offerings include:
AI (IBM Watson)
Analytics
Blockchain
Compute
Databases
Developer tools
Integration
Management
Migration
Network
Security
Storage
Data centers are located in the U.S., Canada, Mexico, Brazil, Norway, The Netherlands, Great Britain,
Germany, France, Italy, Spain, South Africa, India, Singapore, Australia, China, Hong Kong SAR China,
South Korea and Japan.
Microsoft Azure
Microsoft Azure is the number two cloud platform globally. It's a natural favorite among .NET developers because, among other things, it's compatible with the .NET platform and integrated with Visual Studio. Over the years, Microsoft has made a point of making developers' lives easier so they can produce better quality products faster and focus on innovation instead of the mechanics of programming.
Microsoft Azure also shares an advantage in the enterprise space with the likes of IBM and Oracle. Like
those players, well-rounded enterprise-class features are a given.
Other factors helping to drive Azure's continued success are its formidable partner network and a marketplace that caters to developers and those looking for finished products.
Microsoft Azure offerings include:
AI/machine learning
Analytics
Compute
Containers
Databases
Developer tools
DevOps
Identity
Integration
IoT
Management
Media
Microsoft Azure Stack
Migration
Mobile
Networking
Security
Storage
Web
Microsoft Azure has data centers in the U.S., Canada, Brazil, Ireland, Netherlands, France, Great Britain,
Germany, Singapore, Australia, China, India, Japan, South Korea, and Hong Kong SAR, China. Other announced locations include Norway, Switzerland, South Africa, the United Arab Emirates (UAE), and two more locations in Germany.
Oracle Cloud
Like the other top IaaS and PaaS service providers, Oracle’s cloud offerings have been the result of
internal development and acquisition. Its enterprise focus, deep pockets, and the general digitization of everything make the company a viable contender in the space.
However, the company also made an important developer push recently when it announced the availability
of a cloud-native framework that spans all app deployment scenarios (public cloud, on premises, and
hybrid cloud) and new container-centric infrastructure services.
Oracle Cloud offerings include:
Application integration
Application development
Analytics
Containers
Compute
Database
Edge services
IT Operations optimization
Networking
Security and compliance
Storage
Oracle's global footprint pales in comparison to AWS or Microsoft Azure. Data centers are currently located in
the US, Great Britain, Germany, India, South Korea, and Japan. Future locations include Canada,
Switzerland, Brazil, and Australia.
Cloud Computing Issues and Challenges
Selecting the right technology takes your business to new heights, while a few mistakes can land your business in trouble. Every technology comes with a baggage of pros and cons. Similarly, cloud computing comes with its share of issues despite being a core strength of some industries, and it can create major problems under some rare circumstances. The issues and challenges of cloud computing are sometimes characterized as ghosts in the cloud. Let us talk briefly about some real-life ghosts of cloud computing.
1) Security concerns
When we talk about the security of cloud technology, a lot of questions remain unanswered. Serious threats like virus attacks and hacking of the client's site are the biggest cloud computing data security issues. Entrepreneurs have to think about these issues before adopting cloud computing technology for their business. Since you are transferring your company's important details to a third party, it is important to assure yourself about the manageability and security of the cloud.
2) Selecting the right cloud set-up
Choosing the appropriate cloud mechanism as per the needs of your business is very necessary. There are three main types of cloud configuration: public, private, and hybrid. The main secret behind successful implementation of the cloud is picking the right cloud; if you do not select the right cloud, you may face serious hazards. Some companies with vast data prefer private clouds, while small organizations usually use public clouds. A few companies like to go for a balanced approach with hybrid clouds. Choose a cloud computing consulting service that is aware of, and clearly discloses, the terms and conditions regarding cloud implementation and data security.
3) Real-time monitoring requirements
Some agencies are required to monitor their systems in real time; it is a compulsory term of their business that they continuously monitor and maintain their inventory systems. Banks and some government agencies need to update their systems in real time, but cloud service providers are often unable to match this requirement. This is a big challenge for cloud service providers.
4) Control over data
Every organization wants proper control of and access to its data. It is not easy to hand over your precious data to a third party. The main tension between enterprises and their executives is the desire to retain control over new modes of operation while using the technology. These tensions are not unsolvable, but they do suggest that providers and clients alike must deliberately address a suite of cloud challenges in planning, contracting and managing the services.
5) Reliability on new technology
It is a fact of human nature that we trust things present in front of our eyes. Entrepreneurs normally hesitate to hand organizational information to an unknown service provider. They think that information stored on their office premises is more secure and easily accessible, and by using cloud computing they fear losing control over the data, which is taken from them and handed over to an unknown third party. Security fears increase because they do not know where the information is stored and processed. These fears of unknown service providers must be dealt with amicably and eliminated from their minds.
6) Vendor selection
For uninterrupted services and proper working, it is necessary to acquire a vendor with proper infrastructure and technical expertise: an authorized vendor who can meet the security standards set by your company's internal policies and by government agencies. While selecting the service provider, you must carefully read the service level agreement and understand its policies and terms, including provisions for compensation in case of any outage, as well as any lock-in clauses.
7) Cultural Obstacles
The high authority of the company and the organizational culture can also become big obstacles to the proper implementation of cloud computing. Top management never wants to store the company's important data somewhere it cannot control and access it. They hold the misconception that cloud computing puts the organization at risk by leaking important details. This mindset puts the organization on a risk-averse footing, which makes it more reluctant to migrate to a cloud solution.
8) Cost Barrier
For efficient working of cloud computing, you have to bear high bandwidth charges. Businesses can cut down hardware costs, but they have to spend a large amount on bandwidth. For smaller applications this cost is not a big issue, but for large and complex applications it is a major concern, because transferring complex and data-intensive workloads over the network requires sufficient bandwidth. This is a major obstacle for small organizations, and it restricts them from implementing cloud technology in their business.
9) Lack of knowledge and expertise
Not every organization has sufficient knowledge about the implementation of cloud solutions. They may not have the expert staff and tools needed to use cloud technology properly. Delivering the information and selecting the right cloud are quite difficult without the right direction. Teaching your staff about the processes and tools of cloud computing is a big challenge in itself, and requiring an organization to shift its business to cloud-based technology without proper knowledge is asking for disaster. Such organizations may never use this technology for their business functions.
10) Consumption-based service charges
Cloud computing services are on-demand, so it is difficult to define a specific cost for a particular quantity of services. These fluctuations and price differences make the implementation of cloud computing very difficult and complicated. It is not easy for a normal business owner to predict demand, which fluctuates with the seasons and various events, so it is hard to budget for a service that could consume several months of budget in a few days of heavy use.
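A toy calculation makes the budgeting problem concrete. The hourly rate and usage figures in this Python sketch are invented for illustration; the takeaway is how a short burst of heavy use can dominate the bill.

    # Hypothetical pay-per-use price: $0.10 per instance-hour.
    RATE = 0.10

    # A quiet month: 5 instances running around the clock for 30 days.
    quiet_month = 5 * 24 * 30 * RATE   # $360.00

    # A three-day seasonal spike: 200 instances around the clock.
    spike = 200 * 24 * 3 * RATE        # $1,440.00

    # The three-day spike costs four times a whole quiet month.
    print(f"quiet month: ${quiet_month:.2f}, 3-day spike: ${spike:.2f}")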
11) Meeting security standards
It is complicated to certify that a cloud service provider meets the standards for security and threat risk. Not every organization has sufficient mechanisms to mitigate these types of threats, yet organizations should observe and examine threats very seriously. There are mainly two types of threat: internal threats, from within the organizations, and external threats, from professional hackers who seek out the important information of your business. These threats and security risks put a check on implementing cloud solutions.
12) Verifying vendor genuineness
Cloud computing is a new concept for most business organizations. A normal businessman cannot easily verify the genuineness of the service provider agency, and it is very difficult to check whether vendors meet the security standards or not. Organizations rarely have an ICT consultant to evaluate vendors against worldwide criteria. It is necessary to verify that the vendor has been operating in this business for a sufficient time without any negative record, has continued business without any data-loss complaint, and has a number of satisfied clients. The vendor's market reputation should be unblemished.
13) Hacking threats
Cloud computing carries major risk factors like hacking. Some professional hackers are able to break through efficient firewalls and steal the sensitive information of organizations. A cloud provider hosts numerous clients, and each can be affected by actions taken against any one of them: when any threat hits the main server, it affects all the other clients as well, as in distributed denial-of-service attacks, where server requests from widely distributed computers inundate a provider.
14) Data loss and recovery
Cloud services face the issue of data loss. A proper backup policy for the recovery of data must be in place to deal with such loss. Vendors must have proper infrastructure to efficiently handle server breakdowns and outages. Cloud computing service providers should set up their servers at economically stable locations and keep backups of all data in at least two different locations; ideally, they should maintain both a hot backup site and a cold backup site.
15) Data portability
Everyone wants the ability to migrate in and out of the cloud, so ensuring data portability is very necessary. Clients often complain about being locked into a cloud technology from which they cannot switch without restraints. There should be no lock-in period for switching clouds, and the cloud technology must be able to integrate efficiently with on-premises systems. Clients must have a proper data portability contract with the provider and keep an updated copy of their data, so that they can switch service providers should there be any urgent requirement.
16) Cloud management
Managing a cloud is not an easy task; it involves a lot of technical challenges. Many dramatic predictions have been made about the impact of cloud computing: people think the traditional IT department will become outdated, although research supports the conclusion that cloud impacts are likely to be more gradual and less linear. Cloud services can easily be changed and updated by business users without any direct involvement of the IT department, and it is the service provider's responsibility to manage the information and spread it across the organization. So it is difficult to manage all the complex functionality of cloud computing.
17) Lock-in
Cloud providers have strong incentives to attempt to exploit lock-in. A fixed switching cost is always there for any company receiving external services, so exit strategies and lock-in risks are primary concerns for companies looking to exploit cloud computing.
18) Lack of transparency
There is no transparency in the service provider's infrastructure and service area. You are not able to see the exact location where your data is stored or processed. It is a big challenge for an organization to transfer its business information to such an unknown vendor.
19) Data migration
Transitioning business data from an on-premises set-up to a virtual set-up is a major issue for various organizations. Data migration and network configuration are the serious problems that keep organizations away from cloud computing technology.
20) The rush to virtualization
The hype around the cloud has created such a rush among CIOs to implement virtualization that it has led to more complexities than solutions.
These are some common problems with executing cloud computing in real life, but the benefits of cloud computing far outweigh these hazards. So you should find the right solutions and avail yourself of the huge benefits of cloud technology in your business. It can take your business to new heights!
Cloud Frameworks
Introduction
Today, many cloud Infrastructure as a Service (IaaS) frameworks exist. Users, developers, and
administrators have to make a decision about which environment is best suited for them. These
frameworks manage the provisioning of virtual machines for a cloud providing IaaS. Commercial cloud
services charge, by the hour, for CPU time. It might be more cost effective for the organization to purchase
hardware to create its own private cloud. Eucalyptus, Nimbus and OpenNebula are major Open-Source
Cloud Computing Software Platforms. These software products are designed to allow an organization to
set up a private group of machines as their own cloud. These frameworks represent different points of
interest in the design space of this particular type of open-source cloud.
In a generic open-source cloud computing system, we can identify six basic components:
The first component is the hardware and operating system that are on the various physical
machines in the system.
The second component is the network. This includes the DNS, DHCP and the subnet organization
of the physical machines.
The third component is the virtual machine hypervisor (also known as a Virtual Machine Monitor or
VMM).
The fourth component is an archive of VM disk images.
The fifth component is the front-end for users.
The last component is the cloud framework itself, where Eucalyptus, Nimbus or OpenNebula are
placed.
Eucalyptus
Eucalyptus is paid and open-source computer software for building Amazon Web Services (AWS)-
compatible private and hybrid cloud computing environments, originally developed by the company
Eucalyptus Systems. Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your
Programs To Useful Systems. Eucalyptus enables pooling compute, storage, and network resources that
can be dynamically scaled up or down as application workloads change.
The software development had its roots in the Virtual Grid Application Development Software project, at
Rice University and other institutions from 2003 to 2008. Rich Wolski led a group at the University of
California, Santa Barbara (UCSB), and became the chief technical officer at the company headquartered in
Goleta, California before returning to teach at UCSB.
The co-founders of Eucalyptus were Rich Wolski (CTO), Dan Nurmi, Neil Soman, Dmitrii Zagorodnov,
Chris Grzegorczyk, Graziano Obertelli and Woody Rollins (CEO). Eucalyptus Systems announced a
formal agreement with Amazon Web Services in March 2012.
Software Architecture of Eucalyptus
Eucalyptus commands can manage either Amazon or Eucalyptus instances. Users can also move
instances between a Eucalyptus private cloud and the Amazon Elastic Compute Cloud to create a hybrid
cloud. Hardware virtualization isolates applications from computer hardware details.
Images – An image is a fixed collection of software modules, system software, application software,
and configuration information that is started from a known baseline (immutable/fixed). When bundled
and uploaded to the Eucalyptus cloud, this becomes a Eucalyptus machine image (EMI).
Instances – When an image is put to use, it is called an instance. The configuration is executed at runtime, and the Cloud Controller decides where the image will run; storage and networking are attached to meet resource needs.
IP addressing – Eucalyptus instances can have public and private IP addresses. An IP address is assigned to an instance when the instance is created from an image. For instances that require a persistent IP address, such as a web server, Eucalyptus supplies elastic IP addresses. These are pre-allocated by the Eucalyptus cloud and can be reassigned to a running instance (see the sketch after this list).
Security – TCP/IP security groups share a common set of firewall rules. This is a mechanism to
firewall off an instance using IP address and ports block/allow functionality. Instances are isolated at
TCP/IP layer 2. If this were not present, a user could manipulate the networking of instances and gain
access to neighboring instances violating the basic cloud tenet of instance isolation and separation.
Networking – There are three networking modes. In Managed Mode, Eucalyptus manages a local network of instances, including security groups and IP addresses. In System Mode, Eucalyptus assigns a MAC address and attaches the instance's network interface to the physical network through the Node Controller's bridge; System Mode does not offer elastic IP addresses, security groups, or VM isolation. In Static Mode, Eucalyptus assigns IP addresses to instances; Static Mode likewise does not offer elastic IPs, security groups, or VM isolation.
Access Control – A user of Eucalyptus is assigned an identity, and identities can be grouped together
for access control.
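Because Eucalyptus exposes EC2-compatible interfaces, standard EC2 tooling can drive it. The following rough Python sketch uses the boto3 library against a hypothetical Eucalyptus endpoint; the URL, credentials, and instance ID are placeholders rather than values from the text.

    import boto3

    # Hypothetical Eucalyptus front-end speaking the EC2 API.
    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://cloud.example.edu:8773/services/compute",
        aws_access_key_id="EXAMPLEKEY",          # placeholder credentials
        aws_secret_access_key="EXAMPLESECRET",
        region_name="eucalyptus",
    )

    # Allocate an elastic IP from the cloud's pre-allocated pool...
    address = ec2.allocate_address(Domain="standard")

    # ...and attach it to a running instance (placeholder ID) so that it
    # keeps a persistent public address.
    ec2.associate_address(InstanceId="i-12345678", PublicIp=address["PublicIp"])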
Components of Eucalyptus
The Cloud Controller (CLC) is a Java program that offers EC2-compatible interfaces, as well as a web
interface to the outside world. In addition to handling incoming requests, the CLC acts as the
administrative interface for cloud management and performs high-level resource scheduling and
system accounting. The CLC accepts user API requests from command-line interfaces like euca2ools
or GUI-based tools like the Eucalyptus User Console and manages the underlying compute, storage,
and network resources. Only one CLC can exist per cloud and it handles authentication, accounting,
reporting, and quota management.
Walrus, also written in Java, is the Eucalyptus equivalent to AWS Simple Storage Service (S3). Walrus offers persistent storage to all of the virtual machines in the Eucalyptus cloud and can be used as a simple HTTP put/get storage-as-a-service solution. There are no data type restrictions for Walrus, and it can contain images (i.e., the building blocks used to launch virtual machines), volume snapshots (i.e., point-in-time copies), and application data. Only one Walrus can exist per cloud. A usage sketch of Walrus's S3-style interface follows this component list.
The Cluster Controller (CC) is written in C and acts as the front end for a cluster within a Eucalyptus
cloud and communicates with the Storage Controller and Node Controller. It manages instance (i.e.,
virtual machines) execution and Service Level Agreements (SLAs) per cluster.
The Storage Controller (SC) is written in Java and is the Eucalyptus equivalent of AWS EBS. It communicates with the Cluster Controller and Node Controller and manages Eucalyptus block volumes and snapshots for the instances within its specific cluster. If an instance needs to write persistent data outside of the cluster, it must write to Walrus, which is available to any instance in any cluster.
The VMware Broker is an optional component that provides an AWS-compatible
interface for VMware environments and physically runs on the Cluster Controller. The VMware
Broker overlays existing ESX/ESXi hosts and transforms Eucalyptus Machine Images (EMIs) to
VMware virtual disks. The VMware Broker mediates interactions between the Cluster Controller and
VMware and can connect directly to either ESX/ESXi hosts or to vCenter Server.
The Node Controller (NC) is written in C and hosts the virtual machine instances and manages the
virtual network endpoints. It downloads and caches images from Walrus as well as creates and caches
instances. While there is no theoretical limit to the number of Node Controllers per cluster,
performance limits do exist.
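Since Walrus offers an S3-compatible HTTP put/get interface, a generic S3 client can be pointed at it. The sketch below is illustrative only: the endpoint URL, credentials, and bucket name are placeholders, not taken from Eucalyptus documentation.

    import boto3

    # Hypothetical Walrus endpoint; Walrus is addressed exactly like S3.
    walrus = boto3.client(
        "s3",
        endpoint_url="https://walrus.example.edu:8773/services/objectstorage",
        aws_access_key_id="EXAMPLEKEY",          # placeholder credentials
        aws_secret_access_key="EXAMPLESECRET",
    )

    # Simple HTTP put/get storage as a service: store an object, read it back.
    walrus.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello")
    obj = walrus.get_object(Bucket="demo-bucket", Key="hello.txt")
    print(obj["Body"].read())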
Functionality of Eucalyptus
The Eucalyptus User Console provides an interface for users to self-service provision and configure compute, network, and storage resources. Development and test teams can manage virtual instances using built-in key management and encryption capabilities. Access to virtual instances is available using familiar SSH and RDP mechanisms. Virtual instances with application configuration can be stopped and restarted using encrypted boot-from-EBS capability.
IaaS service components Cloud Controller, Cluster Controller, Walrus, Storage Controller, and VMware
Broker are configurable as redundant systems that are resilient to multiple types of failures. Management
state of the cloud machine is preserved and reverted to normal operating conditions in the event of a
hardware or software failure.
Eucalyptus can run multiple versions of Windows and Linux virtual machine images. Users can build a
library of Eucalyptus Machine Images (EMIs) with application metadata that are decoupled from
infrastructure details to allow them to run on Eucalyptus clouds. Amazon Machine Images are also
compatible with Eucalyptus clouds. VMware Images and vApps can be converted to run on Eucalyptus
clouds and AWS public clouds.
Eucalyptus user identity management can be integrated with existing Microsoft Active Directory or LDAP
systems to have fine-grained role based access control over cloud resources.
Eucalyptus supports storage area network devices to take advantage of storage arrays to improve
performance and reliability. Eucalyptus Machine Images can be backed by EBS-like persistent storage
volumes, improving the performance of image launch time and enabling fully persistent virtual machine
instances. Eucalyptus also supports direct-attached storage.
Nimbus
Nimbus Infrastructure is an open source EC2/S3-compatible IaaS solution with features that benefit
scientific community interests, like support for auto-configuring clusters, proxy credentials, batch
schedulers, best-effort allocations, etc.
Nimbus Platform is an integrated set of tools for a multi-cloud environment that automates and
simplifies the work with infrastructure clouds (deployment, scaling, and management of cloud
resources) for scientific users.
The toolkit is compatible with Amazon's network protocols via EC2-based clients and S3 REST API clients, as well as the SOAP and REST APIs implemented in Nimbus. It also provides support for X.509 credentials, fast propagation, multiple protocols, and compartmentalized dependencies. Nimbus features flexible user, group and workspace management, request authentication and authorization, and per-client usage tracking.
Nimbus allows a client to lease remote resources by deploying virtual machines (VMs) on those resources and configuring them to represent an environment desired by the user. It was formerly known as the "Virtual Workspace Service" (VWS), but the workspace service is technically just one of the components in the software collection. Nimbus was designed with the goal of turning clusters into clouds, mainly for use in scientific applications.
The design of Nimbus, which consists of a number of components, is based on web service technology.
Workspace Service
Workspace Pilot
Workspace Control
Workspace Control implements VM instance management, such as starting, stopping and pausing a VM. It also provides image management, sets up networks, and provides IP assignment.
Context Broker
It allows clients to coordinate large virtual cluster launches automatically and repeatably.
Workspace Client
It is a complex client that provides full access to the workspace service functionality.
Cloud Client
Storage Service
Cumulus is a web service that provides users with storage capabilities for images, and it works in conjunction with GridFTP.
OpenNebula
OpenNebula is an open source management tool that helps virtualized data centers oversee private
clouds, public clouds and hybrid clouds. OpenNebula combines existing virtualization technologies with
advanced features for multi-tenancy, automated provisioning and elasticity. A built-in virtual network
manager maps virtual networks to physical networks. OpenNebula is vendor neutral, as well as
platform- and API-agnostic. It can use KVM, Xen or VMware hypervisors.
OpenNebula, which now operates as an open source project, began as a research project by Ignacio M.
Llorente and Rubén S. Montero in 2005. The first public release of OpenNebula was in March 2008. The
goals of the research were to create efficient solutions for managing virtual machines on distributed
infrastructures. It was also important that these solutions had the ability to scale at high levels.
OpenNebula tends toward a greater level of centralization and customizability (especially for end users). The idea of OpenNebula is a pure private cloud, in which users actually log into the head node to access cloud functions. By default, OpenNebula uses a shared file system, typically NFS, for all disk image files and all files for actually running OpenNebula functions.
Features of OpenNebula
A number of APIs are available for the platform, including AWS EC2, EBS, and OGF OCCI.
A powerful, yet familiar UNIX based, command-line interface is available to administrators.
Further ease of use is available via the SunStone Portal, a graphical user interface for cloud consumers and data center administrators.
Appliance Marketplace
The OpenNebula Marketplace offers a wide variety of applications capable of running in OpenNebula
environments.
A private catalogue of applications is deployable across OpenNebula instances.
The marketplace is fully integrated with the SunStone GUI.
Fine-tuned ACLs, user quotas, and powerful user, group and role management ensure solid security.
The platform fully integrates with user management services such as LDAP and Active Directory. A
built-in user name and password, SSH, and X.509 are also supported.
Login token functionality, fine-grained auditing, and the ability to isolate various levels also provide
increased security levels.
The platform features a modular and extensible architecture allowing third-party tools to be easily
integrated.
Custom plug-ins are available for the integration of any third-party data center service.
A number of API’s allow for the integration of tools such as billing and self-service portals.
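As a small illustration of that last point, the sketch below uses the OpenNebula Java OCA bindings (org.opennebula.client) to allocate a VM through the front-end's XML-RPC endpoint. The credentials, endpoint host, and template values are placeholder assumptions for a local test installation, not recommended settings.

// A minimal sketch using the OpenNebula Java OCA bindings; the endpoint,
// credentials and template values below are placeholders.
import org.opennebula.client.Client;
import org.opennebula.client.OneResponse;
import org.opennebula.client.vm.VirtualMachine;

public class AllocateVm {
    public static void main(String[] args) throws Exception {
        // oned listens for XML-RPC calls on the front-end (port 2633 by default)
        Client oneClient = new Client("oneadmin:password",
                                      "http://frontend.example.org:2633/RPC2");

        // a simple VM template: 1 CPU, 512 MB RAM, one disk from a stored image
        String template =
              "NAME = test-vm\n"
            + "CPU = 1\n"
            + "MEMORY = 512\n"
            + "DISK = [ IMAGE = \"ubuntu-base\" ]\n";

        OneResponse rc = VirtualMachine.allocate(oneClient, template);
        if (rc.isError()) {
            System.err.println("Allocation failed: " + rc.getErrorMessage());
        } else {
            System.out.println("Created VM with id " + rc.getIntMessage());
        }
    }
}

The same Client object can drive the other resource classes exposed by the bindings in the same request/response style, which is how billing or self-service portals would integrate.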
Internal Architecture
The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes a
front-end machine, hypervisor-enabled hosts, storage, and physical networks.
Front-end machine
The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services.
This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine
include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server),
and other advanced components. These services are responsible for queuing, scheduling, and submitting
jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire
system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the
repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring
subsystem which gathers information such as host status, performance, and capacity use. The system is
highly scalable and is only limited by the performance of the actual server.
Hypervisor-enabled hosts
The worker nodes, or hypervisor-enabled hosts, provide the actual computing resources needed for
processing all jobs submitted by the master node. OpenNebula hosts use a virtualization hypervisor
such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and
used by default. Virtualization hosts are the physical machines that run the virtual machines and various
platforms can be used with OpenNebula. A Virtualization Subsystem interacts with these hosts to take the
actions needed by the master node.
Storage
The datastores hold the base images of the virtual machines. The datastores must be accessible to
the front-end; this can be accomplished by using one of a variety of available technologies such as NAS,
SAN, or direct-attached storage.
Three different datastore classes are included with OpenNebula: system datastores, image
datastores, and file datastores. System datastores hold the images used for running the virtual machines.
The images can be complete copies of an original image, deltas, or symbolic links, depending on the
storage technology used. The image datastores are used to store the disk image repository. Images from
the image datastores are moved to or from the system datastore when virtual machines are deployed or
manipulated. The file datastore is used for regular files such as kernels, RAM disks, or context files.
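Continuing the earlier Java OCA sketch, registering a disk image places it in an image datastore, from which OpenNebula copies or links it into a system datastore when a VM is deployed. The datastore id (1) and the source path below are illustrative assumptions; the actual ids depend on the installation.

// Registering an image in an image datastore via the Java OCA bindings;
// the datastore id (1) and source path are illustrative assumptions.
import org.opennebula.client.Client;
import org.opennebula.client.OneResponse;
import org.opennebula.client.image.Image;

public class RegisterImage {
    public static void main(String[] args) throws Exception {
        Client oneClient = new Client("oneadmin:password",
                                      "http://frontend.example.org:2633/RPC2");

        String template =
              "NAME = ubuntu-base\n"
            + "PATH = /var/tmp/ubuntu-base.qcow2\n"
            + "TYPE = OS\n";

        // 1 is conventionally the id of the default image datastore
        OneResponse rc = Image.allocate(oneClient, template, 1);
        System.out.println(rc.isError() ? rc.getErrorMessage()
                                        : "Image id " + rc.getIntMessage());
    }
}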
Physical networks
Physical networks are required to support the interconnection of storage servers and virtual machines in
remote locations. It is also essential that the front-end machine can connect to all the worker nodes or
hosts. At the very least two physical networks are required as OpenNebula requires a service network and
an instance network. The front-end machine uses the service network to access hosts, manage and
monitor hypervisors, and to move image files. The instance network allows the virtual machines to connect
across different hosts. The network subsystem of OpenNebula is easily customizable to allow easy
adaptation to existing data centers.
CloudSim
CloudSim is a framework for modeling and simulation of cloud computing infrastructures and services.
Originally built primarily at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at The
University of Melbourne, Australia, CloudSim has become one of the most popular open source cloud
simulators in research and academia. CloudSim is written entirely in Java. It provides a generalized and
extensible simulation framework that enables modeling, simulation, and experimentation with emerging
Cloud Computing infrastructures and application services.
CloudSim Features
Support for modeling and simulation of large-scale Cloud Computing data centers.
Energy-aware computational resources.
Support for data center network topologies and message-passing applications.
Support for dynamic insertion of simulation elements, stop and resume of simulation.
Support for user-defined policies for allocation of hosts to virtual machines and policies for
allocation of host resources to virtual machines.
CloudSim Architecture
The CloudSim software framework has a multi-layered design. Initial releases of CloudSim used SimJava
as the discrete event simulation engine, which supported several core functionalities, such as queuing and
processing of events, creation of Cloud system entities (services, host, data center, broker, VMs),
communication between components, and management of the simulation clock. In the current release,
however, the SimJava layer has been removed in order to allow advanced operations that it could not
support.
The CloudSim simulation layer provides support for modeling and simulation of virtualized Cloud-based
data center environments, including dedicated management interfaces for VMs, memory, storage, and
bandwidth. The fundamental issues, such as provisioning of hosts to VMs, managing application execution,
and monitoring dynamic system state, are handled by this layer. A Cloud provider who wants to study the
efficiency of different policies in allocating its hosts to VMs (VM provisioning) would implement those
strategies at this layer, by programmatically extending the core VM provisioning functionality. There is a
clear distinction at this layer related to provisioning of hosts to VMs: a Cloud host can be concurrently
allocated to a set of VMs that execute applications based on the SaaS provider's defined QoS levels. This
layer also exposes the functionalities that a Cloud application developer can extend to perform complex
workload profiling and application performance studies.
The top-most layer in the CloudSim stack is the User Code, which exposes basic entities for hosts (number
of machines, their specification, and so on), applications (number of tasks and their requirements), VMs,
number of users and their application types, and broker scheduling policies. By extending the basic entities
given at this layer, a Cloud application developer can perform the following activities: (i) generate a mix of
workload request distributions and application configurations; (ii) model Cloud availability scenarios and
perform robust tests based on the custom configurations; and (iii) implement custom application
provisioning techniques for clouds and their federation.
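To show how these layers come together in user code, here is a minimal, self-contained scenario written against the CloudSim 3.x API: it builds one datacenter with one host, starts a broker, submits one VM and one cloudlet, and prints the result. All host and workload parameters are arbitrary illustrative values.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSim {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);  // 1 cloud user, no trace

        // one host: 1 core @ 1000 MIPS, 2 GB RAM, ~1 TB storage, 10k bandwidth
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));

        // architecture, OS, VMM, hosts, time zone, and per-unit costs
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);

        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // one VM and one cloudlet (400,000 MI of work, so ~400 s at 1000 MIPS)
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                "Xen", new CloudletSchedulerTimeShared());
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(broker.getId());

        broker.submitVmList(Arrays.asList(vm));
        broker.submitCloudletList(Arrays.asList(cloudlet));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        for (Cloudlet cl : broker.getCloudletReceivedList()) {
            System.out.println("Cloudlet " + cl.getCloudletId()
                    + " finished at " + cl.getFinishTime());
        }
    }
}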
As Cloud computing is still an emerging paradigm for distributed computing, there is a lack of defined
standards, tools, and methods that can efficiently tackle infrastructure- and application-level
complexities. Hence, in the near future there will be a number of research efforts, both in academia and
industry, toward defining core algorithms, policies, and application benchmarking based on execution
contexts. By extending the basic functionalities already exposed by CloudSim, researchers can
perform tests based on specific scenarios and configurations, thereby allowing the development of best
practices in all the critical aspects related to Cloud Computing.
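As one concrete instance of extending CloudSim's exposed functionality, a researcher studying VM provisioning might replace the default allocation heuristic with a first-fit rule. The sketch below (again assuming the CloudSim 3.x API; the class name is chosen here purely for illustration) subclasses VmAllocationPolicySimple and overrides only the placement decision.

import java.util.List;

import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;

public class FirstFitVmAllocationPolicy extends VmAllocationPolicySimple {

    public FirstFitVmAllocationPolicy(List<? extends Host> hostList) {
        super(hostList);
    }

    // Override the placement decision: instead of the parent's
    // "most free PEs" heuristic, take the first host that accepts the VM.
    @Override
    public boolean allocateHostForVm(Vm vm) {
        for (Host host : getHostList()) {
            // the two-argument overload performs the actual VM creation
            // and the bookkeeping the policy relies on elsewhere
            if (allocateHostForVm(vm, host)) {
                return true;
            }
        }
        return false;  // no host had enough capacity for this VM
    }
}

An instance of this policy would be passed to the Datacenter constructor in place of VmAllocationPolicySimple in the previous sketch, leaving the rest of the scenario unchanged.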
Advantages:
Time effectiveness: CloudSim can be used from NetBeans with Java; it takes very little effort and time to
implement a cloud-based application provisioning test environment.
Flexibility and applicability: Developers can model and test the performance of their application services in
heterogeneous cloud environments (Amazon EC2, Microsoft Azure) with little programming and
deployment effort.
Very few prerequisites: To use CloudSim, a user only needs a basic understanding of Java and OOP concepts.