III B.Sc. – Semester 6 – Computer Science – Cloud Computing
III B.Sc. – VI SEMESTER
Cloud Computing is an emerging computing technology that uses the Internet and central remote
servers to maintain data and applications.
Cloud Computing provides us a means by which we can access applications as utilities over the
Internet. It allows us to create, configure, and customize applications online.
With Cloud Computing we can access database resources via the Internet from anywhere, for as
long as we need, without worrying about the maintenance and management of the actual resources.
SYLLABUS
UNIT - I
Cloud Computing Overview – Origins of Cloud computing – Cloud components - Essential
characteristics – On-demand self-service, Broad network access, Location-independent resource
pooling, Rapid elasticity, Measured service.
UNIT - II
Cloud scenarios – Benefits: scalability, simplicity, vendors, security. Limitations – Sensitive
information - Application development – Security concerns - Privacy concern with a third party -
Security level of third party - Security benefits. Regulatory issues: Government policies.
UNIT - III
Cloud architecture: Cloud delivery model – SPI framework, SPI evolution, SPI vs. traditional
IT model; Software as a Service (SaaS): SaaS service providers – Google App Engine,
Salesforce.com and Google platform – Benefits – Operational benefits - Economic benefits –
Evaluating SaaS; Platform as a Service (PaaS): PaaS service providers – RightScale –
Salesforce.com – Rackspace – Force.com – Services and Benefits
UNIT - IV
Infrastructure as a Service (IaaS): IaaS service providers – Amazon EC2, GoGrid – Microsoft
implementation and support – Amazon EC2 service level agreement – Recent developments –
Benefits; Cloud deployment model: Public clouds – Private clouds – Community clouds - Hybrid
clouds - Advantages of Cloud computing.
UNIT - V
Virtualization: Virtualization and cloud computing - Need of virtualization – cost,
administration, fast deployment, reduced infrastructure cost – limitations; Types of hardware
virtualization: Full virtualization - Partial virtualization - Para virtualization; Desktop
virtualization: Software virtualization – Memory virtualization - Storage virtualization – Data
virtualization – Network virtualization; Microsoft implementation: Microsoft Hyper-V – VMware
features and infrastructure – VirtualBox - Thin client.
Reference Books
1. Cloud Computing: A Practical Approach - Anthony T. Velte, Toby J. Velte, Robert
Elsenpeter - Tata McGraw-Hill, New Delhi, 2010
2. Cloud Computing: Web-Based Applications That Change the Way You Work and
Collaborate Online - Michael Miller - Que, 2008
3. Cloud Computing: Theory and Practice - Dan C. Marinescu - MK/Elsevier
4. Cloud Computing: A Hands-On Approach - Arshdeep Bahga, Vijay Madisetti - Universities
Press
5. Mastering Cloud Computing: Foundations and Application Programming - Rajkumar
Buyya, Christian Vecchiola, S. Thamarai Selvi - TMH
INDEX
UNIT-1: CLOUD COMPUTING OVERVIEW
UNIT-2: CLOUD SCENARIOS
UNIT-3: CLOUD ARCHITECTURE (SOFTWARE AS A SERVICE, PLATFORM AS A SERVICE)
UNIT-4: INFRASTRUCTURE AS A SERVICE
UNIT-5: VIRTUALIZATION
UNIT-1
CHAPTER-1
1. Explain cloud computing overview?
Cloud Computing provides us a means of accessing applications as utilities over the
Internet. It allows us to create, configure, and customize applications online.
What is Cloud?
The term Cloud refers to a network or the Internet. In other words, we can say that the cloud is
something which is present at a remote location. The cloud can provide services over public and
private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
execute on the cloud.
What is Cloud Computing?
Cloud Computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software is not required to be
installed locally on the PC. Hence, Cloud Computing makes our business
applications mobile and collaborative.
2. Explain the origin of Cloud computing.
There are certain services and models working behind the scene making the cloud computing
feasible and accessible to end users. Following are the working models for cloud computing:
● Deployment Models
● Service Models
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located and accessed.
Cloud can have any of the four types of access: Public, Private, Hybrid, and Community.
Public Cloud: The public cloud allows systems and services to be easily accessible to the
general public. Public cloud may be less secure because of its openness.
Private Cloud: The private cloud allows systems and services to be accessible within an
organization. It is more secure because of its private nature.
Community Cloud: The community cloud allows systems and services to be accessible by a
group of organizations.
Hybrid Cloud: The hybrid cloud is a mixture of public and private cloud, in which the critical
activities are performed using private cloud while the non-critical activities are performed using
public cloud.
Service Models
Cloud computing is based on service models. These are categorized into three basic service
models which are -
● Infrastructure-as–a-Service (IaaS)
● Platform-as-a-Service (PaaS)
● Software-as-a-Service (SaaS)
3. Explain history of cloud computing?
Before cloud computing emerged, there was client/server computing, which is
basically centralized storage in which all the software applications, all the data, and all the
controls reside on the server side.
If a single user wants to access specific data or run a program, he/she needs to connect to
the server, gain appropriate access, and then he/she can do his/her business.
Then after, distributed computing came into picture, where all the computers are networked
together and share their resources when needed.
On the basis of the above computing models, the concepts of cloud computing emerged and
were later implemented.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be
sold like a utility, just like water or electricity. It was a brilliant idea, but like all brilliant ideas,
it was ahead of its time: for the next few decades, despite interest in the model, the technology
simply was not ready for it.
But of course time passed, the technology caught up with that idea, and a few years later
the following milestones appeared:
In 1999, Salesforce.com started delivering applications to users through a simple website.
The applications were delivered to enterprises over the Internet, and in this way the dream of
computing sold as a utility came true.
In 2002, Amazon started Amazon Web Services, providing services like storage,
computation, and even human intelligence. However, a truly commercial service open to
everybody existed only with the launch of the Elastic Compute Cloud (EC2) in 2006.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution, some were earlier,
some were later. In 2009, Microsoft launched Windows Azure, and companies like Oracle and HP
have all joined the game. This proves that today, cloud computing has become mainstream.
4. Explain cloud components?
In a simple, topological sense, a cloud computing solution is made up of several elements:
clients, the datacenter, and distributed servers. These components make up the three parts of a
cloud computing solution.
Each element has a purpose and plays a specific role in delivering a functional cloud-
based application, so let’s take a closer look.
Clients
Clients are, in a cloud computing architecture, the exact same things that they are in a plain,
old, everyday local area network (LAN). They are, typically, the computers that just sit on your
desk. But they might also be laptops, tablet computers, mobile phones, or PDAs—all big drivers
for cloud computing because of their mobility.
Anyway, clients are the devices that the end users interact with to manage their information
on the cloud. Clients generally fall into three categories:
● Mobile: Mobile devices include PDAs or smartphones, like a BlackBerry,
Windows Mobile smartphone, or an iPhone.
● Thin: Thin clients are computers that do not have internal hard drives; they let
the server do all the work and then display the information.
● Thick: This type of client is a regular computer, using a web browser like Firefox or
Internet Explorer to connect to the cloud.
Datacenter
The datacenter is the collection of servers where the application to which you subscribe is
housed. It could be a large room in the basement of your building or a room full of servers on the
other side of the world that you access via the Internet.
A growing trend in the IT world is virtualizing servers. That is, software can be installed
allowing multiple instances of virtual servers to be used. In this way, you can have half a dozen
virtual servers running on one physical server.
Distributed Servers
But the servers don’t all have to be housed in the same location. Often, servers are in
geographically disparate locations. But to you, the cloud subscriber, these servers act as if they’re
humming away right next to each other.
This gives the service provider more flexibility in options and security. For instance, Amazon
has their cloud solution in servers all over the world. If something were to happen at one site,
causing a failure, the service would still be accessed through another site. Also, if the cloud needs
more hardware, they need not throw more servers in the safe room—they can add them at another
site and simply make it part of the cloud.
5. Explain the essential characteristics of cloud computing?
On-Demand Self-Service
Cloud computing provides resources on demand, i.e. when the consumer wants it. This is
made possible by self-service and automation. Self-service means that the consumer performs all
the actions needed to acquire the service herself, instead of going through an IT department, for
example. The consumer’s request is then automatically processed by the cloud infrastructure,
without human intervention on the provider’s side.
To make this possible, a cloud provider must obviously have the infrastructure in place to
automatically handle consumers’ requests. Most likely, this infrastructure will be virtualized, so
different consumers can use the same pooled hardware.
On-demand self-service computing implies a high level of planning. For instance, a cloud
consumer can request a new virtual machine at any time, and expects to have it working in a couple
of minutes. The underlying hardware, however, might take 90 days to get delivered to the provider.
It is therefore necessary to monitor trends in resource usage and plan for future situations well in
advance.
Advantages:
Simple User Interfaces
The cloud provider can’t assume much specialized knowledge on the consumer’s part. In
a traditional enterprise IT setting, IT specialists process requests from business. They know, for
instance, how much RAM is going to be needed for a given use case.
Policies
The high level of automation required for operating a cloud means that there is no
opportunity for humans to thoroughly inspect the specifics of a given situation and make an
informed decision for a request based on context.
Broad Network Access
Cloud computing separates computing capabilities from their consumers, so that they don’t
have to maintain the capabilities themselves. A consequence of this is that the computing
capabilities are located elsewhere, and must be accessed over a network.
Network
A computer network is a collection of two or more computers linked together for the
purposes of sharing information.
Resource Pooling
Resource pooling, the sharing of computing capabilities, leads to increased resource
utilization rates. This means you need fewer resources and thus save costs.
Multi-tenancy
Pooling resources on the software level means that a consumer is not the only one using
the software. The software must be designed to partition itself and provide scalable services to
multiple unrelated tenants. This is not a new concept: in the 1960s and 1970s, in mainframe
environments, this was called time sharing. In the 1990s, the term in vogue was Application
Service Provider (ASP). Nowadays people speak of cloud services.
Billing and Metering
When multiple consumers share the same resources, the question arises of who pays for them.
Billing and metering infrastructure automatically collects per-tenant usage of resources. For this
to work, each request must be assigned a unique transaction ID that is tied to the tenant. The
transaction ID must be passed along to all sub-components, so that each can add its usage cost
to the transaction.
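The metering flow described above can be sketched in a few lines of Python. This is a minimal illustration; the `Meter` class, its fields, and the tenant names are invented for the example, not taken from any real billing system:

```python
import uuid
from collections import defaultdict

class Meter:
    """Collects per-tenant usage. Each request gets a transaction ID,
    which every sub-component reuses when reporting its own cost."""
    def __init__(self):
        self.usage = defaultdict(float)   # tenant -> accumulated cost units
        self.transactions = {}            # transaction ID -> tenant

    def start_request(self, tenant):
        txn_id = str(uuid.uuid4())        # unique ID tied to the tenant
        self.transactions[txn_id] = tenant
        return txn_id

    def record(self, txn_id, cost):
        # A sub-component adds its usage cost to the transaction.
        self.usage[self.transactions[txn_id]] += cost

meter = Meter()
txn = meter.start_request("tenant-a")
meter.record(txn, 1.0)   # e.g. the compute component
meter.record(txn, 2.0)   # e.g. the storage component
print(meter.usage["tenant-a"])  # 3.0
```

Because every sub-component only sees the transaction ID, none of them needs to know which tenant it is serving; the mapping back to the tenant lives in one place.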
Data Partitioning
It may make sense to store data from different tenants in different locations. For instance,
storing data close to where it's used may decrease latency and thereby improve performance for
the cloud consumer. Data for different tenants may also be combined into a shared store.
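One common way to place each tenant's data in a predictable location is to hash the tenant ID. This is a sketch under that assumption; the region names are made up for the example:

```python
import hashlib

def shard_for(tenant_id, locations):
    """Deterministically map a tenant to one of the available data
    locations, so each tenant's data always lives in the same shard."""
    digest = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16)
    return locations[digest % len(locations)]

locations = ["eu-west", "us-east", "ap-south"]  # invented region names
# Repeated calls for the same tenant always pick the same location.
print(shard_for("tenant-42", locations))
```

A real system would also weigh geography (store data close to the tenant) rather than hashing alone, but the deterministic mapping is the core idea.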
Rapid Elasticity
Since consumers can ask for and get resources at any time and in any quantity, the cloud
must be able to scale up and down as load demands. Note that scaling down is just as important as
scaling up, to conserve resources and thereby reduce cost.
Different applications running in the cloud will have different workload patterns, be they
seasonal, batch, transient, hockey stick, or more complex. Because of these differences, high
workloads in some applications will coincide with low workloads in others. This is why resource
pooling leads to higher resource utilization rates and economies of scale.
Scalability
To achieve these economies of scale, the cloud infrastructure must be able to scale
quickly. Scalability is the ability of a system to improve performance proportionally after adding
hardware. In a scalable cloud, one can just add hardware whenever the demand rises, and the
applications keep performing at the required level.
Since resources in a system typically have some overhead associated with them, it's
important to understand what percentage of the resource you can actually use. The measurement
of the additional output from adding a unit of resource, compared to the previously added unit of
resource, is called the scalability factor. Based on this concept we can distinguish the following
types of scalability:
● Linear scalability: The scalability factor stays constant when capacity is added.
● Sub-linear scalability: The scalability factor decreases when capacity is added.
● Supra-linear scalability: The scalability factor increases when capacity is added. For
instance, I/O across multiple disk spindles in a RAID gets better with more spindles.
● Negative scalability: The performance of the system gets worse, instead of better, when
capacity is added.
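The scalability factor defined above can be computed from throughput measurements. The sample numbers below are invented to illustrate the linear and sub-linear cases:

```python
def scalability_factors(throughput):
    """throughput[i] = measured output with i+1 units of capacity.
    Each factor compares the gain from the latest added unit to the
    gain from the previously added unit."""
    gains = [b - a for a, b in zip(throughput, throughput[1:])]
    return [round(later / earlier, 2) for earlier, later in zip(gains, gains[1:])]

linear = [100, 200, 300, 400]       # each added unit contributes the same
sublinear = [100, 180, 240, 280]    # each added unit contributes less
print(scalability_factors(linear))     # [1.0, 1.0]
print(scalability_factors(sublinear))  # [0.75, 0.67]
```

A constant factor of 1.0 indicates linear scalability, factors below 1.0 indicate sub-linear scalability, above 1.0 supra-linear, and negative gains would indicate negative scalability.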
Dynamic Provisioning
Cloud systems must not only be able to scale, but scale at will, since cloud consumers
should get the resources they want whenever they want it. It is, therefore, important to be able to
dynamically provision new computing resources. Dynamic provisioning relies heavily on
demand monitoring.
Measured Service
In order to know when to scale up or down, one needs information about the current
demand on the cloud. In other words, one needs to measure things like CPU, memory, and network
bandwidth usage to make sure cloud consumers never run out of those resources. The types of
resources to measure depend in part on the types of services that the cloud system offers.
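The link between measured service and rapid elasticity can be made concrete with a toy scaling rule: average the recent utilization measurements and decide whether to add or remove capacity. The thresholds here are arbitrary, not from any real autoscaler:

```python
def scale_decision(cpu_samples, high=0.8, low=0.3):
    """Average recent CPU utilization samples and decide whether to add
    or remove capacity. Scaling down matters as much as scaling up,
    since idle resources still cost money."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale up"
    if avg < low:
        return "scale down"
    return "hold"

print(scale_decision([0.9, 0.85, 0.95]))  # scale up
print(scale_decision([0.1, 0.2, 0.15]))   # scale down
print(scale_decision([0.5, 0.6]))         # hold
```

Real systems also measure memory and network bandwidth, and smooth the samples over longer windows to avoid reacting to momentary spikes.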
6. Explain the characteristics of cloud computing.
Cloud computing has following major characteristics:
1. The agility of organizations improves, as cloud computing may increase users'
flexibility in adding and expanding technological infrastructure resources.
2. Access applications as utilities over the Internet or intranet.
3. Configure the application online at any time. It does not require installing a specific piece
of software to access or manipulate cloud applications.
4. Cloud computing offers online development and deployment tools, programming runtime
environment through Platform As A Service model.
5. Cloud resources are available over the network in a manner that provides platform
independent access to any type of clients.
6. Cloud computing offers on-demand self-service. The resources can be used without
interaction with cloud service provider.
7. Cloud computing is highly cost effective because it operates at higher efficiencies with
greater utilization. It just requires an Internet connection.
8. Cloud computing offers load balancing that makes it more reliable. Costs savings depend
on the type of activities supported and the type of infrastructure available in-house.
7. Explain Service Level Agreements.
One of the advantages of cloud computing is that the consumer no longer has the burden
of making sure capacity is adequate for fulfilling demand. Consumers sign up for Service Level
Agreements (SLAs), that guarantee them enough capacity.
An SLA should contain:
● The list of services the provider will deliver and a complete definition of each service
● Metrics to determine whether the provider is delivering the service as promised and an
auditing mechanism to monitor the service.
● Responsibilities of the provider and the consumer and remedies available to both if the
terms of the SLA are not met
● A description of how the SLA will change over time
Auditing
To prove that certain QoS attributes are met, it may be necessary to keep an audit trail of
performed operations.
High Availability
One of the most important things to settle in an SLA is availability. This is usually
expressed in a number of nines, e.g. five nines stands for 99.999% uptime.
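The number of nines translates directly into a yearly downtime budget; a small helper makes the arithmetic concrete:

```python
def downtime_per_year(nines):
    """Yearly downtime budget, in minutes, for an availability with the
    given number of nines (e.g. 5 -> 99.999% uptime)."""
    unavailability = 10 ** -nines          # five nines -> 0.00001
    return round(unavailability * 365 * 24 * 60, 2)

for n in (2, 3, 5):
    print(n, "nines ->", downtime_per_year(n), "minutes/year")
# five nines allows roughly 5.26 minutes of downtime per year
```

This is why each extra nine in an SLA is so much harder to deliver: the allowed downtime shrinks by a factor of ten every time.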
Replication
In replication, a logical variable x that can be read and written to, actually consists of a set
of physical variables x0, … xn and an associated protocol that makes sure that reads and writes to
the replicas are performed in a way that looks indistinguishable from reads and writes to the
original variable.
There are three major types of data replication protocols: Transactional replication maintains
replication within the boundaries of a single transaction.
Virtual synchrony is an inter-process message passing technology that guarantees that messages
are delivered to all nodes, in the order they were sent.
State machine consensus / Paxos is a way of achieving consensus among a group of distributed
servers that guarantees fault-tolerance.
● Read repair
The correction is done when a read finds an inconsistency. This slows down the read
operation.
● Write repair
The correction is done during a write operation, if an inconsistency is found,
slowing down the write operation.
● Asynchronous repair
The correction is not part of a read or write operation.
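Read repair can be sketched with an in-memory model of versioned replicas. This is a simplification (real protocols also handle quorums, timeouts, and concurrent writes), and all names here are illustrative:

```python
class Replica:
    """One physical copy of the logical variable, with a version number."""
    def __init__(self):
        self.value, self.version = None, 0

def write(replicas, value, version):
    # A write that only reaches some replicas leaves the rest stale.
    for r in replicas:
        r.value, r.version = value, version

def read_with_repair(replicas):
    """Read all replicas, answer with the newest version, and write it
    back to any stale replica -- the repair slows down the read."""
    newest = max(replicas, key=lambda r: r.version)
    for r in replicas:
        if r.version < newest.version:
            r.value, r.version = newest.value, newest.version
    return newest.value

replicas = [Replica(), Replica(), Replica()]
write(replicas[:2], "x=1", 1)       # the third replica missed the write
print(read_with_repair(replicas))   # x=1
print(replicas[2].version)          # 1 (the stale replica was repaired)
```

Write repair and asynchronous repair follow the same pattern, differing only in when the stale copies are brought up to date.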
UNIT-2
1. Explain cloud scenarios?
There are three different major implementations of cloud computing. How
organizations use cloud computing is quite different at a granular level, but the uses
generally fall into one of these three solutions.
Compute Clouds
Compute clouds allow access to highly scalable, inexpensive, on-demand
computing resources that run the code that they're given.
Three examples of compute clouds are
• Amazon’s EC2
• Google App Engine
• Berkeley Open Infrastructure for Network Computing (BOINC)
Compute clouds are the most flexible in their offerings and can be used for sundry purposes;
it simply depends on the application the user wants to access.
Sign up for a cloud computing account, and get started right away. These applications are good
for any size organization, but large organizations might be at a disadvantage because these
applications don’t offer the standard management, monitoring, and governance capabilities that
these organizations are used to. Enterprises aren’t shut out, however. Amazon offers enterprise-
class support and there are emerging sets of cloud offerings like Terremark’s Enterprise Cloud,
which are meant for enterprise use.
Cloud Storage
One of the first cloud offerings was cloud storage and it remains a popular solution. Cloud
storage is a big world. There are already in excess of 100 vendors offering cloud storage. This is
an ideal solution if you want to maintain files off-site.
Security and cost are the top issues in this field and vary greatly depending on the vendor
you choose. Currently, Amazon's S3 is the top dog.
Cloud Applications
Cloud applications differ from compute clouds in that they utilize software applications
that rely on cloud infrastructure. Cloud applications are versions of Software as a Service (SaaS)
and include such things as web applications that are delivered to users via a browser or application
like Microsoft Online Services. These applications offload hosting and IT management to the
cloud.
Cloud applications often eliminate the need to install and run the application on the
customer’s own computer, thus alleviating the burden of software maintenance, ongoing operation,
and support. Some cloud applications include
• Peer-to-peer computing (like BitTorrent and Skype)
• Web applications (like MySpace or YouTube)
• SaaS (like Google Apps)
• Software plus services (like Microsoft Online Services)
2. Explain the Benefits of cloud scenarios?
Your organization is going to have different needs from the company next door. However,
cloud computing can help you with your IT needs. Let’s take a closer look at what cloud computing
has to offer your organization.
Scalability
If you are anticipating a huge upswing in computing need (or even if you are surprised by
a sudden demand), cloud computing can help you manage. Rather than having to buy, install, and
configure new equipment, you can buy additional CPU cycles or storage from a third party.
Since your costs are based on consumption, you likely wouldn’t have to pay out as much as if
you had to buy the equipment.
Once you have fulfilled your need for additional equipment, you just stop using the cloud
provider’s services, and you don’t have to deal with unneeded equipment. You simply add or
subtract based on your organization’s need.
Simplicity
Again, not having to buy and configure new equipment allows you and your IT staff to get
right to your business. The cloud solution makes it possible to get your application started
immediately, and it costs a fraction of what it would cost to implement an on-site solution.
Knowledgeable Vendors
Typically, when new technology becomes popular, there are plenty of vendors who pop up
to offer their version of that technology. This isn’t always good, because a lot of those vendors
tend to offer less than useful technology. By contrast, the first comers to the cloud computing party
are actually very reputable companies. Companies like Amazon, Google, Microsoft, IBM, and
Yahoo! have been good vendors because they have offered reliable service, plenty of capacity, and
you get some brand familiarity with these well-known names.
Security
There are plenty of security risks when using a cloud vendor, but reputable companies
strive to keep you safe and secure.
Vendors have strict privacy policies and employ stringent security measures, like proven
cryptographic methods to authenticate users. Further, you can always encrypt your data before
storing it on a provider’s cloud. In some cases, between your encryption and the vendor’s security
measures, your data may be more secure than if it were stored in-house.
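One example of a proven cryptographic method for authenticating requests is an HMAC tag computed over the message with a shared secret. The key and the message format below are placeholders for illustration only:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # placeholder; real systems use per-user keys

def sign(message):
    """Tag a message so the provider can verify it came from a key holder."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    # compare_digest avoids timing side channels during the comparison.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"user=alice&action=read")
print(verify(b"user=alice&action=read", tag))    # True
print(verify(b"user=mallory&action=read", tag))  # False
```

Authentication tags like this prove who sent a request but do not hide its contents; for confidentiality you would additionally encrypt the data before storing it on the provider's cloud.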
3. Explain the Limitations of cloud scenarios.
There are other cases when cloud computing is not the best solution for your computing
needs. This section looks at why certain applications are not the best to be deployed on the cloud.
We don’t mean to make these cases sound like deal-breakers, but you should be aware of some of
the limitations.
1. Sensitive Information: Let us understand this through an example: a marketing survey company
is using Google Docs to store data like your PAN card, Aadhaar card, etc. The company is not the
only one who should protect your data. Though Google would also be expected to protect your
data, Google absolves itself of this responsibility when the agreement with it is signed. This
sensitive information could then be used by the government for specific analysis.
2. Don't Go with the Trend: If your development team has given you a product that is handling
your situation completely well, and even then you are planning to move the applications to the
cloud just to follow the market trend or fashion, then it is probably time to re-analyze the
situation; don't take the decision just for the sake of taking it. There are certainly situations
where moving to the cloud is advantageous, but not all.
3. Integration Issues: Suppose your business house/development team uses two applications,
one containing sensitive data and the other non-sensitive data, and you decide to keep the
sensitive application locally while moving the non-sensitive one to the cloud. In this case one
application is installed locally and the other is on the cloud, which can create issues with
security and speed. You might run a high-speed application on a local machine that uses data
coming from the application located on the cloud; the speed of the local application will then be
limited by the cloud application, since it depends on Internet speed and other factors.
4. Delay in response: As the application grows, the data used by the application changes and
grows every day (for example, sales/production/log data). The response time of an application
hosted on the cloud might increase, causing delays, especially when data is needed
immediately.
5. Security is largely immature, and requires focused expertise.
6. You are dependent on the cloud computing provider for your IT resources, so you could be
exposed to outages and other service interruptions.
7. Using the Internet can cause network latency with some cloud applications.
8. Much of the technology is proprietary, and thus can cause lock-in.
9. Costs can increase significantly if subscription prices go up in the future.
10. Agreement issues could increase the risks of using cloud computing.
11. Data privacy issues could arise if the cloud provider seeks to monetize the data in its system.
12. Developing Your Own Applications: Often, the applications you want are already out there.
However, it may be the case that you need a very specific application. And in that case, you’ll
have to commission its development yourself.
Developing your own applications can certainly be a problem if you don’t know how to
program, or if you don’t have programmers on staff. In such a case, you’ll have to hire a software
company (or developer) or be left to use whatever applications the provider offers.
4. Explain the Security Concerns in Cloud Computing.
As with so many other technical choices, security is a two-sided coin in the world of cloud
computing—there are pros and there are cons. In this section, let’s examine security in the cloud
and talk about what’s good, and where you need to take extra care.
When IDC conducted a survey of 244 IT executives about cloud services, security led the pack
of cloud concerns at 74.5 percent.
In order to be successful, vendors will have to take data like this into consideration as
they offer up their clouds.
Privacy Concerns with a Third Party
The first and most obvious concern is for privacy considerations. That is, if another party
is housing all your data, how do you know that it’s safe and secure? You really don’t. As a starting
point, assume that anything you put on the cloud can be accessed by anyone. There are also
concerns because law enforcement has been better able to get at data maintained on a cloud, more
so than they are from an organization’s servers.
The best plan of attack is to not perform mission-critical work or work that is highly
sensitive on a cloud platform without extensive security controls managed by your organization.
If you cannot manage security at that rigorous level, stick to applications that are less critical and
therefore better suited for the cloud and more “out of the box” security mechanisms. Remember,
nobody can steal critical information that isn’t there.
The statistics on third-party breaches read very badly, and it is clear that organisations have trust
issues when it comes to third parties reliably notifying them when an incident or a breach occurs.
A report from insurance company Beazley covering the first six months of 2017 indicates that
accidental breaches caused by employee error, or data breached while controlled by third-party
suppliers, account for 40% of breaches overall.
That doesn't mean there are no reputable companies who would never think of
compromising your data and who stay on the cutting edge of network security to
keep it safe. But even if providers are doing their best to secure data, it can still be hacked,
and your information is at the mercy of whoever broke in. So before signing up it is always
advisable to know whether they are doing enough to protect your data; choose a company with
a five-star reputation.
Hackers
There is a lot hackers can do if they have compromised your data, ranging from selling
your proprietary information to your competition, to secretly encrypting your storage until you
pay them. Or they may just delete everything to damage your business and justify the action
based on their ethical views. Your data becomes more exposed to them because it is saved on
the cloud, which is a third party.
Denial of Service:
In a recognised worst-case scenario, attackers use multiple internet-connected
devices, each running one or more bots, to perform distributed denial of
service (DDoS) attacks until the victim pays them to stop attacking the network. A Tokyo firm
had to pay 2.5 million yen after its network was brought to a halt by botnet attacks. Because the
attack was so discreet, the police were unable to track down the attackers. In the world of cloud
computing this is clearly a huge concern.
5. Explain the Security Benefits of cloud scenarios
We are not trying to imply that your data is insecure on the cloud. Service providers do
make an effort to ensure the security of your data; otherwise their business would dry up. Some
of the security benefits of cloud services are:
By maintaining data on the cloud, ensuring strong access control, and limiting employees to
downloading/accessing only what they need to perform a task, cloud computing can limit
the amount of information that could potentially be lost. Reduced data loss is also helped by the
fact that the data is stored in a centralized place, making your systems more inherently secure.
If your data is maintained on a cloud, it is easier to monitor security than to worry about
the security of numerous servers and clients. Of course, a breach of the cloud would put all the
data at risk, but if you are mindful of security and keep up with it, you only have to worry about
one location, rather than several.
If your system is breached, you can instantly move the data to another machine and, in
parallel, conduct an investigation to find out who was behind the breach, all without disturbing
your users. Traditionally in such cases, time is wasted explaining the cause to management and
obtaining approval to shut down the system so that the data can be moved to another system.
When you built your own network, you had to buy third-party security software to get the
level of protection you wanted. With a cloud solution, those tools can be bundled in and made
available to you, and you can build your system with whatever level of security you desire.
SaaS providers don’t bill you separately for all the security testing they do; the cost is
shared among the cloud users. The result is that, because you are in a pool with others, you get
lower costs for security testing. This is also the case with PaaS, where your developers create
their own code but the cloud’s code-scanning tools check it for security weaknesses.
6. Explain Regulatory Issues
It’s rare that we actually want the government in our business. In the case of cloud
computing, however, regulation might be exactly what we need. Without some rules in place, it is
too easy for service providers to be insecure, or even shifty enough to make off with your data.
Government to the Rescue?
Is it the government’s place to regulate cloud computing? As we mentioned, thanks to the
Great Depression, we had regulation that protected WaMu’s customers’ money when the bank
failed.
There are two schools of thought on the issue. First, if government can figure out a way to
safeguard data—either from loss or theft—any company facing such a loss would applaud the
regulation. On the other hand, there are those who think the government should stay out of it and
let competition and market forces guide cloud computing.
There are important questions that government needs to work out. First, who owns the
data? Also, should law enforcement agencies have easier access to personal information on cloud
data than that stored on a personal computer?
A big problem is that people using cloud services don’t understand the privacy and security
implications of their online email accounts, their LinkedIn accounts, their MySpace pages, and so
forth. While these are popular sites for individuals, they are still cloud services, and their
regulation will affect other cloud services.
Government Procurement
There are also questions about whether government agencies will store their data on the
cloud. Procurement regulations will have to change for government agencies to be keen on
jumping on the cloud.
The General Services Administration is making a push toward cloud computing, in an
effort to reduce the amount of energy their computers consume. Hewlett-Packard and Intel
produced a study that shows the federal government spends $480 million per year on electricity to
run its computers.
UNIT-3
CLOUD ARCHITECTURE
1. Explain Cloud Architecture?
Cloud computing architecture comprises many loosely coupled cloud components. We can
broadly divide the cloud architecture into two parts:
• Front End
• Back End
Each end is connected to the other through a network, usually the Internet. The following
diagram shows a graphical view of the cloud computing architecture:
Front End
The front end refers to the client part of the cloud computing system. It consists of the
interfaces and applications that are required to access cloud computing platforms, for example a
web browser.
Back End
The back end refers to the cloud itself. It consists of all the resources required to provide
cloud computing services: huge data storage, virtual machines, security mechanisms, services,
deployment models, servers, and so on.
2. Explain the SPI Framework for Cloud Computing?
A commonly agreed-upon framework for describing cloud computing services goes by the
acronym SPI. This acronym stands for the three major services provided through the cloud:
software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).
Infrastructure as a Service
The IaaS model provides the infrastructure required to run applications. A cloud
infrastructure enables on-demand provisioning of servers running several types of operating
systems and a customized software stack. The provider is in complete control of the infrastructure.
Infrastructure services are considered the bottom layer of cloud computing systems; IBM is one
example. The definition of infrastructure as a service (IaaS) is pretty simple: you rent cloud
infrastructure (servers, storage, and networking) on demand, in a pay-as-you-go model.
Advantages:-
1. Tremendous control to use whatever content makes sense.
2. Flexibility to secure data to whatever degree necessary.
3. Physical independence from the infrastructure (you don’t have to ensure that proper
cooling is in place, etc.).
Disadvantages:-
1. Responsible for all configuration implemented on the server (and in the application).
2. Responsible for keeping software up to date.
3. Multi-tenancy at the hypervisor level; integration of all aspects of the application.
Platform as a Service
In a platform-as-a-service (PaaS) model, the service provider offers a development
environment to application developers, who develop applications and offer those services through
the provider’s platform.
A cloud platform offers an environment in which developers create and deploy
applications without necessarily needing to know how many processors or how much memory
their applications will be using. Google App Engine [9], an example of Platform as a
Service, offers a scalable environment for developing and hosting Web applications, which must
be written in specific programming languages such as Python or Java and use the service’s own
proprietary structured object data store.
Advantages:-
a) Reduced complexity, because the CSP maintains the environment.
b) The cloud service provider often exposes its own API (a benefit to the developer).
Disadvantages:-
a) Still responsible for keeping software updated.
b) Multi-tenancy at platform layer.
Software as a Service
In a SaaS model, the customer does not purchase software, but rather rents it for use on a
subscription or pay-per-use basis. Services provided by this layer can be accessed by end users
through Web portals. Consumers are therefore increasingly shifting from locally installed
computer programs to online software services that offer the same functionality.
This model removes the burden of software maintenance from customers and simplifies
development and testing for providers. An example is Salesforce.com [10], which relies on the
SaaS model and offers business productivity applications (CRM) that reside completely on its
servers, allowing customers to customize and access the applications on demand.
Advantages:-
a) Scaling the environment is not the customer’s problem.
b) Updates/configuration/security are all managed by the CSP.
Disadvantages:-
a) Very little application customization.
b) No control of components.
c) No control over security.
d) Multi-tenancy issue at the application layer
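A compact way to compare the three SPI layers is to list who manages what at each one. The division below is the commonly cited split between provider and customer responsibilities, simplified for illustration:

```python
# Who manages which layer of the stack under each SPI delivery model.
# This is the commonly cited division of responsibility, simplified.

RESPONSIBILITY = {
    "IaaS": {"provider": ["hardware", "virtualization"],
             "customer": ["os", "runtime", "application", "data"]},
    "PaaS": {"provider": ["hardware", "virtualization", "os", "runtime"],
             "customer": ["application", "data"]},
    "SaaS": {"provider": ["hardware", "virtualization", "os", "runtime",
                          "application"],
             "customer": ["data"]},
}

def managed_by_customer(model):
    """Return the parts of the stack the customer still manages."""
    return RESPONSIBILITY[model]["customer"]

print(managed_by_customer("SaaS"))   # under SaaS, only the data is yours
```

Reading the table top to bottom shows why the disadvantages shrink from IaaS to SaaS: each step up hands another layer (and its maintenance burden) to the provider.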
3. Explain SPI Evaluation?
Software Process Improvement (SPI) encompasses the analysis and modification of the
processes within software development, aimed at improving key areas that contribute to the
organizations' goals. The task of evaluating whether the selected improvement path meets these
goals is challenging.
On the basis of the results of a systematic literature review on SPI measurement and
evaluation practices, we developed a framework (SPI Measurement and Evaluation Framework
(SPI-MEF)) that supports the planning and implementation of SPI evaluations.
CHALLENGES IN MEASURING AND EVALUATING SPI INITIATIVES
Challenge I - Heterogeneity of SPI initiatives
The spectrum of SPI initiatives ranges from the application of tools for improving specific
development processes, to the implementation of organization-wide programs to increase the
software development capability as a whole.
Challenge II - Partial evaluation
The outcome of SPI initiatives is predominantly assessed by evaluating measures collected
at the project level. As a consequence, the improvement can be evaluated only partially,
neglecting effects that are visible only outside individual projects. Such evaluations can
therefore lead to sub-optimizations of the process. By focusing on the measurement of a single
attribute, e.g. the effectiveness of the code review process, other attributes might inadvertently change.
Challenge III - Limited visibility
This challenge is a consequence of the previous one, since a partial evaluation implies that
the gathered information is targeted at a specific audience which may not cover all important
stakeholders of an SPI initiative. This means that information requirements may not be satisfied,
and that the actual achievements of the SPI initiative may not be visible to some stakeholders if
the measurement scope is not adequately determined.
Challenge IV - Evaluation effort and validity
Due to the vast diversity of SPI initiatives (see Challenge I), it is not surprising that
evaluation strategies vary. The evaluation and analysis techniques are customized to the specific
settings where the initiatives are embedded.
4. Explain SPI vs. the Traditional IT Model?
Cloud computing is a far more abstract, virtual hosting solution. Instead of being
accessible via physical hardware, all servers, software, and networks are hosted in the cloud, off
premises. It is a real-time virtual environment hosted across several different servers at the same
time. So rather than investing money in purchasing physical servers in-house, you can rent data
storage space from cloud computing providers on a more cost-effective, pay-per-use basis.
Resilience and Elasticity
The information and applications hosted in the cloud are evenly distributed across all the
servers, which are connected to work as one. Therefore, if one server fails, no data is lost and
downtime is avoided. The cloud also offers more storage space and server resources, including
better computing power. This means your software and applications will perform faster.
Traditional IT systems are not so resilient and cannot guarantee a consistently high level
of server performance. They have limited capacity and are susceptible to downtime, which can
greatly hinder workplace productivity.
Flexibility and Scalability
Cloud hosting offers an enhanced level of flexibility and scalability in comparison to
traditional data centres. The on-demand virtual space of cloud computing has unlimited storage
space and more server resources. Cloud servers can scale up or down depending on the level of
traffic your website receives, and you will have full control to install any software as and when
you need to. This provides more flexibility for your business to grow.
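The scale-up/scale-down behaviour described above can be sketched as a toy autoscaling rule that sizes the server fleet to the current traffic. The per-server capacity figure and thresholds are invented for illustration; real providers use richer metrics.

```python
# Toy autoscaling rule: run just enough servers for the current traffic,
# never dropping below a minimum. Capacity figures are hypothetical.
import math

def servers_needed(requests_per_sec, capacity_per_server=100, minimum=1):
    """Return how many servers to run for the current traffic level."""
    needed = math.ceil(requests_per_sec / capacity_per_server)
    return max(minimum, needed)

for load in (0, 50, 250, 1000):
    print(load, "req/s ->", servers_needed(load), "server(s)")
```

With traditional in-house infrastructure the fleet size is fixed in advance; the point of the cloud model is that this function is, in effect, re-evaluated continuously and billed accordingly.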
With traditional IT infrastructure, you can only use the resources that are already available
to you. If you run out of storage space, the only solution is to purchase or rent another server. If
you hire more employees, you will need to pay for additional software licences and have these
manually installed on your office hardware. This can be a costly venture, especially if your
business is growing quite rapidly.
Automation
A key difference between cloud computing and traditional IT infrastructure is how they are
managed. Cloud hosting is managed by the storage provider who takes care of all the necessary
hardware, ensures security measures are in place, and keeps it running smoothly. Traditional data
centres require heavy administration in-house, which can be costly and time consuming for your
business. Fully trained IT personnel may be needed to ensure regular monitoring and maintenance
of your servers – such as upgrades, configuration problems, threat protection and installations.
Running Costs
Cloud computing is more cost effective than traditional IT infrastructure due to methods
of payment for the data storage services. With cloud based services, you only pay for what is used
– similarly to how you pay for utilities such as electricity. Furthermore, the decreased likelihood
of downtime means improved workplace performance and increased profits in the long run.
With traditional IT infrastructure, you need to purchase equipment and additional server
space upfront to accommodate business growth. If growth slows, you end up paying for resources
you don’t use. Furthermore, the value of physical servers decreases year on year, so the return on
investment in traditional IT infrastructure is quite low.
Security
Cloud computing is an external form of data storage and software delivery, which can make
it seem less secure than local data hosting. Anyone with access to the server can view and use the
stored data and applications in the cloud, wherever an Internet connection is available. When
transitioning to the cloud, it is crucial to choose a cloud service provider that is completely
transparent in its hosting of cloud platforms and ensures optimum security measures are in place.
With traditional IT infrastructure, you are responsible for the protection of your data, and
it is easier to ensure that only approved personnel can access stored applications and data.
Software-As-A-Service
Software as a Service (SaaS) is a way of delivering applications over the Internet, as a
service. Instead of installing and maintaining software, we simply access it via the Internet,
freeing ourselves from complex software and hardware management. SaaS is simply the cloud
vendor providing the given piece of software you want to use, on their servers.
The area becomes even more blurred with companies like Google and Salesforce that offer
both types of services. For instance, not only can you build an application with Salesforce, but you
can also allow others to use the application you developed.
1. Explain the Benefits of SaaS?
Operational Benefits
There are benefits to the way you operate. You can change business processes (for the
better) by moving some applications and storage to the cloud. The following are some of the
operational benefits:
• Reduced cost Since technology is paid incrementally, your organization saves money in
the long run.
• Increased storage You can store more data on the cloud than on a private network. Plus,
if you need more it’s easy enough to get that extra storage.
• Automation Your IT staff no longer needs to worry about keeping applications up to
date; that’s the provider’s job. They can focus on duties that matter, rather than on
maintenance.
• Flexibility You have more flexibility with a cloud solution. Applications can be tested and
deployed with ease, and if it turns out that a given application isn’t getting the job done,
you can switch to another.
• Better mobility Users can access the cloud from anywhere with an Internet connection.
This is ideal for road warriors or telecommuters—or someone who needs to access the
system after hours.
Economic Benefits
Where the rubber really meets the road is when you consider the economic benefits of
something. And with cloud computing, cost is a huge factor. But it isn’t just in equipment savings;
it is realized throughout the organization. These are some benefits to consider:
• People We hate to suggest that anyone lose their job, but the honest-to-goodness truth
(we’re sorry) is that by moving to the cloud, you’ll rely on fewer staffers. By having fewer
staff members, you can look at your team and decide if such-and-such a person is
necessary. Is he or she bringing something to the organization? Are their core competencies
something you still need? If not, this gives you an opportunity to find the best people to
remain on staff.
• Hardware With the exception of very large enterprises or governments, major cloud
suppliers can purchase hardware, networking equipment, bandwidth, and so forth, much
cheaper than a “regular” business. That means if you need more storage, it’s just a matter
of upping your subscription costs with your provider, instead of buying new equipment. If
you need more computational cycles, you needn’t buy more servers; rather you just buy
more from your cloud provider.
• Pay as you go Think of cloud computing like leasing a car. Instead of buying the car
outright, you pay a smaller amount each month. It’s the same with cloud computing: you
just pay for what you use. But, also like leasing a car, at the end of the lease you don’t own
the car. That might be a good thing; the car may be a piece of junk, and in the case of a
purchased server, it’s sure to be obsolete.
• Time to market One of the greatest benefits of the cloud is the ability to get apps up
and running in a fraction of the time you would need in a conventional scenario. Let’s take
a closer look at that and see how getting an application online faster saves you money.
2. Explain Evaluating SaaS?
Before employing a SaaS solution, there are factors to consider. You should evaluate not
only the SaaS provider and its service, but also what your organization wants from SaaS. Be sure
the following factors are present as you evaluate your SaaS provider:
• Time to value As we mentioned earlier, one of the great benefits of using cloud services
is the ability to shorten the time it takes to get a new system or application up and running.
Unlike traditional software that might require complex installation, configuration,
administration, and maintenance, SaaS only requires a browser. This allows you to get up
and running much more quickly than by using traditional software.
• Trial period Most SaaS providers offer a 30-day trial of their service. This usually doesn’t
happen with traditional software—and certainly you wouldn’t move everyone en masse to
the trial. However, you can try out the SaaS vendor’s offering and if it feels like a good fit,
you can start making the move.
• Low entry costs Another appeal of SaaS is the low cost to get started using it. Rather than
laying out an enormous amount of money, you can get started relatively inexpensively.
Using anSaaS solution is much less expensive than rolling out a complex software
deployment across your organization.
• Service In SaaS, the vendor serves the customer. That is, the vendor becomes your IT
department—at least for the applications they’re hosting. This means that your own, in-
house IT department doesn’t have to buy hardware, install and configure software, or
maintain it. That’s all on your SaaS vendor. And if the vendor isn’t responsive to your
needs, pack up your toys and move to a different service. It is in the vendor’s best interests
to keep you and other customers happy.
• Wiser investment SaaS offers a less risky option than traditional software installed locally.
Rather than spend a lot of money up front, your organization will pay for the software as it
is used. Also, there is no long-term financial commitment. The monetary risk is greatly
lessened in a SaaS environment.
• Security Earlier in this book we talked about the security concerns with going to the cloud.
We mentioned those issues for the sake of completeness, but in reality it is in your vendor’s
best interests to keep you as secure as possible.
3. Explain the SaaS Providers.
Salesforce.com
Salesforce.com is a cloud computing and social software-as-a-service (SaaS) provider
based in San Francisco. It was founded in March 1999, in part by former Oracle executive Marc
Benioff.
Software as a service (SaaS) is a software distribution model in which a third-party
provider hosts applications and makes them available to customers over the Internet. SaaS is one
of the three main categories of cloud computing, alongside infrastructure as a service (IaaS) and
platform as a service (PaaS).
Salesforce.com’s Customer Relationship Management service is broken down into several
broad categories:
• commerce cloud
• sales cloud
• business logic mentor
• Programmable interface
• automatic mobile device deployment
• data cloud
• marketing cloud, community cloud
• analytics cloud enter
• appcloud
• reporting and Analytics
Sales Cloud is a fully customizable product that brings all the customer information together
in an integrated platform incorporating marketing, lead generation, sales, customer service, and
business analytics, and provides access to applications through the AppExchange. The platform is
provided as software as a service for browser-based access; a mobile app is also available. A real-
time social feed for collaboration allows users to share information or ask questions of the user
community.
Salesforce.com offers five editions of Sales Cloud on a per-user, per-month basis, from
lowest to highest:
• Group, Professional, Enterprise, Unlimited, and Performance
The company offers three levels of support contracts:
o Standard Success Plan
o Premier Success Plan
o Premier+ Success Plan
Force.com
Force.com is Salesforce.com’s on-demand cloud computing platform, billed by
Salesforce.com as the world’s first PaaS. Force.com features Visualforce, a technology that makes
it much simpler for customers, developers, and independent software vendors to design almost
any type of cloud application for a wide range of uses. The Force.com platform offers global
infrastructure and services for database, logic, workflow, integration, user interface, and
application exchange.
Desk.com
Desk.com is a SaaS help desk and customer support product of Salesforce.com. Desk.com
was previously known as Assistly. After being acquired by Salesforce.com, Assistly was renamed
Desk.com in 2012 and positioned as slick, social customer support software.
The product differentiates itself from Salesforce’s other service platforms in that Desk.com
specifically targets small businesses with its features and functions. Desk.com integrates with a
variety of products and third-party applications, including Salesforce CRM and other apps.
Desk.com also supports up to 50 languages.
4. Software-as-a-Service with Google App Engine
Software architects interested in building Software as a Service (SaaS) have a wide variety
of deployment options at their disposal, with multiple vendors providing services that cater to their
individual needs and requirements. Google App Engine (GAE) is one of the more popular
platforms in this arena, providing the robust and scalable services its name suggests. With
GAE, developers can build a SaaS application in the language of their choice while reaping the
benefits of cloud computing in hosting their application: near-infinite, automatic horizontal
scalability, metered usage, and on-demand deployment of services.
A good example of SaaS is Google Docs, a productivity suite that is free for anyone to use.
All you need to do is log in, and you instantly have access to a word processor, a spreadsheet
application, and a presentation creator. Google’s online services are managed directly from the
web browser and require zero installation. You can access your Google Docs from any computer
or mobile device with a web browser.
Google App Engine provides more infrastructure than other scalable hosting services such
as Amazon Elastic Compute Cloud (EC2). Google App Engine is free up to a certain amount of
resource use. Users exceeding the per-day or per-minute usage rates for CPU resources, storage,
or the number of API calls or requests must pay for additional resources.
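The free-quota billing model just described can be sketched as a simple overage calculation: usage within the daily quota costs nothing, and only the excess is charged. The quota and rate figures below are invented for illustration and are not Google's actual prices.

```python
# Metered billing sketch: usage within the free daily quota is free;
# only the overage is charged. Figures are hypothetical, not GAE's.

def daily_charge_cents(units_used, free_quota, rate_cents_per_unit):
    """Return the day's charge, in cents, for usage beyond the free quota."""
    overage = max(0, units_used - free_quota)
    return overage * rate_cents_per_unit

print(daily_charge_cents(6.5, 6.5, 10))   # within quota: nothing to pay
print(daily_charge_cents(10.0, 6.5, 10))  # 3.5 units over the quota
```

Working in integer cents (rather than fractional dollars) is a common way to keep billing arithmetic free of floating-point surprises.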
Features and Benefits of Google App Engine
GAE supports Java, Python, PHP, and Go, as well as the associated development
frameworks for these languages, namely Spring, Struts, and Django, among others. Traditional
databases such as MySQL are supported, as well as next-generation NoSQL datastores and big
data distributions such as MongoDB and Hadoop, respectively. Developers have at their disposal
a wide variety of IDEs compatible with GAE, including NetBeans, Eclipse, and Komodo.
Developers access their applications through the main web interface, and manage and
control them through App Engine’s admin console. The admin console enables developers to
perform basic configuration, create, disable, or delete applications, view performance statistics,
and carry out other maintenance tasks. The main feature of the admin console is the ability to set
performance options, which allows app optimization based on the developers’ preferences: for
example, tuning down the servers to an optimal pricing range to reduce costs. Conversely, one
may opt to configure an application for the highest availability and best response time possible.
GAE’s admin console allows these and many other configuration options.
Google promises 99.95% uptime in its service level agreement (SLA), which allows roughly
22 minutes of downtime per month. The performance and status of GAE services can be checked
publicly on GAE’s system status page. If Google is unable to meet the SLA, it offers customers a
certain amount of free service days per billing cycle.
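As a sanity check on what such a guarantee permits, a 99.95% monthly uptime figure over a 30-day month works out to about 21.6 minutes of allowed downtime:

```python
# Downtime allowed by a 99.95% monthly uptime SLA (30-day month).
minutes_per_month = 30 * 24 * 60                   # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.9995)
print(round(allowed_downtime, 1), "minutes")       # about 21.6 minutes
```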
5. Explain Salesforce.com and Force.com?
Salesforce.com offers Force.com as its on-demand platform. Force.com features
breakthrough Visualforce technology, which allows customers, developers, and ISVs to design
any app, for any user, anywhere with the world’s first User Interface-as-a-Service. The Force.com
platform offers global infrastructure and services for database, logic, workflow, integration, user
interface, and application exchange.
“With Force.com, customers, developers and ISVs can choose innovation, not
infrastructure,” said Marc Benioff, chairman and CEO, Salesforce.com. “Google, Amazon, and
Apple have all shown that by revolutionizing a user interface you can revolutionize an industry.
With Visualforce we’re giving developers the power to revolutionize any interface, and any
industry, on demand.”
A capability of the Force.com platform, Visualforce provides a framework for creating
user experiences, and enables new interface designs and user interactions to be built and
delivered with no software or hardware infrastructure requirements.
With Visualforce, developers have control over the look and feel of their Force.com
applications enabling wide flexibility in terms of application creation. From a handheld device for
a sales rep in the field, to an order-entry kiosk on a manufacturing shop floor, Visualforce enables
the creation of new user experiences that can be customized and delivered in real time on any
screen.
Platform-As-A-Service
Platform as a Service (PaaS) is a way to build applications and have them hosted by the
cloud provider. It allows you to deploy applications without having to spend the money to buy the
servers on which to house them. In this section we’ll take a closer look at companies RightScale
and Google. We’ll talk about their services, what they offer, and what other companies are getting
out of those services.
6. Explain RightScale?
RightScale entered into strategic product partnerships, broadening its cloud
management platform to support emerging clouds from new vendors, including FlexiScale and
GoGrid, while continuing its support for Amazon’s EC2. RightScale is also working with
Rackspace to ensure compatibility with Rackspace’s cloud offerings, including Mosso and CloudFS.
RightScale offers an integrated management dashboard, where applications can be deployed once
and managed across these and other clouds.
Businesses can take advantage of the nearly infinite scalability of cloud computing by using
RightScale to deploy their applications on a supported cloud provider. They gain the capabilities
of built-in redundancy, fault tolerance, and geographical distribution of resources—key enterprise
demands for cloud providers.
Customers can leverage the RightScale cloud management platform to automatically
deploy and manage their web applications—scaling up when traffic demands, and scaling back as
appropriate—allowing them to focus on their core business objectives.
RightScale’s automated system management, prepackaged and reusable components,
leading service expertise, and best practices have been proven as best-of-breed, with customers
deploying hundreds of thousands of instances on Amazon’s EC2.
“Cloud computing is a disruptive force in the business world because it provides
pay-as-you-go, on-demand, virtually infinite compute and storage resources that can expand or
contract as needed,” said Michael Crandell, CEO of RightScale, Inc.
“A number of public providers are already adopting cloud architectures—and we also see
private enterprise clouds coming on the horizon.
Today’s announcement of RightScale’s partnerships with FlexiScale and GoGrid is an
exciting indication of how mid-market and enterprise organizations can really take advantage of
multicloud architectures.
There will be huge opportunities for application design and deployment; we are at the
beginning of a tidal shift in IT infrastructure.”
“Cloud computing for the enterprise has arrived with the GoGrid and RightScale
partnership,” said GoGrid CEO John Keagy. “Corporations now have few excuses not to, and
multiple reasons to, deploy and manage complex and redundant cloud infrastructures in real time
using the GoGrid, RightScale, and FlexiScale technologies.”
7. Explain Rackspace?
The Rackspace Cloud is a set of cloud computing products and services billed on a utility
computing basis by the US-based company Rackspace. Offerings include web application hosting
or platform as a service (“Cloud Sites”), cloud storage (“Cloud Files”), virtual private servers
(“Cloud Servers”), load balancers, databases, backup, and monitoring.
It also offers Cloud Block Storage and Cloud Backup. Cloud Block Storage delivers higher
performance than object-based clouds by using a combination of hard drives and solid-state drives.
The services provided by Rackspace:
Dedicated Servers: From server, networking, and storage configuration, monitoring, and support,
to bursting to the cloud of your choice, Rackspace has the options and expertise to create a best-fit
solution. And when time is of the essence, there are on-demand configurations that are truly single-
tenant and secure and, as always, backed by Fanatical Support. The benefits are: security and
control, high-performance compute, and cloud-ready scalability.
Cloud Files is a cloud hosting service that provides “unlimited online storage and CDN” for
media (examples given include backups, video files, and user content) on a utility computing basis.
It was originally launched as MossoCloudFS in a private beta release on May 5, 2008, and is similar
to Amazon Simple Storage Service. Unlimited files of up to 5 GB each can be uploaded and
managed via the online control panel or a RESTful API.
API
In addition to the online control panel, the service can be accessed over a RESTful API
with open source client code available in C#/.NET, Python, PHP, Java, and Ruby. Rackspace-
owned Jungle Disk allows Cloud Files to be mounted as a local drive within supported operating
systems (Linux, Mac OS X, and Windows).
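As a rough illustration of what such a RESTful call looks like, the sketch below builds (but does not send) an object-upload request. The account path, header names, and token value are placeholders for illustration, not the exact Cloud Files API; a real client would hand this to an HTTP library.

```python
# Sketch of the shape of a RESTful object-storage upload request.
# Path layout, headers, and the token are hypothetical placeholders.

def build_upload_request(container, object_name, token):
    """Return (method, path, headers) for an object upload."""
    path = f"/v1/my-account/{container}/{object_name}"
    headers = {
        "X-Auth-Token": token,                    # token from a prior auth call
        "Content-Type": "application/octet-stream",
    }
    return ("PUT", path, headers)

method, path, headers = build_upload_request("backups", "db.tar.gz", "abc123")
print(method, path)   # PUT /v1/my-account/backups/db.tar.gz
```

The REST style shows here in the structure itself: the object is named by the URL path, the verb (PUT) says what to do with it, and authentication travels in a header.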
Security
Redundancy is achieved by replicating three full copies of data across multiple computers
in multiple "zones" within the same data center, where "zones" are physically (though not
geographically) separate and supplied separate power and Internet services. Uploaded files can be
distributed via Akamai Technologies to "hundreds of endpoints across the world" which provides
an additional layer of data redundancy.
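The three-copy, zone-aware replication described above can be sketched in a few lines; the zone names and the placement helper are illustrative assumptions, not Rackspace's actual placement algorithm.

```python
def place_replicas(obj_name, zones, copies=3):
    """Pick `copies` distinct zones for an object's replicas.

    Zones are physically (though not geographically) separate areas of
    one data center, so no two copies of an object should share a zone.
    """
    if len(zones) < copies:
        raise ValueError("need at least as many zones as copies")
    # Deterministic spread within a run: start at a hash-derived offset
    # and walk the zone list, so objects land on different zone sets.
    start = hash(obj_name) % len(zones)
    return [zones[(start + i) % len(zones)] for i in range(copies)]

zones = ["zone-a", "zone-b", "zone-c", "zone-d"]
replicas = place_replicas("backup.tar.gz", zones)
print(replicas)   # three distinct zones from the list above
```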
The control panel and API are protected by SSL and the requests themselves are signed
and can be safely delivered to untrusted clients. Deleted data is zeroed out immediately.
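Signing requests so that URLs can be handed to untrusted clients is typically done with an HMAC over the request details; the key, path, and expiry scheme below are illustrative assumptions, not Rackspace's exact signing protocol.

```python
import hmac
import hashlib
import time

def sign_request(method, path, expires, secret_key):
    """Create an HMAC signature binding the HTTP method, object path,
    and expiry time. A client holding only the signed URL (not the key)
    can use it until it expires; the server recomputes the HMAC to
    validate each request."""
    message = f"{method}\n{expires}\n{path}".encode()
    return hmac.new(secret_key.encode(), message, hashlib.sha256).hexdigest()

secret = "example-account-key"                 # illustrative secret
expires = int(time.time()) + 3600              # valid for one hour
sig = sign_request("GET", "/v1/acct/container/object", expires, secret)
url = (f"https://storage.example.com/v1/acct/container/object"
       f"?expires={expires}&sig={sig}")
```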
Use cases
Use cases considered as "well suited" include backing up or archiving data, serving images
and videos (which are streamed directly to the users' browsers), serving content over content
delivery networks, storing secondary static web-accessible data, developing data storage
applications, storing fluctuating and/or unpredictable amounts of data and reducing costs.
Rackspace Hosting provides IT systems and computing-as-a-service to more than 33,000
customers worldwide. Combining RightScale’s technologies with Rackspace’s focus on Fanatical
Support will allow companies to focus more on their business and not spend a disproportionate
amount of resources on IT demands.
8. Explain Services and Benefits of PaaS?
Force.com PaaS provides the building blocks necessary to build business apps, whether
they are simple or sophisticated, and automatically deploy them as a service to small teams or
entire enterprises. The Force.com platform gives customers the power to runmultiple applications
within the same Salesforce instance, allowing all of a company’s Salesforce applications to share
a common security model, data model, and user interface.
The multitenant Force.com platform encompasses a feature set for the creation of business
applications such as an on-demand operating system, the ability to create any database on demand,
a workflow engine for managing collaboration between users, the Apex Code programming
language for building complex logic, the Force.com Web Services API for programmatic access,
mashups, and integration with other applications and data, and now Visualforce for a framework
to build any user interface.
As part of the Force.com platform, Visualforce gives customers the means to design
application user interfaces for any experience on any screen. Using the logic and workflow
intelligence provided by Apex Code, Visualforce offers the ability to meet the requirements of
applications that feature different types of users on a variety of devices.
Visualforce uses Internet technology, including HTML, AJAX and Flex, for business
applications. Visualforce enables the creation and delivery of any user experience, offering control
over an application’s design and behavior that is only limited by the imagination.
There are various benefits of Force.com as they provide everything you could need as a
part of their service. The ease of use of Salesforce as a technology and majority of Fortune 500
brands are harnessing the power of Salesforce, it’s no coincidence by any means.
Force.com by Salesforce.com is a platform that offers advanced cloud computing as a service.
It supports multitenant applications and caters to various clients with only one instance of the
application running.
9. What are the uses of Force.com?
Making an application on Force.com platform is easy and fast. Various tools provided by
the platform make things really easy for the developers. Force.com provides many features like
multi-layered security and social and mobile optimization.
Form builder: There are several tools featured on the platform, such as drag and drop tools, auto-
generated UIs, and pre-designed components and templates. With all these tools development and
deployment has become easy. An object that is created can be dragged to the pages and it starts to
interact with the data. Forms are also very easy to build without using any complex code or
technical knowledge.
Optimized for mobile & social media: The platform provides a mobile optimized platform for
your application. The application runs on iPad, iPhones and all other Smartphones automatically.
Report creation: Reports can be analyzed by integrating with the existing ERPs of
your business, and personalised reports can be retrieved at any time by dragging and dropping.
Automation: Force.com platform has the power to automate almost every business process. The
business logic needs to be added to the applications and some database triggers need to be written
for automating every process of the business. There is a visual process workflow that allows for
adding complex business logic to the applications.
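As a rough analogy for the trigger-driven automation described above, a record store can run registered business logic on every insert; this plain-Python sketch is not the Force.com API.

```python
class RecordStore:
    """Tiny record store with insert triggers, analogous to the
    database triggers used to automate business processes."""
    def __init__(self):
        self.records = []
        self.triggers = []

    def on_insert(self, func):
        """Register a piece of business logic to run on every insert."""
        self.triggers.append(func)
        return func

    def insert(self, record):
        self.records.append(record)
        for trig in self.triggers:   # automation runs without user action
            trig(record)

store = RecordStore()
approvals = []

@store.on_insert
def auto_approve_small_orders(record):
    # Hypothetical business rule: orders under 500 need no manual review.
    if record.get("amount", 0) < 500:
        approvals.append(record["id"])

store.insert({"id": "ord-1", "amount": 120})
store.insert({"id": "ord-2", "amount": 9000})
print(approvals)   # ['ord-1']
```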
Development: The platform gives the liberty of creating the user interface of choice and adds
business logic to it as and when needed. The native languages of the force.com platform like Apex
and Visualforce can be used in combination with flash and HTML to develop rich interfaces.
Security: The platform has a built-in robust security and privacy program which has been tested
by some of the most trusted organisations.
UNIT-4
INFRASTRUCTURE AS A SERVICE
Infrastructure as a Service is sometimes described as everything as a service: you are using a
virtualized server and running your own software on it. One of the most prevalent offerings is Amazon Elastic Compute Cloud
(EC2). Another player in the field is GoGrid. In this section we’ll take a closer look at both Amazon
and GoGrid.
1. List IAAS service providers?
Amazon EC2
Amazon Elastic Compute Cloud (http://aws.amazon.com/ec2) is a web service that
provides resizable computing capacity in the cloud. Amazon EC2’s simple web service interface
allows businesses to obtain and configure capacity with minimal friction. It provides control of
computing resources and lets organizations run on Amazon’s computing environment.
Amazon EC2 reduces the time required to obtain and boot new server instances to minutes,
allowing quick scaling capacity, both up and down, as computing requirements change. Amazon
EC2 changes the economics of computing by allowing you to pay only for capacity that you
actually use.
GoGrid
GoGrid is a service provider of Windows and Linux cloud-based server hosting, and offers
32-bit and 64-bit editions of Windows Server 2008 within its cloud computing infrastructure.
Parent company ServePath is a Microsoft Gold Certified Partner, and launched Windows Server
2008 dedicated hosting in February 2008.
GoGrid became one of the first Infrastructure as a Service (IaaS) providers to offer
Windows Server 2008 “in the cloud.” The Windows Server 2008 operating system from
Microsoft offers increased server stability, manageability, and security over previous versions of
Windows Server. As such, interest from Windows Server customers wanting to try it out has
been high. GoGrid customers can deploy Windows Server 2008 servers in just a few minutes for
as little as 19 cents an hour, with no commitment.
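At the quoted rate, the no-commitment billing works out as simple arithmetic; the helper below is our own sketch, with the $0.19/hour figure taken from the text.

```python
def hourly_cost(rate_per_hour, hours):
    """Total charge for an on-demand server billed by the hour,
    with no upfront commitment."""
    return round(rate_per_hour * hours, 2)

# A Windows Server 2008 cloud server at $0.19/hour, run for one week:
print(hourly_cost(0.19, 24 * 7))   # 31.92
```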
GoGrid enables system administrators to quickly and easily create, deploy, load-balance,
and manage Windows and Linux cloud servers within minutes. GoGrid offers what it calls
Control in the Cloud™ with its web-based Graphical User Interface (GUI) that allows for
“point and click” deployment of complex and flexible network infrastructures, which include
load balancing and multiple web and database servers, all set up with icons through the GUI.
Initial Windows Server 2008 offerings on GoGrid include both 32-bit and 64-bit
preconfigured templates. GoGrid users select the desired operating system and then choose
preconfigured templates in order to minimize time to deploy. Preconfigurations include:
• Windows Server 2008 Standard with Internet Information Services 7.0 (IIS 7)
• Windows Server 2008 Standard with IIS 7 and SQL Server 2005 Express Edition
• Windows Server 2008 Standard with IIS 7, SQL Server 2005 Express Edition, and
ASP.NET
2. Explain about Amazon EC2 Benefits
1. Elastic Web-Scale Computing: Amazon EC2 enables you to increase or decrease capacity
within minutes, not hours or days. You can commission one, hundreds, or even thousands
of server instances simultaneously. You can also use Auto Scaling to maintain availability
of your EC2 fleet and automatically scale your application up and down depending on its
needs in order to maximize performance and minimize cost.
2. Completely Controlled: You have complete control of your instances including root
access and the ability to interact with them as you would any machine. You can stop any
instance while retaining the data on the boot partition, and then subsequently restart the
same instance using web service APIs. Instances can be rebooted remotely using web
service APIs, and you also have access to their console output.
3. Flexible Cloud Hosting Services: You have the choice of multiple instance types,
operating systems, and software packages. Amazon EC2 allows you to select a
configuration of memory, CPU, instance storage and the boot partition size that is optimal
for your choice of operating system and application. For example, choice of operating
systems includes numerous Linux distributions and Microsoft Windows Server.
4. Integrated: Amazon EC2 is integrated with most AWS services such as Amazon Simple
Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and
Amazon Virtual Private Cloud (Amazon VPC) to provide a complete, secure solution for
computing, query processing, and cloud storage across a wide range of applications.
5. Reliable: Amazon EC2 offers a highly reliable environment where replacement instances
can be rapidly and predictably commissioned. The service runs within Amazon’s proven
network infrastructure and data centers. The Amazon EC2 Service Level Agreement
commitment is 99.95% availability for each Amazon EC2 Region.
6. Secure: Cloud security at AWS is the highest priority. As an AWS customer, you will
benefit from a data center and network architecture built to meet the requirements of the
most security-sensitive organizations. Amazon EC2 works in conjunction with Amazon
VPC to provide security and robust networking functionality for your compute resources.
7. Inexpensive: Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You
pay a very low rate for the compute capacity you actually consume.
8. Easy to Start: There are several ways to get started with Amazon EC2. You can use the
AWS Management Console, the AWS Command Line Tools (CLT), or AWS SDKs.
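The elastic scaling in benefit 1 can be sketched as a simple threshold rule; the utilization thresholds and the function below are illustrative, not the actual Auto Scaling algorithm.

```python
def desired_capacity(current, cpu_utilization, low=30.0, high=70.0,
                     minimum=1, maximum=100):
    """Scale the fleet out when average CPU is high, in when it is low,
    clamped to the allowed fleet size (hypothetical thresholds)."""
    if cpu_utilization > high:
        current += 1
    elif cpu_utilization < low:
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_capacity(4, 85.0))   # 5  (scale out under load)
print(desired_capacity(4, 12.0))   # 3  (scale in to cut cost)
print(desired_capacity(1, 5.0))    # 1  (never below the minimum)
```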
Recent Developments
In 2009, AWS announced plans for several new features that make managing cloud-based
applications easier. Thousands of customers employ the compute power of Amazon EC2 to build
scalable and reliable solutions.
AWS will deliver additional features that automate customer usage of Amazon EC2 for
more cost-efficient consumption of computing power and provide greater visibility into the
operational health of an application running in the AWS cloud.
3. Write about Amazon EC2 Service Level Agreement.
With over two years of operation, Amazon EC2 exited beta into general availability and
offers customers a Service Level Agreement (SLA). The Amazon EC2 SLA guarantees 99.95
percent availability of the service within a region over a trailing 365-day period; otherwise customers are
eligible to receive service credits.
The Amazon EC2 SLA is designed to give customers additional confidence that even the
most demanding applications will run dependably in the AWS cloud.
Service Commitment
AWS will use commercially reasonable efforts to make Amazon EC2 and Amazon EBS
each available with a Monthly Uptime Percentage (defined below) of at least 99.95%, in each case
during any monthly billing cycle (the “Service Commitment”). In the event Amazon EC2 or
Amazon EBS does not meet the Service Commitment, you will be eligible to receive a Service
Credit as described below.
• “Monthly Uptime Percentage” is calculated by subtracting from 100% the percentage of
minutes during the month in which Amazon EC2 or Amazon EBS, as applicable, was in
the state of “Region Unavailable.” Monthly Uptime Percentage measurements exclude
downtime resulting directly or indirectly from any Amazon EC2 SLA Exclusion (defined
below).
• “Region Unavailable” and “Region Unavailability” mean that more than one Availability
Zone in which you are running an instance, within the same Region, is “Unavailable” to
you.
• “Unavailable” and “Unavailability” mean:
o For Amazon EC2, when all of your running instances have no external
connectivity.
o For Amazon EBS, when all of your attached volumes perform zero read/write IO,
with pending IO in the queue.
• A “Service Credit” is a dollar credit, calculated as set forth below, that we may credit
back to an eligible account.
Service Commitments and Service Credits
Service Credits are calculated as a percentage of the total charges paid by you (excluding
one-time payments such as upfront payments made for Reserved Instances) for either Amazon
EC2 or Amazon EBS (whichever was Unavailable, or both if both were Unavailable) in the Region
affected for the monthly billing cycle in which the Region Unavailability occurred in accordance
with the schedule below.
Monthly Uptime Percentage                              Service Credit Percentage
Less than 99.95% but equal to or greater than 99.0%    10%
Less than 99.0%                                        30%
We will apply any Service Credits only against future Amazon EC2 or Amazon EBS
payments otherwise due from you. At our discretion, we may issue the Service Credit to the credit
card you used to pay for the billing cycle in which the Unavailability occurred. Service Credits
will not entitle you to any refund or other payment from AWS.
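The SLA terms above reduce to a short calculation. The sketch below assumes a 30-day (43,200-minute) billing cycle for illustration and applies the published credit schedule.

```python
def monthly_uptime_percentage(unavailable_minutes, minutes_in_cycle=43200):
    """Monthly Uptime Percentage: 100% minus the percentage of minutes
    in the billing cycle that the Region was Unavailable (minutes
    covered by SLA exclusions are assumed already removed)."""
    return 100.0 - (unavailable_minutes / minutes_in_cycle) * 100.0

def service_credit_percentage(uptime):
    """Map Monthly Uptime Percentage onto the published credit schedule."""
    if uptime >= 99.95:
        return 0        # Service Commitment met: no credit
    if uptime >= 99.0:
        return 10
    return 30

uptime = monthly_uptime_percentage(300)   # 5 hours of Region Unavailability
print(round(uptime, 2))                   # 99.31
print(service_credit_percentage(uptime))  # 10
```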
4. Write the Advantages and Disadvantages of IaaS.
Advantages:
1. Cost Savings: An obvious benefit of moving to the IaaS model is lower infrastructure
costs. No longer do organizations have the responsibility of ensuring uptime, maintaining
hardware and networking equipment, or replacing old equipment. IaaS also saves
enterprises from having to buy more capacity to deal with sudden business spikes.
Organizations with a smaller IT infrastructure generally require a smaller IT staff as well.
2. Scalability and flexibility: One of the greatest benefits of IaaS is the ability to scale up
and down quickly in response to an enterprise’s requirements. IaaS providers generally
have the latest, most powerful storage servers and networking technology to accommodate
the needs of their customers. This on-demand scalability provides added flexibility and
greater agility to respond to changing opportunities and requirements.
3. Support for DR, BC and high availability: While every enterprise has some type of
disaster recovery plan, the technology behind those plans is often expensive and unwieldy.
Organizations with several disparate locations often have different disaster recovery and
business continuity plans and technologies, making management virtually impossible.
4. Focus on business growth: Time, money and energy spent making technology decisions
and hiring staff to manage and maintain the technology infrastructure is time not spent on
growing the business. By moving infrastructure to a service-based model, organizations
can focus their time and resources where they belong, on developing innovations in
applications and solutions.
5. Innovate rapidly: As soon as you have decided to launch a new product or initiative, the
necessary computing infrastructure can be ready in minutes or hours, rather than the days
or weeks – and sometimes months – it could take to set up internally.
6. Respond quicker to shifting business conditions: IaaS enables you to quickly scale up
resources to accommodate spikes in demand for your application (elasticity of the cloud)
– during the holidays, for example, then scale resources back down again when activity
decreases to save money.
7. Better Security: With the appropriate service agreement, a cloud service provider can
provide security for your applications and data that may be better than what you can attain
in-house. Better security may come in part because it is critical for the IaaS Cloud Provider
and is part of their main business.
8. Backups: There is no need to manage backups. This is handled by the IaaS Cloud provider.
9. Multiplatform: Some IaaS Providers provide development options for multiple platforms:
mobile, browser, and so on. If you or your organization want to develop software that can
be accessed from multiple platforms, this might be an easy way to make that happen.
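Advantages 1, 5 and 6 come down to not paying year-round for peak capacity; the workload and prices below are made up purely for illustration.

```python
def on_prem_cost(peak_servers, cost_per_server_month, months):
    """On-premises: buy enough capacity for the peak, pay for all of it
    every month whether it is used or not."""
    return peak_servers * cost_per_server_month * months

def iaas_cost(servers_per_month, cost_per_server_month):
    """IaaS: pay each month only for the servers actually used."""
    return sum(n * cost_per_server_month for n in servers_per_month)

# Hypothetical workload: 4 servers most of the year, 20 at the holidays.
usage = [4] * 10 + [20, 20]
print(on_prem_cost(20, 100, 12))   # 24000
print(iaas_cost(usage, 100))       # 8000
```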
Disadvantages:
1. The organization is responsible for the versioning/upgrades of software developed.
2. The maintenance and upgrades of tools, database systems, etc. and the underlying
infrastructure is your responsibility or the responsibility of your organization.
3. There may be legal reasons that prevent the use of off-premise or out-of-country data
storage.
4. If you need high-speed interaction between internal software and software in the cloud,
the IaaS provider may not deliver the speed that you need.
5. IaaS can be the most expensive model: since the customer is leasing a tangible resource,
the provider can charge for every cycle, bit of RAM, or disk space used.
6. Unlike with SaaS or PaaS, the customer is responsible for all aspects of VM management.
CLOUD DEPLOYMENT MODELS
A cloud deployment model represents a specific type of cloud environment, primarily
distinguished by ownership, size, and access.
There are four common cloud deployment models:
1. Public Clouds
2. Community Clouds
3. Private Clouds
4. Hybrid cloud
5. Explain public cloud with a neat diagram?
A public cloud is one based on the standard cloud computing model, in which a service
provider makes resources, such as virtual machines (VMs), applications or storage, available to
the general public over the internet. Public cloud services may be free or offered on a pay-per-
usage model.
The main benefits of using a public cloud service are:
• it reduces the need for organizations to invest in and maintain their own on-premises IT
resources;
• it enables scalability to meet workload and user demands; and
• there are fewer wasted resources because customers only pay for the resources they use.
Public cloud architecture
Public cloud is a fully virtualized environment. In addition, providers have a multi-tenant
architecture that enables users -- or tenants -- to share computing resources. Each tenant's data in
the public cloud, however, remains isolated from other tenants. Public cloud also relies on high-
bandwidth network connectivity to rapidly transmit data.
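Multi-tenancy with isolated tenant data can be sketched as a namespaced store; real providers enforce isolation in the hypervisor, network and storage layers rather than in application code like this.

```python
class MultiTenantStore:
    """Shared infrastructure, isolated data: every read and write is
    scoped to a single tenant's namespace."""
    def __init__(self):
        self._data = {}            # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only ever see its own namespace.
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()
store.put("acme", "db_password", "s3cret")
store.put("globex", "db_password", "hunter2")
print(store.get("acme", "db_password"))    # s3cret
print(store.get("globex", "db_password"))  # hunter2
```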
Public cloud storage is typically redundant, using multiple data centers and careful
replication of file versions. This characteristic has given it a reputation for resiliency.
Public cloud architecture can be further categorized by service model. Common service
models include:
• software as a service (SaaS), in which a third-party provider hosts applications and makes
them available to customers over the internet;
• platform as a service (PaaS), in which a third-party provider delivers hardware and
software tools -- usually those needed for application development -- to its users as a
service; and
• infrastructure as a service (IaaS), in which a third-party provider offers virtualized
computing resources, such as VMs and storage, over the internet.
6. Explain Private Cloud architecture?
Private Cloud allows systems and services to be accessible within an organization. The
private cloud is operated only within a single organization. However, it may be managed
internally by the organization itself or by a third party. The private cloud model is shown in the
diagram below.
Benefits
There are many benefits of deploying cloud as private cloud model. The following diagram
shows some of those benefits:
High Security and Privacy
Private cloud operations are not available to the general public and resources are shared from
a distinct pool of resources. Therefore, it ensures high security and privacy.
More Control
The organization has more control over its resources and hardware in a private cloud than
in a public cloud because the cloud is accessed only within the organization.
Cost and Energy Efficiency
The private cloud resources are not as cost effective as resources in public clouds but they
offer more efficiency than public cloud resources.
Disadvantages
Here are the disadvantages of using private cloud model:
• Restricted Area of Operation: The private cloud is only accessible locally and is very
difficult to deploy globally.
• High Priced: Purchasing new hardware in order to fulfill the demand is a costly
transaction.
• Limited Scalability: The private cloud can be scaled only within capacity of internal
hosted resources.
7. Explain Community cloud?
Community Cloud is an online social platform that enables companies to connect
customers, partners, and employees with each other and the data and records they need to get work
done. This next-generation portal combines the real-time collaboration of Chatter with the ability
to share any file, data, or record anywhere and on any mobile device.
Community Cloud allows you to streamline key business processes and extend them across
offices and departments, and outward to customers and partners. So everyone in your business
ecosystem can service customers more effectively, close deals faster, and get work done in real
time.
You can build communities to gain deeper relationships with customers or provide better
service by enabling customers to find information and assist each other online. Or you can connect
your external channel partners, agents, or brokers to reduce friction and accelerate deals. And you
can empower employees to connect and collaborate wherever business takes them.
Because Community Cloud is built on the Salesforce platform, you can connect any third
party system or data directly into the community. Your organization gains the flexibility to easily
create multiple communities for whatever use case your business demands.
HR and IT Help Desk can engage employees and deliver critical knowledge and
instructions. And from onboarding to payroll to IT troubleshooting, employees can help
themselves to the information they need, 24/7.
Employees find, share, and collaborate on content in real time, and connect with others in
the social intranet — beyond the boundaries of their department, office, or even country.
SECURITY
Community Cloud is built on the trusted Salesforce1 platform. The robust and flexible
security architecture of the platform is relied on by companies around the world, including those
in the most heavily regulated industries — from financial services to healthcare to government.
It provides the highest level of security and control over everything from user and client
authentication through administrative permissions to the data access and sharing model.
ADVANTAGES
Companies of any size can create seamless, branded community experiences quickly and
easily with Community Cloud. For example, Lightning Community Builder and Templates
provide a great out-of-the-box solution to get you started, with simple customization options as
your business grows.
Lightning Community Builder makes it easy to customize your mobile-optimized
community to perfectly match your brand. This includes incorporating third-party and custom
components for ultimate customization.
Community Templates are secure, reliable, scalable, and optimized for mobile. These state-
of-the-art templates are designed to be used right out of the box — no coding or IT required.
8. Explain Hybrid Cloud?
Hybrid Cloud is a mixture of public and private cloud. Non-critical activities are
performed using public cloud while the critical activities are performed using private cloud. The
Hybrid Cloud Model is shown in the diagram below.
Benefits
There are many benefits of deploying cloud as hybrid cloud model. The following diagram
shows some of those benefits:
Scalability: It offers both public cloud scalability and private cloud scalability.
Flexibility: It offers secure resources and scalable public resources.
Cost Efficiency: Public clouds are more cost effective than private ones. Therefore, hybrid clouds
can be cost saving.
Security: The private cloud in hybrid cloud ensures higher degree of security.
Disadvantages
Networking Issues: Networking becomes complex due to the presence of both private and public clouds.
Security Compliance: It is necessary to ensure that cloud services are compliant with security
policies of the organization.
Infrastructure Dependency: The hybrid cloud model is dependent on internal IT infrastructure;
therefore it is necessary to ensure redundancy across data centers.
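The hybrid split described above (critical activities on the private cloud, the rest on the public cloud) amounts to a routing rule; the workload labels below are illustrative.

```python
def route_workload(workload):
    """Send critical workloads to the private cloud for security;
    everything else goes to the cheaper, more scalable public cloud."""
    return "private" if workload.get("critical") else "public"

jobs = [
    {"name": "payroll",        "critical": True},
    {"name": "image-resizing", "critical": False},
]
for job in jobs:
    print(job["name"], "->", route_workload(job))
# payroll -> private
# image-resizing -> public
```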
9. What are the Advantages of Cloud Computing?
Cost Savings
Perhaps, the most significant cloud computing benefit is in terms of IT cost savings.
Businesses, no matter what their type or size, exist to earn money while keeping capital and
operational expenses to a minimum. With cloud computing, you can save substantial capital costs
with zero in-house server storage and application requirements. The lack of on-premises
infrastructure also removes their associated operational costs in the form of power, air conditioning
and administration costs. You pay for what is used and disengage whenever you like - there is no
invested IT capital to worry about. It’s a common misconception that only large businesses can
afford to use the cloud, when in fact, cloud services are extremely affordable for smaller
businesses.
Reliability
With a managed service platform, cloud computing is much more reliable and consistent
than in-house IT infrastructure. Most providers offer a Service Level Agreement which guarantees
24/7/365 and 99.99% availability. Your organization can benefit from a massive pool of redundant
IT resources, as well as a quick failover mechanism: if a server fails, hosted applications and
services can easily be transitioned to any of the available servers.
Manageability
Cloud computing provides enhanced and simplified IT management and maintenance
capabilities through central administration of resources, vendor managed infrastructure and SLA
backed agreements. IT infrastructure updates and maintenance are eliminated, as all resources are
maintained by the service provider. You enjoy a simple web-based user interface for accessing
software, applications and services – without the need for installation - and an SLA ensures the
timely and guaranteed delivery, management and maintenance of your IT services.
Strategic Edge
Ever-increasing computing resources give you a competitive edge over competitors, as the
time you require for IT procurement is virtually nil. Your company can deploy mission critical
applications that deliver significant business benefits, without any upfront costs and minimal
provisioning time. Cloud computing allows you to forget about technology and focus on your key
business activities and objectives. It can also help you to reduce the time needed to market newer
applications and services.
UNIT - 5
VIRTUALIZATION
1. What is virtualization and cloud computing?
"Virtualization software makes it possible to run multiple operating systems and multiple
applications on the same server at the same time," said Mike Adams, director of product marketing
at VMware, a pioneer in virtualization and cloud software and services. "It enables businesses to
reduce IT costs while increasing the efficiency, utilization and flexibility of their existing computer
hardware."
The technology behind virtualization is known as a virtual machine monitor (VMM) or
virtual manager, which separates compute environments from the actual physical infrastructure.
Virtualization makes servers, workstations, storage and other systems independent of the
physical hardware layer, said John Livesay, vice president of InfraNet, a network infrastructure
services provider. "This is done by installing a Hypervisor on top of the hardware layer, where the
systems are then installed."
2. How is virtualization different from cloud computing?
Essentially, virtualization differs from cloud computing because virtualization is software
that manipulates hardware, while cloud computing refers to a service that results from that
manipulation.
"Virtualization is a foundational element of cloud computing and helps deliver on the value
of cloud computing," Adams said. "Cloud computing is the delivery of shared computing
resources, software or data — as a service and on-demand through the Internet."
Most of the confusion occurs because virtualization and cloud computing work together to
provide different types of services, as is the case with private clouds.
The cloud can, and most often does, include virtualization products to deliver the compute
service, said Rick Philips, vice president of compute solutions at IT firm Weidenhammer. "The
difference is that a true cloud provides self-service capability, elasticity, automated management,
scalability and pay-as you go service that is not inherent in virtualization."
To best understand the advantages of virtualization, consider the difference between
private and public clouds.
"Private cloud computing means the client owns or leases the hardware and software that
provides the consumption model," Livesay said. With public cloud computing, users pay for
resources based on usage. "You pay for resources as you go, as you consume them, from a [vendor]
that is providing such resources to multiple clients, often in a co-tenant scenario."
A private cloud, in its own virtualized environment, gives users the best of both worlds. It
can give users more control and the flexibility of managing their own systems, while providing the
consumption benefits of cloud computing, Livesay said.
On the other hand, a public cloud is an environment open to many users, built to serve
multi-tenanted requirements, Philips said. "There are some risks associated here," he said, such as
having bad neighbors and potential latency in performance.
In contrast, with virtualization, companies can maintain and secure their own "castle,"
Philips said. This "castle" provides the following benefits:
• Maximize resources — Virtualization can reduce the number of physical systems you
need to acquire, and you can get more value out of the servers. Most traditionally built
systems are underutilized. Virtualization allows maximum use of the hardware
investment.
• Multiple systems — With virtualization, you can also run multiple types of applications
and even run different operating systems for those applications on the same physical
hardware.
• IT budget integration — When you use virtualization, management, administration and
all the attendant requirements of managing your own infrastructure remain a direct cost
of your IT operation.
3. What is need of virtualization?
Why is virtualization needed for cloud computing? After all, a single
instance of IIS and Windows Server can host multiple web applications. Why, then, do we need to
run multiple instances of an OS on a single machine, and how can this lead to more efficient
utilization of resources?
Virtualization is convenient for cloud computing for a variety of reasons:
1. Cloud computing is much more than a web app running in IIS. ActiveDirectory isn't a web
app. SQL Server isn't a web app. To get full benefit of running code in the cloud, you need
the option to install a wide variety of services in the cloud nodes just as you would in your
own IT data center. Many of those services are not web apps governed by IIS. If you only
look at the cloud as a web app, then you'll have difficulty building anything that isn't a web
app.
2. The folks running and administering the cloud hardware underneath the covers need
ultimate authority and control to shut down, suspend, and occasionally relocate your cloud
code to a different physical machine. If some bit of code in your cloud app goes nuts and
runs out of control, it's much more difficult to shut down that service or that machine when
the code is running directly on the physical hardware than it is when the rogue code is
running in a VM managed by a hypervisor.
3. Resource utilization - multiple tenants (VMs) executing on the same physical hardware, but with much stronger isolation from each other than IIS's process walls. Lower cost per tenant, higher income per unit of hardware.
COST
Depending on your solution, you can have a cost-free datacenter. You do have to shell out
the money for the physical server itself, but there are options for free virtualization software
and free operating systems.
Microsoft’s Virtual Server and VMware Server are free to download and install. If you use
a licensed operating system, of course that will cost money. For instance, if you wanted five
instances of Windows Server on that physical server, then you’re going to have to pay for the
licenses. That said, if you were to use a free version of Linux for the host and operating system,
then all you’ve had to pay for is the physical server.
Naturally, there is an element of “you get what you pay for.” There’s a reason most organizations have paid to install an OS on their systems. When you install a free OS, there is often a higher total cost of operation, because it can be more labor intensive to manage the OS and apply patches.
Administration
Having all your servers in one place reduces your administrative burden. According to VMware, you can improve your administrator-to-server ratio from 1:10 to 1:30. What this means is that you can save time in your daily server administration, or support more servers, by having a virtualized environment. The following factors ease your administrative burdens:
• A centralized console allows quicker access to servers.
• CDs and DVDs can be quickly mounted using ISO files.
• New servers can be quickly deployed.
• New virtual servers can be deployed more inexpensively than physical servers.
• RAM can be quickly allocated for disk drives.
• Virtual servers can be moved from one server to another.
Fast deployment
Because every virtual guest server is just a file on a disk, it’s easy to copy (or clone) a
system to create a new one. To copy an existing server, just copy the entire directory of the current
virtual server.
This can be used in the event the physical server fails, or if you want to test out a new
application to ensure that it will work and play well with the other tools on your network.
Virtualization software allows you to make clones of your work environment for these
endeavors. Also, not everyone in your organization is going to be doing the same tasks. As such,
you may want different work environments for different users. Virtualization allows you to do this.
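The copy-to-clone idea above can be sketched in a few lines of Python; the directory layout and file names here are invented stand-ins, not tied to any particular virtualization product:

```python
import shutil
import tempfile
from pathlib import Path

def clone_vm(vm_dir: Path, clone_name: str) -> Path:
    """Clone a guest by copying its entire directory (disk image + config)."""
    clone_dir = vm_dir.parent / clone_name
    shutil.copytree(vm_dir, clone_dir)  # the whole guest is just files on disk
    return clone_dir

# Demo with a stand-in for a virtual guest server's directory.
root = Path(tempfile.mkdtemp())
src = root / "web-server"
src.mkdir()
(src / "disk.vmdk").write_text("disk image bytes")
(src / "guest.vmx").write_text("config")

clone = clone_vm(src, "web-server-test")
print(sorted(p.name for p in clone.iterdir()))  # clone has the same files
```

Because the clone is independent of the original, it can serve as a test environment or a replacement if the physical server fails.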
Reduced Infrastructure Costs
We already talked about how you can cut costs by using free servers and clients, like Linux,
as well as free distributions of Windows Virtual Server, Hyper-V, or VMware. But there are also
reduced costs across your organization. If you reduce the number of physical servers you use, then
you save money on hardware, cooling, and electricity. You also reduce the number of network
ports, console video ports, mouse ports, and rack space.
Some of the savings you realize include
• Increased hardware utilization by as much as 70 percent
• Decreased hardware and software capital costs by as much as 40 percent
• Decreased operating costs by as much as 70 percent
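The utilization figures above can be made concrete with a little arithmetic; the server counts and utilization levels below are hypothetical, chosen only to illustrate the consolidation effect:

```python
import math

# Hypothetical before/after figures for consolidating underutilized servers.
physical_servers = 12
avg_utilization = 0.15     # traditionally built systems are often underutilized
target_utilization = 0.70  # utilization achievable after virtualization

# The total work stays the same; fewer hosts each run at higher utilization.
work = physical_servers * avg_utilization
hosts_needed = math.ceil(work / target_utilization)
print(hosts_needed)                      # hosts after consolidation
print(physical_servers - hosts_needed)   # physical machines eliminated
```

Each eliminated machine also saves hardware, cooling, electricity, and rack space, which is where the operating-cost reductions come from.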
4. Explain the Limitations of Server Virtualization?
The benefits of server virtualization can be so enticing that it's easy to forget that the
technique isn't without its share of limitations. It's important for a network administrator to
research server virtualization and his or her own network's architecture and needs before
attempting to engineer a solution.
For servers dedicated to applications with high demands on processing power,
virtualization isn't a good choice. That's because virtualization essentially divides the server's
processing power up among the virtual servers.
When the server's processing power can't meet application demands, everything slows down.
It's also unwise to overload a server's CPU by creating too many virtual servers on one physical machine. The more virtual machines a physical server must support, the less processing power each server can receive.
In addition, there's a limited amount of disk space on physical servers. Too many virtual servers could impact the server's ability to store data.
Another limitation is migration. Right now, it's only possible to migrate a virtual server
from one physical machine to another if both physical machines use the same manufacturer's
processor. If a network uses one server that runs on an Intel processor and another that uses an
AMD processor, it's impossible to port a virtual server from one physical machine to the other.
Many companies are investing in server virtualization despite its limitations. As server
virtualization technology advances, the need for huge data centers could decline.
Server power consumption and heat output could also decrease, making server virtualization not only financially attractive, but also a green initiative.
HARDWARE VIRTUALIZATION
5. Explain Full virtualization.
In computer science, full virtualization is a virtualization technique used to provide a
certain kind of virtual machine environment, namely, one that is a complete simulation of the
underlying hardware.
Full virtualization is possible only with the right combination of hardware and software elements. For example, it was not possible with most of IBM's System/360 series, the exception being the IBM System/360-67; nor was it possible with IBM's early System/370 systems. IBM added virtual memory hardware to the System/370 series in 1972. This is not the same as Intel VT-x rings, which provide a higher privilege level so the hypervisor can properly control virtual machines requiring full access to Supervisor and Program (User) modes.
Full virtualization:
1. Guest operating systems are unaware of each other.
2. Provides support for unmodified guest operating systems.
3. The hypervisor interacts directly with the hardware, such as the CPU and disks.
4. The hypervisor allows multiple operating systems to run simultaneously on the host computer.
5. Each guest server runs on its own operating system.
6. A few implementations: Oracle's VirtualBox, VMware Server, Microsoft Virtual PC.
Advantages:
1. This type of virtualization provides the best isolation and security for virtual machines.
2. Multiple, truly isolated guest operating systems can run simultaneously on the same hardware.
3. It is the only option that requires no hardware assist or OS assist to virtualize sensitive and privileged instructions.
Limitations:
1. Full virtualization is usually a bit slower, because of all the emulation involved.
2. The hypervisor contains the device drivers, and it may be difficult for users to install new device drivers.
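The idea that sensitive and privileged instructions are virtualized without OS assist can be illustrated with a toy trap-and-emulate loop; the instruction names and hypervisor interface below are invented for illustration and do not correspond to any real hypervisor:

```python
PRIVILEGED = {"HLT", "OUT", "LOAD_CR3"}  # instructions a guest may not run directly

class ToyHypervisor:
    """Full virtualization sketch: the unmodified guest issues privileged
    instructions as usual; the hypervisor traps and emulates each one."""
    def __init__(self):
        self.trapped = []

    def execute(self, instr: str) -> str:
        if instr in PRIVILEGED:
            self.trapped.append(instr)   # trap ...
            return f"emulated {instr}"   # ... and emulate on the guest's behalf
        return f"ran {instr} natively"   # unprivileged code runs at full speed

hv = ToyHypervisor()
results = [hv.execute(i) for i in ["ADD", "LOAD_CR3", "MOV", "HLT"]]
print(results)
print(hv.trapped)
```

The guest needs no modification, which matches the "unmodified guest OS" property above; the cost is the overhead of trapping and emulating, which is why full virtualization tends to be slower.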
6. Explain Paravirtualization.
Paravirtualization is virtualization in which the guest operating system (the one being virtualized) is aware that it is a guest and accordingly has drivers that, instead of issuing hardware commands, issue commands directly to the host operating system. This includes memory and thread management, which usually require privileged instructions unavailable to a guest.
Paravirtualization:
1. Unlike full virtualization, guest servers are aware of one another.
2. The hypervisor does not need large amounts of processing power to manage the guest operating systems.
3. The entire system works as a cohesive unit.
Advantages:
1. The guest OS can communicate directly with the hypervisor, so this is an efficient form of virtualization.
2. It allows users to make use of new or modified device drivers.
Limitations:
1. Paravirtualization requires the guest OS to be modified in order to interact with the paravirtualization interfaces.
2. It raises significant support and maintainability issues in production environments.
7. Explain Partial virtualization?
In partial virtualization, including address space virtualization, the virtual machine simulates multiple instances of much of an underlying hardware environment, particularly address spaces. Usually, this means that entire operating systems cannot run in the virtual machine, which would be the sign of full virtualization, but that many applications can run. A key form of partial virtualization is address space virtualization, in which each virtual machine consists of an independent address space. This capability requires address relocation hardware, and has been present in most practical examples of partial virtualization.
Partial virtualization was an important historical milestone on the way to full virtualization.
It was used in the first-generation time-sharing system CTSS, in the IBM M44/44X experimental
paging system, and arguably systems like MVS and the Commodore 64 (a couple of ‘task switch’
programs). The term could also be used to describe any operating system that provides separate
address spaces for individual users or processes, including many that today would not be
considered virtual machine systems.
Experience with partial virtualization, and its limitations, led to the creation of the first full virtualization system (IBM's CP-40, the first iteration of CP/CMS, which would eventually become IBM's VM family). Many more recent systems, such as Microsoft Windows and Linux, also use this basic approach. Partial virtualization is significantly easier to implement than full virtualization. It has often provided useful, robust virtual machines capable of supporting important applications. Partial virtualization has proven highly successful for sharing computer resources among multiple users.
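The address relocation hardware mentioned above can be sketched as a simple base-and-limit translation; the class name and addresses below are hypothetical, chosen only to show how each virtual machine gets an independent address space:

```python
class AddressSpace:
    """Toy address relocation: each VM sees addresses starting at 0, and the
    hardware adds a base (and checks a limit) to find the real location."""
    def __init__(self, base: int, limit: int):
        self.base, self.limit = base, limit

    def translate(self, virtual: int) -> int:
        if not 0 <= virtual < self.limit:
            raise MemoryError("address outside this VM's space")
        return self.base + virtual

vm_a = AddressSpace(base=0x1000, limit=0x800)
vm_b = AddressSpace(base=0x9000, limit=0x800)

# The same virtual address lands in different physical locations per VM.
print(hex(vm_a.translate(0x10)))  # 0x1010
print(hex(vm_b.translate(0x10)))  # 0x9010
```

The limit check is what keeps one user's address space isolated from another's, which is exactly the sharing property partial virtualization was built for.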
DESKTOP VIRTUALIZATION
8. Explain Software Virtualization?
Managing applications and their distribution is a typical task for IT departments. Installation mechanisms differ from application to application. Some programs require certain helper applications or frameworks, and these may conflict with existing applications.
Software virtualization is like virtualization in general, but it abstracts the software installation procedure and creates virtual software installations.
Virtualized software is an application that is "installed" into its own self-contained unit.
Examples of software virtualization are VMware software, VirtualBox, etc. In the next pages, we are going to see how to install a Linux OS and a Windows OS on a VMware application.
Advantages of Software Virtualization
1) Client Deployments Become Easier:
By copying a file to a workstation, or linking to a file on the network, we can easily install a virtual application.
2) Easy to manage:
Managing updates becomes a simpler task: you update in one place and deploy the updated virtual application to all clients.
3) Software Migration:
Without software virtualization, moving from one software platform to another takes much time for deployment and impacts end-user systems. With the help of a virtualized software environment, the migration becomes easier.
9. Explain Storage virtualization?
As we know, there has traditionally been a strong link between a physical host and its locally installed storage devices. However, that paradigm is changing drastically, and local storage is often no longer needed. As technology progresses, more advanced storage devices are coming to the market that provide more functionality and make local storage obsolete.
Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and applications can access the disks directly by themselves for writing. The controllers configure the local storage in RAID groups and present the storage to the operating system depending upon the configuration. The storage is thus abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to understand
how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested blocks at
LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented over the Ethernet network to the operating system. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). The operating system issues its instructions as if the storage were a locally attached device.
Storage Tiering: Using the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the highest performing storage pool. The least frequently used data is placed on the weakest performing storage pool.
This operation is done automatically, without any interruption of service to the data consumer.
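The tiering policy just described (hot data to the fast pool, cold data to the slow pool) can be sketched as follows; the block names and access counts are invented for illustration:

```python
def assign_tiers(access_counts: dict, fast_capacity: int) -> dict:
    """Toy storage tiering: the most frequently accessed blocks go to the
    fast (highest performing) pool, the rest to the slow pool."""
    by_heat = sorted(access_counts, key=access_counts.get, reverse=True)
    return {
        blk: ("fast" if rank < fast_capacity else "slow")
        for rank, blk in enumerate(by_heat)
    }

counts = {"invoices": 950, "archive-2010": 3, "homepage": 1200, "logs": 40}
tiers = assign_tiers(counts, fast_capacity=2)
print(tiers)
```

A real array would rerun this placement periodically and migrate blocks in the background, so the data consumer never notices the moves.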
Advantages of Storage Virtualization
• Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not necessarily compromised.
• The storage devices can perform advanced functions like replication, deduplication, and disaster recovery.
• By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and protected.
10. Explain memory virtualization?
In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster.
There are two types of memory virtualization: Software-based and hardware-assisted
memory virtualization.
Because of the extra level of memory mapping introduced by virtualization, ESXi can
effectively manage memory across all virtual machines. Some of the physical memory of a virtual
machine might be mapped to shared pages or to pages that are unmapped, or swapped out.
A host performs virtual memory management without the knowledge of the guest operating
system and without interfering with the guest operating system’s own memory management
subsystem.
The VMM for each virtual machine maintains a mapping from the guest operating system's
physical memory pages to the physical memory pages on the underlying machine. (VMware refers
to the underlying host physical pages as “machine” pages and the guest operating system’s
physical pages as “physical” pages.)
Each virtual machine sees a contiguous, zero-based, addressable physical memory space.
The underlying machine memory on the server used by each virtual machine is not necessarily
contiguous.
For both software-based and hardware-assisted memory virtualization, the guest virtual to
guest physical addresses are managed by the guest operating system. The hypervisor is only
responsible for translating the guest physical addresses to machine addresses. Software-based
memory virtualization combines the guest's virtual to machine addresses in software and saves
them in the shadow page tables managed by the hypervisor. Hardware-assisted memory
virtualization utilizes the hardware facility to generate the combined mappings with the guest's
page tables and the nested page tables maintained by the hypervisor.
The ESXi implementation of memory virtualization can be pictured as follows:
• The boxes represent pages, and the arrows show the different memory mappings.
• The arrows from guest virtual memory to guest physical memory show the mapping
maintained by the page tables in the guest operating system. (The mapping from virtual
memory to linear memory for x86-architecture processors is not shown.)
• The arrows from guest physical memory to machine memory show the mapping
maintained by the VMM.
• The dashed arrows show the mapping from guest virtual memory to machine memory in
the shadow page tables also maintained by the VMM. The underlying processor running
the virtual machine uses the shadow page table mappings.
• Software-Based Memory Virtualization
ESXi virtualizes guest physical memory by adding an extra level of address translation.
• Hardware-Assisted Memory Virtualization
Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware
support for memory virtualization by using two layers of page tables.
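The two-level translation described above amounts to composing two mappings. This minimal sketch uses invented page numbers and models the software-based (shadow page table) approach:

```python
# The guest OS maintains guest-virtual -> guest-physical page mappings.
guest_page_table = {0: 7, 1: 3, 2: 9}

# The VMM maintains guest-physical -> machine page mappings.
vmm_map = {3: 21, 7: 14, 9: 30}

# A shadow page table pre-combines the two levels, so the processor can go
# straight from guest-virtual pages to machine pages.
shadow = {gv: vmm_map[gp] for gv, gp in guest_page_table.items()}
print(shadow)
```

Hardware-assisted memory virtualization produces the same combined result, but the hardware walks both tables (the guest's and the hypervisor's nested tables) instead of the hypervisor maintaining a precombined shadow copy.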
11. Explain Data virtualization?
Data virtualization is an umbrella term used to describe any approach
to data management that allows an application to retrieve and manipulate data without requiring
technical details about the data, such as how it is formatted or where it is physically located.
Data virtualization is synonymous with information agility - it delivers a simplified,
unified, and integrated view of trusted business data in real time or near real time as needed by the
consuming applications, processes, analytics, or business users. Data virtualization integrates data
from disparate sources, locations and formats, without replicating the data, to create a single
"virtual" data layer that delivers unified data services to support multiple applications and users.
The result is faster access to all data, less replication and cost, more agility to change.
Data virtualization is modern data integration. It performs many of the same
transformation and quality functions as traditional data integration (Extract-Transform-Load
(ETL), data replication, data federation, Enterprise Service Bus (ESB), etc.) but leveraging modern
technology to deliver real-time data integration at lower cost, with more speed and agility. It can
replace traditional data integration and reduce the need for replicated data marts and data
warehouses in many cases, but not entirely.
Data virtualization is also an abstraction layer and a data services layer. In this sense it is
highly complementary to use between original and derived data sources, ETL, ESB and other
middleware, applications, and devices, whether on-premise or cloud-based, to provide flexibility
between layers of information and business technology.
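The idea of a unified virtual layer over disparate sources, without replicating data, can be sketched as a query-time join; the sources and field names below are hypothetical:

```python
# Two disparate "sources": a relational-style table and a document-style store.
orders_db = [{"id": 1, "customer": "acme", "total": 120},
             {"id": 2, "customer": "globex", "total": 75}]
crm_docs = {"acme": {"region": "EU"}, "globex": {"region": "US"}}

def unified_orders():
    """Virtual data layer: join the sources at query time instead of
    replicating them into a warehouse. Consumers never see where the
    data physically lives or how it is formatted."""
    for order in orders_db:
        yield {**order, **crm_docs[order["customer"]]}

print(list(unified_orders()))
```

Because nothing is copied, a change in either source is visible on the next query, which is the "real time or near real time" property claimed above.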
The following list helps understand Data Virtualization in many forms:
1. Data blending - This is often included as part of a business intelligence (BI) tool
semantic universe layer or is a new module offered by a predominantly BI vendor. Data
blending is able to combine multiple sources (limited list of structured or big data) to feed
the BI tool, but the output is only available for this tool and cannot be accessed from any
other external application for consumption.
2. Data services module - Typically these are offered for additional cost by Data
Integration Suite (ETL / MDM / Data Quality) or Data Warehouse vendors. The suite is
usually very strong in other areas. When it comes to data virtualization, some features
shared with the suite such as modeling, transformation, quality functions are very robust,
but the data virtualization engine, query optimization, caching, virtual security layers,
flexibility of data model for unstructured sources, and overall performance is weak. This
is so because the product is designed to prototype ETL or MDM and not to compete with
it in production use.
3. SQLification Products - This is an emerging offering particularly among Big Data and
Hadoop vendors. These products "virtualize" the underlying big data technologies and
allow them to be combined with relational data sources and flat files and queried using
standard SQL. This can be good for projects focused on that particular big data stack, but
not beyond.
4. Cloud data services. These products are often deployed in the cloud and have pre-
packaged integrations to SaaS and cloud applications, cloud databases and few desktop
and on-premise tools like Excel. Rather than a true data virtualization product with tiered views and delegatable query execution, these products expose normalized APIs across
cloud sources for easy data exchange in projects of medium volume. Projects involving
big data analytics, major enterprise systems, mainframes, large databases, flat files and
unstructured data are out of scope.
5. Data virtualization platform. Built from the ground up to provide data virtualization capabilities for the enterprise in a many-to-many fashion through a unified "virtual" data layer. Designed for agility and speed in a wide range of use cases, agnostic to sources and consumers, it competes and collaborates with other, less efficient middleware; the Denodo Platform is one example.
12. Explain Network Virtualization?
Network virtualization (NV) is defined by the ability to create logical, virtual networks that
are decoupled from the underlying network hardware to ensure the network can better integrate
with and support increasingly virtual environments. Over the past decade, organizations have been
adopting virtualization technologies at an accelerated rate. Network virtualization (NV) abstracts
networking connectivity and services that have traditionally been delivered via hardware into a
logical virtual network that is decoupled from and runs independently on top of a physical network
in a hypervisor.
Beyond L2-L3 services like switching and routing, NV typically incorporates virtualized L4-L7 services, including firewalling and server load balancing. NV solves a lot of the networking
challenges in today’s data centers, helping organizations centrally program and provision the
network, on-demand, without having to physically touch the underlying infrastructure. With NV,
organizations can simplify how they roll out, scale and adjust workloads and resources to meet
evolving computing needs.
What Exactly is the Definition of Network Virtualization?
Virtualization is the ability to simulate a hardware platform, such as a server, storage device
or network resource, in software. All of the functionality is separated from the hardware and
simulated as a “virtual instance,” with the ability to operate just like the traditional, hardware
solution would. Of course, somewhere there is host hardware supporting the virtual instances of
these resources, but this hardware can be general, off-the-shelf platforms. In addition, a single
hardware platform can be used to support multiple virtual devices or machines, which are easy to
spin up or down as needed. As a result, a virtualized solution is typically much more portable,
scalable and cost-effective than a traditional hardware-based solution.
Applying Virtualization to the Network
When applied to a network, virtualization creates a logical software-based view of the
hardware and software networking resources (switches, routers, etc.). The physical networking
devices are simply responsible for the forwarding of packets, while the virtual network (software)
provides an intelligent abstraction that makes it easy to deploy and manage network services and
underlying network resources. As a result, NV can align the network to better support virtualized
environments.
NV and White Box Switching
As it stands, the trend is toward using NV to create overlay networks on top of physical
hardware. Concurrently, using network virtualization reduces costs on the physical (underlay)
network by using white box switches. Referring to the use of generic, off-the-shelf switches and
routers, white box networking limits expenditures by not using expensive proprietary switches.
NV also contributes to decreased expenses by relying on the intelligence of the overlay to provide
necessary advanced network functionality and features.
Virtual Networks
NV can be used to create virtual networks within a virtualized infrastructure. This enables
NV to support the complex requirements in multi-tenancy environments. NV can deliver a virtual
network within a virtual environment that is truly separate from other network resources. In these
instances, NV can separate traffic into a zone or container to ensure traffic does not mix with other
resources or the transfer of other data.
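The traffic-separation idea can be sketched as a toy overlay that keeps each tenant's frames in its own zone; the class and tenant names are invented for illustration:

```python
class VirtualNetwork:
    """Toy overlay: each tenant gets its own logical network on shared
    hardware; frames delivered in one tenant's zone never reach another's."""
    def __init__(self):
        self.zones = {}  # tenant -> list of delivered frames

    def attach(self, tenant: str):
        self.zones.setdefault(tenant, [])

    def send(self, tenant: str, frame: str):
        if tenant not in self.zones:
            raise KeyError("tenant has no virtual network")
        self.zones[tenant].append(frame)  # delivered only within the zone

overlay = VirtualNetwork()
overlay.attach("tenant-a")
overlay.attach("tenant-b")
overlay.send("tenant-a", "hello")
print(overlay.zones)
```

Both tenants share the same underlying object (standing in for the physical network), yet tenant-b never observes tenant-a's traffic, which is the multi-tenancy isolation property described above.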
MICROSOFT IMPLEMENTATION
13. Explain Microsoft Hyper-V?
Microsoft Server 2008 Hyper-V (Hyper-V) is a hypervisor-based virtualization technology
that is a feature of select versions of Windows Server 2008. Microsoft’s strategy and investments
in virtualization—which span from the desktop to the datacenter—help IT professionals and
developers implement Microsoft’s Dynamic IT initiative, whereby they can build systems with the
flexibility and intelligence to automatically adjust to changing business conditions by aligning
computing resources with strategic objectives.
Hyper-V offers customers a scalable and high-performance virtualization platform that
plugs into customers’ existing IT infrastructures and enables them to consolidate some of the most
demanding workloads. In addition, the Microsoft System Center product family gives customers
a single set of integrated tools to manage physical and virtual resources, helping customers create
a more agile and dynamic datacenter.
Architecture
Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a
logical unit of isolation, supported by the hypervisor, in which each guest operating
system executes. A hypervisor instance has to have at least one parent partition, running a
supported version of Windows Server (2008 and later). The virtualization stack runs in the parent
partition and has direct access to the hardware devices. The parent partition then creates the child
partitions which host the guest OSs. A parent partition creates child partitions using
the hypercall API, which is the application programming interface exposed by Hyper-V.
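The parent/child partition relationship can be modeled with a small sketch; the class and method names are invented stand-ins for Hyper-V's hypercall API, not its real interface:

```python
class Partition:
    """Toy model of Hyper-V's isolation unit: a parent partition creates
    child partitions, each of which hosts one guest OS."""
    def __init__(self, name: str, parent=None):
        self.name, self.parent, self.children = name, parent, []

    def hypercall_create_child(self, name: str):
        """Stand-in for the hypercall API exposed by the hypervisor."""
        child = Partition(name, parent=self)
        self.children.append(child)
        return child

root = Partition("parent: Windows Server 2008")
web = root.hypercall_create_child("child: guest OS 1")
db = root.hypercall_create_child("child: guest OS 2")
print([c.name for c in root.children])
```

Only the parent holds the virtualization stack and direct hardware access; the children exist purely as isolated units it created, mirroring the architecture described above.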
Currently, only the following operating systems support Enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than operating systems that need to use slower emulated hardware:
• Windows Server 2008 and later
• Windows Vista and later
• Linux with a 3.4 or later kernel
• FreeBSD
Microsoft Hyper-V Server
The stand-alone Hyper-V Server variant does not require an existing installation of Windows Server 2008 or Windows Server 2008 R2. The stand-alone installation is called Microsoft Hyper-V Server for the non-R2 version and Microsoft Hyper-V Server 2008 R2 for the R2 version. Microsoft Hyper-V Server is built with components of Windows and has a Windows Server Core user experience. None of the other roles of Windows Server are available in Microsoft Hyper-V Server. This version supports up to 64 VMs per system. The system requirements of Microsoft Hyper-V Server are the same with respect to supported guest operating systems and processors, but differ in the following:
• RAM: Minimum: 1 GB RAM; Recommended: 2 GB RAM or greater; Maximum 1 TB.
• Available disk space: Minimum: 8 GB; Recommended: 20 GB or greater.
Hyper-V Server 2012 R2 has the same capabilities as the standard Hyper-V role in Windows
Server 2012 R2 and supports 1024 active VMs.
14. Explain VMware features?
Features
VMware Server, the successor to VMware GSX Server, enables users to quickly provision
new server capacity by partitioning a physical server into multiple virtual machines, bringing the
powerful benefits of virtualization to every server.
VMware Server is feature-packed with the following market-leading capabilities:
• Support for any standard x86 hardware
• Support for a wide variety of Linux and Windows host operating systems, including 64-bit operating systems
• Support for a wide variety of Linux, NetWare, Solaris x86, and Windows guest operating systems, including 64-bit operating systems
• Support for Virtual SMP, enabling a single virtual machine to span multiple physical processors
• Quick and easy, wizard-driven installation similar to any desktop software
• Quick and easy virtual machine creation with a virtual machine wizard
• Virtual machine monitoring and management with an intuitive, user-friendly interface
VMware Server supports 64-bit virtual machines and Intel Virtualization Technology, a set of
Intel hardware platform enhancements specifically designed to enhance virtualization solutions.
“Central Transport has saved hundreds of thousands of dollars with VMware virtual
infrastructure,” said Craig Liess, server administrator for Central Transport. “Introducing a new
server virtualization product including Virtual SMP and support for 64-bit operating systems and
Intel Virtualization Technology is a natural progression for VMware, furthering the company’s
leadership in the market.”
15. Explain VMware Infrastructure?
VMware is the biggest name in virtualization, and they offer VMware Infrastructure,
which includes the latest versions of VMware ESX Server 3.5 and VirtualCenter 2.5. VMware Infrastructure allows VMware customers to streamline the management of IT environments.
VMware Infrastructure is VMware’s third-generation, production-ready virtualization
suite. According to a study of VMware customers, 90 percent of companies surveyed use VMware
Infrastructure in production environments. With more than 120 industry and technology awards,
VMware provides a much-anticipated complete solution that meets customer demand for a next-generation firmware hypervisor, enhanced virtual infrastructure capabilities, and advanced
management and automation solutions.
The new features in VMware Infrastructure are targeted at a broad range of customers and
IT environments—from midsize and small businesses to branch offices and corporate datacenters
within global 100 corporations—and extend the value of all three layers of the virtualization suite.
Features
• Virtualization platform enhancements help deliver new levels of performance, scalability,
and compatibility for running the most demanding workloads in virtual machines:
• Expanded storage and networking choices such as support for SATA local storage and 10
Gig Ethernet as well as enablement of Infiniband devices expand storage and networking
choices for virtual infrastructure.
• Support for TCP Segment Offload and Jumbo frames reduces the CPU overhead
associated with processing network I/O.
• Support for hardware-nested page tables such as in-processor assists for memory
virtualization.
• Support for paravirtualized Linux guest operating systems enables higher levels of
performance through virtualization-aware operating systems.
• Support for virtual machines with 64GB of RAM and physical machines with up to
128GB of memory.
Virtual infrastructure capabilities help deliver increased infrastructure availability and resilience:
• VMware Storage VMotion enables live migration of virtual machine disks from one data
storage system to another with no disruption or downtime.
• VMware Update Manager automates patch and update management for VMware ESX
Server hosts and virtual machines.
• VMware Distributed Power Management is an experimental feature that reduces power
consumption in the datacenter through intelligent workload balancing.
• VMware Guided Consolidation, a feature of VMware VirtualCenter, enables companies
to get started with server consolidation in a step-by-step tutorial fashion.
16. Explain VirtualBox?
VirtualBox is a cross-platform virtualization application. What does that mean? For one
thing, it installs on your existing Intel or AMD-based computers, whether they are running
Windows, Mac, Linux or Solaris operating systems. Secondly, it extends the capabilities of your
existing computer so that it can run multiple operating systems (inside multiple virtual machines)
at the same time. So, for example, you can run Windows and Linux on your Mac, run Windows
Server 2008 on your Linux server, run Linux on your Windows PC, and so on, all alongside your
existing applications. You can install and run as many virtual machines as you like -- the only
practical limits are disk space and memory.
VirtualBox is deceptively simple yet also very powerful. It can run everywhere from small
embedded systems or desktop class machines all the way up to datacenter deployments and even
Cloud environments.
The techniques and features that VirtualBox provides are useful for several scenarios:
• Running multiple operating systems simultaneously. VirtualBox allows you to run
more than one operating system at a time. This way, you can run software written for one
operating system on another (for example, Windows software on Linux or a Mac)
without having to reboot to use it. Since you can configure what kinds of "virtual"
hardware should be presented to each such operating system, you can install an old
operating system such as DOS or OS/2 even if your real computer's hardware is no longer
supported by that operating system.
• Easier software installations. Software vendors can use virtual machines to ship entire
software configurations. For example, installing a complete mail server solution on a real
machine can be a tedious task. With VirtualBox, such a complex setup (then often called
an "appliance") can be packed into a virtual machine. Installing and running a mail server
becomes as easy as importing such an appliance into VirtualBox.
• Testing and disaster recovery. Once installed, a virtual machine and its virtual hard
disks can be considered a "container" that can be arbitrarily frozen, woken up, copied,
backed up, and transported between hosts.
Here's a brief outline of VirtualBox's main features:
• Portability. VirtualBox runs on a large number of 32-bit and 64-bit host operating
systems.
• No hardware virtualization required. For many scenarios, VirtualBox does not require
the processor features built into newer hardware like Intel VT-x or AMD-V. As opposed
to many other virtualization solutions, you can therefore use VirtualBox even on older
hardware where these features are not present.
• Guest Additions: shared folders, seamless windows, 3D virtualization. The VirtualBox
Guest Additions are software packages which can be installed inside of supported guest
systems to improve their performance and to provide additional integration and
communication with the host system.
• Great hardware support. Among others, VirtualBox supports:
o Guest multiprocessing (SMP). VirtualBox can present up to 32 virtual CPUs to
each virtual machine, irrespective of how many CPU cores are physically present
on your host.
o USB device support. VirtualBox implements a virtual USB controller and allows
you to connect arbitrary USB devices to your virtual machines without having to
install device-specific drivers on the host. USB support is not limited to certain
device categories.
o Hardware compatibility. VirtualBox virtualizes a vast array of virtual devices,
among them many devices that are typically provided by other virtualization
platforms. That includes IDE, SCSI and SATA hard disk controllers, several virtual
network cards and sound cards, virtual serial and parallel ports and an Input/Output
Advanced Programmable Interrupt Controller (I/O APIC), which is found in many
modern PC systems. This eases cloning of PC images from real machines and
importing of third-party virtual machines into VirtualBox.
o Full ACPI support. The Advanced Configuration and Power Interface (ACPI) is
fully supported by VirtualBox. This eases cloning of PC images from real machines
or third-party virtual machines into VirtualBox. With its unique ACPI power
status support, VirtualBox can even report to ACPI-aware guest operating
systems the power status of the host. For mobile systems running on battery, the
guest can thus enable energy saving and notify the user of the remaining power
(e.g. in full screen modes).
o Multiscreen resolutions. VirtualBox virtual machines support screen resolutions
many times that of a physical screen, allowing them to be spread over a large
number of screens attached to the host system.
o Built-in iSCSI support. This unique feature allows you to connect a virtual
machine directly to an iSCSI storage server without going through the host system.
The VM accesses the iSCSI target directly without the extra overhead that is
required for virtualizing hard disks in container files.
o PXE Network boot. The integrated virtual network cards of VirtualBox fully
support remote booting via the Preboot Execution Environment (PXE).
17. Explain Thin Clients?
Desktop and mobile thin clients are solid-state devices that connect over a network to a
centralized server where all processing and storage takes place, providing reduced maintenance
costs and minimal application updates, as well as higher levels of security and energy efficiency.
In fact, thin clients can be up to 80 percent more power-efficient than traditional desktop PCs with
similar capabilities.
Sun
Sun’s thin client solution is called Sun Ray, and it is an extremely popular product.
Contributing to the demand for it is further market demand for Sun Virtual Desktop Infrastructure
(VDI) Software 2.0, which has shipped on approximately 25 percent of Sun Ray units since its
introduction in March 2008. Further, Sun Ray machines are able to display Solaris, Windows, or
Linux desktops on the same device. Sun Ray virtual display clients, Sun Ray Software, and Sun
VDI Software 2.0 are key components of Sun’s desktop virtualization offering, which is a set of
desktop technologies and solutions within Sun’s xVM virtualization portfolio.
Hewlett Packard
Hewlett Packard (HP) is certainly a well-known technology company, and their products
extend into the world of thin clients. In fact, HP is the leading manufacturer of thin clients.
Offerings
In late 2008, HP introduced three thin client products, including the company’s first mobile
offering, that address business needs for a more simple, secure, and easily managed computing
infrastructure.
Thin clients are at the heart of HP’s remote client portfolio of desktop virtualization
solutions, which also include the blade PC-based HP Consolidated Client Infrastructure platform,
HP Virtual Desktop Infrastructure (VDI), blade workstations, remote deployment, and
management software and services.
HP Compaq t5730 and t5735 Thin Clients
HP offers the HP Compaq t5730 and t5735 Thin Clients. The HP Compaq t5730 is based on
Microsoft Windows XPe, and select models include integrated WLAN. Based on Debian Linux,
the HP Compaq t5735 supports a variety of open-source applications.
HP and VMware
HP made another effort to ensure they continue their thin client strides. In early 2009, HP
announced that its entire line of thin clients is certified for VMware View, making the products
even easier for customers to deploy in VMware environments.
Dell
Another well-known player in the world of client development is Dell, and they, too, offer
a thin client (their first). But they are also touting environmental responsibility with a new line of
PCs. Their most recent additions are a line of OptiPlex commercial desktops, Flexible Computing
Solutions, and service offerings designed to reduce costs throughout the desktop life cycle.
CLOUD COMPUTING – LAB
Practical - 1. Cloud Deployment Models
Practical - 2. Creating a Warehouse Cloud App with Salesforce.com
Salesforce.com, best known for its CRM, also provides a large and growing framework for
cloud computing and applications. With Force.com you can build apps faster and create
applications without the concern of buying hardware or installing software.
First of all you will need to register for a Salesforce.com developer account using the hyperlink
given below:
http://www.developerforce.com/events/regular/registration.php
Once you have a valid username and password, log in to Salesforce.com.
In this exercise we will create simple Warehouse application with the following objects:
Product
Fields: Name, Description, Price, Stock quantity
Line Item
Fields: Invoice #, Product #, Units sold, Total value
Invoice
Fields: Description, Invoice Value, Invoice Status
Creating the Objects
To create the objects:
1. Go to Your Name, located in the upper-right corner of the main page. Select Setup
from the list.
2. The Personal Setup dialog will appear. Click on Create and click on Objects.
3. In the next dialog click on the New Custom Object button.
4. In the next dialog, set the object properties.
5. Then click the Save button.
Creating Tabs
Check the option to Launch New Custom Tab Wizard after saving this custom object.
1. You will see the next dialog, select the Tab Style you prefer and click Next.
2. Click Next again and then click on Save.
Creating Custom Fields & Relationships
1. Click on New under Custom Fields & Relationships.
2. Create the Description field by selecting Text in the next dialog. Click on Next.
3. Input the information such as Field label, length, constraints etc.
4. Click next again then click on Save.
Inserting Data into Objects
1. Go to the Home page
2. Click on Customize My Tabs
3. Select the objects you have just created and save.
4. Note that you are now able to create Products, Line Items and Invoices.
5. Input the required fields and click Save.
Practical - 3. Creating an Application in Salesforce.com using the Apex
Programming Language
The Developer Console is an integrated development environment with a collection of tools you
can use to create, debug, and test applications in your Salesforce organization.
Follow these steps to open the Developer Console −
Step 1 − Log in to Salesforce.com using login.salesforce.com.
Step 2 − To open the Developer Console, go to Name → Developer Console and then click
on Execute Anonymous.
Step 3 − Type the following code to print the numbers 1 to 10:
integer i;
for(i=1; i<=10; i++)
    System.debug('i = ' + i);
Step 4 − When you click on Execute, the debug log will open. Once the log appears in the
window, click on the log record.
Step 5 − Then type 'USER' in the filter field and the output statements will appear in the
debug window. The 'USER' filter is used to restrict the log to the debug output.
Practical – 4: Social Network: Definition of Social Network –
A social network is usually created by a group of individuals who have a set of common
interests and objectives. There is usually a set of network founders, followed by a broadcast to
recruit the network membership. This advertising happens in both public and private groups,
depending upon the confidentiality of the network.
Components of Web2.0 for Social Networks –
● Communities: Communities are online spaces formed by a group of individuals to share
their thoughts and ideas.
● Blogging: Blogs give the users of a Social Network the freedom to express their thoughts
in a free form basis and help in generation and discussion of topics.
● Wikis: A Wiki is a set of co-related pages on a particular subject that allows users to share
content.
● File sharing/Podcasting: This is the facility which helps users to post their media files
and related content online for other members of the network to view and build upon.
● Mashups: This is the facility via which people on the internet can congregate services from
multiple vendors to create a completely new service. An example may be combining the
location information from a mobile service provider and the map facility of Google maps
in order to find the exact information of a cell phone device from the internet, just by
entering the cell number.
Types and behavior of Social Networks –
The nature of social networks makes for its variety. We have a huge number of types of social
networks based on needs and goals. Keeping these in mind, the main categories identified are given
below:
● Social Contact Networks: These types of networks are formed to keep in contact with
friends and family and include some of the most popular sites on the web today. Examples:
Orkut, Facebook and Twitter.
● Study Circles: These are social networks dedicated for students where they can have areas
dedicated to student study topics, placement related queries and advanced research
opportunity gathering. Examples: FledgeWing and College Tonight.
● Social Networks for specialist groups: These types of social networks are specifically
designed for core field workers like doctors, scientists, engineers, members of the corporate
industries. Examples: LinkedIn.
● Networks for fine arts: These types of social networks are dedicated to people linked with
music, painting and related arts. Examples: Amie Street and Buzznet.
● Sporting Networks: These types of social networks are dedicated to people of the sporting
fraternity and have a gamut of information related to this field. Example: Athlinks.
● Social Networks for the ‘inventors’: These are social networks for the very developers
and architects who build social networks themselves. Examples: Technical Forums and
Mashup centers.
Life Cycle of Social Networks –
For any social network, there are a number of steps in its life cycle. In each of the life cycle step
of an online social network, Web 2.0 concepts have a great influence. Consider the diagram below.
For all the steps in the life cycle Web 2.0 has provided tools and concepts which are not only cost
effective but very easy to implement. Oftentimes, online networks have a tendency to die out very
fast due to the lack of proper tools to communicate. Web 2.0 provides excellent communication
mechanism concepts like Blogging and individual email filtering to keep everyone in the network
involved in the day to day activities of the network.
Figure. Life Cycle of Social Networks with Web 2.0
Impact of Social networks using Web2.0 –
The various implementations of social networks using Web 2.0 have already had a profound effect
on society as a whole. One of the most important groups of people, the medical community,
has already reaped significant benefits from the technology and is translating them towards the
betterment of public life.
Future Scope of Web 2.0 in Social Networks –
There is a lot of contribution that Web 2.0 has already done for social networks as well as other
areas. However, the reach of the technology has not been complete and there are still a number of
areas that need improvement so that the true power of the technology, integrated with social
networks, can truly be felt.
The future of Web 2.0 itself is something which will be providing much more exciting features for
social networks. As time progresses the technology in itself is becoming more secure and
transparent and much more user oriented. New features like online video conference instead of
scrap messages/blogs and Object Oriented Programming will also help in introducing new features
within the social network.
Practical – 5: Case Study – Google App Engine
Google App Engine (often referred to as GAE or simply App Engine, and sometimes known by the
acronym GAE/J) is a platform as a service (PaaS) cloud computing platform for developing and hosting
web applications in Google-managed data centers. Applications are sandboxed and run across
multiple servers. App Engine offers automatic scaling for web applications—as the number of
requests increases for an application, App Engine automatically allocates more resources for the
web application to handle the additional demand.
Google App Engine is free up to a certain level of consumed resources. Fees are charged for
additional storage, bandwidth, or instance hours required by the application. It was first released
as a preview version in April 2008, and came out of preview in September 2011.
Runtimes and frameworks
Currently, the supported programming languages are Python, Java (and, by extension, other JVM
languages such as Groovy, JRuby, Scala, Clojure, Jython and PHP via a special version of
Quercus), and Go. Google has said that it plans to support more languages in the future, and that
the Google App Engine has been written to be language independent.
Reliability and Support
All billed High-Replication Datastore App Engine applications have a 99.95% uptime SLA.
Portability Concerns
Developers worry that the applications will not be portable from App Engine and fear being locked
into the technology. In response, there are a number of projects to create open-source back-ends
for the various proprietary/closed APIs of app engine, especially the datastore. Although these
projects are at various levels of maturity, none of them is at the point where installing and running
an App Engine app is as simple as it is on Google’s service. AppScale and TyphoonAE are two of
the open source efforts.
AppScale can run Python, Java, and Go GAE applications on EC2 and other cloud vendors.
TyphoonAE can run Python App Engine applications on any cloud that supports Linux machines.
Web2py web framework offers migration between SQL Databases and Google App Engine,
however it doesn’t support several App Engine-specific features such as transactions and
namespaces.
Differences with other application hosting
Compared to other scalable hosting services such as Amazon EC2, App Engine provides
more infrastructure to make it easy to write scalable applications, but can only run a limited range
of applications designed for that infrastructure.
App Engine’s infrastructure removes many of the system administration and development
challenges of building applications to scale to hundreds of requests per second and beyond. Google
handles deploying code to a cluster, monitoring, failover, and launching application instances as
necessary.
While other services let users install and configure nearly any *NIX compatible software,
App Engine requires developers to use only its supported languages, APIs, and frameworks.
Current APIs allow storing and retrieving data from a BigTable non-relational database; making
HTTP requests; sending e-mail; manipulating images; and caching. Existing web applications that
require a relational database will not run on App Engine without modification.
Per-day and per-minute quotas restrict bandwidth and CPU use, number of requests served,
number of concurrent requests, and calls to the various APIs, and individual requests are
terminated if they take more than 60 seconds or return more than 32MB of data.
Differences between SQL and GQL
Google App Engine’s datastore has a SQL-like syntax called “GQL”. GQL intentionally does not
support the Join statement, because it tends to be inefficient when queries span more than one
machine. Instead, one-to-many and many-to-many relationships can be accomplished using
ReferenceProperty(). This shared-nothing approach allows disks to fail without the system failing.
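A toy sketch of this join-free, key-reference pattern, using plain Python dicts in place of the real Datastore API (the Warehouse-style entity names below are invented for illustration): each line item stores the key of its parent invoice, and a GQL-style query is just a filter on that stored key, never a join.

```python
# Hypothetical entities modelled as plain dicts. A line item holds the
# key of its parent invoice (the ReferenceProperty idea) instead of
# relying on a JOIN at query time.
line_items = [
    {"key": "li-1", "invoice": "inv-1", "product": "Widget", "units": 3},
    {"key": "li-2", "invoice": "inv-1", "product": "Gadget", "units": 1},
    {"key": "li-3", "invoice": "inv-2", "product": "Widget", "units": 5},
]

def items_for_invoice(invoice_key):
    # Stands in for GQL: SELECT * FROM LineItem WHERE invoice = :key.
    # Each machine can answer this filter from its own data alone,
    # which is why the shared-nothing design avoids joins.
    return [item for item in line_items if item["invoice"] == invoice_key]

print([item["key"] for item in items_for_invoice("inv-1")])  # ['li-1', 'li-2']
```

Because every lookup is a single-property filter, a disk holding part of the data can fail without blocking queries against the rest.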
Switching from a relational database to the Datastore requires a paradigm shift for developers
when modelling their data.
Unlike a relational database the Datastore API is not relational in the SQL sense.
The Java version supports asynchronous non-blocking queries using the Twig Object Datastore
interface. This offers an alternative to using threads for parallel data processing.
Practical – 6: Case Study – Amazon EC2
Amazon Elastic Compute Cloud (EC2)
Elastic IP addresses allow you to allocate a static IP address and programmatically assign it to an
instance. You can enable monitoring on an Amazon EC2 instance using Amazon CloudWatch in
order to gain visibility into resource utilization, operational performance, and overall demand
patterns (including metrics such as CPU utilization, disk reads and writes, and network traffic).
You can create an Auto Scaling group using the Auto Scaling feature to automatically scale your
capacity on certain conditions based on metrics that Amazon CloudWatch collects. You can also
distribute incoming traffic by creating an elastic load balancer using the Elastic Load Balancing
service. Amazon Elastic Block Store (EBS) volumes provide network-attached persistent
storage to Amazon EC2 instances. Point-in-time consistent snapshots of EBS volumes can be
created and stored on Amazon Simple Storage Service (Amazon S3).
Amazon S3 is a highly durable and distributed data store. With a simple web services interface, you
can store and retrieve large amounts of data as objects in buckets (containers) at any time, from
anywhere on the web using standard HTTP verbs. Copies of objects can be distributed and cached
at 14 edge locations around the world by creating a distribution using the Amazon CloudFront
service, a web service for content delivery (static or streaming content). Amazon SimpleDB is a web
service that provides the core functionality of a database (real-time lookup and simple querying of
structured data) without the operational complexity. You can organize the dataset into domains
and can run queries across all of the data stored in a particular domain. Domains are collections of
items that are described by attribute-value pairs.
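The domain/item/attribute-value layout can be pictured with a small in-memory stand-in (the domain contents below are invented for illustration; this sketches the data model, not the SimpleDB API):

```python
# A "domain" as SimpleDB sees it: a collection of named items, each of
# which is a bag of attribute-value pairs (values are strings).
songs = {
    "song-1": {"artist": "Miles Davis", "genre": "jazz", "year": "1959"},
    "song-2": {"artist": "Radiohead", "genre": "rock", "year": "1997"},
    "song-3": {"artist": "John Coltrane", "genre": "jazz", "year": "1965"},
}

def query(domain, attribute, value):
    # A query runs across all items of a single domain, matching on an
    # attribute-value pair (roughly: SELECT itemName() FROM songs
    # WHERE genre = 'jazz').
    return sorted(name for name, attrs in domain.items()
                  if attrs.get(attribute) == value)

print(query(songs, "genre", "jazz"))  # ['song-1', 'song-3']
```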
Amazon Relational Database Service (Amazon RDS) provides an easy way to set up, operate
and scale a relational database in the cloud. You can launch a DB Instance and get access to a full-featured MySQL database and not worry about common database administration tasks like
backups, patch management etc.
Amazon Simple Queue Service (Amazon SQS) is a reliable, highly scalable, hosted distributed
queue for storing messages as they travel between computers and application components.
Amazon Simple Notifications Service (Amazon SNS) provides a simple way to notify
applications or people from the cloud by creating Topics and using a publish-subscribe protocol.
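The Topic/publish-subscribe idea can be sketched in a few lines of in-memory Python (an illustration of the protocol only, not the SNS API; the topic name is invented):

```python
# Minimal in-memory publish-subscribe: subscribers register a callback
# on a named topic, and publishing fans the message out to all of them.
topics = {}

def subscribe(topic, callback):
    topics.setdefault(topic, []).append(callback)

def publish(topic, message):
    # Every subscriber of the topic receives the same message.
    for callback in topics.get(topic, []):
        callback(message)

received = []
subscribe("order-events", received.append)                       # first subscriber
subscribe("order-events", lambda m: received.append(m.upper()))  # second subscriber
publish("order-events", "shipped")                               # fan-out to both
print(received)  # ['shipped', 'SHIPPED']
```

The key property mirrored here is decoupling: the publisher knows only the topic name, never the subscribers.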
Amazon Elastic MapReduce provides a hosted Hadoop framework running on the web-scale
infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage
Service (Amazon S3) and allows you to create customized JobFlows. JobFlow is a sequence of
MapReduce steps.
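One MapReduce step of the kind a JobFlow chains together can be illustrated with a word count in plain, single-process Python (Elastic MapReduce would run the same two phases distributed across a Hadoop cluster; the input lines are invented):

```python
from collections import Counter

lines = ["cloud computing", "elastic cloud"]

# Map phase: emit a (word, 1) pair for every word of every input line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Reduce phase: sum the emitted counts per distinct word.
reduced = Counter()
for word, count in mapped:
    reduced[word] += count

print(dict(reduced))  # {'cloud': 2, 'computing': 1, 'elastic': 1}
```

A JobFlow simply feeds the output of one such step into the next.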
Amazon Virtual Private Cloud (Amazon VPC) allows you to extend your corporate network
into a private cloud contained within AWS. Amazon VPC uses IPSec tunnel mode that enables
you to create a secure connection between a gateway in your data center and a gateway in AWS.
Amazon Route 53 is a highly scalable DNS service that allows you to manage your DNS records by
creating a HostedZone for every domain you would like to manage.
AWS Identity and Access Management (IAM) enables you to create multiple Users with unique
security credentials and manage the permissions for each of these Users within your AWS
Account. IAM is natively integrated into AWS Services. No service APIs have changed to support
IAM, and existing applications and tools built on top of the AWS service APIs will continue to
work when using IAM.
AWS also offers various payment and billing services that leverage Amazon’s payment
infrastructure.
All AWS infrastructure services offer utility-style pricing that requires no long-term commitments
or contracts. For example, you pay by the hour for Amazon EC2 instance usage and pay by the
gigabyte for storage and data transfer in the case of Amazon S3. More information about each of
these services and their pay-as-you-go pricing is available on the AWS Website.
Note that using the AWS cloud doesn’t require sacrificing the flexibility and control you’ve grown
accustomed to:
You are free to use the programming model, language, or operating system (Windows,
OpenSolaris or any flavor of Linux) of your choice.
You are free to pick and choose the AWS products that best satisfy your requirements—you can
use any of the services individually or in any combination.
Because AWS provides resizable (storage, bandwidth and computing) resources, you are free to
consume as much or as little as you need and only pay for what you consume.
You are free to use the system management tools you’ve used in the past and extend your datacenter
into the cloud.
Krishna University :: Machilipatnam
March/April – 2018
6*03CSC15-B2 – Cloud Computing
Section – A
Answer any FIVE of the following. ( 5 x 5 = 25 M )
1. What is a Cloud and Cloud Computing?
2. Explain the origins of Cloud Computing.
3. Explain the limitations of Cloud Computing.
4. Explain the differences between SPI and Traditional IT model.
5. Explain about Salesforce.com and Rackspace.
6. Explain the benefits of IAAS.
7. Explain the Memory and Network virtualization.
8. Explain about Thin Client.
Section – B
Answer FIVE of the following. ( 5 x 10 = 50 M )
UNIT – I
9. (a) Explain the components of Cloud Computing.
(OR)
(b) Explain the characteristics of Cloud Computing.
UNIT – II
10. (a) Explain the benefits of Cloud Computing.
(OR)
(b) Explain the Regulatory Issues, Government Policies.
UNIT – III
11. (a) Explain the Cloud Delivery Model.
(OR)
(b) Explain about Software as a Service.
UNIT – IV
12. (a) Explain IaaS Service Providers.
(OR)
(b) Explain Cloud Deployment Model.
UNIT – V
13. (a) Explain the types of Hardware Virtualization.
(OR)
(b) Explain about Microsoft Hyper V and VM-Ware features.
Introduction to Probability(basic) .pptx
purohitanuj034
 
Sonnet 130_ My Mistress’ Eyes Are Nothing Like the Sun By William Shakespear...
DhatriParmar
 
Basics and rules of probability with real-life uses
ravatkaran694
 
ENGLISH 8 WEEK 3 Q1 - Analyzing the linguistic, historical, andor biographica...
OliverOllet
 
Unit 5: Speech-language and swallowing disorders
JELLA VISHNU DURGA PRASAD
 
Rules and Regulations of Madhya Pradesh Library Part-I
SantoshKumarKori2
 
Module 2: Public Health History [Tutorial Slides]
JonathanHallett4
 
I INCLUDED THIS TOPIC IS INTELLIGENCE DEFINITION, MEANING, INDIVIDUAL DIFFERE...
parmarjuli1412
 
Modul Ajar Deep Learning Bahasa Inggris Kelas 11 Terbaru 2025
wahyurestu63
 
pgdei-UNIT -V Neurological Disorders & developmental disabilities
JELLA VISHNU DURGA PRASAD
 
Electrophysiology_of_Heart. Electrophysiology studies in Cardiovascular syste...
Rajshri Ghogare
 
The Minister of Tourism, Culture and Creative Arts, Abla Dzifa Gomashie has e...
nservice241
 
Applied-Statistics-1.pptx hardiba zalaaa
hardizala899
 
EXCRETION-STRUCTURE OF NEPHRON,URINE FORMATION
raviralanaresh2
 
Dakar Framework Education For All- 2000(Act)
santoshmohalik1
 
Introduction to pediatric nursing in 5th Sem..pptx
AneetaSharma15
 
Continental Accounting in Odoo 18 - Odoo Slides
Celine George
 
Ad

Cloud final with_lab

  • 1. III B.Sc. – Semester 6 – Computer Science – Cloud Computing III B.Sc. – VI SEMESTER Cloud Computing is an emerging computing technology that uses the Internet and central remote servers to maintain data and applications. Cloud Computing provides a means by which we can access applications as utilities over the Internet. It allows us to create, configure, and customize applications online. With Cloud Computing we can access database resources via the Internet from anywhere, for as long as we need them, without worrying about the maintenance and management of the actual resources.
  • 2. SYLLABUS
UNIT - I Cloud Computing Overview – Origins of Cloud computing – Cloud components – Essential characteristics – On-demand self-service, Broad network access, Location-independent resource pooling, Rapid elasticity, Measured service.
UNIT - II Cloud scenarios – Benefits: scalability, simplicity, vendors, security. Limitations – Sensitive information – Application development – Security concerns – Privacy concerns with a third party – Security level of third party – Security benefits. Regulatory issues: Government policies.
UNIT - III Cloud architecture: Cloud delivery model – SPI framework, SPI evolution, SPI vs. traditional IT model; Software as a Service (SaaS): SaaS service providers – Google App Engine, Salesforce.com and Google platform – Benefits – Operational benefits – Economic benefits – Evaluating SaaS; Platform as a Service (PaaS): PaaS service providers – RightScale – Salesforce.com – Rackspace – Force.com – Services and benefits.
UNIT - IV Infrastructure as a Service (IaaS): IaaS service providers – Amazon EC2, GoGrid – Microsoft implementation and support – Amazon EC2 service level agreement – Recent developments – Benefits; Cloud deployment model: Public clouds – Private clouds – Community clouds – Hybrid clouds – Advantages of cloud computing.
UNIT - V Virtualization: Virtualization and cloud computing – Need of virtualization – cost, administration, fast deployment, reduced infrastructure cost – limitations; Types of hardware virtualization: Full virtualization – Partial virtualization – Para virtualization; Desktop virtualization: Software virtualization – Memory virtualization – Storage virtualization – Data virtualization – Network virtualization; Microsoft implementation: Microsoft Hyper-V – VMware features and infrastructure – VirtualBox – Thin client.
Reference Books 1. Cloud Computing: A Practical Approach – Anthony T. Velte, Toby J. Velte, Robert Elsenpeter – Tata McGraw-Hill, New Delhi, 2010. 2. Cloud Computing: Web-Based Applications That Change the Way You Work and Collaborate Online – Michael Miller – Que, 2008. 3. Cloud Computing: Theory and Practice – Dan C. Marinescu – MK/Elsevier. 4. Cloud Computing: A Hands-On Approach – Arshdeep Bahga, Vijay Madisetti – University Press. 5. Mastering Cloud Computing: Foundations and Application Programming – Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi – TMH.
  • 3. INDEX UNIT TOPIC PAGE UNIT-1 CLOUD COMPUTING OVERVIEW 4 UNIT-2 CLOUD SCENARIOS 11 UNIT-3 CLOUD ARCHITECTURE (SOFTWARE AS A SERVICE, PLATFORM AS A SERVICE) 18 UNIT-4 INFRASTRUCTURE AS A SERVICE 33 UNIT-5 VIRTUALIZATION 44
  • 4. UNIT-1 CHAPTER-1 1. Explain cloud computing overview. Cloud Computing provides us a means of accessing applications as utilities over the Internet. It allows us to create, configure, and customize applications online.
What is Cloud? The term Cloud refers to a Network or the Internet. In other words, we can say that the Cloud is something which is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN or VPN. Applications such as e-mail, web conferencing and customer relationship management (CRM) execute on the cloud.
What is Cloud Computing? Cloud Computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications. Cloud computing offers platform independence, as the software is not required to be installed locally on the PC. Hence, Cloud Computing makes our business applications mobile and collaborative.
2. Explain the origin of Cloud computing. There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. Following are the working models for cloud computing: ● Deployment Models ● Service Models
Deployment Models Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: Public, Private, Hybrid, and Community.
  • 5. Public Cloud: The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness.
Private Cloud: The private cloud allows systems and services to be accessible within an organization. It is more secure because of its private nature.
Community Cloud: The community cloud allows systems and services to be accessible by a group of organizations.
Hybrid Cloud: The hybrid cloud is a mixture of public and private cloud, in which critical activities are performed using the private cloud while non-critical activities are performed using the public cloud.
Service Models Cloud computing is based on service models. These are categorized into three basic service models: ● Infrastructure-as-a-Service (IaaS) ● Platform-as-a-Service (PaaS) ● Software-as-a-Service (SaaS)
3. Explain the history of cloud computing. Before cloud computing emerged, there was client/server computing, which is basically centralized storage in which all the software applications, all the data and all the controls reside on the server side. If a single user wants to access specific data or run a program, he/she needs to connect to the server, gain appropriate access, and then conduct his/her business. Later, distributed computing came into the picture, where all the computers are networked together and share their resources when needed. On the basis of these computing models, the concepts of cloud computing emerged and were later implemented. Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. It was a brilliant idea, but like all brilliant ideas, it was ahead of its time; for the next few decades, despite interest in the model, the technology simply was not ready for it.
  • 6. But of course time passed, the technology caught up with that idea, and some years later we can note that: In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true. In 2002, Amazon started Amazon Web Services, providing services like storage, computation and even human intelligence. However, only with the launch of the Elastic Compute Cloud in 2006 did a truly commercial service open to everybody exist. In 2009, Google Apps also started to provide cloud computing enterprise applications. Of course, all the big players are present in the cloud computing evolution; some were earlier, some were later. In 2009, Microsoft launched Windows Azure, and companies like Oracle and HP have all joined the game. This proves that today, cloud computing has become mainstream.
4. Explain cloud components. In a simple, topological sense, a cloud computing solution is made up of several elements: clients, the datacenter, and distributed servers. These components make up the three parts of a cloud computing solution. Each element has a purpose and plays a specific role in delivering a functional cloud-based application, so let's take a closer look.
Clients Clients are, in a cloud computing architecture, the exact same things that they are in a plain, old, everyday local area network (LAN). They are, typically, the computers that just sit on your desk. But they might also be laptops, tablet computers, mobile phones, or PDAs—all big drivers for cloud computing because of their mobility. Anyway, clients are the devices that end users interact with to manage their information on the cloud. Clients generally fall into three categories: ● Mobile: Mobile devices include PDAs or smartphones, like a BlackBerry, Windows Mobile smartphone, or an iPhone.
  • 7. ● Thin: Thin clients are computers that do not have internal hard drives; they let the server do all the work and then display the information.
● Thick: This type of client is a regular computer, using a web browser like Firefox or Internet Explorer to connect to the cloud.
Datacenter The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building or a room full of servers on the other side of the world that you access via the Internet. A growing trend in the IT world is virtualizing servers. That is, software can be installed allowing multiple instances of virtual servers to be used. In this way, you can have half a dozen virtual servers running on one physical server.
Distributed Servers But the servers don't all have to be housed in the same location. Often, servers are in geographically disparate locations. But to you, the cloud subscriber, these servers act as if they're humming away right next to each other. This gives the service provider more flexibility in options and security. For instance, Amazon has their cloud solution in servers all over the world. If something were to happen at one site, causing a failure, the service would still be accessed through another site. Also, if the cloud needs more hardware, they need not throw more servers in the server room—they can add them at another site and simply make it part of the cloud.
5. Explain the essential characteristics of cloud computing. On-Demand Self-Service Cloud computing provides resources on demand, i.e. when the consumer wants them. This is made possible by self-service and automation. Self-service means that the consumer performs all the actions needed to acquire the service herself, instead of going through an IT department, for example.
The consumer’s request is then automatically processed by the cloud infrastructure, without human intervention on the provider’s side. To make this possible, a cloud provider must obviously have the infrastructure in place to automatically handle consumers’ requests. Most likely, this infrastructure will be virtualized, so different consumers can use the same pooled hardware. On-demand self-service computing implies a high level of planning. For instance, a cloud consumer can request a new virtual machine at any time, and expects to have it working in a couple of minutes. The underlying hardware, however, might take 90 days to get delivered to the provider. It is therefore necessary to monitor trends in resource usage and plan for future situations well in advance.
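The self-service flow just described can be sketched in code. This is a minimal toy, not any real provider's API: the `CloudProvider` class, `request_vm`, and the pre-provisioned pool are all hypothetical names invented for illustration.

```python
# Minimal sketch of on-demand self-service: the consumer calls request_vm()
# directly and the provider's automation fulfils it from a pre-provisioned
# pool, with no human intervention. All names here are hypothetical.

class CloudProvider:
    def __init__(self, pooled_vms: int):
        # Capacity planned well in advance, since new hardware can take
        # weeks or months to be delivered to the provider.
        self.free_vms = pooled_vms
        self.allocations = {}  # consumer -> number of VMs held

    def request_vm(self, consumer: str) -> bool:
        """Automatically grant a VM if pooled capacity allows."""
        if self.free_vms == 0:
            return False  # trend monitoring should prevent this case
        self.free_vms -= 1
        self.allocations[consumer] = self.allocations.get(consumer, 0) + 1
        return True

    def release_vm(self, consumer: str) -> None:
        """Consumer returns the resource to the shared pool."""
        if self.allocations.get(consumer, 0) > 0:
            self.allocations[consumer] -= 1
            self.free_vms += 1


provider = CloudProvider(pooled_vms=2)
print(provider.request_vm("alice"))  # True: granted in seconds, not days
print(provider.request_vm("bob"))    # True
print(provider.request_vm("carol"))  # False: pool exhausted
provider.release_vm("alice")
print(provider.request_vm("carol"))  # True: elasticity via the shared pool
```

The point of the sketch is the absence of a human approval step: the request either succeeds immediately from the pool or fails, which is why capacity planning matters so much.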
  • 8. Advantages: Simple User Interfaces The cloud provider can't assume much specialized knowledge on the consumer's part. In a traditional enterprise IT setting, IT specialists process requests from the business. They know, for instance, how much RAM is going to be needed for a given use case.
Policies The high level of automation required for operating a cloud means that there is no opportunity for humans to thoroughly inspect the specifics of a given situation and make an informed decision for a request based on context.
Broad Network Access Cloud computing separates computing capabilities from their consumers, so that they don't have to maintain the capabilities themselves. A consequence of this is that the computing capabilities are located elsewhere, and must be accessed over a network.
Network A computer network is a collection of two or more computers linked together for the purposes of sharing information.
Resource Pooling Resource pooling, the sharing of computing capabilities, leads to increased resource utilization rates. This means you need fewer resources and thus save costs.
Multi-tenancy Pooling resources on the software level means that a consumer is not the only one using the software. The software must be designed to partition itself and provide scalable services to multiple unrelated tenants. This is not a new concept: in the 1960s and 1970s, in mainframe environments, this was called time sharing. In the 1990s, the term in vogue was Application Service Provider (ASP). Nowadays people speak of cloud services.
Billing and Metering When multiple consumers share the same resources, the question arises of who pays for them. Billing and metering infrastructure automatically collects per-tenant usage of resources. For this to work, each request must be assigned a unique transaction ID that is tied to the tenant.
The transaction ID must be passed along to all sub-components, so that each can add its usage cost to the transaction.
Data Partitioning It may make sense to store data from different tenants in different locations. For instance, storing data close to where it is used may decrease latency and thereby improve performance for the cloud consumer. Data for different tenants may also be combined into a shared data store.
Rapid Elasticity Since consumers can ask for and get resources at any time and in any quantity, the cloud must be able to scale up and down as load demands. Note that scaling down is just as important as scaling up, to conserve resources and thereby reduce cost.
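The metering scheme above (a unique transaction ID tied to a tenant and passed to every sub-component) can be sketched as follows. All names here (`start_request`, `record_usage`, the cost figures) are hypothetical illustrations, not a real billing API.

```python
# Toy billing/metering sketch: each request gets a unique transaction ID
# related to its tenant; every sub-component handling the request adds
# its usage cost under that ID. All names are hypothetical.

import uuid
from collections import defaultdict

transactions = {}           # transaction ID -> tenant
usage = defaultdict(float)  # transaction ID -> accumulated cost

def start_request(tenant: str) -> str:
    """Assign a unique transaction ID tied to the tenant."""
    txn_id = str(uuid.uuid4())
    transactions[txn_id] = tenant
    return txn_id

def record_usage(txn_id: str, component: str, cost: float) -> None:
    """Each sub-component the transaction ID is passed along to calls this."""
    usage[txn_id] += cost

def bill_per_tenant() -> dict:
    """Roll per-transaction costs up to each tenant for billing."""
    bill = defaultdict(float)
    for txn_id, cost in usage.items():
        bill[transactions[txn_id]] += cost
    return dict(bill)

txn = start_request("tenant-a")
record_usage(txn, "web-frontend", 0.25)  # the front end adds its share
record_usage(txn, "database", 0.50)      # so does the database layer
print(bill_per_tenant())  # {'tenant-a': 0.75}
```

Because every sub-component charges against the same transaction ID, the provider can reconstruct exactly which tenant consumed what, which is the basis of measured service.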
  • 9. Different applications running in the cloud will have different workload patterns, be they seasonal, batch, transient, hockey stick, or more complex. Because of these differences, high workloads in some applications will coincide with low workloads in others. This is why resource pooling leads to higher resource utilization rates and economies of scale.
Scalability To achieve these economies of scale, the cloud infrastructure must be able to scale quickly. Scalability is the ability of a system to improve performance proportionally after adding hardware. In a scalable cloud, one can just add hardware whenever demand rises, and the applications keep performing at the required level. Since resources in a system typically have some overhead associated with them, it's important to understand what percentage of the resource you can actually use. The additional output gained by adding a unit of resource, compared to the output gained by the previously added unit of resource, is called the scalability factor. Based on this concept we can distinguish the following types of scalability: ● Linear scalability: The scalability factor stays constant when capacity is added. ● Sub-linear scalability: The scalability factor decreases when capacity is added. ● Supra-linear scalability: The scalability factor increases when capacity is added. For instance, I/O across multiple disk spindles in a RAID gets better with more spindles. ● Negative scalability: The performance of the system gets worse, instead of better, when capacity is added.
Dynamic Provisioning Cloud systems must not only be able to scale, but scale at will, since cloud consumers should get the resources they want whenever they want them. It is, therefore, important to be able to dynamically provision new computing resources. Dynamic provisioning relies heavily on demand monitoring.
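The scalability factor can be made concrete with a small calculation. The throughput numbers below are invented for illustration, and the helper name is ours.

```python
# Compute scalability factors from throughput measured at 1, 2, 3, ... units
# of capacity: each factor is the gain from the latest unit relative to the
# gain from the previous unit. Example numbers are invented.

def scalability_factors(throughputs):
    """throughputs[i] = measured throughput with i+1 units of capacity."""
    gains = [b - a for a, b in zip(throughputs, throughputs[1:])]
    return [round(new / old, 2) for old, new in zip(gains, gains[1:])]

linear = [100, 200, 300, 400]      # every added unit contributes the same 100
sub_linear = [100, 180, 240, 280]  # overhead erodes each new unit's gain

print(scalability_factors(linear))      # [1.0, 1.0]: constant -> linear
print(scalability_factors(sub_linear))  # [0.75, 0.67]: decreasing -> sub-linear
```

A factor that stays at 1.0 is linear scalability; a shrinking factor is sub-linear, and a factor above 1.0 would indicate supra-linear behavior such as the RAID-spindle example.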
Measured Service In order to know when to scale up or down, one needs information about the current demand on the cloud. In other words, one needs to measure things like CPU, memory, and network bandwidth usage to make sure cloud consumers never run out of those resources. The types of resources to measure depend in part on the types of services that the cloud system offers.
6. Explain the characteristics of cloud computing. Cloud computing has the following major characteristics: 1. Organizational agility improves, as cloud computing increases users' flexibility in adding and expanding technological infrastructure resources. 2. Applications are accessed as utilities over the Internet or an intranet. 3. Applications can be configured online at any time; no specific piece of software needs to be installed to access or manipulate cloud applications. 4. Cloud computing offers online development and deployment tools and a programming runtime environment through the Platform-as-a-Service model.
  • 10. 5. Cloud resources are available over the network in a manner that provides platform-independent access to any type of client. 6. Cloud computing offers on-demand self-service. The resources can be used without interaction with the cloud service provider. 7. Cloud computing is highly cost-effective because it operates at higher efficiencies with greater utilization. It just requires an Internet connection. 8. Cloud computing offers load balancing, which makes it more reliable. Cost savings depend on the type of activities supported and the type of infrastructure available in-house.
7. Explain Service Level Agreements. One of the advantages of cloud computing is that the consumer no longer has the burden of making sure capacity is adequate for fulfilling demand. Consumers sign up for Service Level Agreements (SLAs) that guarantee them enough capacity. An SLA should contain: ● The list of services the provider will deliver and a complete definition of each service ● Metrics to determine whether the provider is delivering the service as promised, and an auditing mechanism to monitor the service ● Responsibilities of the provider and the consumer, and remedies available to both if the terms of the SLA are not met ● A description of how the SLA will change over time
Auditing To prove that certain QoS attributes are met, it may be necessary to keep an audit trail of performed operations.
High Availability One of the most important things to settle in an SLA is availability. This is usually expressed in a number of nines, e.g. five nines stands for 99.999% uptime.
Replication In replication, a logical variable x that can be read and written to actually consists of a set of physical variables x0, …, xn and an associated protocol that makes sure that reads and writes to the replicas are performed in a way that looks indistinguishable from reads and writes to the original variable.
There are three major types of data replication protocols: Transactional replication maintains replication within the boundaries of a single transaction. Virtual synchrony is an inter-process message-passing technology that guarantees that messages are delivered to all nodes, in the order they were sent. State machine consensus / Paxos is a way of achieving consensus among a group of distributed servers that guarantees fault tolerance. ● Read repair: The correction is done when a read finds an inconsistency. This slows down the read operation. ● Write repair: The correction is done during a write operation if an inconsistency has been found, slowing down the write operation. ● Asynchronous repair: The correction is not part of a read or write operation.
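The "number of nines" mentioned under High Availability maps directly to a downtime budget, which is easy to compute. The helper below is ours, for illustration; it is not part of any SLA.

```python
# Convert an SLA availability percentage ("nines") into the maximum
# downtime it permits per year. Helper name is ours, for illustration.

def downtime_minutes_per_year(availability_percent: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_percent / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% uptime allows {downtime_minutes_per_year(nines):.1f} min/year of downtime")
```

Five nines thus leave a budget of only about 5.3 minutes of downtime per year, which is why availability is usually among the first numbers negotiated in an SLA and why replication is needed to meet it.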
  • 11. UNIT-2 1. Explain cloud scenarios. Scenarios There are three different major implementations of cloud computing. How organizations use cloud computing differs at a granular level, but the uses generally fall into one of these three solutions.
Compute Clouds Compute clouds allow access to highly scalable, inexpensive, on-demand computing resources that run the code that they're given. Three examples of compute clouds are • Amazon's EC2 • Google App Engine • Berkeley Open Infrastructure for Network Computing (BOINC) Compute clouds are the most flexible in their offerings and can be used for sundry purposes; it simply depends on the application the user wants to access. Sign up for a cloud computing account, and get started right away. These applications are good for any size of organization, but large organizations might be at a disadvantage because these applications don't offer the standard management, monitoring, and governance capabilities that these organizations are used to. Enterprises aren't shut out, however. Amazon offers enterprise-class support and there are emerging sets of cloud offerings, like Terremark's Enterprise Cloud, which are meant for enterprise use.
Cloud Storage One of the first cloud offerings was cloud storage and it remains a popular solution. Cloud storage is a big world. There are already in excess of 100 vendors offering cloud storage. This is an ideal solution if you want to maintain files off-site.
  • 12. Security and cost are the top issues in this field and vary greatly, depending on the vendor you choose. Currently, Amazon's S3 is the top dog.
Cloud Applications Cloud applications differ from compute clouds in that they utilize software applications that rely on cloud infrastructure. Cloud applications are versions of Software as a Service (SaaS) and include such things as web applications that are delivered to users via a browser, or applications like Microsoft Online Services. These applications offload hosting and IT management to the cloud. Cloud applications often eliminate the need to install and run the application on the customer's own computer, thus alleviating the burden of software maintenance, ongoing operation, and support. Some cloud applications include • Peer-to-peer computing (like BitTorrent and Skype) • Web applications (like MySpace or YouTube) • SaaS (like Google Apps) • Software plus services (like Microsoft Online Services)
2. Explain the benefits of cloud scenarios. Your organization is going to have different needs from the company next door. However, cloud computing can help you with your IT needs. Let's take a closer look at what cloud computing has to offer your organization.
Scalability If you are anticipating a huge upswing in computing need (or even if you are surprised by a sudden demand), cloud computing can help you manage. Rather than having to buy, install, and configure new equipment, you can buy additional CPU cycles or storage from a third party.
  • 13. Since your costs are based on consumption, you likely wouldn't have to pay out as much as if you had to buy the equipment. Once you have fulfilled your need for additional equipment, you just stop using the cloud provider's services, and you don't have to deal with unneeded equipment. You simply add or subtract based on your organization's need.
Simplicity Again, not having to buy and configure new equipment allows you and your IT staff to get right to your business. The cloud solution makes it possible to get your application started immediately, and it costs a fraction of what it would cost to implement an on-site solution.
Knowledgeable Vendors Typically, when new technology becomes popular, there are plenty of vendors who pop up to offer their version of that technology. This isn't always good, because a lot of those vendors tend to offer less than useful technology. By contrast, the first comers to the cloud computing party are actually very reputable companies. Companies like Amazon, Google, Microsoft, IBM, and Yahoo! have been good vendors because they have offered reliable service, plenty of capacity, and you get some brand familiarity with these well-known names.
Security There are plenty of security risks when using a cloud vendor, but reputable companies strive to keep you safe and secure. Vendors have strict privacy policies and employ stringent security measures, like proven cryptographic methods to authenticate users. Further, you can always encrypt your data before storing it on a provider's cloud. In some cases, between your encryption and the vendor's security measures, your data may be more secure than if it were stored in-house.
3. Explain the limitations of cloud scenarios. There are other cases when cloud computing is not the best solution for your computing needs. This section looks at why certain applications are not the best to be deployed on the cloud.
We don’t mean to make these cases sound like deal-breakers, but you should be aware of some of the limitations.
  • 14. 1. Sensitive Information: Let us understand by an example: a marketing survey company is using Google Docs to store data like your PAN card and Aadhaar card details. The company is not the only one who should protect your data. Though Google would also be expected to protect your data, Google absolves itself of this responsibility in the agreement you sign with them. This sensitive information could also be used by the government for specific analysis.
2. Don't Go with the Trend: If your development team has given you a product that is handling your situation well, yet you are planning to move the applications to the cloud just to follow the market trend or fashion, then it is probably time to re-analyze the situation and not take the decision just for the sake of taking it. There are certainly situations where moving to the cloud is advantageous, but not all.
3. Integration Issues: Suppose your business house/development team is using two applications, one containing sensitive data and the other non-sensitive data, so you decide to keep the sensitive data locally but move the non-sensitive data to the cloud. In this case one application is installed locally and the other one is on the cloud. This can create issues with security and speed. You might try to run a high-speed application on a local machine that uses data coming from an application located on the cloud; the speed of the local application will then be controlled by the application on the cloud, since it depends on Internet speed and other factors.
4. Delay in Response: As the application grows, the data used by the application changes and grows every day, for example sales/production/log data. The response coming from an application hosted on the cloud might be delayed, especially when data is needed immediately.
5. Security is largely immature, and requires focused expertise.
6. You are dependent on the cloud computing provider for your IT resources, and thus could be exposed to outages and other service interruptions.
7. Using the Internet can cause network latency with some cloud applications.
8. Much of the technology is proprietary, and thus can cause lock-in.
9. Costs can rise sharply if subscription prices go up in the future.
10. Agreement issues could increase the risks of using cloud computing.
11. Data privacy issues could arise if the cloud provider seeks to monetize the data in its system.
12. Developing Your Own Applications: Often, the applications you want are already out there. However, it may be the case that you need a very specific application. And in that case, you'll have to commission its development yourself.
  • 15. Developing your own applications can certainly be a problem if you don't know how to program, or if you don't have programmers on staff. In such a case, you'll have to hire a software company (or developer) or be left to use whatever applications the provider offers.
4. Explain the security concerns in cloud computing. As with so many other technical choices, security is a two-sided coin in the world of cloud computing—there are pros and there are cons. In this section, let's examine security in the cloud and talk about what's good, and where you need to take extra care. In an IDC survey of 244 IT executives about cloud services, security led the pack of cloud concerns at 74.5 percent. In order to be successful, vendors will have to take data like this into consideration as they offer up their clouds.
Privacy Concerns with a Third Party The first and most obvious concern is for privacy considerations. That is, if another party is housing all your data, how do you know that it's safe and secure? You really don't. As a starting point, assume that anything you put on the cloud can be accessed by anyone. There are also concerns because law enforcement has been better able to get at data maintained on a cloud than at data on an organization's own servers. The best plan of attack is to not perform mission-critical work, or work that is highly sensitive, on a cloud platform without extensive security controls managed by your organization. If you cannot manage security at that rigorous level, stick to applications that are less critical and therefore better suited for the cloud and its more "out of the box" security mechanisms. Remember, nobody can steal critical information that isn't there. The statistics on third-party breaches read very badly, and it is clear that organizations have trust issues when it comes to relying on third parties to notify them when an incident or a breach occurs.
A report from insurance company Beazley covering the first six months of 2017 indicates that accidental breaches caused by employee error, or by data being compromised while under the control of third-party suppliers, account for 40% of breaches overall. That doesn't mean there are no reputable companies who would never think of compromising your data and who stay on the cutting edge of network security to keep it safe. But even if providers are doing their best to secure data, it can still be hacked, and your information is then at the mercy of whoever broke in. So before signing up, it is always advisable to find out whether a provider is doing enough to protect your data, and to assess whether the company has a five-star reputation.
Hackers
There's a lot hackers can do once they've compromised your data. It ranges from selling your proprietary information to your competition, to secretly encrypting your storage until you pay them. Or they may just delete everything to damage your business and justify the action on the basis of their ethical views. Your data becomes more exposed to them when it is saved on a cloud run by a third party.
Denial of Service:
In a commonly recognized worst-case scenario, attackers use multiple Internet-connected devices, each running one or more bots, to perform a distributed denial-of-service (DDoS) attack. To get the hackers to stop attacking its network, a Tokyo firm had to pay 2.5 million yen after its network was brought to a halt by botnet attacks. Because the attack was so discreet, the police were unable to track down the attackers. In the world of cloud computing this is clearly a huge concern.
5. Explain the Security Benefits of Cloud Scenarios.
We are not trying to imply that your data is insecure on the cloud. Service providers do make an effort to ensure the security of your data; otherwise their business would dry up. Some of the security benefits of cloud services follow. By maintaining data on the cloud, enforcing strong access control, and limiting employees to downloading or accessing only what they need to perform a task, cloud computing can limit the amount of information that could potentially be lost. Reduced data loss is also helped by the fact that the data is stored in a centralized place, making your systems inherently more secure. If your data is maintained on a cloud, it is easier to monitor security than to worry about the security of numerous servers and clients. Of course, a breach of the cloud would put all the data at risk, but if you are mindful of security and keep up on it, you only have to worry about one location rather than several. If your system is breached, you can instantly move the data to another machine and, in parallel, conduct an investigation to find who was behind the breach. This is done without disturbing your users. Traditionally in such cases, time is wasted explaining the cause to management and obtaining approval to shut down the system so that the data can be moved to another system.
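The least-privilege idea above, granting each employee access only to the data needed for a task, can be sketched as a simple role-based check. This is a minimal illustration only; the role names and permission strings are hypothetical, not taken from any particular cloud provider:

```python
# Minimal role-based access-control sketch: each role is granted
# only the permissions needed for its tasks (least privilege).
ROLE_PERMISSIONS = {
    "sales_rep": {"read:contacts"},
    "analyst": {"read:contacts", "read:reports"},
    "admin": {"read:contacts", "read:reports", "export:data"},
}

def can_access(role: str, permission: str) -> bool:
    """Return True only if the role was explicitly granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A sales rep can read contacts but cannot bulk-export data, which limits
# how much information a single compromised account could leak.
print(can_access("sales_rep", "read:contacts"))  # True
print(can_access("sales_rep", "export:data"))    # False
```

Because an unknown role maps to an empty permission set, the check fails closed: anything not explicitly granted is denied.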
When you built your own network, you had to buy third-party security software to get the level of protection you wanted. With a cloud solution, those tools can be bundled in and available to you, and you can develop your system with whatever level of security you desire. SaaS providers don't bill you individually for all the security testing they do; the cost is shared among the cloud's users. The result is that, because you are in a pool with others, you get lower costs for security testing. This is also the case with PaaS, where your developers create their own code but the cloud's code-scanning tools check it for security weaknesses.
6. Explain Regulatory Issues.
It's rare that we actually want the government in our business. In the case of cloud computing, however, regulation might be exactly what we need. Without some rules in place, it's too easy for service providers to be insecure, or even shifty enough to make off with your data.
Government to the Rescue?
Is it the government's place to regulate cloud computing? As we mentioned, thanks to regulation introduced after the Great Depression, WaMu's customers' money was protected when the bank failed. There are two schools of thought on the issue. First, if government can figure out a way to safeguard data, either from loss or theft, any company facing such a loss would applaud the regulation. On the other hand, there are those who think the government should stay out of it and let competition and market forces guide cloud computing. There are important questions that government needs to work out. First, who owns the data? Also, should law enforcement agencies have easier access to personal information stored on the cloud than to that stored on a personal computer? A big problem is that people using cloud services don't understand the privacy and security implications of their online email accounts, their LinkedIn account, their MySpace page, and so forth. While these are popular sites for individuals, they are still considered cloud services, and their regulation will affect other cloud services.
Government Procurement
There are also questions about whether government agencies will store their data on the cloud. Procurement regulations will have to change for government agencies to be keen on jumping on the cloud. The General Services Administration is making a push toward cloud computing in an effort to reduce the amount of energy its computers consume. Hewlett-Packard and Intel produced a study showing that the federal government spends $480 million per year on electricity to run its computers.
UNIT-3
CLOUD ARCHITECTURE
1. Explain Cloud Architecture?
Cloud computing architecture comprises many cloud components, which are loosely coupled. We can broadly divide the cloud architecture into two parts:
• Front End
• Back End
Each end is connected to the other through a network, usually the Internet. The following diagram shows a graphical view of the cloud computing architecture:
Front End
The front end refers to the client part of the cloud computing system. It consists of the interfaces and applications that are required to access cloud computing platforms; for example, a Web browser.
Back End
The back end refers to the cloud itself. It consists of all the resources required to provide cloud computing services: huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.
2. Explain the SPI Framework for Cloud Computing?
A commonly agreed-upon framework for describing cloud computing services goes by the acronym "SPI". The acronym stands for the three major services provided through the cloud: software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS).
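The front-end/back-end split described above can be sketched with a minimal WSGI application standing in for the back end; in a real deployment a browser (the front end) would send the same request over the Internet. The service name and JSON payload here are invented purely for illustration:

```python
import json

# Back end: a minimal WSGI application exposing a cloud-style service.
def backend_app(environ, start_response):
    body = json.dumps({"service": "storage", "status": "ok"}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Front end: any HTTP client (normally a web browser) consuming the service.
# Here we invoke the WSGI app directly to show the two halves interacting.
def call_backend(path):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path}
    body = b"".join(backend_app(environ, start_response))
    return captured["status"], json.loads(body)

status, payload = call_backend("/status")
print(status, payload)
```

The point of the sketch is the separation of concerns: the client only knows the interface (URL and response format), while all storage, computation and security live behind the back end.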
Infrastructure as a Service
The IaaS model provides the infrastructure required to run applications. A cloud infrastructure enables on-demand provisioning of servers running several types of operating systems and a customized software stack. The provider is in complete control of the infrastructure. Infrastructure services are considered the bottom layer of cloud computing systems. Example: IBM. The definition of infrastructure as a service (IaaS) is pretty simple: you rent cloud infrastructure (servers, storage and networking) on demand, in a pay-as-you-go model.
Advantages:
1. Tremendous control to use whatever content makes sense.
2. Flexibility to secure data to whatever degree necessary.
3. Physical independence from the infrastructure (you don't have to ensure that proper cooling is in place, etc.).
Disadvantages:
1. You are responsible for all configuration implemented on the server (and in the application).
2. You are responsible for keeping software up to date.
3. Multi-tenancy at the hypervisor level.
4. Integration of all aspects of the application.
Platform as a Service
In a platform-as-a-service (PaaS) model, the service provider offers a development environment to application developers, who develop applications and offer those services through the provider's platform. A cloud platform offers an environment on which developers create and deploy applications without necessarily needing to know how many processors or how much memory their applications will be using. Example: Google App Engine [9], an example of Platform as a
Service, offers a scalable environment for developing and hosting Web applications, which must be written in specific programming languages such as Python or Java, and must use the service's own proprietary structured object data store.
Advantages:
a) Reduced complexity, because the CSP maintains the environment.
b) The cloud service provider often exposes its own API (a benefit to the developer).
Disadvantages:
a) You are still responsible for keeping your software updated.
b) Multi-tenancy at the platform layer.
Software as a Service
In a SaaS model, the customer does not purchase software, but rather rents it for use on a subscription or pay-per-use model. Services provided by this layer can be accessed by end users through Web portals. Consequently, consumers are increasingly shifting from locally installed computer programs to online software services that offer the same functionality. This model removes the burden of software maintenance from customers and simplifies development and testing for providers. Example: Salesforce.com [10], which relies on the SaaS model, offers business productivity applications (CRM) that reside completely on its servers, allowing customers to customize and access applications on demand.
Advantages:
a) Scaling the environment is not the customer's problem.
b) Updates, configuration and security are all managed by the CSP.
Disadvantages:
a) Very little application customization.
b) No control of components.
c) No control over security.
d) Multi-tenancy issues at the application layer.
3. Explain SPI Evaluation?
Software Process Improvement (SPI) encompasses the analysis and modification of the processes within software development, aimed at improving the key areas that contribute to an organization's goals. The task of evaluating whether the selected improvement path meets these goals is challenging. On the basis of the results of a systematic literature review of SPI measurement and evaluation practices, a framework (the SPI Measurement and Evaluation Framework, SPI-MEF) has been developed that supports the planning and implementation of SPI evaluations.
CHALLENGES IN MEASURING AND EVALUATING SPI INITIATIVES
Challenge I - Heterogeneity of SPI initiatives
The spectrum of SPI initiatives ranges from the application of tools for improving specific development processes to the implementation of organization-wide programs to increase the software development capability as a whole.
Challenge II - Partial evaluation
The outcome of SPI initiatives is predominantly assessed by evaluating measures collected at the project level. As a consequence, the improvement can be evaluated only partially, neglecting effects that are visible only outside individual projects. Such evaluations can therefore lead to sub-optimizations of the process. By focusing on the measurement of a single attribute, e.g.
the effectiveness of the code review process, other attributes might inadvertently change.
Challenge III - Limited visibility
This challenge is a consequence of the previous one, since a partial evaluation implies that the gathered information is targeted at a specific audience, which may not cover all the important stakeholders of an SPI initiative. This means that information requirements may not be satisfied, and that the actual achievements of the SPI initiative may not be visible to some stakeholders, because the measurement scope is not adequately determined.
Challenge IV - Evaluation effort and validity
Given the vast diversity of SPI initiatives (see Challenge I), it is not surprising that evaluation strategies vary. The evaluation and analysis techniques are customized to the specific settings where the initiatives are embedded.
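The partial-evaluation risk that Challenge II warns about can be illustrated with a toy before/after metric comparison. All the figures are invented, and a real SPI-MEF-style evaluation would combine many such measures across projects and stakeholders:

```python
# Toy before/after comparison for process metrics: looking at one
# attribute in isolation (Challenge II) can hide regressions elsewhere.
def percent_change(before, after):
    """Relative change of a metric, as a percentage of the baseline."""
    return (after - before) / before * 100.0

# Hypothetical project-level measurements around an SPI initiative.
review_effectiveness = percent_change(before=0.40, after=0.55)  # share of defects caught in review
lead_time = percent_change(before=20.0, after=26.0)             # days per delivered feature

print(f"review effectiveness: {review_effectiveness:+.1f}%")  # improved
print(f"lead time: {lead_time:+.1f}%")                        # worsened; invisible if not measured
```

Here the single tracked attribute (review effectiveness) improved, while an untracked one (lead time) got worse, which is exactly the sub-optimization the challenge describes.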
4. Explain SPI vs the Traditional IT Model?
Cloud computing is a far more abstract, virtual hosting solution. Instead of being accessible via physical hardware, all servers, software and networks are hosted in the cloud, off premises. It is a real-time virtual environment hosted across several different servers at the same time. So rather than investing money in purchasing physical servers in-house, you can rent data storage space from cloud computing providers on a more cost-effective, pay-per-use basis.
Resilience and Elasticity
The information and applications hosted in the cloud are evenly distributed across all the servers, which are connected so as to work as one. Therefore, if one server fails, no data is lost and downtime is avoided. The cloud also offers more storage space and server resources, including better computing power, which means your software and applications will perform faster. Traditional IT systems are not so resilient and cannot guarantee a consistently high level of server performance. They have limited capacity and are susceptible to downtime, which can greatly hinder workplace productivity.
Flexibility and Scalability
Cloud hosting offers an enhanced level of flexibility and scalability in comparison to traditional data centres. The on-demand virtual space of cloud computing has effectively unlimited storage space and more server resources. Cloud servers can scale up or down depending on the level of traffic your website receives, and you have full control to install any software as and when you need it. This provides more flexibility for your business to grow. With traditional IT infrastructure, you can only use the resources that are already available to you.
If you run out of storage space, the only solution is to purchase or rent another server. If you hire more employees, you will need to pay for additional software licences and have these manually installed on your office hardware. This can be a costly venture, especially if your business is growing quite rapidly.
Automation
A key difference between cloud computing and traditional IT infrastructure is how they are managed. Cloud hosting is managed by the storage provider, who takes care of all the necessary hardware, ensures security measures are in place, and keeps it running smoothly. Traditional data centres require heavy administration in-house, which can be costly and time-consuming for your business. Fully trained IT personnel may be needed to ensure regular monitoring and maintenance of your servers, covering upgrades, configuration problems, threat protection and installations.
Running Costs
Cloud computing is more cost-effective than traditional IT infrastructure because of how the data storage services are paid for. With cloud-based services, you only pay for what is used, similarly to how you pay for utilities such as electricity. Furthermore, the decreased likelihood of downtime means improved workplace performance and increased profits in the long run.
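The pay-for-what-you-use point can be made concrete with some simple arithmetic. All the figures below are invented for illustration, not real vendor prices, and the comparison ignores the power, cooling and staff costs that would further favour the cloud:

```python
# Illustrative cost comparison: buying a server upfront vs renting
# equivalent capacity month by month (pay only for what you use).
UPFRONT_SERVER_COST = 6000.0  # hypothetical one-off purchase price
MONTHLY_CLOUD_COST = 150.0    # hypothetical pay-per-use monthly bill

def cloud_total(months):
    """Cumulative cloud spend after the given number of months."""
    return MONTHLY_CLOUD_COST * months

def breakeven_months():
    """First month in which cumulative cloud spend exceeds the purchase price."""
    months = 0
    while cloud_total(months) <= UPFRONT_SERVER_COST:
        months += 1
    return months

print(cloud_total(12))     # spend after one year in the cloud
print(breakeven_months())  # months before renting overtakes buying
```

With these numbers the cloud stays cheaper for over three years, and if demand shrinks the subscription can simply be scaled back, which a purchased server cannot.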
With traditional IT infrastructure, you will need to purchase equipment and additional server space upfront to adapt to business growth. If growth slows, you end up paying for resources you don't use. Furthermore, the value of physical servers decreases year on year, so the return on money invested in traditional IT infrastructure is quite low.
Security
Cloud computing is an external form of data storage and software delivery, which can make it seem less secure than local data hosting. Anyone with access to the server can view and use the stored data and applications in the cloud, wherever an Internet connection is available. Choosing a cloud service provider that is completely transparent in its hosting of cloud platforms and ensures optimum security measures are in place is crucial when transitioning to the cloud. With traditional IT infrastructure, you are responsible for the protection of your data, and it is easier to ensure that only approved personnel can access stored applications and data.
Software-As-A-Service
Software as a Service (SaaS) is a way of delivering applications over the Internet, as a service. Instead of installing and maintaining software, we simply access it via the Internet, freeing
us from complex software and hardware management. SaaS is simply the cloud vendor providing a given piece of software you want to use, on its servers. The area becomes even more marked by companies like Google and Salesforce that offer both types of services. For instance, not only can you build an application with Salesforce, but you can also allow others to use the application you developed.
1. Explain the Benefits of SaaS?
Operational Benefits
There are benefits to the way you operate. You can change business processes (for the better) by moving some applications and storage to the cloud. The following are some of the operational benefits:
• Reduced cost Since the technology is paid for incrementally, your organization saves money in the long run.
• Increased storage You can store more data on the cloud than on a private network. Plus, if you need more, it's easy enough to get that extra storage.
• Automation Your IT staff no longer need to worry about keeping an application up to date; that's the provider's job. They can focus on duties that matter, rather than on maintenance.
• Flexibility You have more flexibility with a cloud solution. Applications can be tested and deployed with ease, and if it turns out that a given application isn't getting the job done, you can switch to another.
• Better mobility Users can access the cloud from anywhere with an Internet connection. This is ideal for road warriors or telecommuters, or someone who needs to access the system after hours.
Economic Benefits
Where the rubber really meets the road is when you consider the economic benefits. With cloud computing, cost is a huge factor. And it isn't just equipment savings; the saving is realized throughout the organization.
These are some benefits to consider:
• People We hate to suggest that anyone lose their job, but the honest-to-goodness truth (we're sorry) is that by moving to the cloud, you'll rely on fewer staffers. With fewer staff members needed, you can look at your team and decide whether each person is necessary. Is he or she bringing something to the organization? Are their core competencies something you still need? If not, this gives you an opportunity to keep only the best people on staff.
• Hardware With the exception of very large enterprises or governments, major cloud suppliers can purchase hardware, networking equipment, bandwidth, and so forth much more cheaply than a "regular" business. That means if you need more storage, it's just a matter of increasing your subscription with your provider, instead of buying new equipment. If you need more computational cycles, you needn't buy more servers; you just buy more capacity from your cloud provider.
• Pay as you go Think of cloud computing like leasing a car. Instead of buying the car outright, you pay a smaller amount each month. It's the same with cloud computing: you just pay for what you use. But, also like leasing a car, at the end of the lease you don't own
the car. That might be a good thing: the car may be a piece of junk, and in the case of a purchased server, it's sure to be obsolete.
• Time to market One of the greatest benefits of the cloud is the ability to get apps up and running in a fraction of the time you would need in a conventional scenario. Let's take a closer look at that and see how getting an application online faster saves you money.
2. Explain Evaluating SaaS?
Before employing a SaaS solution, there are factors to consider. You should evaluate not only the SaaS provider and its service, but also what your organization wants from SaaS. Be sure the following factors are present as you evaluate your SaaS provider:
• Time to value As we mentioned earlier, one of the great benefits of using cloud services is the ability to shorten the time it takes to get a new system or application up and running. Unlike traditional software that might require complex installation, configuration, administration and maintenance, SaaS only requires a browser. This allows you to get up and running much more quickly than with traditional software.
• Trial period Most SaaS providers offer a 30-day trial of their service. This usually doesn't happen with traditional software, and certainly you wouldn't move everyone en masse to the trial. However, you can try out the SaaS vendor's offering, and if it feels like a good fit, you can start making the move.
• Low entry costs Another appeal of SaaS is the low cost of getting started. Rather than laying out an enormous amount of money, you can get started relatively inexpensively. Using a SaaS solution is much less expensive than rolling out a complex software deployment across your organization.
• Service In SaaS, the vendor serves the customer. That is, the vendor becomes your IT department, at least for the applications it hosts.
This means that your own, in-house IT department doesn't have to buy hardware, install and configure software, or maintain it. That's all on your SaaS vendor. And if the vendor isn't responsive to your needs, pack up your toys and move to a different service. It is in the vendor's best interests to keep you and other customers happy.
• Wiser investment SaaS offers a less risky option than traditional software installed locally. Rather than spending a lot of money up front, your organization pays for the software as it is used. Also, there is no long-term financial commitment. The monetary risk is greatly lessened in a SaaS environment.
• Security Earlier in this book we talked about the security concerns of moving to the cloud. We mentioned those issues for the sake of completeness, but in reality it is in your vendor's best interests to keep you as secure as possible.
3. Explain the SaaS Providers.
Salesforce.com
Salesforce.com is a cloud computing and social software-as-a-service (SaaS) provider based in San Francisco. It was founded in March 1999, in part by former Oracle executive Marc Benioff. Software-as-a-service (SaaS) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet. SaaS is one of the three main categories of cloud computing, alongside infrastructure as a service (IaaS) and platform as a service (PaaS). Salesforce.com's Customer Relationship Management service is broken down into several broad categories:
• Commerce Cloud
• Sales Cloud
• business logic mentor
• programmable interface
• automatic mobile device deployment
• Data Cloud
• Marketing Cloud, Community Cloud
• Analytics Cloud
• App Cloud
• reporting and analytics
Sales Cloud is a fully customizable product that brings all customer information together in an integrated platform incorporating marketing, lead generation, sales, customer service and business analytics, and provides access to applications through the AppExchange. The platform is provided as software as a service for browser-based access; a mobile app is also available. A real-time social feed for collaboration allows users to share information or ask questions of the user community. Salesforce.com offers five editions of Sales Cloud on a per-user, per-month basis, from lowest to highest: Group, Professional, Enterprise, Unlimited and Performance. The company offers three levels of support contracts:
o Standard Success Plan
o Premier Success Plan
o Premier+ Success Plan
Force.com
Force.com is Salesforce.com's on-demand cloud computing platform, billed by Salesforce.com as the world's first PaaS.
Force.com features Visualforce, a technology that makes it much simpler for customers, developers and independent software vendors to design almost any type of cloud application for a wide range of uses. The Force.com platform offers global infrastructure and services for database, logic, workflow, integration, user interface, and application exchange.
Desk.com
Desk.com is a SaaS help desk and customer support product of Salesforce.com. Desk.com was previously known as Assistly. After being acquired by Salesforce.com, Assistly was relaunched as Desk.com in 2012 as a slicker social customer support application. The product differentiates itself from Salesforce's other service platforms in that Desk.com specifically targets small businesses with its features and functions. Desk.com integrates with a variety of products and third-party applications, including Salesforce CRM and other apps. Desk.com also supports up to 50 languages.
4. Software-as-a-Service with Google App Engine
Software architects interested in building Software-as-a-Service (SaaS) have a wide variety of deployment options at their disposal, with multiple vendors providing services that cater to their individual needs and requirements. Google App Engine (GAE) is one of the more popular platforms in this arena, providing the robust and scalable services its name suggests. With GAE, developers can build a SaaS application in the language of their choice while reaping the benefits of cloud computing in hosting it: virtually infinite and automatic horizontal scalability, metered usage and on-demand deployment of services. A good example of SaaS is Google Docs. Google Docs is a productivity suite that is free for anyone to use. All you need to do is log in, and you instantly have access to a word processor, spreadsheet application or presentation creator. Google's online services are managed directly from the web browser and require zero installation. You can access your Google Docs from any computer or mobile device with a web browser. Google App Engine provides more infrastructure than other scalable hosting services such as Amazon Elastic Compute Cloud (EC2). Google App Engine is free up to a certain amount of resource usage.
Users exceeding the per-day or per-minute usage rates for CPU resources, storage, or the number of API calls or requests must pay to use more resources.
Features and Benefits of Google App Engine
GAE supports Java, Python, PHP and Go, as well as the associated development frameworks for these languages, namely Spring, Struts and Django among others. Traditional databases such as MySQL are supported, as well as next-generation NoSQL datastores and big data distributions such as MongoDB and Hadoop, respectively. Developers have at their disposal a wide variety of IDEs compatible with GAE, including NetBeans, Eclipse and Komodo. Developers access their applications through the main web interface, and manage and control them through App Engine's admin console. The admin console enables developers to perform basic configuration, create, disable or delete applications, view performance statistics, and carry out other maintenance tasks. The main feature of the admin console is the ability to set performance options, which allows app optimization based on the developer's preferences; for example, tuning down the servers to an optimal pricing range to reduce costs. Conversely, one may opt to configure an application for the highest availability and best response time possible. GAE's admin console allows these and many other configuration options. Google promises 99.95% uptime in its service level agreement (SLA), which corresponds to an average of roughly 22 minutes of downtime per month. The performance and status of GAE services
can be checked publicly on GAE's system status page. If Google is unable to meet the SLA, it offers customers a certain number of free service days per billing cycle.
5. Explain Salesforce.com and Force.com?
Salesforce.com offers Force.com as its on-demand platform. Force.com features breakthrough Visualforce technology, which allows customers, developers, and ISVs to design any app, for any user, anywhere, with the world's first User-Interface-as-a-Service. The Force.com platform offers global infrastructure and services for database, logic, workflow, integration, user interface, and application exchange. "With Force.com, customers, developers and ISVs can choose innovation, not infrastructure," said Marc Benioff, chairman and CEO of Salesforce.com. "Google, Amazon, and Apple have all shown that by revolutionizing a user interface you can revolutionize an industry. With Visualforce we're giving developers the power to revolutionize any interface, and any industry, on demand." A capability of the Force.com platform, Visualforce provides a framework for creating user experiences, and enables new interface designs and user interactions to be built and delivered with no software or hardware infrastructure requirements. With Visualforce, developers have control over the look and feel of their Force.com applications, enabling wide flexibility in application creation. From a handheld device for a sales rep in the field to an order-entry kiosk on a manufacturing shop floor, Visualforce enables the creation of new user experiences that can be customized and delivered in real time on any screen.
Platform-As-A-Service
Platform as a Service (PaaS) is a way to build applications and have them hosted by the cloud provider. It allows you to deploy applications without having to spend the money to buy the servers on which to house them.
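Uptime percentages of the kind quoted in SLAs, such as the 99.95% figure mentioned above for GAE, translate into monthly downtime budgets with simple arithmetic:

```python
# Convert an SLA uptime percentage into an allowed-downtime budget.
def downtime_minutes_per_month(uptime_percent, days=30):
    """Minutes of downtime per month permitted by a given uptime percentage."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (100.0 - uptime_percent) / 100.0

print(downtime_minutes_per_month(99.95))  # about 21.6 minutes per month
print(downtime_minutes_per_month(99.99))  # about 4.3 minutes per month
```

Each extra "nine" shrinks the budget roughly tenfold, which is why high-availability SLAs are expensive to honour.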
In this section we'll take a closer look at two such companies, RightScale
and Google. We'll talk about their services, what they offer, and what other companies are getting out of those services.
6. Explain RightScale?
RightScale entered into strategic product partnerships, broadening its cloud management platform to support emerging clouds from new vendors, including FlexiScale and GoGrid, while continuing its support for Amazon's EC2. RightScale is also working with Rackspace to ensure compatibility with Rackspace's cloud offerings, including Mosso and CloudFS. RightScale offers an integrated management dashboard, where applications can be deployed once and managed across these and other clouds. Businesses can take advantage of the nearly infinite scalability of cloud computing by using RightScale to deploy their applications on a supported cloud provider. They gain the capabilities of built-in redundancy, fault tolerance, and geographical distribution of resources: key enterprise demands of cloud providers. Customers can leverage the RightScale cloud management platform to automatically deploy and manage their web applications, scaling up when traffic demands it and scaling back as appropriate, allowing them to focus on their core business objectives. RightScale's automated system management, prepackaged and reusable components, leading service expertise, and best practices have been proven best-of-breed, with customers deploying hundreds of thousands of instances on Amazon's EC2. "Cloud computing is a disruptive force in the business world because it provides pay-as-you-go, on-demand, virtually infinite compute and storage resources that can expand or contract as needed," said Michael Crandell, CEO of RightScale, Inc. "A number of public providers are already adopting cloud architectures, and we also see private enterprise clouds coming on the horizon.
Today’s announcement of RightScale’s partnerships with FlexiScale and GoGrid is an exciting indication of how mid-market and enterprise organizations can really take advantage of multicloud architectures. There will be huge opportunities for application design and deployment; we are at the beginning of a tidal shift in IT infrastructure.”

“Cloud computing for the enterprise has arrived with the GoGrid and RightScale partnership,” said GoGrid CEO John Keagy. “Corporations now have few excuses not to, and multiple reasons to, deploy and manage complex and redundant cloud infrastructures in real time using the GoGrid, RightScale, and FlexiScale technologies.”

7. Explain Rackspace?
The Rackspace Cloud is a set of cloud computing products and services billed on a utility computing basis from the US-based company Rackspace. Offerings include web application hosting or platform as a service (“Cloud Sites”), cloud storage (“Cloud Files”), virtual private servers (“Cloud Servers”), load balancers, databases, backup, and monitoring. It also offers Cloud Block Storage and Cloud Backup, which are used to deliver higher performance than object-based clouds by using a combination of hard drives and solid-state drives.

The services provided by Rackspace:

Dedicated Servers: From server, networking and storage configuration, monitoring and support, to bursting to the cloud of your choice, Rackspace has the options and expertise to create a best-fit solution. And when time is of the essence, there are on-demand configurations that are truly single tenant and secure, and as always, backed by Fanatical Support. The benefits are: security and control, high-performance compute, cloud-ready and scalable.

Cloud Files is a cloud hosting service that provides “unlimited online storage and CDN” for media (examples given include backups, video files, user content) on a utility computing basis. It was originally launched as MossoCloudFS as a private beta release on May 5, 2008 and is similar to Amazon Simple Storage Service. Unlimited files of up to 5 GB can be uploaded, managed via the online control panel or RESTful API.

API
In addition to the online control panel, the service can be accessed over a RESTful API with open source client code available in C#/.NET, Python, PHP, Java, and Ruby. Rackspace-owned Jungle Disk allows Cloud Files to be mounted as a local drive within supported operating systems (Linux, Mac OS X, and Windows).
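As a rough sketch of what talking to a Cloud Files-style RESTful object store looks like, the snippet below builds (but does not send) an authenticated object-upload request. The storage URL, container name, and auth token are invented placeholders for illustration, not real Rackspace endpoints or credentials; real clients would first obtain a token from an authentication service.

```python
from urllib.request import Request

STORAGE_URL = "https://storage.example.com/v1/account"  # hypothetical endpoint
AUTH_TOKEN = "hypothetical-token"                       # placeholder credential

def build_upload_request(container: str, object_name: str, data: bytes) -> Request:
    """Build (but do not send) a PUT request that uploads one object.

    Swift-style object stores address each object as
    <storage url>/<container>/<object> and authenticate requests
    with an X-Auth-Token header.
    """
    return Request(
        f"{STORAGE_URL}/{container}/{object_name}",
        data=data,
        headers={"X-Auth-Token": AUTH_TOKEN},
        method="PUT",
    )

req = build_upload_request("backups", "notes.txt", b"hello")
print(req.get_method(), req.full_url)
```

The same URL scheme with GET retrieves an object and DELETE removes it, which is what the open source client libraries named above wrap for each language.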
Security
Redundancy is achieved by replicating three full copies of data across multiple computers in multiple “zones” within the same data center, where “zones” are physically (though not geographically) separate and supplied with separate power and Internet services. Uploaded files can be distributed via Akamai Technologies to “hundreds of endpoints across the world,” which provides an additional layer of data redundancy. The control panel and API are protected by SSL, and the requests themselves are signed and can be safely delivered to untrusted clients. Deleted data is zeroed out immediately.

Use cases
Use cases considered “well suited” include backing up or archiving data, serving images and videos (which are streamed directly to the users’ browsers), serving content over content delivery networks, storing secondary static web-accessible data, developing data storage applications, storing fluctuating and/or unpredictable amounts of data, and reducing costs.

Rackspace Hosting provides IT systems and computing-as-a-service to more than 33,000 customers worldwide. Combining RightScale’s technologies with Rackspace’s focus on Fanatical
Support will allow companies to focus more on their business and not spend a disproportionate amount of resources on IT demands.

8. Explain Services and Benefits of PaaS?
Force.com PaaS provides the building blocks necessary to build business apps, whether they are simple or sophisticated, and automatically deploy them as a service to small teams or entire enterprises. The Force.com platform gives customers the power to run multiple applications within the same Salesforce instance, allowing all of a company’s Salesforce applications to share a common security model, data model, and user interface.

The multitenant Force.com platform encompasses a feature set for the creation of business applications, such as an on-demand operating system, the ability to create any database on demand, a workflow engine for managing collaboration between users, the Apex Code programming language for building complex logic, the Force.com Web Services API for programmatic access, mashups, and integration with other applications and data, and now Visualforce for a framework to build any user interface.

As part of the Force.com platform, Visualforce gives customers the means to design application user interfaces for any experience on any screen. Using the logic and workflow intelligence provided by Apex Code, Visualforce offers the ability to meet the requirements of applications that feature different types of users on a variety of devices. Visualforce uses Internet technology, including HTML, AJAX and Flex, for business applications. Visualforce enables the creation and delivery of any user experience, offering control over an application’s design and behavior that is limited only by the imagination. There are various benefits of Force.com, as they provide everything you could need as part of their service.
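The multitenant model described above, one application instance serving many clients with each client’s data kept isolated, can be sketched in a few lines of Python. The class, tenant names, and records here are invented for illustration; Force.com itself implements this idea inside its own data model and Apex runtime, not with this code.

```python
class MultitenantStore:
    """Toy illustration of multitenancy: one shared store, rows scoped by tenant."""

    def __init__(self):
        self._rows = []  # shared storage backing every tenant

    def insert(self, tenant_id, record):
        # Every row is tagged with the tenant that owns it.
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every query is scoped to one tenant; other tenants' rows are invisible.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultitenantStore()          # one "instance of the application"
store.insert("acme", {"account": "Big Deal"})
store.insert("globex", {"account": "Other Deal"})
print(store.query("acme"))          # only acme's rows come back
```

The design point is that isolation is enforced by the platform (here, the scoped query), so many customers can share one running instance without seeing each other’s data.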
Given the ease of use of Salesforce as a technology, it is no coincidence that the majority of Fortune 500 brands are harnessing its power. Force.com by Salesforce.com is a platform that offers advanced cloud computing as a service. It supports multitenant applications and caters to various clients with only one instance of the application running.

9. What are the uses of Force.com?
Making an application on the Force.com platform is easy and fast. Various tools provided by the platform make things really easy for developers. Force.com provides many features like multi-layered security and social and mobile optimization.

Form builder: There are several tools featured on the platform, such as drag-and-drop tools, auto-generated UIs, and pre-designed components and templates. With all these tools, development and deployment have become easy. An object that is created can be dragged to the pages and it starts to
interact with the data. Forms are also very easy to make without using any complex code or technical knowledge.

Optimized for mobile & social media: The platform provides a mobile-optimized platform for your application. The application runs on iPads, iPhones and all other smartphones automatically.

Report creation: Personal reports can be analyzed by integrating with the existing ERPs of your business. These reports can be retrieved any time by dragging and dropping personalised reports.

Automation: The Force.com platform has the power to automate almost every business process. The business logic needs to be added to the applications, and some database triggers need to be written for automating every process of the business. There is a visual process workflow that allows for adding complex business logic to the applications.

Development: The platform gives the liberty of creating the user interface of choice and adding business logic to it as and when needed. The native languages of the Force.com platform, like Apex and Visualforce, can be used in combination with Flash and HTML to develop rich interfaces.

Security: The platform has a built-in robust security and privacy program which has been tested by some of the most trusted organisations.

UNIT-4
INFRASTRUCTURE AS A SERVICE
With Infrastructure as a Service, you are using a virtualized server and running software on it. One of the most prevalent is Amazon Elastic Compute Cloud
(EC2). Another player in the field is GoGrid. In this section we’ll take a closer look at both Amazon and GoGrid.

1. List IaaS service providers?

Amazon EC2
Amazon Elastic Compute Cloud (http://aws.amazon.com/ec2) is a web service that provides resizable computing capacity in the cloud. Amazon EC2’s simple web service interface allows businesses to obtain and configure capacity with minimal friction. It provides control of computing resources and lets organizations run on Amazon’s computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing capacity to be scaled quickly, both up and down, as computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.

GoGrid
GoGrid is a service provider of Windows and Linux cloud-based server hosting, and offers 32-bit and 64-bit editions of Windows Server 2008 within its cloud computing infrastructure. Parent company ServePath is a Microsoft Gold Certified Partner, and launched Windows Server 2008 dedicated hosting in February of this year. GoGrid became one of the first Infrastructure as a Service (IaaS) providers to offer Windows Server 2008 “in the cloud.” The Windows Server 2008 operating system from Microsoft offers increased server stability, manageability, and security over previous versions of Windows Server. As such, interest from Windows Server customers wanting to try it out has been high. GoGrid customers can deploy Windows Server 2008 servers in just a few minutes for as little as 19 cents an hour, with no commitment. GoGrid enables system administrators to quickly and easily create, deploy, load-balance, and manage Windows and Linux cloud servers within minutes.
GoGrid offers what it calls Control in the Cloud™ with its web-based Graphical User Interface (GUI) that allows for “point and click” deployment of complex and flexible network infrastructures, which include load balancing and multiple web and database servers, all set up with icons through the GUI. Initial Windows Server 2008 offerings on GoGrid include both 32-bit and 64-bit preconfigured templates. GoGrid users select the desired operating system and then choose preconfigured templates in order to minimize time to deploy. Preconfigurations include:
• Windows Server 2008 Standard with Internet Information Services 7.0 (IIS 7)
• Windows Server 2008 Standard with IIS 7 and SQL Server 2005 Express Edition
• Windows Server 2008 Standard with IIS 7, SQL Server 2005 Express Edition, and ASP.NET

2. Explain about Amazon EC2 Benefits
1. Elastic Web-Scale Computing: Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds, or even thousands of server instances simultaneously. You can also use Auto Scaling to maintain availability of your EC2 fleet and automatically scale your application up and down depending on its needs, in order to maximize performance and minimize cost.

2. Completely Controlled: You have complete control of your instances, including root access and the ability to interact with them as you would any machine. You can stop any instance while retaining the data on the boot partition, and then subsequently restart the same instance using web service APIs. Instances can be rebooted remotely using web service APIs, and you also have access to their console output.

3. Flexible Cloud Hosting Services: You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage and boot partition size that is optimal for your choice of operating system and application. For example, the choice of operating systems includes numerous Linux distributions and Microsoft Windows Server.

4. Integrated: Amazon EC2 is integrated with most AWS services, such as Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and Amazon Virtual Private Cloud (Amazon VPC), to provide a complete, secure solution for computing, query processing, and cloud storage across a wide range of applications.

5. Reliable: Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon’s proven network infrastructure and data centers. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region.

6.
Secure: Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. Amazon EC2 works in conjunction with Amazon VPC to provide security and robust networking functionality for your compute resources.

7. Inexpensive: Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay a very low rate for the compute capacity you actually consume.

8. Easy to Start: There are several ways to get started with Amazon EC2. You can use the AWS Management Console, the AWS Command Line Tools (CLT), or AWS SDKs.

Recent Developments
In 2009, AWS announced plans for several new features that make managing cloud-based applications easier. Thousands of customers employ the compute power of Amazon EC2 to build scalable and reliable solutions.
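The elastic, scale-up-and-down behavior described in benefit 1 can be illustrated with a toy decision function of the kind Auto Scaling automates. The CPU thresholds and instance limits below are invented for the example and are not AWS defaults; a real Auto Scaling group would be configured through the EC2 APIs rather than in application code.

```python
def desired_instances(current, cpu_utilization, low=0.30, high=0.70,
                      min_instances=1, max_instances=10):
    """Return the fleet size after one scaling decision.

    Adds an instance under heavy load, removes one when the fleet is
    mostly idle, and always stays within the configured floor/ceiling.
    """
    if cpu_utilization > high:    # traffic spike: add capacity
        current += 1
    elif cpu_utilization < low:   # idle capacity: shed an instance to save cost
        current -= 1
    return max(min_instances, min(max_instances, current))

print(desired_instances(3, 0.85))  # heavy load: grows to 4
print(desired_instances(3, 0.10))  # light load: shrinks to 2
print(desired_instances(1, 0.10))  # floor: never below min_instances
```

Run periodically against measured utilization, a rule like this is what lets capacity track demand so you pay only for what you use.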
AWS will deliver additional features that automate customer usage of Amazon EC2 for more cost-efficient consumption of computing power and provide greater visibility into the operational health of an application running in the AWS cloud.

3. Write about Amazon EC2 Service Level Agreement.
With over two years of operation, Amazon EC2 exited its beta into general availability and offers customers a Service Level Agreement (SLA). The Amazon EC2 SLA guarantees 99.95 percent availability of the service within a region over a trailing 365-day period, or customers are eligible to receive service credits back. The Amazon EC2 SLA is designed to give customers additional confidence that even the most demanding applications will run dependably in the AWS cloud.

Service Commitment
AWS will use commercially reasonable efforts to make Amazon EC2 and Amazon EBS each available with a Monthly Uptime Percentage (defined below) of at least 99.95%, in each case during any monthly billing cycle (the “Service Commitment”). In the event Amazon EC2 or Amazon EBS does not meet the Service Commitment, you will be eligible to receive a Service Credit as described below.
• “Monthly Uptime Percentage” is calculated by subtracting from 100% the percentage of minutes during the month in which Amazon EC2 or Amazon EBS, as applicable, was in the state of “Region Unavailable.” Monthly Uptime Percentage measurements exclude downtime resulting directly or indirectly from any Amazon EC2 SLA Exclusion (defined below).
• “Region Unavailable” and “Region Unavailability” mean that more than one Availability Zone in which you are running an instance, within the same Region, is “Unavailable” to you.
• “Unavailable” and “Unavailability” mean:
o For Amazon EC2, when all of your running instances have no external connectivity.
o For Amazon EBS, when all of your attached volumes perform zero read/write IO, with pending IO in the queue.
• A “Service Credit” is a dollar credit, calculated as set forth below, that we may credit back to an eligible account.

Service Commitments and Service Credits
Service Credits are calculated as a percentage of the total charges paid by you (excluding one-time payments such as upfront payments made for Reserved Instances) for either Amazon EC2 or Amazon EBS (whichever was Unavailable, or both if both were Unavailable) in the Region affected, for the monthly billing cycle in which the Region Unavailability occurred, in accordance with the schedule below.

Monthly Uptime Percentage | Service Credit Percentage
Less than 99.95% but equal to or greater than 99.0% | 10%
Less than 99.0% | 30%

We will apply any Service Credits only against future Amazon EC2 or Amazon EBS payments otherwise due from you. At our discretion, we may issue the Service Credit to the credit card you used to pay for the billing cycle in which the Unavailability occurred. Service Credits will not entitle you to any refund or other payment from AWS.

4. Write the Advantages and Disadvantages of IaaS.
Advantages:
1. Cost Savings: An obvious benefit of moving to the IaaS model is lower infrastructure costs. No longer do organizations have the responsibility of ensuring uptime, maintaining hardware and networking equipment, or replacing old equipment. IaaS also saves enterprises from having to buy more capacity to deal with sudden business spikes. Organizations with a smaller IT infrastructure generally require a smaller IT staff as well.
2. Scalability and flexibility: One of the greatest benefits of IaaS is the ability to scale up and down quickly in response to an enterprise’s requirements. IaaS providers generally have the latest, most powerful storage, servers and networking technology to accommodate the needs of their customers. This on-demand scalability provides added flexibility and greater agility to respond to changing opportunities and requirements.
3. Support for DR, BC and high availability: While every enterprise has some type of disaster recovery (DR) plan, the technology behind those plans is often expensive and unwieldy. Organizations with several disparate locations often have different disaster recovery and business continuity (BC) plans and technologies, making management virtually impossible.
4. Focus on business growth: Time, money and energy spent making technology decisions and hiring staff to manage and maintain the technology infrastructure is time not spent on growing the business.
By moving infrastructure to a service-based model, organizations can focus their time and resources where they belong, on developing innovations in applications and solutions. 5. Innovate rapidly: As soon as you have decided to launch a new product or initiative, the necessary computing infrastructure can be ready in minutes or hours, rather than the days or weeks – and sometimes months – it could take to set up internally. 6. Respond quicker to shifting business conditions: IaaS enables you to quickly scale up resources to accommodate spikes in demand for your application (elasticity of the cloud) – during the holidays, for example, then scale resources back down again when activity decreases to save money.
7. Better Security: With the appropriate service agreement, a cloud service provider can provide security for your applications and data that may be better than what you can attain in-house. Better security may come in part because security is critical for the IaaS cloud provider and is part of their main business.
8. Backups: There is no need to manage backups. This is handled by the IaaS cloud provider.
9. Multiplatform: Some IaaS providers provide development options for multiple platforms: mobile, browser, and so on. If you or your organization want to develop software that can be accessed from multiple platforms, this might be an easy way to make that happen.

Disadvantages:
1. The organization is responsible for the versioning/upgrades of software developed.
2. The maintenance and upgrades of tools, database systems, etc., and the underlying infrastructure are your responsibility or the responsibility of your organization.
3. There may be legal reasons that prevent the use of off-premises or out-of-country data storage.
4. If you need high-speed interaction between internal software and software in the cloud, the IaaS cloud provider may not provide the speed that you need.
5. Most expensive: since the customer is now leasing a tangible resource, the provider can charge for every cycle, bit of RAM, or disk space used.
6. Unlike with SaaS or PaaS, the customer is responsible for all aspects of VM management.

CLOUD DEPLOYMENT MODELS
A cloud deployment model represents a specific type of cloud environment, primarily distinguished by ownership, size, and access.
There are four common cloud deployment models:
1. Public Clouds
2. Community Clouds
3. Private Clouds
4. Hybrid Clouds

5. Explain public cloud with a neat diagram?
A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as virtual machines (VMs), applications or storage, available to the general public over the internet. Public cloud services may be free or offered on a pay-per-usage model.

The main benefits of using a public cloud service are:
• it reduces the need for organizations to invest in and maintain their own on-premises IT resources;
• it enables scalability to meet workload and user demands; and
• there are fewer wasted resources because customers only pay for the resources they use.

Public cloud architecture
Public cloud is a fully virtualized environment. In addition, providers have a multi-tenant architecture that enables users, or tenants, to share computing resources. Each tenant’s data in the public cloud, however, remains isolated from other tenants. Public cloud also relies on high-bandwidth network connectivity to rapidly transmit data.

Public cloud storage is typically redundant, using multiple data centers and careful replication of file versions. This characteristic has given it a reputation for resiliency.

Public cloud architecture can be further categorized by service model. Common service models include:
• software as a service (SaaS), in which a third-party provider hosts applications and makes them available to customers over the internet;
• platform as a service (PaaS), in which a third-party provider delivers hardware and software tools, usually those needed for application development, to its users as a service; and
• infrastructure as a service (IaaS), in which a third-party provider offers virtualized computing resources, such as VMs and storage, over the internet.

6. Explain Private Cloud architecture?
Private cloud allows systems and services to be accessible within an organization. The private cloud is operated only within a single organization. However, it may be managed internally by the organization itself or by a third party. The private cloud model is shown in the diagram below.

Benefits
There are many benefits of deploying cloud as a private cloud model. The following diagram shows some of those benefits:
High Security and Privacy
Private cloud operations are not available to the general public, and resources are shared from a distinct pool of resources. Therefore, it ensures high security and privacy.

More Control
The private cloud has more control over its resources and hardware than the public cloud because it is accessed only within an organization.

Cost and Energy Efficiency
Private cloud resources are not as cost-effective as resources in public clouds, but they offer more efficiency than public cloud resources.

Disadvantages
Here are the disadvantages of using the private cloud model:
• Restricted Area of Operation: The private cloud is only accessible locally and is very difficult to deploy globally.
• High Priced: Purchasing new hardware in order to fulfill demand is a costly transaction.
• Limited Scalability: The private cloud can be scaled only within the capacity of internally hosted resources.

7. Explain Community cloud?
Community Cloud is an online social platform that enables companies to connect customers, partners, and employees with each other and the data and records they need to get work done. This next-generation portal combines the real-time collaboration of Chatter with the ability to share any file, data, or record anywhere and on any mobile device.

Community Cloud allows you to streamline key business processes and extend them across offices and departments, and outward to customers and partners. So everyone in your business ecosystem can service customers more effectively, close deals faster, and get work done in real time.
You can build communities to gain deeper relationships with customers or provide better service by enabling customers to find information and assist each other online. Or you can connect your external channel partners, agents, or brokers to reduce friction and accelerate deals. And you can empower employees to connect and collaborate wherever business takes them. Because Community Cloud is built on the Salesforce platform, you can connect any third-party system or data directly into the community. Your organization gains the flexibility to easily create multiple communities for whatever use case your business demands.

HR and IT Help Desk can engage employees and deliver critical knowledge and instructions. And from onboarding to payroll to IT troubleshooting, employees can help themselves to the information they need, 24/7. Employees find, share, and collaborate on content in real time, and connect with others in the social intranet, beyond the boundaries of their department, office, or even country.

SECURITY
Community Cloud is built on the trusted Salesforce1 platform. The robust and flexible security architecture of the platform is relied on by companies around the world, including those in the most heavily regulated industries, from financial services to healthcare to government. It provides the highest level of security and control over everything from user and client authentication through administrative permissions to the data access and sharing model.

ADVANTAGES
Companies of any size can create seamless, branded community experiences quickly and easily with Community Cloud. For example, Lightning Community Builder and Templates provide a great out-of-the-box solution to get you started, with simple customization options as your business grows. Lightning Community Builder makes it easy to customize your mobile-optimized community to perfectly match your brand.
This includes incorporating third-party and custom components for ultimate customization. Community Templates are secure, reliable, scalable, and optimized for mobile. These state-of-the-art templates are designed to be used right out of the box, with no coding or IT required.

8. Explain Hybrid Cloud?
Hybrid cloud is a mixture of public and private cloud. Non-critical activities are performed using the public cloud, while critical activities are performed using the private cloud. The hybrid cloud model is shown in the diagram below.
Benefits
There are many benefits of deploying cloud as a hybrid cloud model. The following diagram shows some of those benefits:
Scalability: It offers the scalability of both the public cloud and the private cloud.
Flexibility: It offers both secure private resources and scalable public resources.
Cost Efficiency: Public clouds are more cost-effective than private ones. Therefore, hybrid clouds can be cost saving.
Security: The private cloud in a hybrid cloud ensures a higher degree of security.

Disadvantages
Networking Issues: Networking becomes complex due to the presence of private and public clouds.
Security Compliance: It is necessary to ensure that cloud services are compliant with the security policies of the organization.
Infrastructure Dependency: The hybrid cloud model is dependent on internal IT infrastructure; therefore, it is necessary to ensure redundancy across data centers.

9. What are the Advantages of Cloud Computing?

Cost Savings
Perhaps the most significant cloud computing benefit is IT cost savings. Businesses, no matter what their type or size, exist to earn money while keeping capital and operational expenses to a minimum. With cloud computing, you can save substantial capital costs with zero in-house server storage and application requirements. The lack of on-premises infrastructure also removes the associated operational costs in the form of power, air conditioning and administration. You pay for what is used and disengage whenever you like; there is no invested IT capital to worry about. It’s a common misconception that only large businesses can afford to use the cloud, when in fact cloud services are extremely affordable for smaller businesses.

Reliability
With a managed service platform, cloud computing is much more reliable and consistent than in-house IT infrastructure.
Most providers offer a Service Level Agreement which guarantees 24/7/365 service and 99.99% availability. Your organization can benefit from a massive pool of redundant
IT resources, as well as a quick failover mechanism: if a server fails, hosted applications and services can easily be migrated to any of the available servers.

Manageability
Cloud computing provides enhanced and simplified IT management and maintenance capabilities through central administration of resources, vendor-managed infrastructure and SLA-backed agreements. IT infrastructure updates and maintenance are eliminated, as all resources are maintained by the service provider. You enjoy a simple web-based user interface for accessing software, applications and services, without the need for installation, and an SLA ensures the timely and guaranteed delivery, management and maintenance of your IT services.

Strategic Edge
Ever-increasing computing resources give you a competitive edge over competitors, as the time you require for IT procurement is virtually nil. Your company can deploy mission-critical applications that deliver significant business benefits, without any upfront costs and with minimal provisioning time. Cloud computing allows you to forget about technology and focus on your key business activities and objectives. It can also help you to reduce the time needed to bring newer applications and services to market.

UNIT - 5
VIRTUALIZATION

1. What is virtualization and cloud computing?
“Virtualization software makes it possible to run multiple operating systems and multiple applications on the same server at the same time,” said Mike Adams, director of product marketing at VMware, a pioneer in virtualization and cloud software and services. “It enables businesses to reduce IT costs while increasing the efficiency, utilization and flexibility of their existing computer hardware.”

The technology behind virtualization is known as a virtual machine monitor (VMM) or virtual manager, which separates compute environments from the actual physical infrastructure.
  • 44. IIIB.Sc. – Semester–6 – ComputerScience CloudComputing 44 Virtualization makes servers, workstations, storage and other systems independent of the physical hardware layer, said John Livesay, vice president of InfraNet, a network infrastructure services provider. "This is done by installing a Hypervisor on top of the hardware layer, where the systems are then installed." 2. How is virtualization different from cloud computing? Essentially, virtualization differs from cloud computing because virtualization is software that manipulates hardware, while cloud computing refers to a service that results from that manipulation. "Virtualization is a foundational element of cloud computing and helps deliver on the value of cloud computing," Adams said. "Cloud computing is the delivery of shared computing resources, software or data — as a service and on-demand through the Internet." Most of the confusion occurs because virtualization and cloud computing work together to provide different types of services, as is the case with private clouds. The cloud can, and most often does, include virtualization products to deliver the compute service, said Rick Philips, vice president of compute solutions at IT firm Weidenhammer. "The difference is that a true cloud provides self-service capability, elasticity, automated management, scalability and pay-as you go service that is not inherent in virtualization." To best understand the advantages of virtualization, consider the difference between private and public clouds. "Private cloud computing means the client owns or leases the hardware and software that provides the consumption model," Livesay said. With public cloud computing, users pay for resources based on usage. "You pay for resources as you go, as you consume them, from a [vendor] that is providing such resources to multiple clients, often in a co-tenant scenario." A private cloud, in its own virtualized environment, gives users the best of both worlds. 
It can give users more control and the flexibility of managing their own systems, while providing the consumption benefits of cloud computing, Livesay said.
On the other hand, a public cloud is an environment open to many users, built to serve multi-tenanted requirements, Philips said. "There are some risks associated here," he said, such as having bad neighbors and potential latency in performance.
In contrast, with virtualization, companies can maintain and secure their own "castle," Philips said. This "castle" provides the following benefits:
• Maximize resources — Virtualization can reduce the number of physical systems you need to acquire, and you can get more value out of the servers. Most traditionally built systems are underutilized; virtualization allows maximum use of the hardware investment.
• Multiple systems — With virtualization, you can also run multiple types of applications, and even run different operating systems for those applications, on the same physical hardware.
• IT budget integration — When you use virtualization, management, administration and all the attendant requirements of managing your own infrastructure remain a direct cost of your IT operation.

3. What is the need of virtualization?
Can anyone explain to me why virtualization is needed for cloud computing? A single instance of IIS and Windows Server can host multiple web applications. Then why do we need to run multiple instances of an OS on a single machine? How can this lead to more efficient utilization of resources?
Virtualization is convenient for cloud computing for a variety of reasons:
1. Cloud computing is much more than a web app running in IIS. Active Directory isn't a web app. SQL Server isn't a web app. To get the full benefit of running code in the cloud, you need the option to install a wide variety of services in the cloud nodes, just as you would in your own IT data center. Many of those services are not web apps governed by IIS. If you only look at the cloud as a web app, then you'll have difficulty building anything that isn't a web app.
2. The folks running and administering the cloud hardware under the covers need ultimate authority and control to shut down, suspend, and occasionally relocate your cloud code to a different physical machine. If some bit of code in your cloud app goes rogue and runs out of control, it's much more difficult to shut down that service or that machine when the code is running directly on the physical hardware than it is when the rogue code is running in a VM managed by a hypervisor.
3. Resource utilization - multiple tenants (VMs) execute on the same physical hardware, but with much stronger isolation from each other than IIS's process walls. Lower cost per tenant, higher income per unit of hardware.

COST
Depending on your solution, you can have a cost-free datacenter.
You do have to shell out the money for the physical server itself, but there are options for free virtualization software and free operating systems. Microsoft's Virtual Server and VMware Server are free to download and install. If you use a licensed operating system, of course that will cost money. For instance, if you wanted five instances of Windows Server on that physical server, then you're going to have to pay for the licenses. That said, if you were to use a free version of Linux for the host and guest operating systems, then all you've had to pay for is the physical server.
Naturally, there is an element of "you get what you pay for." There's a reason most organizations have paid to install an OS on their systems. When you install a free OS, there is often a higher total cost of operation, because it can be more labor intensive to manage the OS and apply patches.
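To make this "you get what you pay for" trade-off concrete, here is a minimal sketch of the arithmetic; every price, hour count, and rate below is an invented assumption for illustration, not a real figure.

```python
# Illustrative only: total cost of operation for licensed vs. free guest
# OSes. All prices, hours, and rates are made-up assumptions.
def total_cost(instances, license_price, yearly_admin_hours, hourly_rate):
    licensing = instances * license_price      # up-front license spend
    labor = yearly_admin_hours * hourly_rate   # ongoing management labor
    return licensing + labor

licensed = total_cost(5, license_price=900, yearly_admin_hours=40, hourly_rate=50)
free_os  = total_cost(5, license_price=0, yearly_admin_hours=150, hourly_rate=50)
print(licensed, free_os)  # -> 6500 7500
```

With these made-up numbers the free OS ends up costlier overall, matching the point that a free OS can carry a higher total cost of operation because of the extra management labor.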
Administration
Having all your servers in one place reduces your administrative burden. According to VMware, virtualization can improve your administrator-to-server ratio from 1:10 to 1:30. What this means is that you can save time in your daily server administration, or add more servers, by having a virtualized environment. The following factors ease your administrative burden:
• A centralized console allows quicker access to servers.
• CDs and DVDs can be quickly mounted using ISO files.
• New servers can be quickly deployed.
• New virtual servers can be deployed more inexpensively than physical servers.
• RAM and disk space can be quickly allocated to virtual servers.
• Virtual servers can be moved from one server to another.

Fast deployment
Because every virtual guest server is just a file on a disk, it's easy to copy (or clone) a system to create a new one. To copy an existing server, just copy the entire directory of the current virtual server. This can be used in the event the physical server fails, or if you want to test out a new application to ensure that it will work and play well with the other tools on your network. Virtualization software allows you to make clones of your work environment for these endeavors.
Also, not everyone in your organization is going to be doing the same tasks. As such, you may want different work environments for different users. Virtualization allows you to do this.

Reduced Infrastructure Costs
We already talked about how you can cut costs by using free servers and clients, like Linux, as well as free distributions of Windows Virtual Server, Hyper-V, or VMware. But there are also reduced costs across your organization. If you reduce the number of physical servers you use, then you save money on hardware, cooling, and electricity. You also reduce the number of network ports, console video ports, mouse ports, and rack space.
Some of the savings you realize include:
• Increased hardware utilization by as much as 70 percent
• Decreased hardware and software capital costs by as much as 40 percent
• Decreased operating costs by as much as 70 percent

4. Explain the limitations of server virtualization.
The benefits of server virtualization can be so enticing that it's easy to forget that the technique isn't without its share of limitations. It's important for a network administrator to research server virtualization, and his or her own network's architecture and needs, before attempting to engineer a solution.
For servers dedicated to applications with high demands on processing power, virtualization isn't a good choice. That's because virtualization essentially divides the server's processing power up among the virtual servers. When the server's processing power can't meet application demands, everything slows down.
It's also unwise to overload a server's CPU by creating too many virtual servers on one physical machine. The more virtual machines a physical server must support, the less processing power each server can receive. In addition, there's a limited amount of disk space on physical servers. Too many virtual servers could impact the server's ability to store data.
Another limitation is migration. Right now, it's only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processor. If a network uses one server that runs on an Intel processor and another that uses an AMD processor, it's impossible to port a virtual server from one physical machine to the other.
Many companies are investing in server virtualization despite its limitations. As server virtualization technology advances, the need for huge data centers could decline. Server power consumption and heat output could also decrease, making server virtualization not only financially attractive, but also a green initiative.

HARDWARE VIRTUALIZATION

5. Explain full virtualization.
In computer science, full virtualization is a virtualization technique used to provide a certain kind of virtual machine environment, namely, one that is a complete simulation of the underlying hardware. Full virtualization is possible only with the right combination of hardware and software elements. For example, it was not possible with most of IBM's System/360 series, the exception being the IBM System/360-67; nor was it possible with IBM's early System/370 system.
IBM added virtual memory hardware to the System/370 series in 1972. This is not the same as Intel VT-x rings, which provide a higher privilege level for the hypervisor so that it can properly control virtual machines requiring full access to Supervisor and Program (User) modes.
Full virtualization:
1. Guest operating systems are unaware of each other.
2. Provides support for unmodified guest operating systems.
3. The hypervisor interacts directly with the hardware, such as the CPU and disks.
4. The hypervisor allows multiple OSs to run simultaneously on the host computer.
5. Each guest server runs on its own operating system.
6. A few implementations: Oracle's VirtualBox, VMware Server, Microsoft Virtual PC.

Advantages:
1. This type of virtualization provides the best isolation and security for virtual machines.
2. Truly isolated multiple guest OSs can run simultaneously on the same hardware.
3. It is the only option that requires no hardware assist or OS assist to virtualize sensitive and privileged instructions.

Limitations:
1. Full virtualization is usually a bit slower, because of all the emulation.
2. The hypervisor contains the device drivers, and it might be difficult for new device drivers to be installed by users.

6. Explain paravirtualization.
Paravirtualization is virtualization in which the guest operating system (the one being virtualized) is aware that it is a guest and accordingly has drivers that, instead of issuing hardware commands, simply issue commands directly to the host operating system. This includes memory and thread management as well, which usually require privileged instructions that are unavailable to the guest.
Paravirtualization:
1. Unlike full virtualization, guest servers are aware of one another.
2. The hypervisor does not need large amounts of processing power to manage the guest OSs.
3. The entire system works as a cohesive unit.

Advantages:
1. The guest OS can communicate directly with the hypervisor.
2. This is efficient virtualization.
3. It allows users to make use of new or modified device drivers.

Limitations:
1. Paravirtualization requires the guest OS to be modified in order to interact with the paravirtualization interfaces.
2. It raises significant support and maintainability issues in production environments.

7. Explain partial virtualization.
In partial virtualization, including address space virtualization, the virtual machine simulates multiple instances of much of an underlying hardware environment, particularly address spaces. Usually, this means that entire operating systems cannot run in the virtual machine—which would be the sign of full virtualization—but that many applications can run. A key form of partial virtualization is address space virtualization, in which each virtual machine consists of an independent address space. This capability requires address relocation hardware, and has been present in most practical examples of partial virtualization.
Partial virtualization was an important historical milestone on the way to full virtualization. It was used in the first-generation time-sharing system CTSS, in the IBM M44/44X experimental paging system, and arguably in systems like MVS and the Commodore 64 (a couple of 'task switch' programs). The term could also be used to describe any operating system that provides separate address spaces for individual users or processes, including many that today would not be considered virtual machine systems. Experience with partial virtualization, and its limitations, led to the creation of the first full virtualization system (IBM's CP-40, the first iteration of CP/CMS, which would eventually become IBM's VM family). Many more recent systems, such as Microsoft Windows and Linux, also use this basic approach.
Partial virtualization is significantly easier to implement than full virtualization. It has often provided useful, robust virtual machines, capable of supporting important applications. Partial virtualization has proven highly successful for sharing computer resources among multiple users.
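As a toy model of address space virtualization, the address relocation hardware mentioned above can be imagined as base-plus-limit relocation: each virtual machine sees an independent, zero-based address space that is relocated into machine memory. The class name, base addresses, and sizes below are invented for illustration.

```python
# Toy base+limit relocation: each "VM" owns an independent address
# space, the key form of partial virtualization described above.
class AddressSpace:
    def __init__(self, base, limit):
        self.base, self.limit = base, limit

    def translate(self, virtual_addr):
        # Relocation hardware checks the limit, then adds the base.
        if not 0 <= virtual_addr < self.limit:
            raise MemoryError("address outside this VM's address space")
        return self.base + virtual_addr

vm_a = AddressSpace(base=0x1000, limit=0x800)
vm_b = AddressSpace(base=0x9000, limit=0x800)
print(hex(vm_a.translate(0x10)), hex(vm_b.translate(0x10)))  # -> 0x1010 0x9010
```

The same virtual address in two different machines maps to two different machine addresses, which is exactly the isolation the text describes.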
DESKTOP VIRTUALIZATION

8. Explain software virtualization.
Managing applications and their distribution is a typical task for IT departments. Installation mechanisms differ from application to application. Some programs require certain helper applications or frameworks, and these may conflict with existing applications.
Software virtualization is able to abstract the software installation procedure and create virtual software installations. Virtualized software is an application that is "installed" into its own self-contained unit. Examples of software virtualization are VMware software, VirtualBox, etc. In the next pages, we are going to see how to install a Linux OS and a Windows OS in a VMware application.

Advantages of Software Virtualization
1) Client deployments become easier: By copying a file to a workstation, or linking a file on a network, we can easily install virtual software.
2) Easy to manage: Managing updates becomes a simpler task. You update in one place and deploy the updated virtual application to all clients.
3) Software migration: Without software virtualization, moving from one software platform to another takes much time for deployment and impacts end-user systems. With the help of a virtualized software environment, migration becomes easier.

9. Explain storage virtualization.
Traditionally, there has been a strong link between the physical host and its locally installed storage devices. However, that paradigm has been changing drastically, and local storage is almost no longer needed. As technology progresses, more advanced storage devices are coming to the market that provide more functionality and make local storage obsolete. Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers.
Operating systems and applications with device access believe they are reading from and writing to the disks directly by themselves. In reality, the controllers configure the local storage in RAID groups and present the storage to the operating system depending upon the configuration. The storage is abstracted, and the controller determines how to write the data, or retrieve the requested data, for the operating system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location without needing to understand how to write to the physical media.
WAN accelerators: Instead of sending multiple copies of the same data over the WAN, WAN accelerators cache the data locally and present the re-requested blocks at LAN speed, without impacting WAN performance.
SAN and NAS: Storage is presented to the operating system over the Ethernet network. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive the operating instructions as if the storage were a locally attached device.
Storage tiering: Using the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the highest-performing storage pool. The least-used data is placed on the weakest-performing storage pool. This operation is done automatically, without any interruption of service to the data consumer.

Advantages of Storage Virtualization
• Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not necessarily compromised.
• The storage devices can perform advanced functions like replication, deduplication, and disaster recovery.
• By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and protected.

10. Explain memory virtualization.
In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster. There are two types of memory virtualization: software-based and hardware-assisted memory virtualization.
Because of the extra level of memory mapping introduced by virtualization, ESXi can effectively manage memory across all virtual machines. Some of the physical memory of a virtual machine might be mapped to shared pages or to pages that are unmapped, or swapped out. A host performs virtual memory management without the knowledge of the guest operating system and without interfering with the guest operating system’s own memory management subsystem. The VMM for each virtual machine maintains a mapping from the guest operating system's physical memory pages to the physical memory pages on the underlying machine. (VMware refers to the underlying host physical pages as “machine” pages and the guest operating system’s physical pages as “physical” pages.) Each virtual machine sees a contiguous, zero-based, addressable physical memory space. The underlying machine memory on the server used by each virtual machine is not necessarily contiguous.
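The two-level mapping just described — the guest OS maps guest virtual pages to guest physical pages, and the VMM maps guest physical pages to machine pages — can be sketched with two tiny lookup tables. All page numbers below are invented for illustration.

```python
# Two-level address translation, as in the text: the guest OS owns the
# first table, the VMM owns the second. Page numbers are illustrative.
guest_page_table = {0: 7, 1: 3}    # guest virtual page  -> guest physical page
vmm_page_table   = {7: 42, 3: 19}  # guest physical page -> machine page

def to_machine_page(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]  # guest OS mapping
    return vmm_page_table[guest_physical]                  # VMM mapping

print(to_machine_page(0), to_machine_page(1))  # -> 42 19
```

A shadow (or nested) page table is effectively a cache of this composed mapping — here {0: 42, 1: 19} — so the processor can translate guest virtual to machine addresses in one step.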
For both software-based and hardware-assisted memory virtualization, the guest virtual to guest physical addresses are managed by the guest operating system. The hypervisor is only responsible for translating the guest physical addresses to machine addresses. Software-based memory virtualization combines the guest's virtual-to-machine address mappings in software and saves them in the shadow page tables managed by the hypervisor. Hardware-assisted memory virtualization uses a hardware facility to generate the combined mappings from the guest's page tables and the nested page tables maintained by the hypervisor.
The ESXi implementation of memory virtualization can be pictured as follows:
• Boxes represent pages, and arrows show the different memory mappings.
• The arrows from guest virtual memory to guest physical memory show the mapping maintained by the page tables in the guest operating system. (The mapping from virtual memory to linear memory for x86-architecture processors is not shown.)
• The arrows from guest physical memory to machine memory show the mapping maintained by the VMM.
• The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables, also maintained by the VMM. The underlying processor running the virtual machine uses the shadow page table mappings.
• Software-based memory virtualization: ESXi virtualizes guest physical memory by adding an extra level of address translation.
• Hardware-assisted memory virtualization: Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory virtualization by using two layers of page tables.

11. Explain data virtualization.
Data virtualization is an umbrella term used to describe any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located. Data virtualization is synonymous with information agility: it delivers a simplified, unified, and integrated view of trusted business data in real time or near real time, as needed by the consuming applications, processes, analytics, or business users. Data virtualization integrates data from disparate sources, locations and formats, without replicating the data, to create a single
"virtual" data layer that delivers unified data services to support multiple applications and users. The result is faster access to all data, less replication and cost, and more agility to change.
Data virtualization is modern data integration. It performs many of the same transformation and quality functions as traditional data integration (Extract-Transform-Load (ETL), data replication, data federation, Enterprise Service Bus (ESB), etc.), but leverages modern technology to deliver real-time data integration at lower cost, with more speed and agility. It can replace traditional data integration and reduce the need for replicated data marts and data warehouses in many cases, but not entirely.
Data virtualization is also an abstraction layer and a data services layer. In this sense it is highly complementary to original and derived data sources, ETL, ESB and other middleware, applications, and devices, whether on-premise or cloud-based, providing flexibility between layers of information and business technology.
The following list helps in understanding the many forms of data virtualization:
1. Data blending - This is often included as part of a business intelligence (BI) tool's semantic universe layer, or is a new module offered by a predominantly BI vendor. Data blending is able to combine multiple sources (a limited list of structured or big data) to feed the BI tool, but the output is only available to this tool and cannot be accessed by any other external application for consumption.
2. Data services module - Typically these are offered at additional cost by Data Integration Suite (ETL / MDM / Data Quality) or data warehouse vendors. The suite is usually very strong in other areas.
When it comes to data virtualization, some features shared with the suite, such as modeling, transformation, and quality functions, are very robust, but the data virtualization engine, query optimization, caching, virtual security layers, flexibility of the data model for unstructured sources, and overall performance are weak. This is because the product is designed to prototype ETL or MDM, not to compete with it in production use.
3. SQLification products - This is an emerging offering, particularly among Big Data and Hadoop vendors. These products "virtualize" the underlying big data technologies and allow them to be combined with relational data sources and flat files and queried using standard SQL. This can be good for projects focused on that particular big data stack, but not beyond.
4. Cloud data services - These products are often deployed in the cloud and have pre-packaged integrations to SaaS and cloud applications, cloud databases, and a few desktop and on-premise tools like Excel. Rather than a true data virtualization product with tiered views and delegatable query execution, these products expose normalized APIs across cloud sources for easy data exchange in projects of medium volume. Projects involving big data analytics, major enterprise systems, mainframes, large databases, flat files and unstructured data are out of scope.
5. Data virtualization platform - Built from the ground up to provide data virtualization capabilities for the enterprise in a many-to-many fashion through a unified "virtual" data layer. Designed for agility and speed in a wide range of use cases, agnostic to sources and consumers, it competes and collaborates with other, less efficient middleware. An example is the Denodo Platform.
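As a minimal sketch of the unified "virtual" data layer idea (not any particular vendor's API), the example below joins two disparate, entirely invented sources at query time, without copying the data into a warehouse first.

```python
# Hypothetical virtual data layer: one query interface over two sources
# (imagine a SQL table and a REST feed), with no data replication.
customers_db = [{"id": 1, "name": "Acme"}]            # source 1 (e.g. RDBMS)
orders_feed  = [{"customer_id": 1, "total": 250.0},   # source 2 (e.g. API)
                {"customer_id": 1, "total": 100.0}]

def unified_customer_view(customer_id):
    """Join both sources on demand, at query time."""
    cust = next(c for c in customers_db if c["id"] == customer_id)
    total = sum(o["total"] for o in orders_feed
                if o["customer_id"] == customer_id)
    return {"name": cust["name"], "order_total": total}

print(unified_customer_view(1))  # -> {'name': 'Acme', 'order_total': 350.0}
```

The consuming application sees one integrated record; where each field physically lives, and in what format, stays hidden behind the layer.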
12. Explain network virtualization.
Network virtualization (NV) is defined by the ability to create logical, virtual networks that are decoupled from the underlying network hardware, to ensure the network can better integrate with and support increasingly virtual environments. Over the past decade, organizations have been adopting virtualization technologies at an accelerated rate.
Network virtualization abstracts networking connectivity and services that have traditionally been delivered via hardware into a logical virtual network that is decoupled from, and runs independently on top of, a physical network in a hypervisor. Beyond L2-L3 services like switching and routing, NV typically incorporates virtualized L4-L7 services, including firewalling and server load balancing. NV solves a lot of the networking challenges in today's data centers, helping organizations centrally program and provision the network on demand, without having to physically touch the underlying infrastructure. With NV, organizations can simplify how they roll out, scale and adjust workloads and resources to meet evolving computing needs.

What Exactly is the Definition of Network Virtualization?
Virtualization is the ability to simulate a hardware platform, such as a server, storage device or network resource, in software. All of the functionality is separated from the hardware and simulated as a "virtual instance," with the ability to operate just like the traditional hardware solution would. Of course, somewhere there is host hardware supporting the virtual instances of these resources, but this hardware can be a generic, off-the-shelf platform. In addition, a single hardware platform can be used to support multiple virtual devices or machines, which are easy to spin up or down as needed. As a result, a virtualized solution is typically much more portable, scalable and cost-effective than a traditional hardware-based solution.
Applying Virtualization to the Network
When applied to a network, virtualization creates a logical, software-based view of the hardware and software networking resources (switches, routers, etc.). The physical networking devices are simply responsible for forwarding packets, while the virtual network (software) provides an intelligent abstraction that makes it easy to deploy and manage network services and underlying network resources. As a result, NV can align the network to better support virtualized environments.

NV and White Box Switching
As it stands, the trend is toward using NV to create overlay networks on top of physical hardware. Concurrently, network virtualization reduces costs on the physical (underlay) network through the use of white box switches. Referring to the use of generic, off-the-shelf switches and routers, white box networking limits expenditure by avoiding expensive proprietary switches. NV also contributes to decreased expenses by relying on the intelligence of the overlay to provide the necessary advanced network functionality and features.

Virtual Networks
NV can be used to create virtual networks within a virtualized infrastructure. This enables NV to support the complex requirements of multi-tenancy environments. NV can deliver a virtual network within a virtual environment that is truly separate from other network resources. In these instances, NV can separate traffic into a zone or container to ensure traffic does not mix with other resources or the transfer of other data.
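The tenant traffic separation described above can be modelled as a toy overlay in which delivery is only permitted inside the same virtual network. All VM and tenant names below are invented; a real overlay enforces this with encapsulation and tagging rather than a dictionary lookup.

```python
# Toy multi-tenant overlay: each VM is tagged with its tenant's virtual
# network, and traffic may only flow within a single tenant's network.
tenant_of = {"vm-a": "tenant1", "vm-b": "tenant1", "vm-c": "tenant2"}

def can_deliver(src_vm, dst_vm):
    # The overlay drops any frame that would cross tenant boundaries.
    return tenant_of[src_vm] == tenant_of[dst_vm]

print(can_deliver("vm-a", "vm-b"), can_deliver("vm-a", "vm-c"))  # -> True False
```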
MICROSOFT IMPLEMENTATION

13. Explain Microsoft Hyper-V.
Microsoft Windows Server 2008 Hyper-V (Hyper-V) is a hypervisor-based virtualization technology that is a feature of select versions of Windows Server 2008. Microsoft's strategy and investments in virtualization—which span from the desktop to the datacenter—help IT professionals and developers implement Microsoft's Dynamic IT initiative, whereby they can build systems with the flexibility and intelligence to automatically adjust to changing business conditions by aligning computing resources with strategic objectives.
Hyper-V offers customers a scalable and high-performance virtualization platform that plugs into customers' existing IT infrastructures and enables them to consolidate some of the most demanding workloads. In addition, the Microsoft System Center product family gives customers a single set of integrated tools to manage physical and virtual resources, helping customers create a more agile and dynamic datacenter.

Architecture
Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. A hypervisor instance has to have at least one parent partition, running a supported version of Windows Server (2008 and later). The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V.
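The parent/child partition hierarchy just described can be sketched as a tiny object model. The class and method names here are invented for illustration, and create_child merely stands in for the real hypercall API.

```python
# Toy model of Hyper-V's partition hierarchy: the parent partition
# (which runs the virtualization stack) creates child partitions that
# host the guest OSs. Names are hypothetical.
class Partition:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []

    def create_child(self, name):
        # Stand-in for a child-partition creation hypercall.
        child = Partition(name, parent=self)
        self.children.append(child)
        return child

parent = Partition("parent partition (virtualization stack)")
guest1 = parent.create_child("child partition: guest OS 1")
guest2 = parent.create_child("child partition: guest OS 2")
print(len(parent.children), guest1.parent is parent)  # -> 2 True
```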
Currently only the following operating systems support Enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than other operating systems that need to use slower emulated hardware:
• Windows Server 2008 and later
• Windows Vista and later
• Linux with a 3.4 or later kernel
• FreeBSD

Microsoft Hyper-V Server
The stand-alone Hyper-V Server variant does not require an existing installation of Windows Server 2008 or Windows Server 2008 R2. The stand-alone installation is called Microsoft Hyper-V Server for the non-R2 version and Microsoft Hyper-V Server 2008 R2 for the R2 version. Microsoft Hyper-V Server is built with components of Windows and has a Windows Server Core user experience. None of the other roles of Windows Server are available in Microsoft Hyper-V Server. This version supports up to 64 VMs per system. System requirements of Microsoft Hyper-V Server are the same as for the supported guest operating systems and processor, but differ in the following:
• RAM: Minimum: 1 GB; Recommended: 2 GB or greater; Maximum: 1 TB.
• Available disk space: Minimum: 8 GB; Recommended: 20 GB or greater.
Hyper-V Server 2012 R2 has the same capabilities as the standard Hyper-V role in Windows Server 2012 R2 and supports 1024 active VMs.

14. Explain VMware features.
Features
VMware Server, the successor to VMware GSX Server, enables users to quickly provision new server capacity by partitioning a physical server into multiple virtual machines, bringing the powerful benefits of virtualization to every server.
VMware Server is feature-packed with the following market-leading capabilities:
• Support for any standard x86 hardware
• Support for a wide variety of Linux and Windows host operating systems, including 64-bit operating systems
• Support for a wide variety of Linux, NetWare, Solaris x86, and Windows guest operating systems, including 64-bit operating systems
• Support for Virtual SMP, enabling a single virtual machine to span multiple physical processors
• Quick and easy, wizard-driven installation similar to any desktop software
• Quick and easy virtual machine creation with a virtual machine wizard
• Virtual machine monitoring and management with an intuitive, user-friendly interface
VMware Server supports 64-bit virtual machines and Intel Virtualization Technology, a set of Intel hardware platform enhancements specifically designed to enhance virtualization solutions.
"Central Transport has saved hundreds of thousands of dollars with VMware virtual infrastructure," said Craig Liess, server administrator for Central Transport. "Introducing a new server virtualization product including Virtual SMP and support for 64-bit operating systems and Intel Virtualization Technology is a natural progression for VMware, furthering the company's leadership in the market."
15. Explain VMware Infrastructure?
VMware is the biggest name in virtualization, and they offer VMware Infrastructure, which includes the latest version of VMware ESX Server 3.5 and VirtualCenter 2.5. VMware Infrastructure allows VMware customers to streamline the management of IT environments.
VMware Infrastructure is VMware's third-generation, production-ready virtualization suite. According to a study of VMware customers, 90 percent of companies surveyed use VMware Infrastructure in production environments. With more than 120 industry and technology awards, VMware provides a much-anticipated complete solution that meets customer demand for a next-generation firmware hypervisor, enhanced virtual infrastructure capabilities, and advanced management and automation solutions.
The new features in VMware Infrastructure are targeted at a broad range of customers and IT environments, from midsize and small businesses to branch offices and corporate datacenters within Global 100 corporations, and extend the value of all three layers of the virtualization suite.
Features
Virtualization platform enhancements help deliver new levels of performance, scalability, and compatibility for running the most demanding workloads in virtual machines:
• Expanded storage and networking choices, such as support for SATA local storage and 10 Gigabit Ethernet, as well as enablement of InfiniBand devices.
• Support for TCP Segmentation Offload and jumbo frames, which reduces the CPU overhead associated with processing network I/O.
• Support for hardware-nested page tables, i.e. in-processor assists for memory virtualization.
• Support for paravirtualized Linux guest operating systems, enabling higher levels of performance through virtualization-aware operating systems.
• Support for virtual machines with 64 GB of RAM and physical machines with up to 128 GB of memory.
Virtual infrastructure capabilities help deliver increased infrastructure availability and resilience:
• VMware Storage VMotion enables live migration of virtual machine disks from one data storage system to another with no disruption or downtime.
• VMware Update Manager automates patch and update management for VMware ESX Server hosts and virtual machines.
• VMware Distributed Power Management is an experimental feature that reduces power consumption in the datacenter through intelligent workload balancing.
• VMware Guided Consolidation, a feature of VMware VirtualCenter, enables companies to get started with server consolidation in a step-by-step tutorial fashion.
16. Explain VirtualBox?
VirtualBox is a cross-platform virtualization application. What does that mean? For one thing, it installs on your existing Intel or AMD-based computers, whether they are running Windows, Mac, Linux or Solaris operating systems. Secondly, it extends the capabilities of your existing computer so that it can run multiple operating systems (inside multiple virtual machines) at the same time. So, for example, you can run Windows and Linux on your Mac, run Windows Server 2008 on your Linux server, run Linux on your Windows PC, and so on, all alongside your existing applications. You can install and run as many virtual machines as you like -- the only practical limits are disk space and memory.
VirtualBox is deceptively simple yet also very powerful. It can run everywhere from small embedded systems or desktop-class machines all the way up to datacenter deployments and even cloud environments. The techniques and features that VirtualBox provides are useful in several scenarios:
• Running multiple operating systems simultaneously. VirtualBox allows you to run more than one operating system at a time. This way, you can run software written for one operating system on another (for example, Windows software on Linux or a Mac) without having to reboot to use it. Since you can configure what kinds of "virtual" hardware should be presented to each such operating system, you can install an old operating system such as DOS or OS/2 even if your real computer's hardware is no longer supported by that operating system.
• Easier software installations. Software vendors can use virtual machines to ship entire software configurations. For example, installing a complete mail server solution on a real machine can be a tedious task. With VirtualBox, such a complex setup (often called an "appliance") can be packed into a virtual machine. Installing and running a mail server becomes as easy as importing such an appliance into VirtualBox.
• Testing and disaster recovery. Once installed, a virtual machine and its virtual hard disks can be considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported between hosts.
Here's a brief outline of VirtualBox's main features:
• Portability. VirtualBox runs on a large number of 32-bit and 64-bit host operating systems.
• No hardware virtualization required. For many scenarios, VirtualBox does not require the processor features built into newer hardware like Intel VT-x or AMD-V. As opposed to many other virtualization solutions, you can therefore use VirtualBox even on older hardware where these features are not present.
• Guest Additions: shared folders, seamless windows, 3D virtualization. The VirtualBox Guest Additions are software packages which can be installed inside supported guest systems to improve their performance and to provide additional integration and communication with the host system.
• Great hardware support. Among others, VirtualBox supports:
o Guest multiprocessing (SMP). VirtualBox can present up to 32 virtual CPUs to each virtual machine, irrespective of how many CPU cores are physically present on your host.
o USB device support. VirtualBox implements a virtual USB controller and allows you to connect arbitrary USB devices to your virtual machines without having to install device-specific drivers on the host. USB support is not limited to certain device categories.
o Hardware compatibility. VirtualBox virtualizes a vast array of virtual devices, among them many devices that are typically provided by other virtualization
platforms. That includes IDE, SCSI and SATA hard disk controllers, several virtual network cards and sound cards, virtual serial and parallel ports, and an Input/Output Advanced Programmable Interrupt Controller (I/O APIC), which is found in many modern PC systems. This eases cloning of PC images from real machines and importing of third-party virtual machines into VirtualBox.
o Full ACPI support. The Advanced Configuration and Power Interface (ACPI) is fully supported by VirtualBox. This eases cloning of PC images from real machines or third-party virtual machines into VirtualBox. With its unique ACPI power status support, VirtualBox can even report to ACPI-aware guest operating systems the power status of the host. For mobile systems running on battery, the guest can thus enable energy saving and notify the user of the remaining power (e.g. in full-screen modes).
o Multiscreen resolutions. VirtualBox virtual machines support screen resolutions many times that of a physical screen, allowing them to be spread over a large number of screens attached to the host system.
o Built-in iSCSI support. This unique feature allows you to connect a virtual machine directly to an iSCSI storage server without going through the host system. The VM accesses the iSCSI target directly without the extra overhead that is required for virtualizing hard disks in container files.
o PXE network boot. The integrated virtual network cards of VirtualBox fully support remote booting via the Preboot Execution Environment (PXE).
17. Explain Thin Client?
Desktop and mobile thin clients are solid-state devices that connect over a network to a centralized server where all processing and storage take place, providing reduced maintenance costs and minimal application updates, as well as higher levels of security and energy efficiency.
In fact, thin clients can be up to 80 percent more power-efficient than traditional desktop PCs with similar capabilities.
Sun
Sun's thin client solution is called Sun Ray, and it is an extremely popular product. Contributing to the demand for it is further market demand for Sun Virtual Desktop Infrastructure (VDI) Software 2.0, which has shipped on approximately 25 percent of Sun Ray units since its introduction in March 2008. Further, Sun Ray machines are able to display Solaris, Windows, or Linux desktops on the same device.
Sun Ray virtual display clients, Sun Ray Software, and Sun VDI Software 2.0 are key components of Sun's desktop virtualization offering, a set of desktop technologies and solutions within Sun's xVM virtualization portfolio.
Hewlett Packard
Hewlett Packard (HP) is certainly a well-known technology company, and their products extend into the world of thin clients. In fact, HP is the leading manufacturer of thin clients.
Offerings
In late 2008, HP introduced three thin client products, including the company's first mobile offering, that address business needs for a simpler, more secure, and easily managed computing infrastructure.
Thin clients are at the heart of HP's remote client portfolio of desktop virtualization solutions, which also includes the blade PC-based HP Consolidated Client Infrastructure platform, HP Virtual Desktop Infrastructure (VDI), blade workstations, remote deployment, and management software and services.
HP Compaq t5730 and t5735 Thin Clients
HP also offers its HP Compaq t5730 and t5735 Thin Clients. The HP Compaq t5730 is based on Microsoft Windows XPe, and select models include integrated WLAN. Based on Debian Linux, the HP Compaq t5735 supports a variety of open-source applications.
HP and VMware
HP made another effort to ensure they continue their thin client strides. In early 2009, HP announced that its entire line of thin clients is certified for VMware View, making the products even easier for customers to deploy in VMware environments.
Dell
Another well-known player in the world of client development is Dell, and they, too, offer a thin client (their first). But they are also touting environmental responsibility with a new line of PCs. Their most recent additions are a line of OptiPlex commercial desktops, Flexible Computing Solutions, and service offerings designed to reduce costs throughout the desktop life cycle.
CLOUD COMPUTING – LAB
Practical - 1. Cloud Deployment Models
Practical - 2. Creating a Warehouse Cloud App with Salesforce.com
Salesforce.com, best known for its CRM, also provides a big and growing framework for cloud computing and applications. With Force.com you can build apps faster and create applications without the concern of buying hardware or installing software.
First of all you will need to register for a Salesforce.com developer account using the hyperlink given below:
http://www.developerforce.com/events/regular/registration.php
Once you have a valid username and password, log in to Salesforce.com. In this exercise we will create a simple Warehouse application with the following objects:
Product – Fields: Name, Description, Price, Stock Quantity
Line Item – Fields: Invoice #, Product #, Units Sold, Total Value
Invoice – Fields: Description, Invoice Value, Invoice Status
Creating the Objects
To create the objects:
1. Go to Your User Name, located in the upper-right corner of the main page, and select Setup from the list.
2. The Personal Setup dialog will appear. Click on Create and then click on Objects.
3. In the next dialog click on the New Custom Object button.
4. In the next dialog set the object properties.
5. Check the option to Launch New Custom Tab Wizard after saving this custom object.
6. Then click the Save button.
Creating Tabs
1. In the next dialog, select the Tab Style you prefer and click Next.
2. Click Next again and then click on Save.
Creating Custom Fields & Relationships
1. Click on New under Custom Fields & Relationships.
2. To create the Description field, select Text in the next dialog and click Next.
3. Input the information such as field label, length, constraints, etc.
4. Click Next again and then click on Save.
Inserting Data into Objects
1. Go to the Home page.
2. Click on Customize My Tabs.
3. Select the objects you have just created and save.
4. Note that you are now able to create Products, Line Items and Invoices.
5. Input the required fields and click Save.
Practical - 3. Creating an Application in Salesforce.com using the Apex Programming Language
The Developer Console is an integrated development environment with a collection of tools you can use to create, debug, and test applications in your Salesforce organization. Follow these steps to open the Developer Console:
Step 1 − Log in to Salesforce.com using login.salesforce.com. Go to Name → Developer Console.
Step 2 − To open the Developer Console, click on Name → Developer Console and then click on Execute Anonymous.
Step 3 − Type the following code, which prints the numbers 1 to 10:

Integer i;
for (i = 1; i <= 10; i++) {
    System.debug('i = ' + i);
}

Step 4 − When we click on Execute, the debug logs will open. Once the log appears in the window, click on the log record.
Step 5 − Then type 'USER' in the filter box and the output statements will appear in the debug window. The 'USER' filter is used for filtering the output.
Practical – 4: Social Networks
Definition of a Social Network –
A social network is usually created by a group of individuals who have a set of common interests and objectives. There is usually a set of network formulators, followed by a broadcast to achieve network membership. This advertising happens in both public and private groups, depending upon the confidentiality of the network.
Components of Web 2.0 for Social Networks –
● Communities: Communities are an online space formed by a group of individuals to share their thoughts and ideas.
● Blogging: Blogs give the users of a social network the freedom to express their thoughts on a free-form basis and help in the generation and discussion of topics.
● Wikis: A wiki is a set of co-related pages on a particular subject that allows users to share content.
● File sharing/Podcasting: This is the facility which helps users to put their media files and related content online for other people of the network to see and contribute to.
● Mashups: This is the facility via which people on the Internet can combine services from multiple vendors to create a completely new service. An example is combining the location information from a mobile service provider with the map facility of Google Maps in order to find the exact location of a cell phone, just by entering its number.
Types and behavior of Social Networks –
The nature of social networks makes for its variety.
We have a huge number of types of social networks based on needs and goals. Keeping these in mind, the main categories identified are given below:
● Social Contact Networks: These networks are formed to keep in contact with friends and family, and are among the most popular sites on the Internet today. Examples: Orkut, Facebook and Twitter.
● Study Circles: These are social networks dedicated to students, with areas dedicated to study topics, placement-related queries and advanced research opportunities. Examples: FledgeWing and College Tonight.
● Social Networks for Specialist Groups: These networks are specifically designed for core field workers like doctors, scientists, engineers and members of corporate industries. Example: LinkedIn.
● Networks for Fine Arts: These networks are dedicated to people linked with music, painting and related arts. Examples: Amie Street and Buzznet.
● Sporting Networks: These networks are dedicated to people of the sporting fraternity and carry a gamut of information related to this field. An example is Athlinks.
● Social Networks for the 'Inventors': These are the networks for the very developers and architects who build social networks themselves. Examples: technical forums and mashup centers.
Life Cycle of Social Networks –
For any social network, there are a number of steps in its life cycle, and Web 2.0 concepts have a great influence on each of them. Consider the diagram below. For all the steps in the life cycle, Web 2.0 has provided tools and concepts which are not only cost-effective but very easy to implement. Often, online networks have a tendency to die out very fast due to a lack of proper tools to communicate.
Web 2.0 provides excellent communication mechanisms, like blogging and individual email filtering, to keep everyone in the network involved in the day-to-day activities of the network.
Figure. Life Cycle of Social Networks with Web 2.0
Impact of Social Networks using Web 2.0 –
The various implementations of social networks using Web 2.0 have already had a profound effect on society as a whole. One of the most important groups of people – the medical community – has already reaped significant benefits from the technology and is translating the same towards the betterment of public life.
Future Scope of Web 2.0 in Social Networks –
Web 2.0 has already contributed a great deal to social networks as well as other areas. However, the reach of the technology is not yet complete, and there are still a number of areas that need improvement before the true power of the technology integrated with social networks can truly be felt. The future of Web 2.0 itself will provide many more exciting features for social networks. As time progresses, the technology is becoming more secure, more transparent and much more user-oriented. New features like online video conferencing instead of scrap messages/blogs, and object-oriented programming, will also help in introducing new features within social networks.
Practical – 5: Case Study – Google App Engine
Google App Engine (often referred to as GAE or simply App Engine; the Java version is known by the acronym GAE/J) is a platform as a service (PaaS) cloud computing platform for developing and hosting web applications in Google-managed data centers. Applications are sandboxed and run across multiple servers. App Engine offers automatic scaling for web applications: as the number of requests increases for an application, App Engine automatically allocates more resources for the web application to handle the additional demand.
Google App Engine is free up to a certain level of consumed resources. Fees are charged for additional storage, bandwidth, or instance hours required by the application. It was first released as a preview version in April 2008, and came out of preview in September 2011.
Runtimes and frameworks
Currently, the supported programming languages are Python, Java (and, by extension, other JVM languages such as Groovy, JRuby, Scala, Clojure and Jython, as well as PHP via a special version of Quercus), and Go. Google has said that it plans to support more languages in the future, and that Google App Engine has been written to be language-independent.
Reliability and Support
All billed High Replication Datastore App Engine applications have a 99.95% uptime SLA.
Portability Concerns
Developers worry that their applications will not be portable from App Engine and fear being locked into the technology. In response, there are a number of projects to create open-source back-ends for the various proprietary/closed APIs of App Engine, especially the datastore. Although these projects are at various levels of maturity, none of them is at the point where installing and running an App Engine app is as simple as it is on Google's service. AppScale and TyphoonAE are two of the open-source efforts.
AppScale can run Python, Java, and Go GAE applications on EC2 and other cloud vendors.
TyphoonAE can run Python App Engine applications on any cloud that supports Linux machines. The web2py framework offers migration between SQL databases and Google App Engine; however, it doesn't support several App Engine-specific features such as transactions and namespaces.
Differences from other application hosting
Compared to other scalable hosting services such as Amazon EC2, App Engine provides more infrastructure to make it easy to write scalable applications, but can only run a limited range of applications designed for that infrastructure. App Engine's infrastructure removes many of the system administration and development challenges of building applications that scale to hundreds of requests per second and beyond. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary.
While other services let users install and configure nearly any *NIX-compatible software, App Engine requires developers to use only its supported languages, APIs, and frameworks. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Existing web applications that require a relational database will not run on App Engine without modification.
Per-day and per-minute quotas restrict bandwidth and CPU use, number of requests served, number of concurrent requests, and calls to the various APIs, and individual requests are terminated if they take more than 60 seconds or return more than 32 MB of data.
Differences between SQL and GQL
Google App Engine's datastore has a SQL-like query language called "GQL". GQL intentionally does not support the Join statement, because joins appear to be inefficient when queries span more than one machine. Instead, one-to-many and many-to-many relationships can be accomplished using ReferenceProperty().
This shared-nothing approach allows disks to fail without the system failing. Switching from a relational database to the Datastore requires a paradigm shift for developers when modelling their data. Unlike a relational database, the Datastore API is not relational in the SQL sense. The Java version supports asynchronous non-blocking queries using the Twig Object Datastore interface. This offers an alternative to using threads for parallel data processing.
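The join-free, reference-based modelling described above can be sketched in plain Python. This is an illustration only: the Entity class and the key and property names are invented for the example, not part of the real App Engine API (real code would define models with google.appengine.ext.db and ReferenceProperty).

```python
# Sketch of the Datastore's join-free modelling style in plain Python.
# Entity, author_key, etc. are illustrative stand-ins, not real GAE APIs.

class Entity:
    """A stored item identified by a key, as in the App Engine datastore."""
    def __init__(self, key, **props):
        self.key = key
        self.props = props

# One-to-many: each Book carries its Author's key (a "reference property")
# instead of relying on a SQL JOIN at query time.
authors = {"a1": Entity("a1", name="Kernighan")}
books = [
    Entity("b1", title="The C Programming Language", author_key="a1"),
    Entity("b2", title="The Practice of Programming", author_key="a1"),
]

def books_by_author(author_key):
    # Conceptually: SELECT * FROM Book WHERE author = :key in GQL.
    return [b for b in books if b.props["author_key"] == author_key]

titles = [b.props["title"] for b in books_by_author("a1")]
print(titles)
```

Because each query touches only one "kind" of entity filtered by a stored key, it can be answered by a single machine, which is exactly why GQL can afford to omit Join.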
Practical – 6: Case Study – Amazon EC2
Amazon Elastic Compute Cloud (EC2)
Elastic IP addresses allow you to allocate a static IP address and programmatically assign it to an instance. You can enable monitoring on an Amazon EC2 instance using Amazon CloudWatch in order to gain visibility into resource utilization, operational performance, and overall demand patterns (including metrics such as CPU utilization, disk reads and writes, and network traffic). You can create an Auto Scaling group using the Auto Scaling feature to automatically scale your capacity on certain conditions based on metrics that Amazon CloudWatch collects. You can also distribute incoming traffic by creating an elastic load balancer using the Elastic Load Balancing service.
Amazon Elastic Block Store (EBS) volumes provide network-attached persistent storage to Amazon EC2 instances. Point-in-time consistent snapshots of EBS volumes can be created and stored on Amazon Simple Storage Service (Amazon S3).
Amazon S3 is a highly durable and distributed data store. With a simple web services interface, you can store and retrieve large amounts of data as objects in buckets (containers) at any time, from anywhere on the web, using standard HTTP verbs. Copies of objects can be distributed and cached at 14 edge locations around the world by creating a distribution using the Amazon CloudFront service – a web service for content delivery (static or streaming content).
Amazon SimpleDB is a web service that provides the core functionality of a database – real-time lookup and simple querying of structured data – without the operational complexity. You can organize the dataset into domains and can run queries across all of the data stored in a particular domain. Domains are collections of items that are described by attribute-value pairs.
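The domain/item/attribute-value model described above can be sketched as a small in-memory store. The function names loosely mirror SimpleDB's PutAttributes and Select operations but are simplified stand-ins written for this example, not the real web-service API:

```python
# In-memory sketch of a SimpleDB-style domain: a collection of items,
# each item being a set of attribute-value pairs. put_attributes and
# select are illustrative names, not the real SimpleDB API.

domain = {}  # item name -> dict of attribute-value pairs

def put_attributes(item_name, attributes):
    # Add or update attribute-value pairs on an item in the domain.
    domain.setdefault(item_name, {}).update(attributes)

def select(attribute, value):
    # Simple query: names of all items whose attribute equals value.
    return [name for name, attrs in domain.items()
            if attrs.get(attribute) == value]

put_attributes("song1", {"artist": "Miles Davis", "year": "1959"})
put_attributes("song2", {"artist": "Miles Davis", "year": "1970"})
put_attributes("song3", {"artist": "John Coltrane", "year": "1959"})

print(select("year", "1959"))
```

Note that items in a domain need not share a schema: each item simply carries whatever attribute-value pairs were put on it, which is the "without the operational complexity" trade-off the text describes.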
Amazon Relational Database Service (Amazon RDS) provides an easy way to set up, operate and scale a relational database in the cloud. You can launch a DB Instance and get access to a full-featured MySQL database without worrying about common database administration tasks like backups, patch management, etc.
Amazon Simple Queue Service (Amazon SQS) is a reliable, highly scalable, hosted distributed queue for storing messages as they travel between computers and application components.
Amazon Simple Notification Service (Amazon SNS) provides a simple way to notify applications or people from the cloud by creating Topics and using a publish-subscribe protocol.
Amazon Elastic MapReduce provides a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) and allows you to create customized JobFlows. A JobFlow is a sequence of MapReduce steps.
Amazon Virtual Private Cloud (Amazon VPC) allows you to extend your corporate network into a private cloud contained within AWS. Amazon VPC uses IPsec tunnel mode to enable you to create a secure connection between a gateway in your data center and a gateway in AWS.
Amazon Route 53 is a highly scalable DNS service that allows you to manage your DNS records by creating a HostedZone for every domain you would like to manage.
AWS Identity and Access Management (IAM) enables you to create multiple Users with unique security credentials and manage the permissions for each of these Users within your AWS Account. IAM is natively integrated into AWS services: no service APIs have changed to support IAM, and existing applications and tools built on top of the AWS service APIs will continue to work when using IAM.
AWS also offers various payment and billing services that leverage Amazon's payment infrastructure. All AWS infrastructure services offer utility-style pricing that requires no long-term commitments or contracts. For example, you pay by the hour for Amazon EC2 instance usage and pay by the gigabyte for storage and data transfer in the case of Amazon S3.
More information about each of these services and their pay-as-you-go pricing is available on the AWS website.
Note that using the AWS cloud doesn't require sacrificing the flexibility and control you've grown accustomed to: you are free to use the programming model, language, or operating system (Windows, OpenSolaris or any flavor of Linux) of your choice. You are free to pick and choose the AWS products that best satisfy your requirements; you can use any of the services individually or in any combination. Because AWS provides resizable (storage, bandwidth and computing) resources, you are free to consume as much or as little as you need and only pay for what you consume. You are free to use the system management tools you've used in the past and extend your datacenter into the cloud.
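The Elastic MapReduce JobFlows mentioned earlier in this practical are sequences of MapReduce steps. A minimal, single-machine word-count sketch shows the shape of one such step; on EMR the same map/reduce logic would be distributed by Hadoop across EC2 instances, with input and output typically kept in S3:

```python
# Single-machine sketch of one MapReduce step (word count).
# On Elastic MapReduce, Hadoop would run map_step on many nodes in
# parallel and shuffle the emitted pairs to the reducers.
from collections import defaultdict

def map_step(line):
    # Map: emit a (word, 1) pair for every word in a line of input.
    return [(word.lower(), 1) for word in line.split()]

def reduce_step(pairs):
    # Reduce: sum the counts emitted for each distinct word.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["the cloud", "the elastic cloud"]
mapped = [pair for line in lines for pair in map_step(line)]
counts = reduce_step(mapped)
print(counts)
```

A JobFlow would chain several such steps, with each step's reducer output feeding the next step's mappers.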
Krishna University :: Machilipatnam
March/April – 2018
6*03CSC15-B2 – Cloud Computing
Section – A
Answer any FIVE of the following. (5 x 5 = 25 M)
1. What is a Cloud and Cloud Computing?
2. Explain the origins of Cloud Computing.
3. Explain the limitations of Cloud Computing.
4. Explain the differences between the SPI and traditional IT models.
5. Explain about Salesforce.com and Rackspace.
6. Explain the benefits of IaaS.
7. Explain memory and network virtualization.
8. Explain about Thin Clients.
Section – B
Answer FIVE of the following. (5 x 10 = 50 M)
UNIT – I
9. (a) Explain the components of Cloud Computing.
(OR)
(b) Explain the characteristics of Cloud Computing.
UNIT – II
10. (a) Explain the benefits of Cloud Computing.
(OR)
(b) Explain Regulatory Issues and Government Policies.
UNIT – III
11. (a) Explain the Cloud Delivery Model.
(OR)
(b) Explain about Software as a Service.
UNIT – IV
12. (a) Explain IaaS Service Providers.
(OR)
(b) Explain the Cloud Deployment Model.
UNIT – V
13. (a) Explain the types of Hardware Virtualization.
(OR)
(b) Explain about Microsoft Hyper-V and VMware features.