IT702 Cloud Computing Assignment
Answer :
The basic components of cloud computing in a simple topology are divided into three
parts: clients, the datacenter, and distributed servers. These three basic components
have specific goals and roles in running cloud computing operations. The concept of
the three components can be described as follows:
• Clients in a cloud computing architecture are the same things they are in a
plain, old, everyday local area network (LAN). They are, typically, the
computers that just sit on your desk, but they might also be laptops, tablet
computers, mobile phones, or PDAs - all big drivers for cloud computing
because of their mobility. Clients are the devices that users interact with to
manage their information on the cloud.
• The datacenter is the collection of servers where the application to which you
subscribe is housed. It could be a large room in the basement of your building
or a room full of servers on the other side of the world that you access via the
Internet. A growing trend in the IT world is virtualizing servers; that is,
software can be installed that allows multiple instances of virtual servers to be
used. In this way, you can have half a dozen virtual servers running on one
physical server.
• Distributed servers are servers placed in different locations. The servers don't
have to be housed in the same location; often, they are in geographically
disparate locations. But to you, the cloud subscriber, these servers act as if
they're humming away right next to each other.
b. Cloud Services: products, services, and solutions that are used and delivered in
real time via the Internet.
Private cloud:
Private clouds are distributed systems that work on a private infrastructure and
provide users with dynamic provisioning of computing resources. Instead of the
pay-as-you-go model used in public clouds, other billing schemes may be in place
that take the usage of the cloud into account and proportionally bill the different
departments or sections of an enterprise.
Hybrid cloud:
A hybrid cloud is a heterogeneous distributed system that results from combining
the facilities of a public cloud and a private cloud. For this reason, hybrid clouds
are also called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on demand and
to efficiently address peak loads; this is where public clouds are needed. Hence, a
hybrid cloud takes advantage of both public and private clouds.
IaaS is the most basic category of cloud computing services. With IaaS, we can rent
IT infrastructure - servers and virtual machines (VMs), storage, networks, and
operating systems - from a cloud provider on a pay-as-you-go basis. It is an instant
computing infrastructure, provisioned and managed over the internet. Virtual
hardware is provided on demand in the form of virtual machine instances, and
pricing can be on an hourly basis. Virtual storage is either raw disk space or an
object store, which works with higher-level abstractions (objects) rather than files.
For example, when we access data in S3 storage through the Java API we cannot
treat it as files, although the s3cmd ls command displays the objects like files inside
the S3 bucket. Virtual networking is the collection of services that manages
networking among virtual instances.
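As a small, hedged illustration of the object-store point above, the sketch below uses
Python and the boto3 SDK (rather than the Java API mentioned above) to list the
contents of a bucket; the bucket name is a placeholder and credentials are assumed to
be configured in the environment.

    # Minimal sketch: listing objects in an S3 bucket with boto3 (Python SDK).
    # "my-example-bucket" is a placeholder bucket name.
    import boto3

    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket="my-example-bucket")

    # Each entry is an object identified by a key, not a file on a filesystem,
    # even though tools such as s3cmd ls display the keys in a file-like listing.
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])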
SaaS is software that is centrally hosted and managed for the end customer. It
allows users to connect to and use cloud-based apps over the internet; common
examples are email, calendars, and office tools such as Microsoft Office 365. SaaS
provides applications and services on demand. Most of the common functionalities
of desktop applications (office automation, document management, photo editing,
customer relationship management (CRM)) are provided via the web browser,
which can make applications more scalable. Applications are shared by multiple
users. For example, social networking sites like Facebook and Twitter are hosted on
the cloud, and we simply use them as software; most social networks make use of
cloud-based infrastructures.
4. Explain virtual desktop infrastructure
Answer:
Virtual desktop infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops. VDI hosts desktop environments on
a centralized server and deploys them to end-users on request.
In VDI, a hypervisor segments servers into virtual machines that in turn host virtual
desktops, which users access remotely from their devices. Users can access these
virtual desktops from any device or location, and all processing is done on the host
server. Users connect to their desktop instances through a connection broker, which is
a software-based gateway that acts as an intermediary between the user and the server.
VDI can be either persistent or nonpersistent. Each type offers different benefits:
• With persistent VDI, a user connects to the same desktop each time, and
users are able to personalize the desktop for their needs since changes are
saved even after the connection is reset. In other words, desktops in a
persistent VDI environment act exactly like a personal physical desktop.
• In contrast, nonpersistent VDI, where users connect to generic desktops and
no changes are saved, is usually simpler and cheaper, since there is no need to
maintain customized desktops between sessions. Nonpersistent VDI is often
used in organizations with a lot of task workers, or employees who perform a
limited set of repetitive tasks and don’t need a customized desktop.
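As a toy illustration of the difference between these two modes, the sketch below
(purely hypothetical, not any vendor's actual broker API) shows how a connection
broker might hand out desktops: persistent users always get their own dedicated
desktop back, while nonpersistent users get whatever generic desktop is free.

    # Hypothetical toy sketch of a VDI connection broker; class and VM names are invented.
    class ConnectionBroker:
        def __init__(self, pool):
            self.pool = list(pool)   # generic, nonpersistent desktops
            self.assigned = {}       # user -> dedicated persistent desktop

        def connect(self, user, persistent):
            if persistent:
                # Persistent VDI: the same desktop is returned every session,
                # so the user's customizations are preserved.
                if user not in self.assigned:
                    self.assigned[user] = self.pool.pop()
                return self.assigned[user]
            # Nonpersistent VDI: any free generic desktop will do; nothing is saved.
            return self.pool[0]

    broker = ConnectionBroker(["vm-01", "vm-02", "vm-03"])
    print(broker.connect("alice", persistent=True))   # always the same VM for alice
    print(broker.connect("bob", persistent=False))    # a generic pooled VM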
VDI offers a number of advantages, such as user mobility, ease of access, flexibility
and greater security. In the past, its high-performance requirements made it costly and
challenging to deploy on legacy systems, which posed a barrier for many businesses.
However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers
a solution that provides scalability and high performance at a lower cost.
Although VDI’s complexity means that it isn’t necessarily the right choice for every
organization, it offers a number of benefits for organizations that do use it. Some of
these benefits include:
• Remote access: VDI users can connect to their virtual desktop from any
location or device, making it easy for employees to access all their files and
applications and work remotely from anywhere in the world.
Although VDI can be used in all sorts of environments, there are a number of use
cases that are uniquely suited for VDI, including:
• Remote work: Since VDI makes virtual desktops easy to deploy and update
from a centralized location, an increasing number of companies are
implementing it for remote workers.
• Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices. Since processing is done
on a centralized server, VDI allows the use of a wider range of devices. It also
offers better security, since data lives on the server and is not retained on the
end client device.
• Task or shift work: Nonpersistent VDI is particularly well suited to
organizations such as call centers that have a large number of employees who
use the same software to perform limited tasks.
Virtualization is the capability that allows sharing a single physical instance of an
application or resource among multiple organizations or users. This is done by
assigning a logical name to the physical resources and providing a pointer to those
physical resources on demand.
Over the existing operating system and hardware, we generally create a virtual
machine, and on top of it we run other operating systems or applications. This is
called hardware virtualization. The virtual machine provides a separate environment
that is logically distinct from its underlying hardware. Here, the physical system or
machine is the host and the virtual machine is the guest. This virtual environment is
managed by firmware or software, which is termed a hypervisor.
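As a brief sketch of what talking to a hypervisor looks like in practice, the snippet
below assumes the libvirt Python bindings and a local QEMU/KVM hypervisor, and
simply asks the host which guest machines (domains) it knows about.

    # Sketch assuming libvirt-python and a local QEMU/KVM hypervisor.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the host's hypervisor
    for dom in conn.listAllDomains():
        # Each domain is a guest VM, logically distinct from the host hardware.
        print(dom.name(), "running" if dom.isActive() else "shut off")
    conn.close()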
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
1. Application Virtualization:
Application virtualization helps a user to have remote access to an application
from a server. The server stores all personal information and other
characteristics of the application, but the application can still run on a local
workstation through the internet. An example of this would be a user who
needs to run two different versions of the same software. Technologies
that use application virtualization are hosted applications and packaged
applications.
2. Network Virtualization:
Network virtualization is the ability to run multiple virtual networks, each with
a separate control and data plane, that co-exist on top of one physical network.
The virtual networks can be managed by individual parties that are potentially
confidential to each other.
Network virtualization provides a facility to create and provision virtual
networks - logical switches, routers, firewalls, load balancers, Virtual
Private Networks (VPNs), and workload security - within days or even
weeks.
3. Desktop Virtualization:
Desktop virtualization allows the users' OS to be stored remotely on a
server in the data centre. It allows users to access their desktops
virtually, from any location, on a different machine. Users who want
specific operating systems other than Windows Server will need to have a
virtual desktop. The main benefits of desktop virtualization are user mobility,
portability, and easy management of software installation, updates, and
patches.
4. Storage Virtualization:
Storage virtualization is an array of servers that are managed by a virtual
storage system. The servers aren't aware of exactly where their data is
stored, and instead function more like worker bees in a hive. It allows
storage from multiple sources to be managed and utilized as a single
repository. Storage virtualization software maintains smooth operations,
consistent performance, and a continuous suite of advanced functions
despite changes, breakdowns, and differences in the underlying
equipment.
5. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes
place. Here, the central server (physical server) is divided into multiple
different virtual servers by changing the identity number and processors, so
each sub-system can run its own operating system in an isolated manner,
while each sub-server knows the identity of the central server. This
increases performance and reduces operating cost by deploying the main
server's resources into sub-server resources. It is beneficial for virtual
migration, reducing energy consumption, reducing infrastructure cost, etc.
6. Data virtualization:
This is the kind of virtualization in which data is collected from various
sources and managed in a single place, without needing to know technical
details such as how the data is collected, stored, and formatted. The data is
then arranged logically so that its virtual view can be accessed remotely by
interested people, stakeholders, and users through various cloud services.
Many big companies provide such services, for example Oracle, IBM,
AtScale, CData, etc.
Server virtualization is the division of a physical server into several virtual servers,
and this division is mainly done to improve the utilization of server resources. In
other words, it is the masking of server resources, including the number and identity
of processors, physical servers, and the operating system. This division of one
physical server into multiple isolated virtual servers is done by a server
administrator using software. The virtual environments are sometimes called
virtual private servers.
In this process, the server resources are kept hidden from the user. This partitioning
of a physical server into several virtual environments can result in the dedication of
one server to performing a single application or task.
This technique is mainly used for web servers, where it reduces the cost of
web-hosting services. Instead of having a separate system for each web server,
multiple virtual servers can run on the same system/computer.
There are three main approaches to server virtualization:
1. Virtual machine model: based on the host-guest paradigm, where each guest
runs on a virtual replica of the hardware layer. This technique of virtualization
allows the guest OS to run without modification. However, it requires real
computing resources from the host, and a hypervisor or virtual machine
monitor (VMM) is required to coordinate instructions to the CPU.
2. Para-virtual machine model: also based on the host-guest paradigm, and it
also uses a virtual machine monitor. In this model the VMM modifies the guest
operating system's code, which is called 'porting'. Like the virtual machine
model, the para-virtual machine model is capable of executing multiple
operating systems. The para-virtual model is used by both Xen and UML.
3. Operating system layer virtualization: virtualization at the OS level functions in
a different way and is not based on the host-guest paradigm. In this model the
host runs a single operating system kernel as its core and transfers its
functionality to each of the guests. The guests must use the same operating
system as the host. This distributed nature of the architecture eliminates system
calls between layers and hence reduces the overhead of CPU usage. It is also a
must that each partition remains strictly isolated from its neighbors, so that a
failure or security breach in one partition cannot affect the other partitions.
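A short, hedged sketch of the OS-level model in practice: assuming a local Docker
daemon and its Python SDK, the container below shares the host's kernel instead of
booting its own operating system on virtual hardware, which is the defining trait of
this model.

    # Sketch of OS-level virtualization using the Docker Python SDK (docker package).
    import docker

    client = docker.from_env()
    # Run a throwaway Alpine container and ask for its kernel version.
    output = client.containers.run("alpine:3.19", ["uname", "-r"], remove=True)
    # The kernel reported inside the container matches the host's kernel, because
    # OS-level virtualization does not provide a separate guest kernel.
    print(output.decode().strip())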
Advantages of Server Virtualization
• Better utilization of server resources and improved performance.
• Lower operating, energy, and infrastructure costs, since fewer physical machines are needed.
• Isolation between virtual servers, so a fault in one does not affect the others.
• Easier migration of virtual servers between physical hosts.
Trusted computing (TC) is controversial because the hardware is not only secured for
its owner, but also secured against its owner. Such controversy has led opponents of
trusted computing, such as the free software activist Richard Stallman, to refer to it
instead as treacherous computing, even to the point where some scholarly articles
have begun to place scare quotes around "trusted computing".
The security controls and the service location (enterprise, cloud provider, third party)
should be highlighted in the security patterns.
Security architecture patterns serve as the North Star and can accelerate application
migration to clouds while managing the security risks. In addition, cloud security
architecture patterns should highlight the trust boundary between the various services
and components deployed at cloud services. These patterns should also point out
standard interfaces, security protocols (SSL, TLS, IPSec, LDAPS, SFTP, SSH, SCP,
SAML, OAuth, TACACS, OCSP, etc.) and mechanisms available for authentication,
token management, authorization, encryption methods (hash, symmetric, asymmetric),
encryption algorithms (Triple DES, 128-bit AES, Blowfish, RSA, etc.), security event
logging, the source of truth for policies and user attributes, and coupling models (tight
or loose). Finally, the patterns should be leveraged to create security checklists that
can be automated by configuration management tools like Puppet.
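To make the distinction between encryption methods a little more concrete, here is a
minimal Python sketch using the standard hashlib module for a hash and the
cryptography package's Fernet scheme (built on 128-bit AES) for symmetric
encryption; the artifact and key are purely illustrative.

    # Illustrative only: a hash and symmetric encryption of a small artifact.
    import hashlib
    from cryptography.fernet import Fernet   # symmetric scheme built on 128-bit AES

    artifact = b"customer record: id=42"

    # Hash: a one-way fingerprint, useful for integrity checks and logging.
    digest = hashlib.sha256(artifact).hexdigest()

    # Symmetric encryption: the same key both encrypts and decrypts the artifact.
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(artifact)

    print(digest)
    print(Fernet(key).decrypt(token) == artifact)   # True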
In general, patterns should highlight the following attributes (among others) for
each of the security services consumed by the cloud application (Figure 4.1):
Logical location – Native to the cloud service, in-house, or a third-party cloud. The location
may have an implication on the performance, availability, and firewall policy, as well as
governance of the service.
Protocol – What protocol(s) are used to invoke the service? For example, REST with X.509
certificates for service requests.
Service function – What is the function of the service? For example, encryption of the artifact,
logging, authentication, and machine fingerprinting.
Input/Output – What are the inputs, including methods to the controls, and outputs from
the security service? For example, Input = XML doc and Output = XML doc with encrypted
attributes.
Control description – What security control does the security service offer? For example,
protection of information confidentiality at rest, authentication of the user, and authentication
of the application.
Actor – Who are the users of this service? For example, end point, end user, enterprise
administrator, IT auditor, and architect.
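One way to turn these attributes into the kind of checklist that configuration
management tools could consume is a simple structured record per security service;
the record below is hypothetical, with field values taken from the examples above.

    # Hypothetical record of one security-service pattern, using the attributes above.
    encryption_service_pattern = {
        "logical_location": "native to cloud service",
        "protocol": "REST with X.509 certificates",
        "service_function": "encryption of the artifact",
        "input_output": {"input": "XML doc", "output": "XML doc with encrypted attributes"},
        "control_description": "protection of information confidentiality at rest",
        "actors": ["end user", "enterprise administrator", "IT auditor"],
    }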
9. Define OLAP.
Answer :
Most business data have multiple dimensions—multiple categories into which the data
are broken down for presentation, tracking, or analysis. For example, sales figures
might have several dimensions related to location (region, country, state/province,
store), time (year, month, week, day), product (clothing, men/women/children, brand,
type), and more.
But in a data warehouse, data sets are stored in tables, each of which can organize data
into just two of these dimensions at a time. OLAP (online analytical processing)
extracts data from multiple relational data sets and reorganizes it into a
multidimensional format that enables very fast processing and very insightful analysis.
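As a small sketch of the reorganization OLAP performs, the snippet below uses pandas
on a tiny made-up sales table: the flat rows are pivoted into a region-by-year view of
the kind an OLAP tool would slice and aggregate.

    # Sketch of OLAP-style reorganization with pandas; the sales figures are made up.
    import pandas as pd

    sales = pd.DataFrame({
        "region": ["North", "North", "South", "South"],
        "year":   [2022, 2023, 2022, 2023],
        "amount": [100, 120, 80, 95],
    })

    # Pivot the flat two-dimensional table into a multidimensional (region x year)
    # view, the kind of cube slice an OLAP engine computes.
    cube = sales.pivot_table(values="amount", index="region", columns="year", aggfunc="sum")
    print(cube)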
The idea behind an intercloud is that a single common functionality would combine
many different individual clouds into one seamless mass in terms of on-demand
operations. To understand how this works, it’s helpful to think about how existing
cloud computing setups are designed.
Cloud hosting is largely intended to deliver on-demand services. Through careful use
of scalable and highly engineered technologies, cloud providers are able to offer
customers the ability to change their levels of service in many ways without waiting
for physical changes to occur. Terms like rapid elasticity, resource pooling and on-
demand self-service are already part of cloud hosting service designs that are set up to
make sure the customer or client never has to deal with limitations or disruptions.
Building on all of these ideas, the intercloud would simply make sure that a cloud
could use resources beyond its reach by taking advantage of pre-existing contracts
with other cloud providers.
Although these setups are theoretical as they apply to cloud services, telecom
providers already have these kinds of agreements. Most of the national telecom
companies are able to reach out and use parts of another company’s operations where
they lack a regional or local footprint, because of carefully designed business
agreements between the companies. If cloud providers develop these kinds of
relationships, the intercloud could become reality.
As a means toward allowing this kind of functionality, the Institute of Electrical and
Electronics Engineers (IEEE) launched the Intercloud Testbed in 2013, an effort built
around a set of technical standards that would go a long way towards helping cloud
provider companies to federate and interoperate in the kinds of ways theorized in
intercloud design principles.