Cloud

The document provides an overview of cloud computing environments, detailing various types such as public, private, hybrid, community, and multi-cloud models, along with their advantages and disadvantages. It also covers technologies like Eucalyptus, OpenNebula, OpenStack, and Hadoop, explaining their functionalities and architectures. Additionally, it discusses virtualization, its benefits, and the role of hypervisors in managing virtual machines.

CLOUD COMPUTING

WELCOME

PRESENTED BY,
R. SARAVANA KUMAR, M.E.,
PROJECT MANAGER, ICONIX SOFTWARE SOLUTION
Cloud Computing Environment

• A cloud environment is a broad term for the collection of services offered to enterprises to enhance their functionality and IT capacity.
• The major feature of a cloud environment is that it centralizes resources and improves business efficiency.
• It mobilizes the workforce and makes remote workstations and remote working possible.
• The specific characteristics of the cloud computing environment can be summarized as follows:
• On-demand self-service
• Broad network access
• Resource pooling and multi-tenancy
• Rapid scalability and elasticity
• Measured service
Cloud Computing Environment
Types
• Public Cloud
• Private Cloud
• Hybrid Cloud
• Community Cloud
• Multi-Cloud Model
Public cloud
• Public clouds have immense storage space, which translates into easy scalability.
• The public cloud companies provide infrastructure as well as services that can be shared by all the customers.
• The advantages of multi-tenancy in a cloud computing environment are extremely beneficial for companies such as Google, Microsoft, or Amazon.
• Because the applications are designed to be portable, they can also be deployed to a private cloud for production.
• Pros
  • Versatile
  • Pay as you go
  • Cost-efficient
  • Built-in redundancies
  • Realized disaster management plan
• Cons
  • The cloud environment provider has the entire control
  • There is a risk of security vulnerability
Private cloud
• A private cloud is normally preferred by a single business organization with high regulatory requirements.
• Under this kind of cloud environment, only authorized users are permitted to utilize, access, or store data.
• A private cloud is a complete on-premises cloud, normally behind firewalls.

• Pros
  • High and essential security and control
  • The consumer owns the infrastructure and software
  • Portable data
  • The risk of downtime is comparatively lower
• Cons
  • Less economical model
  • Access to valuable assets is restricted
Hybrid cloud
• A hybrid cloud is a combination of private and public clouds.
• This kind of multi-cloud environment is designed for enterprises that require features of both cloud environments.
• The applications are designed to allow both platforms to interact and run seamlessly, with high portability between them.
• In cloud bursting, a private cloud stores the data and proprietary applications.
• As traffic and demand increase, the model then "bursts" to a public cloud to supplement the private cloud.
• In the other kind of hybrid cloud, most of the data and applications reside in a private cloud environment.
• The non-critical and less important applications are outsourced to the public cloud.
• Pros
  • Highly scalable and flexible
  • Cost-efficient
  • Enhanced security
• Cons
  • Network-level communication and connectivity get hampered
  • Performance risk prevails
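The cloud-bursting behavior described above can be sketched as a simple routing rule. This is a minimal illustration, not a real scheduler; `PRIVATE_CAPACITY` and the cloud names are invented for the example:

```python
# Sketch of a cloud-bursting router: requests go to the private cloud
# until its capacity is exhausted, then overflow ("burst") to a public cloud.
# PRIVATE_CAPACITY and the target names are illustrative assumptions.

PRIVATE_CAPACITY = 100  # max concurrent requests the private cloud handles

def route(active_private_requests: int) -> str:
    """Return which cloud should serve the next request."""
    if active_private_requests < PRIVATE_CAPACITY:
        return "private"   # normal operation: data stays on-premises
    return "public"        # demand spike: burst to the public cloud

# Under light load everything stays private; under heavy load we burst.
print(route(10))    # private
print(route(250))   # public
```

Real cloud-bursting setups make the same decision on metrics such as CPU load or queue depth rather than a raw request count.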
Community cloud
• The community cloud environment is a less popular environment.
• The environment can be described as a multi-tenant, collaborative platform where multiple entities share the same application.
• The consumers may belong to the same industry or share the same concerns, such as performance, compliance, and security.
• It is strategically a private cloud performing like a public cloud environment.
• Financial services firms, healthcare organizations, governmental agencies, etc., prefer this kind of setup.
• Pros
  • Better scalability
  • Cost-efficient
  • Compliance with industrial regulations is ensured
  • Interests of the group are preferred and implemented
  • Highly flexible system
• Cons
  • Issues in the inter-cloud environment, such as performance, prioritization, etc.
  • Risk of data security
Multi-cloud model
• This is a more complex hybrid cloud environment, where multiple public cloud services are combined with the private cloud environment.
• This arrangement is recommended when a single public cloud is not capable of catering to the requirements.
• Pros
  • Specialized and versatile
  • Highly flexible
  • Cost-efficient
• Cons
  • Risk of data security
  • May create confusion in handling
Eucalyptus

• Eucalyptus is a paid and open-source computer software for building Amazon Web Services (AWS)-compatible private and hybrid cloud computing environments, originally developed by the company Eucalyptus Systems.
• Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems.
• Eucalyptus enables pooling compute, storage, and network resources that can be dynamically scaled up or down as application workloads change.
OpenNebula

• OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures.
• The OpenNebula platform
manages a data center's virtual
infrastructure to build private,
public and hybrid implementations
of Infrastructure as a Service.
• The platform is also capable of
offering the cloud infrastructure
necessary to operate a cloud on
top of existing VMware
infrastructure.
OpenStack

• OpenStack is a free, open standard cloud computing platform.
• It is mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds, where virtual servers and other resources are made available to users.
• The software platform consists of
interrelated components that control
diverse, multi-vendor hardware pools of
processing, storage, and networking
resources throughout a data center.
• Users manage it either through a web-based dashboard, through command-line tools, or through RESTful web services.
Hadoop

• Hadoop is an Apache open source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models.
• The Hadoop framework application works in an environment
that provides distributed storage and computation across
clusters of computers.
• Hadoop is designed to scale up from single server to thousands
of machines, each offering local computation and storage.
Hadoop Working

• One option is to build bigger servers with heavy configurations to handle large-scale processing; as an alternative, you can tie together many commodity single-CPU computers into a single functional distributed system, and practically, the clustered machines can read the dataset in parallel and provide a much higher throughput.
• It is cheaper than one high-end server.
• So this is the first motivational factor behind using Hadoop: it runs across clustered, low-cost machines.
• Hadoop runs code across a cluster of computers.
Hadoop Process

• Data is initially divided into directories and files. Files are divided into uniform-sized blocks of 128 MB or 64 MB (preferably 128 MB).
• These files are then distributed across various cluster nodes for
further processing.
• HDFS, being on top of the local file system, supervises the
processing.
• Blocks are replicated for handling hardware failure.
• Checking that the code was executed successfully.
• Performing the sort that takes place between the map and reduce
stages.
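The block-splitting and replication steps above can be sketched in a few lines. This is an in-memory illustration of the idea, not HDFS itself; the node names and round-robin placement are invented for the example (real HDFS placement is rack-aware):

```python
# Sketch of HDFS-style storage: split a file into fixed-size blocks and
# replicate each block across several nodes to handle hardware failure.
# Block size and replication factor mirror common HDFS defaults.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the preferred block size
REPLICATION = 3                 # typical HDFS replication factor

def split_into_blocks(file_size: int, block_size: int = BLOCK_SIZE) -> list[int]:
    """Return the sizes of the blocks a file of file_size bytes is cut into."""
    full, rest = divmod(file_size, block_size)
    return [block_size] * full + ([rest] if rest else [])

def place_replicas(num_blocks: int, nodes: list[str], replication: int = REPLICATION):
    """Assign each block to `replication` distinct nodes, round-robin."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replication)]
    return placement

blocks = split_into_blocks(300 * 1024 * 1024)   # a 300 MB file
print([b // (1024 * 1024) for b in blocks])     # [128, 128, 44]
print(place_replicas(len(blocks), ["n1", "n2", "n3", "n4"]))
```

Losing any one node still leaves two copies of every block, which is why replication is the standard answer to frequent commodity-hardware failure.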
Advantages of Hadoop
• Hadoop framework allows the user to quickly write and test
distributed systems.
• It is efficient: it automatically distributes the data and work across the machines and, in turn, utilizes the underlying parallelism of the CPU cores.
• Hadoop does not rely on hardware to provide fault-tolerance and high availability (FTHA); rather, the Hadoop library itself has been designed to detect and handle failures at the application layer.
• Servers can be added or removed from the cluster dynamically
and Hadoop continues to operate without interruption.
• Another big advantage of Hadoop is that, apart from being open source, it is compatible with all platforms, since it is Java based.
Hadoop Architecture
• At its core, Hadoop has two major layers, namely −
  • Processing/Computation layer (MapReduce)
  • Storage layer (Hadoop Distributed File System)
• Hadoop Common − Java libraries and utilities required by other Hadoop modules.
• Hadoop YARN − a framework for job scheduling and cluster resource management.
MapReduce

• It is a parallel programming model for writing distributed applications, devised at Google for efficient processing of large amounts of data (multi-terabyte datasets) on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
• The MapReduce program runs on Hadoop, which is an Apache open-source framework.
Hadoop Distributed File
System
• HDFS is based on the Google File System (GFS) and provides a
distributed file system that is designed to run on commodity
hardware.
• It is highly fault-tolerant and designed to be deployed on low-cost hardware.
• It provides high-throughput access to application data and is suitable for applications with large datasets.
Features of HDFS

• It is suitable for distributed storage and processing.
• Hadoop provides a command interface to interact with HDFS.
• The built-in servers of namenode and datanode help users easily check the status of the cluster.
• Streaming access to file system data.
• HDFS provides file permissions and authentication.
Why HDFS?
• Fault detection and recovery
• Since HDFS includes a large number of commodity hardware components, failure of components is frequent.
• HDFS should have mechanisms for quick and automatic fault
detection and recovery.
• Huge datasets
• HDFS should have hundreds of nodes per cluster to manage
the applications having huge datasets.
• Hardware at data
• A requested task can be done efficiently, when the
computation takes place near the data.
• Especially where huge datasets are involved, it reduces the
network traffic and increases the throughput.
MapReduce

• MapReduce is a processing technique and a programming model for distributed computing, based on Java.
• The MapReduce algorithm contains two important tasks, namely Map and Reduce.
• Map takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs).
• The Reduce task takes the output from a map as input and combines those data tuples into a smaller set of tuples.
• As the name MapReduce implies, the reduce task is always performed after the map job.
MapReduce Algorithm
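The original slide's figure is not reproduced here; as a stand-in, the map and reduce tasks described above can be sketched with the classic word-count example. This is a plain in-memory Python sketch, not the Hadoop Java API; real Hadoop runs the same phases distributed over a cluster:

```python
# Minimal word-count sketch of the MapReduce model: Map emits (key, value)
# tuples, a shuffle groups them by key, and Reduce combines each group
# into a smaller set of tuples.
from collections import defaultdict

def map_phase(line: str):
    """Map: break a line of input into (word, 1) key/value tuples."""
    return [(word, 1) for word in line.split()]

def reduce_phase(word: str, counts: list):
    """Reduce: combine all the tuples for one key into a single tuple."""
    return (word, sum(counts))

lines = ["the cat sat", "the cat ran"]

# Shuffle: group the mapped tuples by key, as Hadoop does between stages.
groups = defaultdict(list)
for line in lines:
    for word, one in map_phase(line):
        groups[word].append(one)

result = dict(reduce_phase(w, c) for w, c in groups.items())
print(result)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

Note how the reduce step only runs after every map output for a key has been gathered, matching the map-then-reduce ordering stated above.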
Basics of Virtual
Machines
• Implemented by adding layers of software to the real
machine to support the desired VM architecture.
• E.g. Virtual PC on Apple MAC/PowerPC emulates
Windows/x86.
• Uses:
• Multiple OS’s on one machine
• Isolation
• Enhanced security
• Platform emulation
• On-the-fly optimization
• Realizing ISAs not found in physical machines
Components of Virtual
Machines
• Configuration file
• Hard disk file(s)
• Virtual machine state file
• In-memory file
Taxonomy of Virtual Machines
Process Virtual Machine

• Virtualizing software translates instructions from one platform to another.
• Helps execute programs developed for a different OS or a different ISA (think of Java).
• The VM terminates when the guest process terminates.
System Virtual Machine

• Provides a complete system environment
• OS + user processes + networking + I/O + display + GUI
• Lasts as long as the host is alive
• Usually requires the guest to use the same ISA as the host
Virtual Machine
Applications

• Emulation: mix-and-match cross-platform portability
• Optimization: usually done with emulation, for platform-specific performance improvement
• Replication: multiple VMs on a single platform
• Composition: combining VMs to form more complex, flexible systems
Types of Process Virtual
Machines
Multiprogramming
• Standard OS syscall interface + instruction set
• Can support multiple processes, each with its own address space and virtual machine view

Emulators
• Support one instruction set on hardware designed for another

Interpreter
• Fetches, decodes, and emulates the execution of individual source instructions
• Can be slow

Dynamic Binary Translator
• Blocks of source instructions converted to target instructions
• Translated blocks cached to exploit locality
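The interpreter/translator contrast can be made concrete with a toy. The three-instruction "ISA" below is invented for illustration; a real dynamic binary translator converts machine code, but the translate-once-then-cache idea is the same:

```python
# Toy contrast between an interpreter and a dynamic binary translator (DBT).
# Programs are lists of (opcode, argument) pairs in an invented toy ISA.

def interpret(program, acc=0):
    """Interpreter: fetch, decode, and emulate one instruction at a time."""
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

translation_cache = {}

def translate(program):
    """DBT: convert a whole block to host code once, then reuse the cached copy."""
    key = tuple(program)
    if key not in translation_cache:          # translate only on first sight
        body = "\n    ".join(
            f"acc {'+=' if op == 'add' else '*='} {arg}" for op, arg in program
        )
        src = f"def block(acc=0):\n    {body}\n    return acc"
        ns = {}
        exec(src, ns)                         # "target code" is Python here
        translation_cache[key] = ns["block"]  # cached to exploit locality
    return translation_cache[key]

block = [("add", 2), ("mul", 3)]
print(interpret(block))     # 6
print(translate(block)())   # 6, via the cached translated block
```

Re-running a hot block through `translate` skips the per-instruction decode loop entirely, which is where real DBTs recover most of the interpreter's slowness.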
Types of System Virtual
Machines
Classic System VMs

• Try to execute natively on the host ISA
• VMM directly controls
hardware
• Provides all device drivers
• Traditional mainframe model
Types of System Virtual
Machines
Hosted VMs

• Similar to a classic system VM
• Operates in process space
• Relies on host OS to provide
drivers
• E.g. VMWare
Types of System Virtual
Machines
Whole System VMs: Emulation

• Host and guest ISA are different
• So emulation is required
• Hosted VM + emulation
• E.g. Virtual PC (Windows on Mac)
Types of System Virtual
Machines
Co-designed VMs
• Performance improvement of existing ISA
• Customized microarchitecture and ISA at
hardware level
• Native ISA not exposed to applications
• VMM
• co-designed with native ISA
• Part of native hardware implementation
• Emulation/translation
• E.g. Transmeta Crusoe
• Native ISA based on VLIW
• Guest ISA = x86
• Goal: power savings
Virtualization

• Virtualization is the ability to run multiple operating systems on a single physical system and share the underlying hardware resources.
• It is the process by which one computer hosts the
appearance of many computers.
• Virtualization is used to improve IT throughput and costs by
using physical resources as a pool from which virtual
resources can be allocated.
Virtualization Architecture

• A virtual machine (VM) is an isolated runtime environment (guest OS and applications).
• Multiple virtual systems (VMs) can run on a single physical system.
Virtualization Benefits

• Sharing of resources helps cost reduction
• Isolation: Virtual machines are isolated from each other as if
they are physically separated
• Encapsulation: Virtual machines encapsulate a complete
computing environment
• Hardware Independence: Virtual machines run
independently of underlying hardware
• Portability: Virtual machines can be migrated between
different hosts.
• Save money and energy
• Simplify management
Virtualization in Cloud
Computing
• You don’t need to own the hardware
• Resources are rented as needed from a cloud
• Various providers allow creating virtual servers:
• Choose the OS and software each instance will have
• The chosen OS will run on a large server farm
• Can instantiate more virtual servers or shut down existing
ones within minutes
• You get billed only for what you used
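The pay-per-use model in the last bullet can be sketched as a small billing calculation. The hourly rates and instance-type names below are invented for the example; real providers publish their own price lists:

```python
# Sketch of pay-as-you-go billing: you are charged only for the hours
# each virtual server actually ran. Rates here are hypothetical.

RATES = {"small": 0.02, "large": 0.16}  # assumed $/hour per instance type

def bill(usage):
    """usage: list of (instance_type, hours_run) pairs -> total cost in dollars."""
    return round(sum(RATES[t] * h for t, h in usage), 2)

# Two small instances for a day each, plus one large instance for two hours.
print(bill([("small", 24), ("small", 24), ("large", 2)]))  # 1.28
```

Shutting an instance down simply drops its `hours_run`, which is the whole appeal: capacity scales in minutes and cost scales with it.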
Hypervisor

• A hypervisor, also called a virtual machine manager/monitor (VMM) or virtualization manager, is a program that allows multiple operating systems to share a single hardware host.
• Each guest operating system appears to have the host's
processor, memory, and other resources all to itself.
• However, the hypervisor is actually controlling the host
processor and resources, allocating what is needed to each
operating system in turn and making sure that the guest
operating systems (called virtual machines) cannot disrupt
each other.
Vendors of Virtualization
Desktop Virtualization

• VMware Workstation (Local)
• Microsoft Virtual PC (Local)
• Citrix XenDesktop (Centralized)
Desktop Virtualization
Architecture
Virtual layer (three virtual machines side by side):
  Applications → Guest OS (Windows) → Virtual Machine
  Applications → Guest OS (Linux) → Virtual Machine
  Applications → Guest OS (VMware ESX) → Virtual Machine
Physical layer:
  Virtual Machine Manager
  Host OS
  Hardware
Storage virtualization

• This is widely used in data centers where you have big storage; it helps you create, delete, and allocate storage to different hardware.
• This allocation is done through a network connection.
Network virtualization

• It is a part of the virtualization infrastructure, used especially if you are going to virtualize your servers.
• It helps you create multiple switches, VLANs, NAT, etc.
THANKS FOR WATCHING
