Booklet of Handouts - Cloud Computing V4
Lesson No. 01
INTRODUCTION TO COURSE
Module No – 001
Lesson No. 02
INTRODUCTION TO CLOUD COMPUTING
Module No – 002:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction.”
Lesson No. 03
HISTORY AND BACKGROUND OF CLOUD COMPUTING
Module No – 003:
Computer scientist John McCarthy is credited with the idea, presented in 1961, that computation would one day be provisioned as a utility.
In the 1960s and 1970s, mainframes (giant powerful computers) were leased out by their manufacturers.
The idea of grid computing emerged in the 1990s: using the processing power of networked PCs for scientific calculations during their idle times.
In the late 1990s, Salesforce.com started bringing remotely provisioned software services to enterprises. Amazon Web Services (AWS) was launched in 2002.
In 2006, the term "cloud computing" emerged, denoting a model in which organizations "lease" computing capacity and processing power from cloud providers.
Cluster computing laid the foundation of modern-day supercomputers, computational grids and cloud computing.
Important Benefits of Cluster Computing:
o Scalability
o High availability and fault tolerance
o Use of commodity computers
Cluster Architecture (basic): [figure: cluster architecture]
o Several organizations may unite to form a grid in the shape of a virtual organization (VO). For example, multiple hospitals and research centers may collaborate in a VO to find a cure for cancer.
Lesson No. 04
BASICS OF COMPUTERS
Module No – 008:
Mainframe:
o A mainframe is a large, expensive, powerful server that can handle hundreds or thousands of connected users/servers simultaneously. For example, a single mainframe server of IBM's Z series can provide the equivalent computing throughput of at least 500 servers.
o In the 1960s and 1970s, the mainframes were leased out by the manufacturers rather than sold, because of the enormous cost of ownership.
Mainframe leasing model:
o The customers were charged on monthly basis for the use of hardware such as
CPU, memory and peripheral devices.
o The software (compilers, editors etc.) usage was charged for the time of usage.
o The mainframe lessors used to develop customized software exclusively for a client organization and charged for it.
o The client was also charged for the maintenance of that customized software.
o This model persists today in the form of cloud computing.
Server:
o A server is a computer which provides services to other computers and/or devices
connected to it. Services provided by a server include the controlled access to
hardware and software resources and storage.
o A server can support hundreds or thousands of simultaneous users.
o Servers are available in a variety of sizes and types.
o Web server: stores websites and web apps and delivers them to desktops and mobile devices through web browsers.
o Domain Name Server (DNS): Stores domain names and the corresponding IP
addresses.
o Database server: Hosts databases, provides access to data and offers data-manipulation functionality.
Desktop:
o A desktop is a computer which is designed to remain in a stationary position. It is
used as a personal computer.
o Intended to be used by one person at a time.
o Performs the activities such as
Input
Processing
Output
Storage
Lesson No. 05
BASICS OF DATA COMMUNICATIONS
Module No – 009:
Data Communication: Exchange of data over some transmission medium between two
devices.
The following factors are essential for data communication:
o Data must be delivered to the correct destination.
o The data must be delivered in a timely manner.
o There must not be uneven delay (jitter) among packet arrival times during audio or video transmission.
Components:
o Message: The data to be sent. Can be text, numbers, pictures, audio and video.
o Sender
o Receiver
o Transmission medium: The physical path through which a message travels from
sender to receiver.
o Protocol: The set of agreed-upon communication rules between the sender and receiver devices. Two devices can be connected yet unable to communicate without a protocol.
Data Representation:
o Text: Represented by bit patterns called codes, e.g., Unicode and the American Standard Code for Information Interchange (ASCII).
o Numbers: Represented directly by the binary form of the number; ASCII is not used to represent numbers.
o Images: Sent as binary patterns. An image is represented by a matrix of pixels, where a pixel is a small dot. Each pixel is assigned a bit pattern on the basis of its color.
o Audio: A continuous stream of data. Different from text, numbers and images.
o Video: Can be a continuous stream or a sequence of image combinations.
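A small Python sketch of these representations (ASCII-coded text, a number converted directly to binary, and a pixel's color as a bit pattern); the pixel value is an arbitrary example:

```python
# Text -> bit patterns via a code (ASCII)
text = "Hi"
print(text.encode("ascii"))                 # b'Hi'

# A number converted directly to binary (no ASCII involved)
print(format(65, "08b"))                    # 01000001

# One image pixel (pure red) as an RGB bit pattern
pixel = (255, 0, 0)
print(" ".join(format(c, "08b") for c in pixel))
```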
Data Flow:
o Simplex: Unidirectional communication in which only one of the two devices can transmit. For example: keyboard (input only), monitor (output only).
o Half Duplex: Both devices can communicate, but only one at a time. The entire capacity of the transmission medium is available to the transmitting device. For example: walkie-talkies.
o Full Duplex: Both devices can send and receive at the same time. The transmission medium must provide separate paths (channels) for the transmission of each device. For example, a telephone conversation is full duplex.
Lesson No. 06
BASICS OF COMPUTER NETWORKING
Module No – 011:
Computer networking was conceived in the 1960s, soon after the invention of computers.
A network is a collection of computers and devices connected together through
transmission media.
Devices:
o Hosts: Large computers, desktops, laptops, cellular phones or security systems.
o Connecting devices:
Router: A device which connects the network with other networks.
Switch: A device which connects devices within the network.
Modem: A device which changes the form of data (modulates/demodulates).
Network Criteria:
o Performance: It is often evaluated by two metrics: throughput (the bulk of data transmitted per unit of time) and delay.
• Increasing the throughput may increase congestion and hence increase the network delay.
• The transit time (message travel time) and response time (time between an inquiry and its response) also indicate network performance.
o Reliability: It is measured in terms of the frequency of network failures, the time to recover from a failure, and robustness against disasters.
o Security: Protecting data from unauthorized access and damage, and implementing security policies and procedures for recovery from breaches and data losses.
Physical Structures:
o Network Connections: Communication can only take place if the devices are simultaneously connected to the same communication path, also called a link or connection.
o Link: A link can be dedicated link (Point to Point) or shared among devices
(multipoint).
Mesh: Every device has a dedicated point-to-point link to every other device.
o Advantage: The network is robust against the failure of any single link.
o Disadvantage: The bulk of cabling involved.
Star: All devices are connected to a central device. Unlike mesh, there is no direct traffic between any two devices; traffic passes through the central device, such as a hub.
o Advantage: Requires only one I/O port in each device, as compared to mesh.
o Disadvantage: If the central device fails, the whole network fails.
Bus: A multipoint topology in which one long cable is used as a network backbone.
o Advantage: Ease of installation; requires less cabling than mesh and star.
o Disadvantage: Difficult to extend; signal drop along the length of the cable limits the number of connections; a break in the backbone cable isolates the network segments and introduces noise.
Ring: The devices are connected in the form of a ring, and each device acts as a repeater.
o Advantage: Easy to expand and alter the network.
o Disadvantage: Failure of a single device can disable the entire network; the transmitting device must hold the token signal to transmit, which slows down the data rate.
Local Area Network (LAN): It is a privately owned network and has a scope of an office,
building or a campus. A LAN can even extend throughout a company.
o Each host in a LAN has a unique identifier or address.
o The communication packets between any two hosts in a LAN contain the source
and destination hosts’ addresses.
o Key features:
Media type: wired/wireless, twisted pair/cable/fiber, radio, infrared
Topologies: Bus, Star, Mesh, Ring, Tree
Bit rate: from 1Mbps to 1Gbps
Unicast, Broadcast, Multicast
o Typical LANs:
Ethernet (CSMA/CD): Carrier Sense Multiple Access with Collision Detection (retransmission after collision detection)
LocalTalk (CSMA/CA): CSMA with Collision Avoidance (reserves the medium before transmission)
Wireless LAN: IEEE 802.11, Range: < 100 m, Speed: 2Mbps
Token Ring: A token travels around the ring; it must be held by the sending computer to transmit a single packet; 4, 16 or 100 Mbps
FDDI: Token ring with fiber optic cable, 100 Mbps
ATM: Star based, uses switch, multiple devices can communicate
simultaneously, 25, 45, 155, 600+ Mbps
Wide Area Network (WAN): A network that spans large geographical area such as town,
cities, states or even countries. Usually interconnects multiple LANs.
o Unlike LAN which is owned by the user organization, a WAN is normally created
and run by communication companies. It is leased to the user organizations.
o Types: Point to Point (P2P), Switched
P2P WAN: Connects two devices through wired or wireless media, e.g., connecting two LANs to form a private internet (internetwork) of a company.
Switched WAN: A network with more than two ends. It is a combination
of several P2P WANs connected by switches.
Metropolitan Area Network (MAN): It is a computer network covering a large
geographical area bigger than LAN and smaller than WAN.
o Diameter: 5 to 50 km, several buildings or a whole city
o Like a WAN, a MAN is generally not owned by a single organization. The MAN equipment is usually owned by a service provider.
o MAN usually provides high speed connectivity to allow sharing regional
resources.
A WAN is a switched network in which a switch connects two links together to forward
data from one network to the other.
Two common types of switched networks are:
o Circuit-Switched Network: A dedicated physical connection (circuit) is
established, maintained and terminated through a carrier network for each
communication session.
Used extensively by telephone companies. It is cost-effective only when the circuits are heavily utilized; otherwise the network remains underutilized.
o Packet-Switched Network: It is a WAN switching method in which a single link
is shared among multiple network devices.
Statistical multiplexing is used to enable devices to share the packet-
switching circuits.
Module No – 015: The Internet History and Accessing the Internet:
o Backbones: Large networks owned by communication companies such as PTCL,
AT&T etc.
o Provider Networks: Use the service of backbone for a fee. Connected to backbone
through peering points. Sometimes connected to other provider networks as well.
o Customer Networks: Use the services such as internet connectivity provided by
provider networks and pay a fee to provider for that.
o The Backbones and provider networks are also called Internet Service Providers
(ISPs).
o Accessing the Internet:
Telephone Networks: Dial-up service, DSL Service
Cable Networks
Wireless Networks
o Internet today: World Wide Web, Multimedia, Peer-to-Peer Applications
TCP/IP Protocol Stack: The Transmission Control Protocol (TCP) was proposed in 1973 to ensure reliable, end-to-end, error-free transmission control.
o It was later split into the TCP and Internet Protocol (IP) layers, with IP handling the message routing and TCP performing the error control.
o Since 1981, TCP/IP has been included in operating systems.
o Consists of layers of protocols which paved the way for creating today’s internet.
These layers help in dividing a complex task into several smaller and simpler
tasks:
Physical Layer: Deals with transmission of bits into signals and
transmission of signals over the link.
Data-link Layer: Creates the frames of data. Each frame contains the data
and is addressed with the MAC address of the receiving device and also
contains the MAC address of sending device.
Network Layer: Is responsible for host to host communication through
their IP addresses and related protocols. No control for error and
congestion is performed. Packets are called datagrams.
Transport Layer: Responsible for transporting a message from application
program running over source host to corresponding application program
on destination host. Works on port numbers on corresponding hosts. Main
protocols are:
Transmission Control Protocol (TCP): Provides flow control,
congestion control and error control as it is a connection oriented
protocol.
User Datagram Protocol (UDP): Is light weight and is not
connection oriented.
A TCP message is called a segment.
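A minimal sketch of the two transport protocols using Python's standard socket API; the loopback address and port below are arbitrary examples:

```python
import socket

# TCP: connection-oriented; the OS stack provides flow, congestion
# and error control. A connect() would be needed before sending.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.com", 80))  # would open a connection to port 80

# UDP: lightweight and connectionless; each sendto() emits one datagram.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))  # port 9999 is an arbitrary example
udp.close()
tcp.close()
```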
The identifier used in the network layer of the TCP/IP suite is the address of the Internet connection of the receiver and sender devices.
IPv4 is a 32-bit universally unique address, while IPv6 is a 128-bit universally unique address.
o Total IPv4 addresses = 2^32
o Total IPv6 addresses = 2^128
The address in fact belongs to the connection and may change when the device is moved to another network.
A device can have two IP addresses if it has two connections with the internet.
An IP address is usually represented in dotted decimal notation, for example the IPv4 address 193.63.82.10.
The IP addresses are allocated by the Internet Corporation for Assigned Names and
Numbers (ICANN) to ISPs and large organizations.
Smaller organizations can get IP addresses from ISPs.
The IP address consists of a prefix part (the Network ID) and postfix part (the Host ID or
the Subnet).
Classification of IPv4 addresses:
o Class A: 8 bits for Network ID
Total networks = 2^7
Network id starts with ‘0’ binary
First byte: 0 to 127
o Class B: 16 bits for Network ID
Total networks = 2^14
Network id starts with ‘10’ binary
First byte: 128 to 191
o Class C: 24 bits for Network ID
Total networks = 2^21
Network id starts with ‘110’ binary
First byte: 192 to 223
o Class D: Used for multicasting
No prefix or Network ID
First byte: 224 to 239
o Class E: Reserved for future use
First byte: 240 to 255
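A tiny sketch that determines the class of an IPv4 address from its first byte, following the ranges listed above:

```python
# Classify an IPv4 address by its first byte, per the class table above.
def ipv4_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(ipv4_class("193.63.82.10"))  # -> C
```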
Address Masking:
o Classful addressing led to depletion of IP addresses and/or unused addresses.
o Solution:
Classless addresses with variable sized prefix according to the needs of
organizations
A notation giving the length of the prefix, appended to a classless address after a slash '/', indicates the addresses in a classless address block.
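A brief illustration of slash notation using Python's standard ipaddress module; the /26 block below is a hypothetical example:

```python
import ipaddress

# A classless block with a 26-bit prefix: 6 host bits -> 64 addresses.
block = ipaddress.ip_network("193.63.82.0/26")
print(block.num_addresses)   # 64
print(block.netmask)         # 255.255.255.192
```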
Module No – 019: Ethernet:
Wired LAN:
o Medium: Wires
o Broadcasting and multicasting possible when required
o Physical connection to network
o Hosts are connected through link layer switch
o Connection to other networks through router
Wireless LAN
o Medium: Air
o All devices are broadcasting
o No physical connection to network
o No link layer switch exists
o Connected to other networks through access point (a device that connects a
wireless and wired network)
IEEE 802.11
o It is a wireless LAN standard by IEEE that covers physical and data-link layers
o Synonyms: WiFi, Wireless LAN
o The basic architecture consists of an access point (AP) and capable devices connected to the AP.
o In the absence of an AP, the wireless devices connect to form an ad hoc network.
o Multiple overlapping APs are used to cover a larger area
o A device is connected to only one of the nearest APs
o CSMA/CA is used: the sender sends a Request To Send (RTS) packet; the receiver replies with a Clear To Send (CTS) packet; the sender transmits data after receiving the CTS; the receiver sends an acknowledgement; only then may other senders transmit.
o If no CTS is received, the sender treats it as a collision.
o 802.11 a, b, g, n
o 802.11a: 50 feet, 54 Mbps
o 802.11b: 100 feet, 11 Mbps
o 802.11g: 100 feet, 54 Mbps
o 802.11n: 50 feet, 700 Mbps (to be implemented)
Cellular Networks have evolved from first generation (1G) to fifth generation (5G). Let
us briefly look at these generations:
o 1G
Invented around 1980.
First implementation in Tokyo (Japan)
Based upon analog technology
Expanded to cover the whole population of Japan within a few years
Not secure
Anyone with an all-band radio can listen to calls and get the phone
number of the subscriber
Analog mobiles were larger in size and heavy in weight
o 2G
Introduced in 1991, first implemented in Finland
Technologies: Global System for Mobile (GSM) Communication, General
Packet Radio Service (GPRS), Code Division Multiple Access
(CDMA) [digital signal] and Enhanced Data Rates for GSM Evolution
(EDGE)
Short Messaging Service (SMS), Multimedia Messaging Service (MMS)
Typical data rate: 100 Kbps
Email, Web browsing, Camera phones
Signal-strength decay problem; performance degrades as the number of users in a cell (the area served by a base station) rises
o 3G
From 2000 to 2010
Technologies: CDMA, WLAN, Bluetooth, Universal Mobile
telecommunication Systems (UMTS), High Speed Downlink Packet
Access (HSDPA)
Features: Global roaming, clarity in voice calls, fast communication, Internet, mobile TV, video conferencing, video calls, Multimedia Messaging Service (MMS), 3D and multiplayer gaming, smartphones
Typical data rate: Up to a few Mbps
Expensive mobile phones, battery life issue
o 4G
Since 2010
Technologies: the Long Term Evolution (LTE) standard based on GSM/EDGE and UMTS/HSPA, Multiple-Input Multiple-Output (MIMO) smart antenna technology, Orthogonal Frequency Division Multiplexing (OFDM), WiMAX
Typical data rate: Up to a few tens of Mbps
MAGIC: Mobile multimedia, Anytime anywhere, Global mobile support, Integrated wireless solutions, Customized personal service
Maintaining the data rate is an issue; not yet fully implemented worldwide; battery consumption is a bigger problem than in 3G
o 5G
To be implemented
Technologies: New releases of LTE
Faster data rate than 4G (> 1Gbps), higher data rate at cell edges
Research is still in progress
Advantages of Switch
o Collision elimination
o Connecting heterogeneous devices (in terms of data rate capacity)
Router
o It is a three-layer device, operating at:
Physical layer (regenerating the signals)
Data-link layer (checking the MAC addresses of source and destination)
Network layer (checking the IP addresses of source and destination; connecting multiple networks to form bigger networks)
o Has multiple interfaces. Each interface has a MAC address and an IP address.
o Only processes those packets which are addressed to the interface at which they arrive.
o Changes the source and destination MAC addresses when it forwards the packets.
Virtual LAN (VLAN):
o A logical (not physical) segment of a physical LAN.
o VLANs are defined by software. Each VLAN is a work group in an organization,
has a VLAN ID and receives the broadcast messages addressed to its own ID.
o A VLAN may span over multiple switches in a LAN.
o There is no need to update the physical topology to relocate a person from one VLAN to another; only the software configuration needs to be updated, as sketched below.
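A conceptual sketch of this software-defined grouping; the switch/port names are hypothetical:

```python
# VLANs as software-defined groups: a broadcast to a VLAN ID reaches only
# the ports assigned to that ID, even across multiple switches.
vlans = {10: {"sw1/p1", "sw2/p4"}, 20: {"sw1/p2", "sw2/p5"}}

def vlan_broadcast(vlan_id: int, message: str) -> None:
    for port in sorted(vlans[vlan_id]):
        print(f"{port} <- {message}")

# Relocating a user to VLAN 20 is a configuration change, not recabling.
vlans[20].add("sw1/p3")
vlan_broadcast(20, "hello VLAN 20")
```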
In a physical network, multiple LANs and WANs are joined together by the routers.
Hence there can be more than one route between two hosts.
Routing is a service of Network layer to find the best route.
Routing is performed by applying routing protocols and using the decision tables called
routing tables in each router.
Forwarding is the action performed by a router, on the basis of the routing protocol and routing table, for each packet received at any interface, according to the packet's destination address.
At network layer, each message from higher layer is broken down into packets.
A router performs packet switching.
Types of routing:
o Unicast routing: A router forwards the packet to only one of the attached
networks.
o Multicast routing: A packet is forwarded to multiple attached networks.
Routing a packet from a source host to destination host can also be defined as routing a
packet from a source router (the default router of the source host) to a destination router
(the router connected to the destination network) through the intermediate routers using
routing algorithms.
Types of routing:
o Connectionless routing: All packets of the same message are treated independently and may or may not follow the same route.
o Connection-oriented routing: All the packets of the same message are labeled and routed through a virtual circuit, i.e., a fixed route.
An internet can be considered as a graph with each network as an edge and each router as
a node.
In a weighted graph, each edge has a weight or cost.
Least cost routing can be performed. Example algorithms: Distance-Vector routing, Link-
State routing
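As a concrete illustration of least-cost routing over the weighted-graph view above, here is a minimal sketch of Dijkstra's algorithm (the computation Link-State routing relies on); the three-router topology is hypothetical:

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Least cost from `source` to every node of a weighted graph."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry, already improved
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

# Routers as nodes, links as weighted edges.
net = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 2, 'C': 3}: A->C is cheaper via B
```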
Lesson No. 07
ADVANCED TOPICS OF COMPUTER NETWORKS
Module No – 026: Broadband Networks & Internet:
All clouds inherently depend upon internetworking (the Internet) for the ubiquitous remote provisioning of IT resources.
The cloud providers and consumers connect to the Internet through ISPs.
The largest backbone networks of the Internet are strategically interconnected by core
routers.
The core-routers connect the international networks.
The Internet has become a dynamic and complex aggregate of ISPs.
There is a hierarchical topology for worldwide interconnectivity composed of tiers.
There are three tiers of worldwide connectivity:
o Tier 1 consists of large-scale international connectivity providers.
o Tier 2 consists of large regional ISPs connected to tier 1.
o Tier 3 consists of local ISP providers connected to tier 2.
The cloud providers and users connect directly to tier 3 providers.
In the on-premises cloud deployment model, the provider sets up a fully controlled corporate network and a corporate Internet connection for the deployment of IT solutions and applications.
In the on-premises deployment, the internal users access the cloud through the corporate network, while remote users connect through the Internet by using a virtual private network (VPN).
A VPN creates a secure connection between a remote device and the corporate servers
over the internet as if the device is inside the LAN.
For the Internet-based deployment, the cloud provider has an Internet connection, and all the internal and external users access the cloud resources through the cloud provider's Internet connection.
In this deployment, there is an extra charge for Internet connectivity.
Scalable computing may refer to the dynamic resizing of the available computing
resources (processing, memory, bandwidth, storage etc.) with demand.
The growth of users and of user demands for scalable computing over the Internet has been accompanied by matching growth in network, computing and resource-management technologies.
The computing platforms have evolved as follows
o Mainframes (1950-70)
o Minicomputers (1960-1980)
o Personal computers (1970-1990)
o Portable computers (1980-2000)
Since 1990, High Performance Computing (HPC) and High Throughput Computing (HTC) have been relying upon clusters, grids and Internet clouds.
The speed of HPC systems (supercomputers) has increased from Gflops in the early 1990s to Pflops by 2010.
The network bandwidth has been doubling each year in the recent past (Gilder’s law).
Processor speed has been doubling every 18 months (Moore’s law).
This means that there has been steady growth in these technologies.
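An illustrative calculation of these two growth laws over an assumed six-year span (idealized rates, not exact history):

```python
# Gilder's law: bandwidth doubles every year.
# Moore's law: processor speed doubles every 18 months.
years = 6
bandwidth_factor = 2 ** years              # 2^6 = 64x bandwidth
cpu_factor = 2 ** (years * 12 / 18)        # 2^4 = 16x processor speed
print(f"{bandwidth_factor}x bandwidth, {cpu_factor:.0f}x CPU speed")
```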
Fine grain (instruction level) parallelism and coarse grain (job level) parallelism are
available.
Ubiquitous computing refers to computing at any place and at any time, using pervasive devices and wired or wireless communications.
Utility computing works upon a business model in which the customers pay a provider for computational resources.
Cloud computing provides ubiquitous utility computing.
Processor speed and network bandwidth have shown remarkable growth in the last few decades.
The processor clock rate has risen from 10 MHz in the 1970s to over 4 GHz in the 2010s.
Network bandwidth has increased from 10 Mbps to over 100,000 Mbps.
Excessive heat generation limited the maximum clock frequency of a single processor core until chip technology matured.
This has led to the multi-core architecture of CPUs, with dual, quad, six or more cores.
The graphical processing unit (GPU) development has adopted a many-core architecture
with hundreds to thousands of cores.
Modern CPU and GPU architectures have enhanced instruction-level parallelism (ILP) and throughput measured in millions of instructions per second (MIPS).
Sun's Niagara CPU can execute 64 hardware threads in parallel.
Intel's Core i7 990x can provide an execution rate of 159,000 MIPS.
The CPUs and GPUs are multithreaded, which means that each core can execute multiple
processes or threads concurrently.
A GPU unit has far more (but slower) cores than a multi-core CPU
The DRAM memory chip capacity has increased from 16 KB in 1976 to 64 GB in 2011.
The hard disk capacity has increased from 260 MB in 1981 to 3TB a few years ago.
The flash memory and solid-state drives are rapidly evolving.
Disk arrays are being utilized to enhance the storage.
Servers can be connected to network storage such as disk arrays through storage area
network (SAN)
A disk array can be connected to client hosts through network attached storage (NAS)
The high bandwidth networks in WAN scope can connect the host computers to network
storage.
A single host can be shared among multiple instances of operating systems through virtualization technology. More on this later.
A VPN extends a private network over a public network and enables users to communicate as if their devices were directly connected to the private network.
A VPN creates a secured and encrypted network over a less secured network such as the
Internet.
Normally a VPN is provided and managed by a service provider.
VPN allows the corporate employees to securely access the applications hosted over
enterprise LAN.
VPN is based upon IP tunneling.
IP tunneling, or port forwarding, is the transmission of private-network packets over a public network (the Internet) as the payload of public-network packets, such that the routing devices are unaware of it.
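A conceptual sketch of this encapsulation, with simplified string headers standing in for real IP headers; the addresses come from documentation ranges and the format is invented for illustration:

```python
def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    # Public routers only inspect this outer header; the private packet
    # (with its private addresses) rides along untouched as payload.
    outer_header = f"{outer_src}->{outer_dst}|".encode()
    return outer_header + inner_packet

inner = b"10.0.0.5->10.0.1.7|confidential payload"   # private-network packet
public = encapsulate(inner, "203.0.113.4", "198.51.100.9")
print(public)
```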
There are many protocols for VPN establishment and encryption: IP Security (IPSec), Secure Sockets Layer (SSL), Point-to-Point Tunneling Protocol (PPTP), Multiprotocol Label Switching (MPLS), etc.
Although a VPN provides secure connectivity to extend a private network, the implementation may have performance issues.
VPNs are implementable over Layers 1-3.
Types of VPN:
o Remote-access VPN: A VPN client on user’s device connected to VPN gateway
of the enterprise.
o Site-to-site VPN: Establishes a VPN between two networks over the Internet by
using VPN gateway.
VPN technology provides access to cloud resources. The VPN gateway exists in the
cloud with a secure link provided by the cloud provider.
Module No – 040: Networking Structure of Cloud Hosting Data center:
Lesson No. 08
VIRTUALIZATION
Module No – 031:
o Hardware Abstraction level: The hardware components (CPU, RAM, Disk, NIC)
of a physical system are virtualized and shared among virtual machines using
Virtual Machine Monitor (VMM) tool or hypervisor which performs as
abstraction layer.
o Operating System Level: The OS running over a server accommodates multiple
containers or VMs. The host operating system acts as the abstraction layer
between hardware and the containers.
o Library support level: The API calls for hardware acceleration such as vCUDA
stubs for graphic processing units (GPUs) are available at VM level.
o Application level: An application acts as a VM through wrapping of the application in an abstraction layer which isolates it from the OS and other applications. Another type uses a virtualization layer as the programming environment, e.g., the Java Virtual Machine (JVM).
We know that the virtualization layer transforms the physical hardware into virtual
hardware. There are three classes of VM architectures.
o Hypervisor Architecture:
It is hardware-level virtualization, also called bare-metal virtualization.
The hypervisor sits between the hardware and the VMs and manages the
VMs.
Example: Xen, VMware
o Full-virtualization Architecture:
The guest operating system (the VM's OS) does not know that it is installed on a VM.
The virtualization layer manages the hardware acceleration, for example VMware.
The virtualization layer can be installed on the hardware or on the host's OS.
Some of the instructions of a guest VM are run directly on the hardware to enhance performance.
o Para-virtualization Architecture:
The guest OS is modified to comply with the virtualization layer. All calls for hardware acceleration are handled by the virtualization layer.
For example: KVM
[Figures: generic, monolithic and microkernel hypervisor designs]
[Figures: full-virtualization and para-virtualization architectures]
To support virtualization, processors such as x86 architecture use a special mode and
instructions known as hardware-assisted virtualization.
In this way, the hypervisor is able to trap the sensitive instructions of the guest OS and its
applications.
Modern processors allow multiple processes to run simultaneously, and any process could execute a critical instruction that crashes the whole system.
Therefore, the critical instructions are executed in the privileged or supervisor mode of the processor. The OS controls this mode on behalf of the processes being executed.
The second type of instructions are the non-privileged or non-critical instructions, which run in the user mode of the processor.
CPU Virtualization: A CPU is virtualizable if it can run the privileged and unprivileged instructions of a VM in user mode while the hypervisor runs in supervisor mode.
o Aggregation of smaller resources into a single big virtual resource (e.g., Storage)
o Dynamic relocation/provisioning of virtual resources is easier than physical
resources
o Easier management of virtual resources/devices/machines.
A data center is a facility with networked computers and is used by businesses and other
organizations to process, store and share large amounts of data.
Companies like Google, Yahoo, Amazon, Microsoft, IBM, HP and Apple have invested billions of dollars in constructing data centers.
Data center automation refers to the dynamic provisioning of hardware and software
resources to millions of users simultaneously.
Data centers can host Clouds.
Data center automation is triggered by the growth of virtualization products.
The data center owner has three major considerations:
o Assuring performance and QoS
o Increasing resource utilization
o Saving costs
Enhanced resource allocation (to jobs and/or VMs) may be performed in data centers to assure performance and QoS.
However, over-allocation of computing resources may decrease the average utilization of these resources.
It also leads to increased costs due to power consumption.
Example: A server has 1.5 GHz × 4 cores and 16 GB of RAM. A VM hosted on it is allocated 2 vCPUs at 1.5 GHz and 4 GB of vRAM (half of the processing power and a quarter of the RAM). Suppose there are two such VMs, but the overall average workload of the hosted VMs keeps the physical utilization below 50%. This is resource wastage, as half of the resources remain idle.
Server consolidation is a technique by which more VMs are aggregated on a single server (by migrating jobs/VMs to it) while assuring performance and QoS.
o This increases the resource utilization across the data center.
o More servers become available to take on more workload; alternatively, the idle servers can be shut down to save power.
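A back-of-the-envelope sketch of the consolidation arithmetic above; the 40% average busy fraction is an assumed figure, not from the handout:

```python
server_cores = 4                      # each server: 1.5 GHz x 4 cores
vm_vcpus, avg_busy = 2, 0.4           # each VM: 2 vCPUs at 40% average load

# Two VMs spread over two servers: each server sits at 20% utilization.
spread = vm_vcpus * avg_busy / server_cores
# Both VMs consolidated onto one server: 40% utilization, one server freed.
consolidated = 2 * vm_vcpus * avg_busy / server_cores
print(f"spread: {spread:.0%} per server, consolidated: {consolidated:.0%}")
```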
Virtualization technology also helps in setting up virtual storage (over VMs) to offer virtual disks to other VMs.
Virtualization can synchronize with cloud management systems to dynamically provision
cloud services and billing systems.
Hence, virtualization is essential for Cloud computing.
Multiple virtual Network Interface Cards (vNIC) are linked to physical NIC or pNIC
through a virtual Switch (vSwitch) inside a hypervisor.
[Figure: Network Virtualization. VM 1, VM 2 and VM 3, each with a vNIC, are linked through a vSwitch to the physical NIC.]
Lesson No. 09
ESSENTIAL CHARACTERISTICS OF CLOUD COMPUTING
Module No – 041:
On-demand self-service: The user can automatically be allocated computing resources without any manual operations (except the initial sign-up process). The cloud management software handles resource management and provisioning.
Broad Network Access: The cloud resources can be accessed over the network through a broad range of wired and wireless devices. Various connectivity technologies are available.
Resource pooling: Resources (Computing, memory, storage, network) are available in
volumes and therefore can be pooled. The resources can be physical or virtual. Multiple
users can simultaneously share these resources through dynamic allocation and
reallocation.
Rapid elasticity: The cloud resources are virtually unlimited. Moreover, the provisioning of these resources can shrink and expand elastically according to demand.
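A toy illustration of such elastic provisioning, expressed as a threshold-based scaling policy; the thresholds and the load metric are assumptions, not part of any specific cloud API:

```python
def scale(instances: int, cpu_load: float) -> int:
    """Return the new instance count for the observed average CPU load."""
    if cpu_load > 0.80:
        return instances + 1            # expand with rising demand
    if cpu_load < 0.20 and instances > 1:
        return instances - 1            # shrink, stop paying for idle capacity
    return instances

print(scale(3, 0.90))  # -> 4 (scale out)
print(scale(4, 0.10))  # -> 3 (scale in)
```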
Measured Service: The provider charges users according to their measured resource usage.
Module No – 043: Revisiting NIST Definition of Cloud Computing
Some key terms and concepts essential for understanding Cloud Computing course:
o IT Resources
o On-premises
o Cloud Consumers
o Cloud Providers
Cloud IT Resources: Can be physical or virtual resources (virtual resources are
implemented in software):
o Physical/Virtual machines/servers
o Physical/virtual storage
On-premises: An IT resource which is hosted/located at the enterprise's premises.
o It is different from a Cloud resource since a Cloud resource is hosted on Cloud.
o An on-premises IT resource can be connected to a Cloud resource and/or can be
moved to a Cloud.
o However, the distinction is difficult to draw for private clouds.
Cloud Providers: The party providing the cloud-based IT resources.
Cloud Consumer: The user of cloud-based IT resources is called cloud consumer.
Cloud Service:
o Any IT resource (software/VM) that is made remotely available by the cloud
provider.
o Remember that not all the IT resources deployed in a cloud environment are
remotely accessible. Some resources are used within the Cloud for support and
monitoring etc.
The human users interact with a leased VM.
Client programs interact with cloud software services through API calls.
A software program or service that accesses a cloud service is called a cloud service consumer.
Module No – 054:
Resiliency: The ability of a computer system to recover from a failure is called resiliency.
o The redundant implementation of IT-resources paves the way to a resilient
system.
o The whole system is pre-configured so that as soon as a resource fails, the
processing is automatically handed over to the redundant resource.
o Resiliency is one of the features of cloud computing whereby the redundancy of
IT-resources is implemented at different physical locations and/or in different
clouds.
o For example, the data can be kept at two different locations and replicated. If the primary hard disk fails, the secondary drive takes over.
A cloud service can be configured on two different VMs (A and B), each placed on a separate server or a different cloud. VM B is kept as the failsafe resource; in case VM A fails, VM B starts processing the service users' requests.
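A minimal failover sketch of this VM A / VM B arrangement; the endpoint names are hypothetical and send() merely simulates a transport call:

```python
import random

def send(endpoint: str, request: str) -> str:
    # Simulated transport: VM A randomly fails half the time.
    if endpoint == "vm-a.example.net" and random.random() < 0.5:
        raise ConnectionError("VM A failed")
    return f"{endpoint} handled {request!r}"

def handle_request(request: str) -> str:
    # Pre-configured order: primary first, then the redundant failsafe VM.
    for endpoint in ("vm-a.example.net", "vm-b.example.net"):
        try:
            return send(endpoint, request)
        except ConnectionError:
            continue                    # hand over to the redundant resource
    raise RuntimeError("all redundant resources failed")

print(handle_request("GET /orders"))
```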
Lesson No. 10
BENEFITS OF CLOUD COMPUTING
Module No – 046:
Increased scalability: The cloud can dynamically and instantly provide the computing
resources.
This provision can be on demand or as per user configuration.
Similarly these IT resources can be released automatically or manually with the decrease
in processing demand.
This dynamic scalability avoids the over-provisioning and under-provisioning and the
associated disadvantages.
Availability: The availability of IT resources directly affects profit and customer retention.
o If an IT resource becomes unavailable (such as a database dealing with clients’
orders) then this may result in customer dissatisfaction and loss of business.
Reliability: The reliability of IT resources is very important for continual business data processing and response time.
o The failure of any IT resource can cause the collapse of the IT system. For example, failure of an Ethernet switch may crash a distributed application.
The modular structure and resource redundancy in the cloud increase availability and reliability. Moreover, the cloud provides a guaranteed level of availability and reliability through a legal agreement, called the service level agreement (SLA), between the cloud provider and the cloud user.
The recovery time after a failure is an added penalty: it is the time during which the system remains unavailable.
This modular structure and resource redundancy also improve the recovery time.
Lesson No. 11
RISKS AND CHALLENGES OF CLOUD COMPUTING
Module No – 048:
Module No – 049:
Reduced operational governance control: The cloud consumer gets less privileged control over the resources leased from the cloud.
Risks can arise from how the cloud provider manages the cloud.
An unreliable cloud provider may not abide by the guarantees offered in the SLAs of its cloud services. This will directly affect the quality of the cloud consumer's solutions (enterprise software) which rely upon those services.
The cloud consumer should keep track of actual level of service being provided by the
cloud provider.
o The SLA violations can lead to penalties receivable from the cloud provider.
Limited portability between cloud providers: Due to a lack of industry standards for cloud computing, public cloud environments remain proprietary to their providers.
It is quite challenging to move custom-built software from one cloud to another if it depends upon the proprietary environment (such as the security framework) of the former cloud.
Multi-regional compliance and legal issues: Cloud providers tend to set up their data centers in regions favoring affordability and/or convenience. This may lead to legal issues for cloud providers as well as cloud consumers.
For example, some UK laws require the personal data of UK citizens to be hosted inside the UK.
Thus a cloud provider with multi-regional data centers, including the UK, cannot migrate UK citizens' personal data outside the UK.
Organizations handling UK citizens' personal data are legally bound to keep it on clouds hosted in the UK only.
Other countries, such as the USA, allow government agencies access to data hosted inside the country.
Even if the owners of that data neither reside in nor are citizens of the USA, their data is still accessible by US government agencies if it is hosted inside the USA.
Lesson No. 12
ROLES AND BOUNDARIES IN CLOUD COMPUTING
Module No – 050:
Module No – 051:
Cloud Resource Administrator: This role is responsible for administering the cloud
resources (including cloud services).
o Cloud resource administrator can be:
Cloud consumer (as cloud service owner)
Cloud provider (when the service resides inside the cloud)
Third party contracted to administer a cloud service
Additional roles:
o Cloud Auditor: Provides an unbiased assessment of trust building features of the
cloud. These include the security, privacy impact and performance of the cloud.
The cloud consumer may rely upon the cloud audit report for choosing a cloud.
o Cloud Broker: A party that provides mediation services to cloud providers (seller)
and cloud consumers (buyer) for the purchasing of cloud services.
o Cloud Carrier: The party responsible for providing connectivity between cloud
provider and cloud consumer. The ISPs can be assumed as cloud carriers.
o The cloud provider and cloud carrier are in legal agreement (SLA) to assure a
certain level of connectivity and network security.
Module No – 052:
Trust boundary: When an organization takes the role of cloud consumer, then it has to
extend its trust boundary to include the cloud resources. A trust boundary represents a
border around trusted IT-resources.
Lesson No. 13
CLOUD SERVICE MODELS
Module No – 055: IaaS, PaaS & SaaS Provisioning:
IaaS: The IT-resources are typically virtualized and packaged in a preplanned way.
o The IT-resources are usually freshly instanced e.g., VMs.
o The cloud consumer has a high level of control over these resources, along with the responsibility of configuring them.
o Sometimes a cloud provider will contract IaaS offerings from another cloud provider to scale its own cloud environment.
o The VMs can be obtained by specifying hardware requirements such as processor capacity, memory and storage.
PaaS: Delivers a programming environment containing preconfigured tools to support
the development lifecycle of custom applications.
o PaaS products are available with different development stacks; for example, Google App Engine provides Python and Java environments.
o The PaaS is chosen:
To enhance or substitute the on-premises software development
environment.
To create and offer a cloud service to other cloud consumers.
o The PaaS saves the consumer from administrative tasks such as installations and
configurations to set up the software development infrastructure.
o On the other hand, the cloud consumer has a lower level of control over the underlying infrastructure.
SaaS: Software hosted over cloud infrastructure and offered as a utility service.
o SaaS is provided as a reusable utility service commercially available to different users.
o A SaaS can be deployed over an IaaS and/or PaaS instance, whereby the cloud consumer (of the IaaS/PaaS) becomes the SaaS provider.
o The service consumer has very limited control over the underlying SaaS implementation.
Module No – 056: IaaS, PaaS & SaaS Comparison
Control level:
o SaaS: Usage and usage related configuration
o PaaS: Limited administrative
o IaaS: Full administrative
Functionality provided to cloud consumer:
o SaaS: Access to front-end user-interface
o PaaS: Moderate level of administrative control over programming platform
o IaaS: Full administrative control over virtual resources of the VMs
Common activities of cloud consumer:
o SaaS: Use and configure the service
o PaaS: Develop, debug and deploy the cloud services and cloud based solutions
o IaaS: Installation and configuration of software, configure the infrastructure of
VM
Common Cloud Provider’s Activities:
o SaaS: Implementation, management and maintenance of cloud service.
o PaaS: Providing the pre-configured programming platform, middleware and any
other IT resource needed.
o IaaS: Provisions and manages the VMs and underlying physical infrastructure.
The three cloud delivery models can be combined so that one delivery model is deployed over another, such as:
o PaaS over IaaS
o SaaS over PaaS
o SaaS over PaaS over IaaS
NIST definition of SaaS: “Software deployed as a hosted service and accessed over the
Internet.”
SaaS is a software solution whose code and data execute and reside on the cloud.
A user accesses the SaaS through a browser.
Remember: The cloud service consumer is a temporary runtime role assumed by a
software program when it accesses a cloud service.
[Thomas Erl (2014), Cloud Computing: Concepts, Technology & Architecture, Pearson]
For the time being we shall assume that the browser acts as cloud service consumer when
accessing a SaaS.
SaaS solutions eliminate the need of on-premises (data center based) applications,
application administration and data storage.
The customer can adopt a pay-as-you-go rental model.
SaaS offers scalability and device-independent access to the SaaS solution/s.
SaaS provider assures that the software provided is solidly tested and supported.
The notable disadvantage of SaaS is that the data resides off-premises.
Therefore the data security is of prime importance because the customers’ data may be
proprietary and business-sensitive.
The SaaS provider offers SaaS apps executing over IT-resources. These resources can be physical servers or VMs owned/rented by the provider.
Each instance of a SaaS app (consumed by a user) is allocated a separate set of IT-resources.
Classes of SaaS:
o Business logic: Connects suppliers, employees, investors and customers.
Example: Invoicing, fund transfer, inventory management, customer
relationship management (CRM)
o Collaboration: Supports teams of people working together.
Examples: Calendar systems, email, screen sharing, conference
management and online gaming.
o Office productivity: Office environment support.
Examples: word processors, spreadsheets, presentation and database
software.
o Software tools: For the support of developing software and solving compatibility
problems.
Examples: format conversion tools, security scanning, compliance
checking and Web development.
Software that are not suitable for public SaaS offerings (according to NIST):
o Real-time software: Requires precise response times. Due to variable response times and network delays, such software (e.g., flight-control systems and factory robots) is not suitable to be offered as SaaS.
o Bulk-consumer data: When an extremely large amount of data originates physically at the consumer's side, such as physical monitoring and patient-monitoring data, it is not feasible to transfer it in real time over a WAN to the SaaS provider.
o Critical software: Software is labeled critical if its failure or delayed handling can cause loss of life or property. Such software is not suitable for SaaS because achieving continuously acceptable reliability for critical software in a public SaaS is quite challenging due to the (unreliable) public-network-based access.
SaaS billing: Based on
o Number of users
o Time in use
o Per-execution, per-record-processed
o Network bandwidth consumed
o Quantity/duration of data stored
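A small illustration of metered billing across such factors; every rate and usage figure below is made up:

```python
# Hypothetical monthly usage and per-unit rates for a metered SaaS bill.
usage = {"users": 25, "hours": 300, "gb_stored": 120, "gb_bandwidth": 40}
rates = {"users": 5.00, "hours": 0.02, "gb_stored": 0.10, "gb_bandwidth": 0.05}

bill = sum(usage[k] * rates[k] for k in usage)
print(f"monthly charge: ${bill:.2f}")   # 125 + 6 + 12 + 2 = $145.00
```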
SaaS software stack example (email):
Application: Email
Middleware: software libraries, runtime environments (Java, Python)
The service provider has admin control over the application and total control over the rest of the layers.
The service consumer has limited admin control over the application and no control over the rest of the stack.
A consumer can create, send and manage emails, and even the email accounts.
But the email provider has absolute control over the SaaS software stack in order to perform its duties, such as provisioning, management, updates and billing of the email app.
Modest software tool footprint: There is no need for complex installation procedures because SaaS applications are accessible through web browsers. This is one of the reasons for the widespread use of SaaS applications.
Efficient use of software licenses: The license issuance and management procedure is
quite efficient. A single client is issued a single license for multiple computers. This
is because the software is running directly on provider’s infrastructure and thus can
be billed and monitored directly.
Centralized management and data: The consumer's data is stored in the cloud, and the provider assures its security and availability. The data that seems centralized to the consumer may in fact be distributed and replicated by the provider. Data backup is provided, possibly at additional charge.
Platform responsibilities managed by providers: The consumer does not have to bother about operating system type, hardware and software configurations, software installation or upgrades.
Savings in up-front costs: As discussed before, up-front costs such as equipment acquisition and hardware provisioning are avoided by the SaaS consumer.
The NIST has identified a few issues and concerns about SaaS, most of which stem from the network dependency of SaaS.
o Browser-based risks and remedies: Since SaaS is accessed through a browser installed on the consumer's device, the inherent vulnerabilities of web browsers have an impact on SaaS security.
Although browsers apply encryption to network traffic, various network attacks, such as brute-force and man-in-the-middle attacks, are still possible upon SaaS data.
The resources leased by a consumer can be hijacked by malicious
users due to poor implementation of cryptographic features of
browsers.
If the consumer's browser is already infected with a security threat (due to a visit to a malicious website) and the same browser is later used for SaaS access, the SaaS data might get compromised.
If a single consumer accesses multiple SaaS services using browser
instances, then the data of these SaaS instances may get mixed up.
A few suggestions by NIST:
Use different browsers to access each different SaaS.
Do not use the same web browser for web surfing and SaaS
access.
Use a VM to access the SaaS.
o Network dependence: A SaaS application depends upon a reliable and continuously available network.
The reliability of a public network (the Internet) cannot be guaranteed, in contrast to the dedicated and protected communication links of private SaaS applications.
o Lack of portability between SaaS clouds: It may not be trivial to import/export data among different SaaS applications deployed over different clouds, due to the customized development and deployment of SaaS applications and their data formats.
o Isolation vs. efficiency (security vs. cost tradeoffs): The SaaS provider has to make a trade-off decision: deploy separate IT-resources (such as VMs) for each client, or concurrently serve multiple clients through a single deployment of the SaaS application.
Module No – 062: NIST Recommendations for SaaS
Data protection: The consumer should analyze the data protection, configuration and database transaction processing technologies of the SaaS provider, and compare them with the confidentiality, integrity, availability and compliance requirements of the consumer.
Client device/application protection: The consumer’s client device (browser running
over a computer) should be protected to control the exposure to attacks.
Encryption: A strong encryption algorithm with a key of adequate strength should be used
for each web session as well as for data.
Secure data deletion: Data deletion requested by the consumer should be performed
reliably.
[Figure: The PaaS software stack, from top to bottom: Application, Middleware, Operating System, Hardware]
PaaS Provider/Consumer Scope of Control: The provider has administrative control
over the middleware.
Benefits:
o Lower total cost of ownership in terms of hardware and software investment.
o Lower administrative overhead of system development.
o No requirement to upgrade the software tools.
o Faster application development and deployment.
o Scalable resources available for the applications. The user pays only for the
resources used.
Disadvantages:
o The inherent problem of data placed offsite raises the security concerns.
o The integration of PaaS applications with on-site legacy solutions is not
trivial.
o The PaaS provider has to be trusted for data and application security.
o The issues of SaaS are also the issues of PaaS such as browser based risks,
network dependence and isolation vs efficiency.
o Portability of PaaS applications across different providers may not be possible
due to incompatibilities in coding structures (hash tables, queues, files etc.).
Generic interfaces: The consumer should make sure that the interfaces for hash tables,
queues and files etc. are generic so that there will be less issues of portability (among
PaaS providers) and interoperability (of applications) in future.
Standard language and tools: Choose a PaaS provider which offers standardized
language and tools unless it is absolutely unavoidable to use the proprietary languages
and tools.
Data access: The provider with the standardized data access protocol (such as SQL)
should be preferred.
Data protection: The confidentiality, compliance, integrity and availability needs of
the organization should be compared with the data protection mechanisms of the
provider.
Application framework: The PaaS providers which offer the features in application
development framework for eliminating security vulnerabilities of the application
should be chosen.
Component testing: The software libraries provided by the PaaS provider should be
tested for proper functionality and performance.
Security and secure data deletion: Ensure that the PaaS applications can be configured
to run in a secure manner (e.g., using cryptography during communication) and that a
reliable mechanism for data deletion is provided by the PaaS provider.
As an alternative to PaaS, some consumers may prefer to use IaaS in order to have
management control over the IT resources.
The IaaS provider makes available the computing resources in the form of VMs.
The consumer has the duty of installing OS and software.
The provider also provides stable network access, network components such as
firewalls, and data storage.
IaaS Provider/Consumer Scope of Control:
o The provider has no control over the top three layers (application, middleware and
operating system).
o The provider has admin control over the hypervisor and total control over the
hardware layer.
o The consumer has total control over the top three layers.
o The consumer can request the provider to deliver a VM from the hypervisor layer.
o The consumer has no control over the hardware layer.
Customer billing:
o Per CPU hour
o GB of data stored per hour
o Network bandwidth consumed, network infrastructure used (e.g., IP
addresses) per hour
o Value-added services used (e.g., monitoring, automatic scaling).
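To see how such metered billing composes into an invoice, the following is a minimal Python sketch; the rates and metric names are invented for illustration and do not correspond to any real provider's tariff:

    # Illustrative IaaS bill: every rate below is an assumed value.
    def iaas_bill(cpu_hours, gb_stored_hours, gb_transferred, ip_hours):
        RATE_CPU = 0.05      # $ per CPU hour (assumed)
        RATE_STORE = 0.0001  # $ per GB stored per hour (assumed)
        RATE_NET = 0.09      # $ per GB of network traffic (assumed)
        RATE_IP = 0.005      # $ per leased IP address per hour (assumed)
        return (cpu_hours * RATE_CPU + gb_stored_hours * RATE_STORE
                + gb_transferred * RATE_NET + ip_hours * RATE_IP)

    # One month (720 hours) of a VM with 50 GB stored and 100 GB of traffic.
    print(round(iaas_bill(720, 720 * 50, 100, 720), 2))  # 52.2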
Saving in upfront cost: As in SaaS and PaaS, although the responsibility of installing the
OS and software lies with the consumer.
Full administrative control over VM:
o Start, shut down, pause
o Installation of OS and applications
o Accessing the VM's network services through a network protocol
such as Secure Shell (SSH); see the sketch after this list.
o Flexible and scalable renting: The VMs can be rented in any volume desired
by the consumer. The rental for each VM can be on a usage basis (of raw resources
such as CPU, memory, bandwidth, storage, firewall, database etc.).
o Portability and interoperability with legacy applications: Since the consumer
has full control over the VM to install OS and other applications, the legacy
applications (which are usually installed on consumer owned server/s) can be
configured to run with or ported to the VM.
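As an illustration of this administrative access, the following Python sketch uses the paramiko SSH library to run a command on a rented VM; the IP address, user name and key file are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Placeholder address and credentials for the rented VM.
    client.connect(hostname="203.0.113.10", username="admin",
                   key_filename="vm_key.pem")
    # Any administrative command can be run, e.g., installing software.
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()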
Module No – 070: IaaS Issues and Concerns:
Network dependence
Browser based risks (same as discussed for SaaS and PaaS).
Compatibility with legacy software vulnerabilities: Since the consumer is allowed to
install legacy applications on VMs rented through IaaS, this exposes the VMs to
the vulnerabilities of that legacy software.
Implementation challenges exist for VM isolation: In order to prevent a VM from
eavesdropping on other VMs hosted on the same server, the isolation features of the
hypervisor are utilized. But these features may not withstand a sophisticated attack.
Dynamic network configuration for VM traffic isolation: A dynamic network path is
provided from the VM to the consumer when a VM is rented. The provider has to prevent VM
consumers from accessing the network traffic of other consumers.
Data erase practices: When a VM is no longer rented by a consumer, the virtual drive
of that VM must be erased/overwritten multiple times to eliminate any chance of
residual data access by the next consumer of that VM.
NIST recommendations for IaaS: The provider should implement data and network
traffic isolation for the VM consumers, along with data security features and the secure
deletion of a VM consumer's residual data.
Lesson No. 14
DATA STORAGE IN CLOUDS
Module No – 073: Network Storage:
Computers attached to a local area network (LAN) may require additional storage space
to support file sharing, file replication and storage for large files.
Traditionally this additional space is provided through file servers which have larger disk
capacity.
With the evolution of computer networks, the file server concept was extended through the use of
the storage area network (SAN).
SAN-enabled storage devices are attached to the network.
The software running over SAN devices allows direct access to these devices throughout
the network.
Later on, a class of storage devices emerged to be implemented as network attached
storage (NAS).
Advantages of network storage (particularly of SAN) are:
o Data reliability and reconstruction through replication.
o Better performance than file server.
o Compatibility with common file systems and operating systems.
o Best choice for backups.
Cloud storage is the next step in the evolution of network storage devices.
Instead of storing the data locally, the data can be stored on the cloud and accessed
through the web.
The user can have virtually unlimited storage space available at affordable rates.
There are various modes of data access in Cloud:
o Using web browser interfaces to move the files to and from the cloud storage.
o Through a mounted disk drive that appears local to the user’s computer.
o Through API calls to access the cloud storage.
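As a sketch of the API-based mode of access, the following Python snippet uses the boto3 library to move a file to and from Amazon S3; the bucket and file names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    # Upload a local file to cloud storage (bucket name is a placeholder).
    s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")
    # Download it back to the local machine.
    s3.download_file("my-example-bucket", "backups/report.pdf", "report_copy.pdf")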
There are a number of cloud storage providers which offer file storage, sharing and
synchronization, such as:
Carbonite
pCloud
Dropbox
ElephantDrive
These providers offer a certain volume of free storage as well as paid storage at low
prices.
Advantages:
o Scalability: The user can scale the storage capacity (up or down) according to
requirement.
o Various convenient costing models are available, from one-time payment to
monthly payment to pay-per-use.
o Reliability: The storage providers provide the assurance for data reliability
(through replication).
o The data can be accessed worldwide by using Internet.
o Various methods of data access are available (as discussed before).
Disadvantages:
o Performance: Because of the Internet based access, the cloud storage can never be
as fast as SAN or NAS based local storage.
o Security: Not all the users may be able to trust the cloud provider for the users’
data.
o Data orphans: The user has to trust the data deletion policies of the provider. The
files (on cloud storage) deleted by the user may not immediately (or ever) be
deleted from the cloud storage.
The term backup refers to the copying of (data and/or database) files to a secondary site
for preservation in case of device or software failures.
Backup is an important part of disaster recovery plan.
In case of a disaster, the data can be restored to the state of last backup.
A Cloud based backup system comprises procedures to send a copy of the data over a
proprietary or public network to a remote server hosted by the cloud service provider.
The provider charges the user according to the number of accesses, the data volume or the
number of users.
Cloud based backup or online backup is implemented through client software
installed on the user's computer. The software collects, compresses and sends the data to
the cloud backup on a scheduled basis.
Advantages:
o The data is backed up in encrypted form.
o Backup can be performed at the convenience of the user (daily, weekly, monthly).
o The user can easily retrieve the backup files from the cloud.
Disadvantages / Limitations:
o Due to security concerns, critical data backups are preferably stored on local
storage.
o Long-term storage of data in heavy volumes on the cloud may incur enormous
cost.
o Due to network cost, incremental backup is preferred; a sketch of the idea follows.
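The following Python sketch illustrates the client-side idea of incremental backup: only files whose modification time has changed since the last run are sent. The state file name and the upload callable are assumptions for illustration:

    import json
    import os

    STATE_FILE = "backup_state.json"  # remembers each file's last-seen mtime

    def load_state():
        try:
            with open(STATE_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def incremental_backup(root, upload):
        state = load_state()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                mtime = os.path.getmtime(path)
                if state.get(path) != mtime:  # new or changed since last run
                    upload(path)              # send only the changed file
                    state[path] = mtime
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)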
Cloud based block storage provides storage as a raw sequence of bits (blocks) on the cloud.
Amazon Elastic Block Store (EBS) is a highly available, scalable and reliable block
storage solution which supports volumes of up to 1 terabyte.
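A minimal boto3 sketch of provisioning a block storage volume and attaching it to a server is shown below; the region, instance ID and device name are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a 100 GB block storage volume.
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                            VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach it to a (placeholder) instance; it then appears as a raw block device.
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",
                      Device="/dev/sdf")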
Lesson No. 15
MISCELLANEOUS SERVICES OF CLOUD COMPUTING
Module No – 071: Identity as a Service (IDaaS):
Today, within most companies, users may have to log in to several application
servers (on-premises and/or cloud-based) to perform daily tasks.
The user has to remember multiple logins and passwords.
When a user leaves a company, the related logins and passwords must be deleted.
The identity management is a complex task and therefore provided as a service for cloud
consumers.
An example is single sign-on (SSO). SSO software is installed on an
authentication server.
Before connecting to application servers, the user connects with the authentication server
to obtain a secure ticket.
The authentication server maintains the user login security credentials required by
application servers.
When the user leaves the company, only the user's login on the authentication server
needs to be disabled to block the user's access to all the application servers.
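The ticket idea can be sketched in a few lines of Python. This is only an illustration of the concept (an HMAC-signed, expiring token shared between the authentication server and the application servers), not the protocol of any particular IDaaS product:

    import hashlib
    import hmac
    import secrets
    import time

    SECRET = secrets.token_bytes(32)  # shared by auth server and app servers

    def issue_ticket(username):
        expiry = str(int(time.time()) + 3600)        # valid for one hour
        payload = f"{username}|{expiry}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return f"{username}|{expiry}|{sig}"

    def verify_ticket(ticket):
        username, expiry, sig = ticket.split("|")
        payload = f"{username}|{expiry}".encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and time.time() < int(expiry)

Disabling the user on the authentication server simply stops new tickets from being issued, which blocks access to every application server at once.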
There are a few examples of IDaaS providers for on-premises and cloud applications
such as Ping IDaaS and PasswordBank IDaaS.
Collaboration is defined as the process in which two or more people work together to
achieve a goal.
Traditionally, the collaboration has been achieved through face to face meetings in
conference rooms.
Some team members had to travel (from near or far) to attend the meetings.
Those who could not personally arrive at the meeting had either of the following two
choices:
A phone call to a speakerphone placed at the conference table
Studying the minutes of the meeting
A solution that could reduce the requirements of personal meetings was required to save
time and effort and to increase the productivity from the collaborations.
The web based collaboration began with the web mail.
Users can compose, send, receive and read the emails by using the web browser and
Internet connection.
A single user can address multiple recipients in a single mail.
Instant messaging (IM) provides a real-time exchange of messages and replies (chat) by using
messaging software.
IM is another form of traditional collaboration. Current tools for IM allow file exchange
and audio/video calling.
Voice over Internet Protocol (VoIP) enables the users to make phone calls over the
Internet.
VoIP tools such as Skype provide a convenient way to perform conference calls by using
computers and mobile phones.
Sending and/or receiving fax traditionally required the fax machine and telephone
connection.
Similarly, phone calling has been dependent upon telephone infrastructure.
In modern days, many companies have started providing cloud based calling and cloud
based fax services.
These companies have all the calling/fax operations performed over the Cloud and
provisioned over the Internet.
Taking example of Google Voice Phone System: The account holder receives the
services of call answering and voice mail.
The user can even configure the service to forward the incoming phone calls to a cell
number.
Google delivers the voice messages left by the callers as audio messages as well as in the
form of text which are receivable anywhere through the Internet.
The cloud based fax service provided by various companies is provisioned as a separate
virtual number for each subscriber. This number corresponds to a virtual fax machine.
The faxes received over the virtual fax machine are delivered through email as PDF
attachments.
Similarly, to send a fax, a simple email (with PDF file) to virtual fax account will send
the fax to recipient/s.
As we have seen, data and files can be stored on Cloud storage.
It is also possible to edit the files (located on Cloud storage) shared among concurrent
users.
This provides another way of collaboration.
A number of service providers offer the editing of shared files such as text, spreadsheet
and presentation files. These include the famous providers:
o Dropbox
o Microsoft
o Google
Dropbox offers file sharing through public folders among the Dropbox users.
Microsoft allows editing of MS Word, Excel and PowerPoint files in the browser without
MS Office installed.
Simultaneous users can edit a shared document.
Google's Google Docs service offers free web-based access to a word processor,
spreadsheet and presentation programs to create, share, edit, print and download
documents stored on the Cloud.
Google Docs can be shared through simple email link.
Social media and streaming video contents provide yet another way for collaboration.
Cloud hosted social media such as Facebook and SalesForce.com’s Chatter tool are
available for collaboration among team members.
The team member can easily exchange updates, comments and reviews regarding
different tasks.
Files can be shared among the team members.
Photos and videos can be uploaded and shared to demonstrate a situation.
Live video streaming can also be broadcast if required.
YouTube offers a free, reliable and Web accessed cloud storage for video contents
worldwide.
Videos created for collaboration can be shared among team members and publicly as
well.
The collaborative videos may include technical training clips, discussions and/or site
coverage etc.
The viewers can discuss and upload written comments on the video clip.
Lesson No. 16
CLOUD DEPLOYMENT MODELS
Module No – 083: Public Cloud:
Public cloud is one of the deployment models of the Cloud through which IT resources
are publicly available and accessible through the public Internet.
Characteristics of Public Cloud according to NIST:
o The consumer is generally not aware of the location of the IT resources unless a
location restriction is imposed by either the provider or the consumer. Even then, it is
difficult for the consumer to verify on a map the location from which the IT
resources are being provisioned.
o The consumer's workload may be co-resident with the workloads of other consumers
(multi-tenancy), which may include rivals, adversaries and, in the worst case, the
attackers.
o The consumer has limited visibility of the software and procedures of the
provider. The consumer has to trust the provider for securing the consumer’s data
and fully disposing the deleted data.
o The consumer incurs a limited upfront cost for the provisioning of IT
resources as compared to setting up the IT infrastructure in-house.
o Thanks to the workload management, dynamic collaboration among cloud
providers and (generally) large setups, the public clouds can give the illusion of
unlimited resources and elasticity to the consumers.
o The provider is in a limited legal Service Level Agreement (SLA) with the
consumer. The SLA covers the minimum performance assurance/s by the
provider and the penalty in case of violation of the assurance/s.
o However, in case of outsourced Private Cloud, the consumer organization may
have some knowledge of the cluster location and network segment serving the
Private Cloud at the provider’s end.
o Consumer workload is vulnerable to cons of multi-tenancy from the insider
malicious colleagues.
o Modest cost for an outsourced private Cloud (excluding infrastructure cost):
negotiation with the provider, upgrades to network equipment, updating of
legacy software to work on the Cloud, training of staff, etc.
o Significant cost for an onsite private Cloud (including the data center and
infrastructure cost): updating of legacy software to work on the Cloud, training of
staff, etc.
o Resource limitation in on-site private Cloud but extendible resources available in
case of outsourced private Cloud.
o Extensive resources are available for outsourced Community Cloud just like
outsourced Private Cloud.
o Due to the number of members, there are a number of security perimeters (hence
complex cryptography) and dedicated communication lines in a Community
Cloud. This offers better security against external threats.
Lesson No. 17
SERVICE ORIENTED ARCHITECTURE
Module No – 087: Web Applications & Multitenant Technology:
Web Applications: These are the applications which use web technologies (URL, HTTP,
HTML, XML) and generally use web browser based interface.
Web Services are independent units of software (code) which allow network based
machine-to-machine interaction.
o Have no user interface.
o Process data between the computers through API calls.
o Examples: SOAP and REST based web services
Service oriented architecture (SOA) is usually a collection of services (web services).
These services communicate with each other to exchange data and processing.
Two or more services may coordinate an activity.
Examples of web services:
o Return the weather conditions for a specific zip code
o Return real-time traffic conditions for a road or highway
o Return a stock price for a particular company
Web services are not web pages.
To use a web service (which resides on a remote server), a program exchanges messages
with the service.
The user program sends parameters (through API call) such as zip code to the web
service and waits for the reply.
Web services are treated as a black box by the programmer.
Web services are interoperable, which means that programs written in languages
different from that of the web service can call its API functions; the example below
illustrates this.
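For instance, a Python program can call a REST-style weather service with the requests library; the URL and parameter names below are hypothetical:

    import requests

    # Hypothetical endpoint; real services differ in URL and parameters.
    response = requests.get("https://api.example.com/weather",
                            params={"zip": "10001"},  # input sent to the service
                            timeout=10)
    response.raise_for_status()
    print(response.json())  # machine-readable reply, not a web page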
Web Services: The core technologies are:
o Web Service Description Language (WSDL): A markup language to define the
API of the web service including the functions and the input/output messages
associated with each function.
o Message input/output are in the form of XML and defined by XML schema.
o The message formatting is according to a common messaging format defined by
the Simple Object Access Protocol (SOAP) or through Representational State Transfer
(REST).
o Universal Description, Discovery and Integration (UDDI) is a standard which
regulates the service registries in which WSDL definitions can be published so
that they can be discovered by the users.
Cloud Service & Web Services:
o These two are not alike.
o Can be used independent of each other in a SOA.
o Cloud services are SaaS, PaaS & IaaS
o Web services are API Calls.
o Web services can be the front door for the cloud services running at the backend.
o Cloud services are often provided over web services.
o For example, AWS-based cloud services (e.g., a data processing service deployed
by a provider) can be accessed over the network through an API developed (by the
same provider) using Amazon API Gateway.
Lesson No. 18
CLOUD SECURITY THREATS
Module No – 089:
This module is about the prominent security threats to Cloud computing.
The following are significant threats to Cloud security:
o Traffic Eavesdropping: The messages sent to the Cloud are covertly intercepted,
which compromises the confidentiality of the message contents. Can go undetected for
extended periods of time.
o Malicious Intermediary: The messages are illegally intercepted and then the
contents are updated before the message is relayed towards the cloud.
The message may be updated with malicious contents which reach the VM hosting
the cloud service undetected.
o Denial of Service (DoS): The purpose is to overload the IT resources to the stage
where they cannot work properly. Can be launched in the following ways:
The workload on a cloud service is artificially increased through fake messages or
repeated communication requests.
The network is overloaded with traffic to cripple the performance and increase
the response time.
Multiple cloud service requests are sent, each designed to consume excessive
memory and processing resources.
o Insufficient Authorization based attack: It is a situation in which a malicious user
gets direct access to IT resources which are supposed to be accessed by trusted
users only. Happens when overly broad access is provided to the IT resources and/or
when access is granted erroneously.
o Weak authentication based attacks: Happen when weak passwords or shared
(login) accounts are used to protect the IT resources. The impact of attacks due to
insufficient authorization and weak authentication depends upon the range of IT
resources and the level of access to those IT resources that is compromised.
o Virtualization Attack: Based upon the administrative privileges provided to the
Cloud consumers and multi-tenancy, it is possible to compromise the underlying
physical hardware. It is also possible that security flaws arise due to VM
sprawl (a lack of security patches on the OS installed on a VM). Another possibility is
the installation of VM-aware malware to exploit the security flaws of the hypervisor.
The following are possible ways in which the physical server may be compromised:
By an imposter in disguise of a legitimate consumer. The attacker cracks
the (weak) password of a consumer.
By a trusted but malicious consumer.
In either case, the vulnerabilities in the virtualization platform are
exploited over a single VM to take control of the physical server hosting
the infected VM. This makes all the VMs hosted on the compromised server
vulnerable.
A more severe scenario arises when the infected VM is migrated to other
server for load balancing. In this case, a number of servers may get
compromised.
o Overlapping Trust Boundaries: Moving of consumer data to Cloud means that the
provider now shares (with the consumer) the responsibilities of availability,
confidentiality and integrity of data. The consumer thus extends the trust
boundary to include the cloud provider. This is prone to vulnerabilities. When
multiple consumers of a cloud share an IT resource, the trust boundaries overlap.
The provider may not be able to provide security features that satisfy the
security requirements of all the consumers of a shared IT resource on a Cloud. More
complex scenarios arise when the consumer data is replicated and stored on
multiple sites.
o Another complexity arises when the Cloud provider hands over the business to a
new owner. The data integrity becomes threatened in both cases.
o Flawed Implementation: The implementation of Cloud services may have
configuration-related flaws that result in unexpected events.
In particular, the security and operational weaknesses in the Cloud provider's
software/hardware can be targeted by attackers to put the integrity,
confidentiality and/or availability of the provider's IT resources at stake. Equally
important, implementation flaws of Cloud services may result in the
crash of a VM and thus affect all the other services on that VM as well. For
example, suppose service A has an implementation flaw that crashes the hosting VM
when a certain message is received. This will also affect services B and C and can
be exploited by an attacker.
o Contracts: As an additional consideration, the SLA offered by the provider should
be carefully examined to clarify the liabilities accepted by the provider and the
security policy implemented by the provider. This helps in determining the
following:
If the consumer deploys its own solution over the Cloud resources, then it
is a situation of the consumer's assets being deployed over the provider's assets.
How will the blame be determined when a security breach or a runtime
failure occurs?
The consumer may wish to apply its own security policies while the cloud
provider keeps the administrative rights to the IT infrastructure. How
will this disparity be overcome?
o Risk Management: The cloud consumers should perform a cyclic process of risk
management to assess the potential threats and challenges related to Cloud
adoption. This should be a part of the risk management strategy. It is a three-stage
process.
Lesson No. 19
TRUST ISSUES IN CLOUD
Module No – 094: Brief overview (more in Lesson 39):
Lesson No. 20
MECHANISMS RELATED TO CLOUD INFRASTRUCTURE
Module No – 095: Logical Network Perimeter:
It establishes the boundary of a virtual network to hold within it and isolate a set of related
cloud IT resources that may be physically distributed. Implemented as a virtual
environment, it has the following components:
o Virtual Firewall to filter the traffic of isolated network to and from Internet.
o Virtual Network consisting of virtual nodes and virtual links.
Virtual Server: Virtual servers or Virtual Machines (VMs) emulate the physical servers.
o Each virtual server can host numerous IT resources, cloud-based solutions and
other cloud computing mechanisms. Depending upon the capacity, a physical
server may host multiple virtual servers.
In order to rapidly provision VMs with installed and preconfigured software such as
an OS, programming platforms etc., the virtual servers are cloned from templates.
A template is a master copy of virtual server. It contains the configuration, installed
software, any configured virtual devices and disk contents.
A consumer can:
o Connect to a self-service portal of Cloud provider.
o Choose a suitable template.
o Instantiate a virtual server through the administrative portal, which works with the
help of the virtual infrastructure manager (VIM) module.
o Customize the virtual server through the usage and administration portal.
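On AWS, for example, machine images play the role of templates. A minimal boto3 sketch of instantiating a virtual server from an image is shown below; the image ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    # The AMI (template) bundles the OS, installed software and configuration.
    result = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                               InstanceType="t2.micro",
                               MinCount=1, MaxCount=1)
    print(result["Instances"][0]["InstanceId"])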
The Cloud usage monitor is software used to collect and process data related to the usage of
Cloud-based IT resources.
o The reporting and analysis requirements of the Cloud usage module determine
the scope and volume of the data collected/extracted.
There are a few generic types or formats of Cloud usage monitors:
o Monitoring Agent: It transparently monitors and analyzes the dataflow over
communication paths. It measures the network traffic and messages.
o Resource Agent: Collects the resource usage data related to certain events such as
initiating, suspending, resuming and vertical scaling. It interacts with the Cloud
resource management module.
o Polling Agent: Collects the Cloud service usage data after periodic polling to IT
resources. For example the uptime/downtime of a Cloud service. Records the
updated status of the resource.
Resource replication is a technique by which multiple copies of IT resources are created to increase the
availability and productivity of the IT resources. Virtualization technology is used for
the replication of Cloud IT resources.
For example, in the case of a physical server failure, and in order to overcome the resultant
downtime of a Cloud service deployed over a VM hosted by that physical server, the
entire VM along with the software (the Cloud service implementation) is replicated to
another server.
Another example is the horizontal scaling of IT resources, such as increasing or
decreasing the number of Cloud service instances by replicating the VM hosting the service instance,
in accordance with the workload.
The resource replication process yields the IT resources which are monitored under the
Cloud usage monitor mechanism.
Resource replication is also essential for pay-as-you-go type of usage & billing.
This mechanism (the ready-made environment) represents the provisioning of preconfigured PaaS instances with ready-to-use
and customizable programming environments, providing dependable PaaS
instances.
Lesson No. 21
SERVICE AGREEMENTS (SAs)
Module No – 101:
NIST identifies that the consumer and provider are under a legal agreement or terms of
service.
The agreement has two parts:
o Service Agreement
o Service Level Agreement (SLA)
Service agreement contains the legal terms of contract.
The SLA contains the technical performance promises by the provider and the remedies
for performance failures.
Overall, these are called Service Agreements by NIST.
The following promises are made to the consumer by the provider:
o Availability:
Usually 99.5% to 100% availability is assured.
The assurance is for the time intervals of a billing cycle (e.g., 15 minutes, 1
hour, 1 year etc.) for which the service status is promised to be "up".
But this has to be read carefully: if, for example, the time period of assurance is 15
minutes and the service is "down" for 14 minutes, it legally
means that the service was not "down" for the whole interval.
Typically, several failures in subsystems are required to completely
“down” a service for the whole period of billing.
The provider may adjust the availability promises on case to case basis.
o Remedies for Failure to Perform:
In case of violation of the promise of availability (during a time period) by
the provider, the customer will be compensated in terms of service credit
for future use of Cloud service.
A refund is usually not given.
Consumer is responsible to monitor the availability of service and claim
for compensation.
The following situations result in termination of Cloud IT resources usage
for a consumer:
Voluntarily by the consumer
Terminated by the provider for violating the provider's rules of
service and/or for non-payment.
o The providers usually take no responsibility for preserving
the data in the latter case, while in the former case, the data
is preserved for a few days.
Lesson No. 22
CLOUD HOSTING DATA CENTER DESIGN
Module No – 102:
Key terms:
o CRAC: Computer Room Air Conditioning
o Hot aisle
o Cold aisle
o Server cabinets (Racks)
o Hollow floor
o Perforated tiles
The cloud hosting data center has a layered architecture for Internet access.
The servers are physically connected to layer 2 switches. There is a top-of-rack (TOR)
switch in each rack. One server is connected to only one TOR switch.
The TOR switches are connected to aggregate switches (AGS).
Data centers consume huge amounts of electricity, as much as a small town in the USA.
A large data center can host hundreds of thousands of physical servers.
It is more costly to set up and run a small data center in terms of unit costs (per server, per
MB of storage, per GHz, per unit of network bandwidth) and operational costs as compared to
larger data centers.
Google has 900,000 physical servers around the world in its data centers. Together these
servers consume 260 million watts of power, which accounts for 0.01% of global energy
usage.
Facebook data center servers process 2.4 billion pieces of content and 750TB of data
every day.
Module No – 103: Data Center Interconnection Networks
The network connecting the data center servers is called data center interconnection
network.
It is a core design of data center.
The network design must support the following features:
o Low latency
o High bandwidth
o Low cost
o Message-passing interface (MPI) communication support
o Fault tolerance
o Must satisfy both point-to-point and collective communication patterns among all
server nodes.
Application Traffic Support: The data center interconnection network must support the
MPI communication and high bandwidth.
o Example: Distributed file access, Map and Reduce functions etc.
o Some servers can be configured to be masters and others slaves.
Network Expandability: The interconnection network must be expandable.
o Should support load balancing and data movement.
No bottlenecks
Can be expanded in the unit of data center container which contains
hundreds of servers and is a building block of large data centers.
Fault Tolerance and Graceful Degradation: Can be implemented through:
o Replication in software and hardware resources
o Redundant links among any two servers
o No single point of failure or critical links
o Two layered design should be used (a network layer close to servers and the upper
layer or backbone) to support modular (container) based expandable design.
Modular Data Center in Shipping Containers: Modern data centers can be
collections of container-based clusters that can be shipped from one location to another
by truck.
It is an alternative to warehouse based data center.
For example: The SGI ICE Cube container can house 46,080 processing cores or 30 PB
of storage per container.
o Such a design:
Is more energy efficient in terms of cooling cost as compared to
warehouse based design.
Is more mobile and easily transportable.
Is ready to be deployed since it is assembled with servers, networking,
power supplies and cooling mechanisms. It is then tested and shipped.
Helps in dynamic scalability of data center.
Makes the relocation of a data center relatively easier than with the warehouse-based
design.
Inter-Module Connection Networking requires an extra layer over modular
containers to allow dynamic scaling and interconnection.
Modern day data centers handle ever larger volumes of data and process massive
numbers of user requests from around the globe.
In order to maintain user satisfaction and performance, managing a data center has
become a set of complex tasks. These include (but are not limited to):
Making common users happy by providing quality services.
Ensuring uninterrupted and high availability of (Cloud) services.
Managing multiple modules concurrently, such as processing, networking, security and
maintenance.
Managing and planning for the scalability of data center.
Ensuring the reliability of the virtual infrastructure through fault tolerance and recovery
mechanisms to minimize downtime and data loss.
Managing and lowering the operational costs and transferring the cost benefit to Cloud
providers and consumers.
Security enforcement and data protection
Implementation of green information technology to lower energy
consumption.
Lesson No. 23
CLOUD ARCHITECTURE
Module No – 106: Generic Cloud Architecture Considerations:
The IT resources and data are prone to disasters (natural and/or human made) which
damage them partially or fully and thus may crash the whole computing system of an
organization.
Key terms:
o Failover: The process through which a system transfers control (usually
automatically) to an alternate deployment upon failure of the primary deployment.
o Failback: The process of restoring the system from the alternate deployment back
to the primary deployment and restoring the original state.
o The use of virtualization can implement failover and reduce
failback time.
o Compare this with (for example) a data disaster for data stored on magnetic tapes,
where days are required for restoration/recovery.
o The redundant deployment of software solutions, data and IT resources is quite
easy by using virtualization.
o One deployment is considered as primary, while other deployment/s are kept as
backup.
o The backup deployment is either updated periodically or the image/snapshot of
the primary deployment (e.g., VMs) can be saved.
o Upon failure, the backup deployment takes over.
o The primary deployment is then restored from the most recent snapshot.
o Virtualization has become a core part of the disaster recovery plans of major
organizations over the last decade.
o Virtualization even allows the testing of disaster recovery plan through emulation
and without disturbing the production/primary deployment.
o Although the failed physical servers have to be repurchased/repaired,
virtualization lowers the additional costs and time related to failback.
o The organizations should mark the critical applications and data and use
replication of data in virtualized environments to support effective disaster
recovery.
o The heterogeneity in hardware and/or hypervisor makes it challenging to
dynamically include more hardware/virtualized IT resources.
o The open virtualization format (OVF) describes an open, secure, efficient,
portable and extensible format for the packaging and distribution of VMs and the
software to be deployed over VMs.
o OVF allows hypervisor, guest OS and hardware platform independent packaging
of VMs and software.
o Interoperability should be provided for cross-hypervisor and cross-platform (Intel
& AMD) live migration of VMs.
Challenge 6: Software Licensing and Reputation Sharing:
o Since the license model of commercial software is not suitable for utility
computing, the providers have to rely upon open-source software and/or bulk
usage licenses.
o If the reputation of a provider is affected (due to consumers’ malicious behavior),
then there is no service to safe-guard the provider’s reputation.
Google App Engine (GAE): It is a popular platform for developing Cloud applications.
o Based upon technologies:
Google File System (GFS): Stores large volumes of data
MapReduce: Used in parallel job execution on massive data
Chubby (Distributed applications’ locking)
BigTable (Storage service to access structured data)
o Consumers are allowed to develop applications in popular languages such as Java,
PHP, Go and Python. The following are components of GAE:
Datastore
Application runtime environment (for web applications)
Software Development Kit (SDK) (for local application development)
Administration console (management of user application development
cycles)
Web service infrastructure (interfaces for flexible use of storage and
networks resources)
o Well known applications of GAE are Google Search Engine, Google Docs,
Google Earth, and Gmail.
o Consumers can create Cloud applications by using GAE which run on Google
data centers.
Amazon Web Services (AWS):
o Amazon provides the SOAP web services and IaaS to the consumers/developers
to create and host Cloud services.
o Amazon Elastic Computing Cloud (EC2) is a web service to provide the VMs for
hosting Cloud applications.
o Simple Storage Service (S3) provides the object-oriented storage service.
o Elastic Block Service (EBS) provides the block storage interface.
o Simple Queue Service (SQS) provides inter process message passing.
o Amazon DevPay service can be used for online billing and account management
for the service providers to sell the applications developed and/or hosted on AWS.
Lesson No. 24
SPECIALIZED CLOUD MECHANISMS
Module No – 112: Automated Scaling Listener (ASL)
It is the software module (service agent) which monitors and tracks the communication
between a Cloud service and the service consumer for dynamic scaling purposes.
o Can indicate the need for scaling to the cloud consumer.
o Indicates to the cloud manager to scale in/out (if configured for auto-scaling by the
consumer).
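A minimal sketch of the listener's logic is shown below; the capacity threshold and the VIM's scale_out/scale_in calls are assumptions for illustration:

    REQUESTS_PER_VM = 100  # assumed capacity of one service instance

    class AutoScalingListener:
        """Counts requests per interval and signals the VIM (sketch only)."""
        def __init__(self, vim):
            self.vim = vim
            self.request_count = 0

        def on_request(self):
            self.request_count += 1   # called for each intercepted request

        def evaluate(self, current_vms):
            # Ceiling division: how many instances the current load needs.
            needed = max(1, -(-self.request_count // REQUESTS_PER_VM))
            if needed > current_vms:
                self.vim.scale_out(needed - current_vms)  # hypothetical VIM API
            elif needed < current_vms:
                self.vim.scale_in(current_vms - needed)   # hypothetical VIM API
            self.request_count = 0    # reset for the next monitoring interval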
The startup setup with one consumer, two service instances and two ASL
modules.
The ASL indicates to the Virtual Infrastructure Manager (VIM) the increased load and the lack of
resources for the VM hosting Service 2 on the current host/server.
The VIM initiates the migration of the VM hosting Service 2 to a new host for resource availability.
The VM hosting Service 2 is migrated to the new host with more resources.
The number of service consumers of Service 2 has decreased. The ASL indicates this to the VIM.
The VIM prepares for the migration of the VM hosting Service 2, for server consolidation.
Load Balancer: It is the service agent which distributes the workload among multiple
processing resources, such as multiple service instances. The workload is distributed on the
basis of:
o Processing capacity of the IT resource
o Workload prioritization
o Content-Aware distribution
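A capacity-based distribution can be sketched as a weighted round-robin in Python; the instance names and weights are invented for illustration:

    import itertools

    # Each instance is weighted by its assumed processing capacity.
    instances = [("vm-1", 4), ("vm-2", 2), ("vm-3", 1)]

    # An instance appears in the rotation once per unit of capacity.
    schedule = itertools.cycle(
        [name for name, weight in instances for _ in range(weight)])

    def route(request):
        target = next(schedule)
        print(f"routing {request} -> {target}")

    for r in range(7):
        route(f"req-{r}")   # vm-1 gets 4 of every 7 requests, and so on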
SLA Monitor: Works by pinging (for example) a service instance to record the
"down" status along with the time.
o The statistics are used to evaluate the extent of SLA violation.
o Uses a polling agent (studied before).
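The polling idea can be sketched as follows; the endpoint URL, polling interval and sample count are placeholders:

    import time
    import urllib.request

    def measure_availability(url, interval=60, samples=5):
        """Polls a service endpoint and returns the observed 'up' ratio."""
        up = 0
        for _ in range(samples):
            try:
                urllib.request.urlopen(url, timeout=5)
                up += 1
            except OSError:
                pass              # recorded as a "down" observation
            time.sleep(interval)
        return up / samples       # compare against the SLA promise, e.g. 0.995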
Pay-per-use Monitor: It is based upon a monitoring agent (studied before).
o It collects the resource usage by intercepting the messages sent to a Cloud service
by the consumer.
o Collected data (such as transmitted data volume, bandwidth consumption etc.) is
used for billing purpose.
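A pay-per-use monitor can be sketched as a pass-through meter; the tariff below is an assumed value:

    class PayPerUseMonitor:
        """Intercepts service messages and tallies billable volume (sketch)."""
        def __init__(self, rate_per_gb=0.05):   # assumed tariff
            self.rate_per_gb = rate_per_gb
            self.bytes_seen = 0

        def intercept(self, message: bytes):
            self.bytes_seen += len(message)     # meter the transmitted volume
            return message                      # pass through to the service

        def invoice(self):
            gb = self.bytes_seen / (1024 ** 3)
            return round(gb * self.rate_per_gb, 2)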
Module No – 117:
Multi-Device Broker: This mechanism is used to transform the messages (received from
the heterogeneous devices of Cloud consumers) into a standard format before conveying them
to the Cloud service.
o The response messages from the Cloud service are intercepted and transformed back
to the device-specific format before being conveyed to the devices through the
multi-device broker mechanism.
State Management Database: It is a storage device used to temporarily store the state data of
software programs.
o State data can be (for example) the configuration and number of VMs being
employed to support a user subscription to a PaaS instance.
o In this way, the programs do not use the RAM for state-caching purposes and thus
the amount of memory consumed is lowered.
o The services can then be in a “stateless” condition.
o For example, a PaaS instance (ready-made environment) requires three VMs. If the
user pauses activity, the state data is saved in the state management database and the
underlying infrastructure is scaled in to a single VM.
o When the user resumes the activity, the state is restored by scaling out on the
basis of data retrieved from state management database.
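The save-and-restore cycle can be sketched with an ordinary database standing in for the state management database; the table layout and the example state data are assumptions:

    import json
    import sqlite3

    db = sqlite3.connect("state_mgmt.db")  # stand-in for the state database
    db.execute("CREATE TABLE IF NOT EXISTS state (user TEXT PRIMARY KEY, data TEXT)")

    def pause(user, env_state):
        """Persist state so idle VMs can be released (scale in)."""
        db.execute("REPLACE INTO state VALUES (?, ?)", (user, json.dumps(env_state)))
        db.commit()

    def resume(user):
        """Retrieve state so the environment can be scaled back out."""
        row = db.execute("SELECT data FROM state WHERE user = ?", (user,)).fetchone()
        return json.loads(row[0]) if row else None

    pause("alice", {"vms": 3, "runtime": "python3.11"})
    print(resume("alice"))  # {'vms': 3, 'runtime': 'python3.11'}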
Lesson No. 25
CLOUD MANAGEMENT
Module No – 118: Remote Administration System
It is a Cloud mechanism which provides the APIs and tools for providers to develop
and run online portals.
These portals also provide some administrative controls to the Cloud consumers.
Usage and Administration Portal:
o Management controlling of Cloud IT resources
o IT resources usage reports
Self-Service Portal:
o The consumer can look at and choose various Cloud services
o The chosen services/package is submitted to Cloud provider for automated
provisioning
The remote administration console can be used for:
o Configuring and setting up cloud services
o Provisioning and releasing IT resources for on-demand usage
o Monitoring cloud service status, usage and performance
o QoS and SLA fulfillment monitoring
o IT-resource leasing cost and usage fee management
o Managing user accounts, security credentials, authorization and access control
o Capacity planning
If allowed, a Cloud consumer can create its own front-end application using API calls of
remote administration system.
The resource management system utilizes the virtual infrastructure manager (VIM) for creating and managing the virtual IT
resources.
Typical tasks include:
o Managing the templates used to initialize the VMs
o Allocating and releasing the virtual IT resources
o Starting, pausing, resuming and termination of virtual IT resources in response to
allocation/release of these resources
o Coordination of IT resources for resource replication, load balancer and failover
system
o Implementation of usage and security policies for a Cloud service
o Monitoring the operational conditions of IT resources
These tasks can be accessed by the cloud resource administrators (personnel) employed
by the cloud provider or cloud consumer.
The provider (and/or the provider's administrative staff) can access resource
management directly through the native VIM console.
The consumer (and/or the consumer's administrative staff) uses the remote administration
system (created by the provider) based upon API calls to the resource management
system.
The SLA management system provides features for management and monitoring of SLA.
Uses a monitoring agent to collect the SLA data on the basis of predefined metrics.
The SLA monitoring agent periodically pings the service to record any "down" time that
occurs.
The collected data is made available to the usage and administrative portals so that an
external and/or internal administrator can access the data for querying and reporting
purposes.
The SLA metrics monitored are in accordance with the SLA agreement.
The billing management system collects and processes the data related to service usage.
This data is used to generate consumer invoices and for the provider's accounting purposes.
The pay-as-you-go type of billing specifically requires the usage data.
The billing management system can cater for different pricing models (pay-per-use, flat rate, per
allocation etc.) as well as custom pricing models; a sketch follows.
Billing arrangements can be pre-usage or post-usage.
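The difference between these pricing models can be made concrete with a small sketch; all rates are invented for illustration:

    def bill(usage_hours, model="pay_per_use"):
        """Charge under different pricing models (illustrative rates only)."""
        if model == "pay_per_use":
            return usage_hours * 0.10     # charged only for hours actually used
        if model == "flat_rate":
            return 50.00                  # fixed fee regardless of usage
        if model == "per_allocation":
            return 24 * 30 * 0.08         # charged for the full allocated month
        raise ValueError(model)

    print(bill(120))                # 12.0
    print(bill(120, "flat_rate"))   # 50.0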
Lesson No. 26
FUNDAMENTAL CLOUD ARCHITECTURES
Module No – 121: Resource Pooling Architecture
It is based upon using one or more resource pools in which identical IT resources are
grouped and maintained automatically by a system which also ensures that the resource
pools remain synchronized.
A few examples of resources pools are as follows:
o Physical server pools consisting of (ready to use) networked servers with installed
OS and other tools.
o VM (virtual server) pool/s configured by using one or more templates selected by
the consumer during provisioning.
o Cloud storage pools consisting of file/block based storage structures.
o Network pools consist of different (preconfigured) network connecting devices
that are created for redundant connectivity, load balancing and link aggregation.
o CPU pools are ready to be allocated to VMs in multiples of a single core.
Dedicated pools can be created for each type of IT resources.
Individual resource pools can be grouped into a larger pool.
A resource pool can be divided into sibling pools as well as nested pools.
Sibling pools are independent and isolated from each other, and may have
different types of IT resources.
Nested pools are drawn from a bigger pool and consist of the same types
of IT resources as are present in the parent pool.
Resource pools created for different consumers are isolated from each other.
The additional mechanisms associated with resource pooling are:
o Audit monitor: Tracks the credentials of consumers when they log in for IT
resource usage.
o Cloud Usage Monitor
o Hypervisor
o Logical Network Perimeter
o Pay-Per-Use Monitor
o Remote Administration System
o Resource Management System
o Resource Replication
Module No – 124: Elastic Disk Provisioning Architecture
The Cloud costing model for disk storage may charge on the basis of the total volume of allocated
storage space instead of the total space used.
The elastic disk provisioning architecture implements billing based on dynamic storage
provisioning: the user is charged only for the storage actually consumed.
The technique of thin-provisioning of storage is used.
Thin-provisioning allocates the storage space dynamically for the VM’s storage.
Requires some extra overhead when more storage space is to be allocated.
The thin-provisioning software is required to be installed on VMs to coordinate the thin-
provisioning process with the hypervisor.
Requires the implementation of:
o Cloud usage monitor
o Resource replication module (for converting thin-provisioning into thick or static
disk storage)
o Pay-per use monitor tracks and reports the granular billing related to disk usage.
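The billing effect of thin provisioning can be seen in a two-line comparison; the sizes and rate are assumed values:

    ALLOCATED_GB = 500   # thin-provisioned virtual disk presented to the VM
    consumed_gb = 80     # blocks actually written so far
    RATE = 0.02          # assumed $ per GB-month

    static_bill = ALLOCATED_GB * RATE   # charged on allocation: 10.0
    elastic_bill = consumed_gb * RATE   # charged on consumption: 1.6
    print(static_bill, elastic_bill)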
In order to avoid data loss and service unavailability due to disk failure, redundant
storage is applied.
Additionally, in the case of a network failure incident, disruptions in Cloud services can be avoided
through redundant storage.
This is part of failover system (active-passive).
The primary and secondary storage are synchronized so that in case of a disaster, the
secondary storage can be activated.
A storage device gateway (part of failover system) diverts the Cloud consumers’ requests
to secondary storage device whenever the primary storage device fails.
The primary and secondary storage locations may be geographically apart (for disaster
recovery) with a (possibly leased) network connection among the two sites.
Lesson No. 27
ADVANCED CLOUD ARCHITECTURES
Module No – 125: Hypervisor Clustering Architecture:
It balances physical server utilization through VM migration within the hypervisor cluster
architecture.
This avoids over- and under-utilization of physical servers.
By using specialized Cloud architectures, the non-disruptive service relocation
architecture can be implemented.
Either duplication of the cloud service or service migration is used to provide
non-disruptiveness in service.
In the service-duplication implementation, the service run-time is temporarily replicated
to another location and then synchronized with the primary deployment.
The consumers' requests are diverted to the temporary deployment and then the primary
deployment is made unavailable for maintenance.
In case of migration of the service to another location (for example, on the indication of
the automated scaling listener), the relocation is permanent and no temporary duplicate
copy is made.
The migration procedure for the service-hosting VM depends upon whether the VM's
storage is hosted on shared storage or on the local storage of the physical server.
In the latter (local storage) case, a replicated copy of the to-be-migrated VM is made on
the destination server and powered on; the consumers' requests are redirected to the
duplicated instance through the load balancer module, after which the original VM is
deactivated.
In the former (shared storage) case, the replication procedure above is not required
because the destination server can access the same shared storage.
The following are the important modules for the implementation:
o Automated Scaling Listener
o Load balancer
o Hypervisors
o VMs
o Cloud storage device
The additional modules are:
o Cloud usage monitor
o Pay per use monitor
o Resource replication
o SLA management system
o SLA monitor
The failure of a physical server results in the unavailability of the VMs hosted on that
server.
The services deployed over the unavailable VMs are obviously disrupted.
The Zero downtime architecture implements a failover system through which the VMs
(from the failed physical server) are dynamically shifted to another physical server
without any interruption.
The VMs are required to be stored on a shared storage.
The additional modules required may include:
o Cloud usage monitor
o Logical network perimeter
o Resource cluster group (containing active-active clusters to assure high
availability of IT-resources for VM)
o Resource replication
Cloud Balancing Architecture: It is the implementation of failover system across
multiple clouds.
o It improves/increases the following features:
Performance and scalability of IT-resources
Availability and reliability of IT resources
Load balancing
o Requires an automated scaling listener and failover system.
o The automated scaling listener redirects the requests of service consumers
towards multi-cloud redundant implementations of service instances based on on-
going scaling and performance requirements.
o The failover system (detects any failure/s and) coordinates with automated scaling
listener with information regarding the extent of failure so that the automated
scaling listener can adjust the relaying of consumers’ requests accordingly.
A resource constraint situation may also arise for IT resources not configured for
sharing, such as nested and/or sibling pools, when one pool borrows resources from
another pool. The lending pool may later face resource constraints for its own consumers
if the borrowed resources are not returned soon enough.
If each consumer can be assured the availability of a minimum volume of:
o A single IT resource
o A portion of an IT resource
o Multiple IT resources
then this implements a resource reservation architecture.
In the implementation for resource pools, the reservation system must ensure that each
pool maintains a certain volume of resources in unborrowable form.
The resource management system mechanism (studied earlier) can be utilized for
resource reservation.
The volume of resources in a pool, or the capacity of a single IT resource, that exceeds
the reservation threshold can be shared among the consumers.
The resource management system manages the borrowing of IT resources across multiple
resource pools.
The additional modules that can be implemented are (a minimal lending sketch follows
this list):
o Cloud usage monitor
o Logical network perimeter (for the resource borrowing boundary)
o Resource replication (in case new IT resources are to be generated)
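A minimal sketch of the reservation idea, assuming a pool that lends only the capacity
above its reserved minimum. All names and numbers are hypothetical.

    // Keeps a reserved, unborrowable minimum for the pool's own consumers
    // and lends only the excess to other pools.
    public class ReservablePool {
        private final int reservedCores; // guaranteed minimum, never lent out
        private int availableCores;      // current free capacity

        public ReservablePool(int totalCores, int reservedCores) {
            this.availableCores = totalCores;
            this.reservedCores = reservedCores;
        }

        // Another pool may borrow only capacity above the reservation threshold.
        public synchronized int lend(int requested) {
            int lendable = Math.max(0, availableCores - reservedCores);
            int granted = Math.min(requested, lendable);
            availableCores -= granted;
            return granted;
        }

        public synchronized void returnBorrowed(int cores) {
            availableCores += cores;
        }

        public static void main(String[] args) {
            ReservablePool pool = new ReservablePool(32, 8); // 8 cores never lent
            System.out.println("lent: " + pool.lend(30));    // grants only 24
        }
    }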
It may be possible to detect and counter some failures in the Cloud environment if there
is an automated system with failure diagnosis and solution-selection intelligence.
This architecture establishes a resilient watchdog module containing the definitions of
pre-marked events and the runtime logic to select the best (predefined) routine to cope
with those events.
The resilient module generates alarms/reports for events which are not predefined.
The resilient watchdog module performs core functions that include:
o Monitoring
o Identifying an event
o Executing the reactive routine/s
o Reporting
This architecture allows the implementation of an automated recovery policy consisting
of predefined steps and may involve actions such as:
o Running a script
o Sending a message
o Restarting services
Can be integrated into a failover system along with SLA management system.
The provisioning of Cloud IT resources can be automated to save time, reduce
human-related errors and increase throughput.
For example, a consumer can initiate the automated provisioning of 50 VMs
simultaneously instead of waiting for one VM at a time.
The rapid provisioning architecture has a (centralized) control module complemented by:
o Server templates
o Server images (for bare-metal provisioning)
o Application and PaaS packages (software, applications and environments)
o OS and application baselines (configuration templates applied after installation of
the OS and applications)
o Customized scripts and management modules for smooth procedures
The following steps can be visualized during the automated rapid provisioning:
o A consumer chooses a VM package through the self-service portal and submits
the provisioning request.
o The centralized provisioning module selects an available VM and initializes it
through a suitable template.
o Upon initialization, the baseline templates are applied.
o The VM is now ready to use (a minimal sketch of this sequence follows).
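A compact sketch of this provisioning sequence follows. The class, pool contents and
method names are assumptions for illustration, not a real provisioning API.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class RapidProvisioner {
        // Pool of pre-created, idle VM identifiers waiting to be initialized.
        private final Deque<String> availableVms = new ArrayDeque<>();

        public RapidProvisioner() {
            availableVms.push("vm-001");
            availableVms.push("vm-002");
        }

        public String provision(String vmPackage) {
            String vm = availableVms.pop(); // select an available VM
            System.out.println("Applying server template for " + vmPackage + " to " + vm);
            System.out.println("Applying OS/application baselines to " + vm);
            System.out.println(vm + " powered on and ready to use");
            return vm;
        }

        public static void main(String[] args) {
            RapidProvisioner provisioner = new RapidProvisioner();
            // Fifty such calls could run concurrently instead of one VM at a time.
            provisioner.provision("small-linux-web-server");
        }
    }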
A Logical Unit Number (LUN) is a logical drive that represents a partition of a physical
drive.
The storage workload management architecture ensures the even distribution of logical
unit numbers across the Cloud storage devices.
The even distribution of LUNs is achieved through the implementation of a storage
capacity system and a storage capacity monitoring module.
The storage capacity monitoring module identifies overburdened storage devices.
The storage capacity system evenly redistributes the LUN drives.
Additional modules:
o Cloud usage monitor
o Load balancer
o Logical network perimeter
Initial distribution of logical unit numbers across the Cloud storage devices.
The storage capacity monitoring module signals the storage capacity system to migrate a
logical unit number to another storage device.
The storage capacity system identifies the destination storage device and shifts the
logical unit number to that device.
The result is the even distribution of logical unit numbers across all storage devices.
The VMs access the various physical I/O circuits/cards of the hosting physical server
through the hypervisor. This is called I/O virtualization.
However, hypervisor-assisted access may become a bottleneck under heavy concurrent
I/O requests.
The direct I/O access architecture gives VMs the possibility of accessing physical I/O
devices without the intervention of the hypervisor.
The physical server's CPU has to be compatible with direct I/O.
Additional modules required are:
o Cloud usage monitor
o Logical network perimeter (to allow only a limited number of VMs to use direct
I/O)
o Pay-per-use monitor
Direct LUN access is a type of direct I/O in which the VMs access the logical unit
numbers directly; the VMs can thus be given direct access to block-level storage.
The duplication of data over the Cloud can cause problems such as:
o Increased time to store, back up and copy the data
o More space required
o More cost paid by the consumer
o Data synchronization issues, and the time consumed in synchronizing data and
resolving synchronization-related issues
The provider has to arrange more storage space and allocate more resources for
monitoring and management of the replicated data.
The data de-duplication architecture prevents consumers from storing duplicate data.
It can be applied to block-based and file-based storage.
Received data is analyzed before being sent to storage.
Each data block is analyzed and a hash code is generated according to its contents.
The hash code is compared with those of already stored data blocks.
If a duplicate code is found, the new data block is rejected and a pointer to the already
stored block is saved instead; only blocks that pass the hash-code check are saved.
The same technique can also be applied to backup storage devices. A minimal sketch of
this hashing scheme follows.
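The sketch below uses Java's standard MessageDigest (SHA-256) for the content hash;
the class name and the in-memory map standing in for the storage device are illustrative
assumptions.

    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.Map;

    // Content-based de-duplication: each incoming block is hashed, and a block
    // whose hash is already known is replaced by a pointer (the hash itself).
    public class BlockDeduplicator {
        private final Map<String, byte[]> store = new HashMap<>(); // hash -> block

        public String write(byte[] block) throws Exception {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : sha.digest(block)) hex.append(String.format("%02x", b));
            String hash = hex.toString();

            // New content is stored; duplicate content only adds a pointer.
            store.putIfAbsent(hash, block.clone());
            return hash; // callers keep this as their "pointer" to the block
        }

        public static void main(String[] args) throws Exception {
            BlockDeduplicator dedup = new BlockDeduplicator();
            String p1 = dedup.write("same data".getBytes());
            String p2 = dedup.write("same data".getBytes());
            System.out.println(p1.equals(p2));      // true: one physical copy
            System.out.println(dedup.store.size()); // 1
        }
    }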
A network bandwidth limit may inhibit performance and become a bottleneck.
The elastic network capacity architecture implements dynamic scalability of network
bandwidth.
The scalability is provided on a per-user basis, with each user connected to a separate
network port.
An automated scaling listener, an elastic network capacity controller and a resource pool
of network ports are used for the implementation.
The automated scaling listener monitors the network traffic and signals the elastic
network capacity controller to enhance the bandwidth and/or the number of ports when
required.
When applied to virtual switches, each virtual switch is configured to induct more
physical uplinks.
Alternatively, direct I/O can be used to enhance network bandwidth for any VM.
The approach is to dynamically shift a logical unit number to another storage device with
larger capacity in terms of the number of requests processed per second and the amount
of data handled (a minimal listener sketch follows this module's list).
o As compared to traditional approach, it is not constrained by the availability of
free space on the physical storage device hosting the logical unit number.
o Automated scaling listener and storage management modules are required for the
implementation.
o The automated scaling listener monitors the number of requests being sent to the
logical unit numbers.
o When a pre-set threshold of number of requests to a logical unit number is
reached, the automated scaling listener signals the storage management module to
shift that logical unit number to another device with higher capacity.
o While moving a logical unit number, the connectivity/availability of data is not
interrupted.
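A minimal sketch of such a threshold-triggered listener follows; the threshold value,
class name and signaling mechanism are assumed for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // Counts requests per LUN and signals migration when a preset threshold
    // is crossed; a real system would call the storage management module and
    // keep connectivity to the LUN alive during the move.
    public class LunScalingListener {
        private static final int REQUEST_THRESHOLD = 10_000; // assumed limit
        private final Map<String, Integer> requestCounts = new HashMap<>();

        public void onRequest(String lunId) {
            int count = requestCounts.merge(lunId, 1, Integer::sum);
            if (count == REQUEST_THRESHOLD) {
                System.out.println("Signal: migrate " + lunId
                        + " to a higher-capacity device");
            }
        }

        public static void main(String[] args) {
            LunScalingListener listener = new LunScalingListener();
            for (int i = 0; i < 10_000; i++) listener.onRequest("lun-7");
        }
    }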
This architecture is required when there are security and/or legal constraints on data
migration across different storage devices.
The data is stored over logical unit numbers.
This is the implementation of vertical scaling capability within a single cloud storage
device.
The single storage device optimally uses different disks with varying
features/capacities; the disks are graded and marked according to capacity.
The architecture is implemented through an automated scaling listener and storage
management software.
The automated scaling listener monitors the logical unit numbers.
A logical unit number is hosted on a disk whose grade may be chosen randomly or
according to a policy.
When the performance requirements for a logical unit number rise, the automated scaling
listener signals the storage management program to move that logical unit number to a
disk of a higher grade.
Situations may arise in which consumers' requests face long delays and packet loss due
to network congestion between the physical host and the network device on the uplink.
Packet loss and delays are usually caused by reliance on a single physical uplink.
This architecture implements a network load balancing mechanism for a physical server
(and the virtual switch hosted over that server) through the use of multiple physical
uplinks.
This means that more than one NIC is used for each physical server.
By using link aggregation techniques, network traffic can be distributed among the
multiple physical uplinks.
The phenomenon of a network bottleneck due to a single physical uplink can thus be
avoided.
This can also maintain the availability of the VMs even if an uplink fails.
The virtual switch has to be configured for compatible and seamless use of multiple
physical NICs.
The redundant NICs are connected to the physical switch through separate links.
The virtual switch is configured to use all the redundant NICs, but only one NIC is kept
primary and active.
A secondary NIC does not forward any packets (although it receives packets from VMs)
until the primary NIC fails.
The process is transparent to the hosted VMs.
Cloud storage devices need to undergo maintenance in order to retain their working
potential.
A Cloud storage device hosts multiple logical unit numbers, so it is not practical to
simply disconnect the storage device/s and then perform maintenance.
In order to maintain the availability of data, this architecture temporarily copies the data
from the to-be-maintained storage device to a secondary device.
The data is (for example) arranged/stored in the form of logical unit numbers which in
turn are connected to different VMs and/or accessed by different consumers.
It is therefore important that the logical unit numbers be migrated live, so that the
connectivity and availability of the data are maintained.
The secondary device serves the data requests even during migration; once the data is
migrated, the primary device is made unavailable.
The storage service gateway forwards the consumer requests to the secondary storage.
The data is moved back to the primary storage after the maintenance is over.
The whole process remains transparent to consumers.
Lesson No. 28
CLOUD FEDERATION & BROKERAGE
Module No – 144: Cloud Federation:
Due to the availability of a finite number of physical resources, a single Cloud can handle
only a certain number of consumers' requests in a unit of time.
Suppose that a time deadline exists for processing a consumer's request.
If a Cloud infrastructure cannot meet the requests' deadlines, then it is experiencing
resource shortage or congestion.
At this point, the chances of SLA violation become significant.
The Cloud provider may be heading towards SLA penalties if the situation persists.
The Cloud provider has to decide whether to process the consumer requests that are in
excess of the current capacity on the basis of:
o The revenue to be earned from processing the extra requests
o The cost to be paid to other provider/s
o The deadline of the requests vs. the latency of the remote provider
A Cloud federation may also be created to fulfill the requests of a remote consumer
through the closest provider in that region to reduce network latency.
Thus a federation of Clouds offers a better solution to resource shortage and latency
issues in Cloud computing.
Federation can be horizontal: the Cloud services (IaaS, PaaS and SaaS) are expanded
horizontally across providers at the same service level.
In vertical federation, a Cloud provider A (for example) may host a SaaS/PaaS instance
of another provider B over its own IaaS to help fulfill the requests arriving at provider B.
Federation can also be hybrid.
Module No – 146: Cloud Brokerage:
Lesson No. 29
CLOUD DELIVERY/SERVICE MODELS’ PERSPECTIVES
Module No – 147: Cloud Provider's Perspective about IaaS:
In this and the next two modules, we shall discuss the overall perspective of a Cloud
provider in establishing and managing Cloud services, namely:
o IaaS
o PaaS
o SaaS
The two basic IT resources of IaaS are:
o VMs
o Cloud storage
These are offered along with:
o An OS
o (virtual) RAM
o (virtual) CPU
o (virtual) Storage
VMs are usually provisioned through VM images, which are predefined configurations.
Bare-metal provisioning, with administrative access, is also provided to consumers.
Snapshots of VMs can be taken occasionally for failover and replication purposes.
A cloud may be provisioned through multiple data centers spanning different
geographical locations and connected through high-speed networking.
VLANs and network access control are used to isolate a networked set of VMs (into a
network perimeter) which are provisioned to a single consumer/organization.
Cloud resource pools and resource management systems can be used to provide
scalability.
Replication is used to ensure high availability and to form a failover system.
The multipath resource access architecture is used to provide reliability.
The resource reservation architecture is used for provisioning of dedicated IT resources.
Different monitors, such as the pay-per-use monitor and SLA monitor, continuously
observe VM lifecycles, data storage and network usage to support the billing system and
SLA management.
Cloud security mechanisms (encryption, authentication and authorization systems) are to
be implemented.
PaaS instances are hired by developers who want to develop Cloud applications.
Ready-made environments are usually created to provide on-demand access to
pre-configured sets of software tools and SDKs.
The PaaS environments can simulate a Cloud environment, with security procedures, to
enable the developer to test the application/s being developed.
The PaaS environments also help in establishing multitenancy and auto-scalability
features in the developed applications.
Scalability can be provided to an overloaded application on the recommendation, and
within the budget, of the PaaS consumer.
Automated scaling listeners and load balancers are utilized for workload distribution.
The non-disruptive service relocation architecture and failover systems are utilized to
ensure the reliability of PaaS instances.
PaaS instances may comprise multiple VMs and can be distributed across different data
centers.
The pay-per-use monitor and SLA monitor can be used to collect data regarding resource
usage and failures.
The security features of IaaS are usually sufficient for PaaS instances.
SaaS instances differ from IaaS and PaaS instances due to the existence of large numbers
of concurrent users.
SaaS implementations depend upon scalability and workload distribution mechanisms
and non-disruptive service relocation architectures for smooth provisioning and for
overcoming failures.
Unlike IaaS and PaaS, every SaaS deployment is unique: each has its own programming
logic, resource requirements and consumer workloads.
Examples of diverse SaaS deployments include Wikipedia, Google Talk, email, the
Android play store, the Google search engine, etc.
A consumer accesses a VM through a remote terminal application. The VM has to have
an OS installed. Typical clients are:
o A remote desktop client for Windows
o An SSH client for Mac and Linux based systems
A Cloud storage device can be attached directly to the VM or to a local device
on-premises.
The Cloud storage data can be handled and rendered through a networked file system, a
storage area network and/or object-based storage accessible through a Web-based
interface.
The administrative rights of the IaaS consumer include control of:
o Scalability
o The life cycle of the VM (powering on/off and restarting)
o Network settings (firewall and network perimeter)
o Cloud storage attachment
o Failover settings
o SLA monitoring
o Basic software installations (OS and pre-installed software)
o VM initializing image selection
o Passwords and credentials management for Cloud IT resources
o Costs
IaaS resources are managed through remote administration portals and/or command line
interfaces through the execution of code scripts.
Lesson No. 30
INTER-CLOUD RESOURCE MANAGEMENT
Module No – 153:
Many providers look for getting reasonable clients to generate more revenue and to make
good use of idle resources.
Cloud federation gives a solution to this problem.
But a bigger picture lies in Inter-Cloud where the global federation takes place.
The Inter-Cloud can be established where each member Cloud is connected to other
member Clouds just like Internet connects the networks.
It is the ultimate future of Cloud federation.
Technological giants such as IBM, HP, CISCO, RedHat etc. are actively working on
establishment of cloud-of-clouds.
We hope that soon the issues of interoperability, inter-cloud communication, security and
workload migration will be addressed.
Lesson No. 31
CLOUD COST METRICS AND PRICING MODELS
Module No – 154:
In the next few modules, we shall discuss different cost metrics and pricing models of
the Cloud.
Business Cost Metrics: The common types of metrics related to the cost-benefit analysis
of Cloud computing.
Upfront Costs: Related to the initial investment in acquiring and installing IT resources.
These costs are high for on-premises installations as compared to resources leased from
the Cloud.
On-going Costs: Include the running costs of the IT resources, e.g., licensing fees,
electricity, insurance and labor.
The long-term ongoing costs of Cloud IT resources can exceed the on-premises costs.
Additional Costs: These are specialized cost metrics. They may include:
o Cost of Capital: The cost of raising a capital amount. It is higher if a large
capital sum has to be arranged in a short time; the organization may have to bear
some costs in raising a large amount. This is an important consideration for
up-front cost metrics.
o Sunk Costs: The costs already spent by the organization on its IT
infrastructure. If the Cloud is preferred, then these costs are sunk, and hence
should be considered along with the up-front costs of the Cloud. It is difficult to
justify leasing Cloud IT resources in the presence of high sunk costs.
o Integration Costs: The time and labor costs required to integrate a Cloud
solution, including the testing of the Cloud services acquired.
o Locked-in Costs: The costs of being dependent upon a single Cloud provider
due to a lack of interoperability among different providers. These affect the
business benefits of leasing Cloud-based IT resources.
Module No – 155:
Cloud Usage Cost Metrics: In this module we shall study different metrics related to the
cost calculation of Cloud IT resource usage. A small illustrative billing calculation
follows this list.
o Network Usage: Measured as cumulative or separate inbound and outbound
network traffic in bytes over the monitored period. Costing may likewise be
cumulative or separate for inbound and outbound traffic. Many Cloud providers
do not charge for inbound traffic, to encourage consumers to move their data to
the Cloud.
Charges may also be based upon static IP address usage and the network
traffic processed by a virtual firewall.
o VM Usage: Related to the number of VMs and the usage of the allocated VMs.
Can be a static cost, pay-per-use, or according to the features of the VM.
Applicable to IaaS and PaaS instances.
o Cloud Storage Device Usage: Charged by the amount of storage used. Usually
the on-demand storage allocation pattern is used to calculate the bill on a time
basis, for example hourly. Another (rarely used) billing option is to charge on the
basis of I/O operations to and from storage.
o Cloud Service Usage: Service usage can be charged on the basis of the duration
of the subscription, the number of nominated users and/or the number of
transactions served by the service.
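The following sketch combines the metrics above into one illustrative monthly bill. All
rates and quantities are made-up values, not any provider's tariff.

    public class UsageBill {
        public static void main(String[] args) {
            double outboundGb = 120.0, outboundRate = 0.09;  // inbound assumed free
            double vmHours = 2 * 24 * 30, vmRate = 0.05;     // two VMs for a month
            double storageGbHours = 200.0 * 24 * 30, storageRate = 0.0001;

            double total = outboundGb * outboundRate   // network usage charge
                         + vmHours * vmRate            // VM usage charge
                         + storageGbHours * storageRate; // storage usage charge
            System.out.printf("Monthly charge: $%.2f%n", total);
        }
    }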
Module No – 156: Case Study for Total Cost of Ownership (TCO) Analysis
The TCO includes the costs of acquiring, installing and maintaining the hardware and
software to perform the IT tasks of the organization.
In this module, we shall perform a case study to evaluate the TCO for on-premises and
Cloud based solution.
Suppose a company wants to migrate a legacy application to PaaS. The application
requires a database server and 4 VMs hosted on 2 physical servers.
Next we perform a TCO analysis for 3 years:
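Since the comparison table itself is not available in this handout, the following entirely
hypothetical figures merely illustrate the shape of such a 3-year analysis:
o On-premises: 2 physical servers at $7,500 each, database licensing of $10,000,
and power, cooling and administration at $12,000 per year, giving roughly
$15,000 + $10,000 + $36,000 = $61,000.
o Cloud (PaaS): 4 VMs plus a managed database at an assumed $1,200 per month,
giving roughly $1,200 x 36 = $43,200.
o Under these assumed numbers, the Cloud option lowers the 3-year TCO by about
29%; with different rates or utilization, the comparison can easily go the other
way.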
Cost management can take place across the lifecycle phases of Cloud services. These
phases may include:
o Design & Development
o Deployment
o Service Contracting
o Provisioning & Decommissioning
The cost templates used by the providers depend upon:
o Market competition
o Overhead incurred during the design, deployment and operation of the service
o Cost reduction considerations through increased sharing of IT resources
A pricing model for Cloud services can be composed of:
o Cost metrics
o Fixed and variable rate definitions
o Discount offerings
o Cost customization possibilities
o Negotiations by consumers
o Payment options
Module No – 158:
Case study: We shall now see an example case of different price offerings from a Cloud
provider.
Lesson No. 32
CLOUD SERVICE QUALITY METRICS
Module No – 159:
Service performance refers to the ability of an IT resource to carry out its functions
within expected parameters.
o Measured with respect to capacity metrics.
These metrics are related to an IT resource's elastic capacity, the maximum capacity that
the IT resource can reach, and the adaptability of the IT resource to workload
fluctuations.
For example, a VM can be scaled up to 64 cores and 256 GB of RAM, or scaled out to 8
replicated instances.
Storage Scalability (Horizontal) Metric: The permissible capacity change of a storage
device in accordance with an increase in workload.
o Measured in GB.
o Applicable to IaaS, PaaS and SaaS.
o E.g., 1000 GB maximum (automatic scaling)
Server Scalability (Horizontal) Metric: The permissible server capacity change in
response to increased workload.
o Measured in the number of VMs in the resource pool.
o Applicable to IaaS, PaaS
o E.g., 1 VM minimum up to 10 VMs maximum (automated scaling)
Server Scalability (Vertical) Metric: Measured in terms of the number of vCPUs and the
vRAM size in GB.
o Applicable to IaaS and PaaS.
o E.g., 256 cores maximum and 256 GB of RAM
o Operational phase: Measures the difference in service levels (in terms of
availability, reliability, performance and scalability metrics) before, during and
after a downtime event.
o Recovery phase: Measures the rate at which an IT resource recovers from
downtime, for example the mean time for a system to register a downtime event
and switch over to a new VM.
Two common metrics related to measuring resiliency are as follows (a worked
illustration follows the list):
o Mean-Time to Switchover (MTSO) Metric: The time to switch over to a replicated
instance after the failure of a VM.
Measured in terms of time (from the time of failure to recovery)
Measured per month and/or year
Applicable to IaaS and PaaS
E.g., 12 minutes average
o Mean-Time System Recovery (MTSR) Metric: The time expected for a resilient
system to perform a complete recovery from a VM failure.
Measured as the total time spent during recovery divided by the total
number of failures
Measured monthly and/or yearly
Applicable to IaaS, PaaS and SaaS
E.g., 100 minutes average
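As a worked illustration of the MTSR formula (with assumed numbers): if a resilient
system spends a total of 300 minutes recovering from 3 VM failures in a month, then
MTSR = 300 / 3 = 100 minutes, matching the example value above. MTSO is computed
the same way over switchover events rather than full recoveries.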
In this module, we shall discuss some of the best practices of Cloud consumers for
dealing with SLAs.
Mapping of test cases to the SLA: A consumer should identify some test cases
(disasters, performance, workload fluctuations etc.) and evaluate the SLA accordingly.
The SLA should be aligned with the consumer's required outcomes for these test cases.
Understanding the scope of the SLA: A clear understanding of the scope of the SLA
should be developed. It is possible that a software solution is only partially covered by an
SLA; for example, the database may be left uncovered.
Documenting the guarantees: It is important to document all the guarantees at proper
granularity. Any particular guarantee requirement should also be properly and clearly
mentioned in SLA.
Defining penalties: The penalties and reimbursements should be clearly defined and
documented in SLA.
SLA monitoring by an independent party: Consider having the SLA monitored by a
third party.
SLA monitoring data archives: The consumer may want the provider to delete the
monitored data due to privacy requirements. This should be disclosed as an assurance by
the provider in the SLA.
Lesson No. 33
CLOUD SIMULATOR
Module No – 166: CloudSim: Introduction
Some configuration is required for CloudSim. The important requirements are discussed
in this module.
CloudSim requires Sun's Java 8 or a newer version. Older versions of Java are not
compatible.
You can download Java for desktops and notebooks from https://java.com/en/download/
The CloudSim setup just needs to be unpacked before use. If you want to remove
CloudSim, simply delete the folder.
The CloudSim setup comes with various coded examples which can be test-run to
understand the CloudSim architecture.
The CloudSim site has a video tutorial explaining the step-by-step configuration and
execution.
Module No – 167: CloudSim: Example Code
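The example itself is not reproduced in this handout. The following minimal program is
a sketch modeled on the CloudSimExample1 program bundled with CloudSim 3.x (one
datacenter, one host, one VM, one cloudlet); package names follow that release, and all
numeric parameters are illustrative.

    import java.util.ArrayList;
    import java.util.Calendar;
    import java.util.LinkedList;
    import java.util.List;

    import org.cloudbus.cloudsim.*;
    import org.cloudbus.cloudsim.core.CloudSim;
    import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
    import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
    import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

    public class MinimalCloudSim {
        public static void main(String[] args) throws Exception {
            CloudSim.init(1, Calendar.getInstance(), false); // 1 cloud user, no trace

            Datacenter datacenter = createDatacenter("Datacenter_0");
            DatacenterBroker broker = new DatacenterBroker("Broker_0");
            int brokerId = broker.getId();

            // One VM: 1000 MIPS, 1 core, 512 MB RAM, 1000 bandwidth, 10 GB image.
            Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                    new CloudletSchedulerTimeShared());

            // One cloudlet (task) of 400,000 instructions.
            UtilizationModelFull full = new UtilizationModelFull();
            Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
            cloudlet.setUserId(brokerId);

            List<Vm> vms = new ArrayList<>();
            vms.add(vm);
            List<Cloudlet> cloudlets = new ArrayList<>();
            cloudlets.add(cloudlet);
            broker.submitVmList(vms);
            broker.submitCloudletList(cloudlets);

            CloudSim.startSimulation();
            CloudSim.stopSimulation();

            for (Cloudlet c : broker.getCloudletReceivedList()) {
                System.out.println("Cloudlet " + c.getCloudletId()
                        + " finished at " + c.getFinishTime());
            }
        }

        private static Datacenter createDatacenter(String name) throws Exception {
            // One host with a single 1000-MIPS processing element (PE).
            List<Pe> peList = new ArrayList<>();
            peList.add(new Pe(0, new PeProvisionerSimple(1000)));

            List<Host> hostList = new ArrayList<>();
            hostList.add(new Host(0, new RamProvisionerSimple(2048),
                    new BwProvisionerSimple(10000), 1000000, peList,
                    new VmSchedulerTimeShared(peList)));

            DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                    "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

            return new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);
        }
    }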
Lesson No. 34
COMPUTER SECURITY BASICS
Module No – 169: Computer Security Overview:
specific to his physical, physiological, mental, economic, cultural or social
identity.
European Commission: Directive 95/46/EC of the European Parliament and of the
Council of 24 October 1995 on the protection of individuals with regard to the processing
of personal data and on the free movement of such data.
The key terminologies of privacy are:
o Data controller: An individual or body which alone or jointly determines the
purposes and procedures of processing an item of personal information.
o Data processor: An individual or body which processes the personal information
on behalf of the data controller.
o Data subject: An identified or identifiable individual to whom personal
information relates, directly or indirectly.
Privacy is regarded as a human right in Europe, while in America it traditionally refers to
avoiding harm to people in any context.
Trust is a psychological state comprising the intention to accept risk on the basis of
positive expectations of the behavior of another person or entity.
Trust is a broader term than security because trust is also based upon experience and
other criteria.
Trust has two types:
o Hard trust: Derives from security-oriented aspects such as authentication,
encryption and security (CIA).
o Soft trust: Consists of non-security-oriented phenomena such as human
psychology, brand loyalty and user friendliness.
Usually people find it harder to trust online services than offline ones.
Trust in online services can be enhanced or revived by using security features, but this is
not a guaranteed solution.
Trust in Cloud computing is of two types: persistent trust (long term) and dynamic trust
(short term).
The trust of Cloud consumers can be enhanced and established through security
elements. More on this in the coming modules.
Something the user possesses: Electronic cards, smart cards and physical keys. Also
called tokens.
Something the individual is (static biometric): Fingerprint, retina and face scans.
Dynamic biometric: Voice, handwriting and typing patterns.
The purpose of access control is to limit the actions or operations that an authenticated
user of a computer system can perform.
This includes the privileges of the user as well as the programs executing on behalf of the
user.
Access control is enforced by a software module which monitors every action performed
by the user and the programs executing on behalf of the user.
The authorization of each user is set by the security administrator according to policy of
the organization.
Access control requires authentication.
Zombie/bot: A program activated on a compromised remote system to take part in a
coordinated attack on a victim computer.
A firewall is a hardware- and/or software-based module that blocks unauthorized access
(while allowing authorized access) in a networked environment.
o It stands between a local network and the Internet.
o It filters out harmful traffic.
A firewall performs packet filtering on the basis of source/destination IP addresses.
A stateful firewall checks packets on the basis of connections.
Other types of firewalls also exist.
Intrusion detection system (IDS) is a software or hardware device installed on a network
or a host to detect intrusion attempts, monitors malicious activity or policy violations.
A buffer overflow allows the attacker to control or crash the process or to modify its
internal variables.
It can be used to launch a DoS attack, and can also occur by chance.
It occurs when a program attempts to write more data to a block of memory (a buffer)
than the allowed volume.
The overflowed data is written to the adjacent block/s of memory, thus overwriting the
adjacent blocks.
If the adjacent memory buffer is overwritten, the attacker may overwrite a function
pointer in that buffer with a chosen address: the address of a memory location containing
malicious code.
The function pointer now points at the malicious code.
When the (overwritten) function pointer is used, the malicious code starts to execute and
the attacker gains control of the system.
Buffer overflows can occur wherever direct memory access is allowed, such as in C and
C++.
C# and Java have reduced the coding errors that cause buffer overflows, as illustrated
below.
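A small illustration of why Java reduces buffer overflow errors: out-of-bounds writes are
caught by the runtime instead of silently overwriting adjacent memory. The class name
and sizes are illustrative.

    public class BoundsCheckDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            byte[] input = new byte[16]; // more data than the buffer holds
            try {
                for (int i = 0; i < input.length; i++) {
                    buffer[i] = input[i]; // would overflow silently in C/C++
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                // Java rejects the write instead of corrupting adjacent memory.
                System.out.println("Write rejected: " + e.getMessage());
            }
        }
    }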
The installation of an operating system requires some security measures, such as:
Planning: Determine the purpose, users, administrators and the data to be processed on
the system.
Installation: The security measures should start from the base.
BIOS-level access should be secured with a password.
The OS should be patched/updated with the latest critical security patches before
installing any applications.
Remove unnecessary services, applications and protocols.
Configure users, groups and authentication according to the security policy.
Configure resource controls/permissions. Avoid default permissions; go through all the
permissions.
Install additional security tools such as anti-virus, malware removal, intrusion detection
system, firewall etc.
Identify the white-listed applications which may execute on the system.
Virtualization Security: The main concerns should be:
o Isolation of all guest OSs.
o Monitoring of all guest OSs.
o Maintenance and security of the OS images and snapshots.
These can be implemented through:
o A clean install of the hypervisor from a secure and known source.
o Ensuring only administrative access to the hypervisor, snapshots and OS images.
o Preconfiguring each guest OS so that users cannot modify or access the
underlying hypervisor.
o Proper mapping of virtual devices onto physical devices.
o Network monitoring etc.
Threat: A potential security breach that may affect privacy and/or cause harm.
o Can occur manually and/or automatically.
o A threat that is executed results in an attack.
o Threats are designed to exploit known weaknesses, or vulnerabilities.
Vulnerability: A (security) weakness which can be exploited.
o It exists because of:
o Insufficient protection, and/or protection that is penetrated through an attack
o Configuration deficiencies
o Security policy weaknesses
o User error
o Hardware or firmware weaknesses and software bugs
o Poor security architecture
Risk: The possibility of harm or loss as a result of an activity.
o Measured according to:
Threat level
Number of possible vulnerabilities
o Can be expressed as:
The probability of a threat occurring to exploit vulnerabilities
The expected loss due to the compromise of an IT resource
Trusted Attacker: Appears in the form of a legitimate Cloud consumer and launches
attacks on other Cloud consumers and the provider, e.g., to steal information, perform
denial of service, or hack weak authentication processes.
Malicious Insider: Typically a human threat agent; can be a current or previous
employee. Can cause significant damage with administrative rights.
Lesson No. 35
NETWORK SECURITY BASICS
Module No – 178: Internet Security:
It is a branch of computer security which specifically deals with threats that are
Internet-based.
The major threats include the possibility of unauthorized access to any one or more of
the following:
o Computer systems
o Email accounts
o Websites
o Personal details and banking credentials
Other major threats are viruses and other malware, and social engineering.
Secure Socket Layer (SSL): A security protocol for encrypting the communication
between a web browser and a web server.
o The website has to enable SSL over its deployment.
o The browser has to be capable of requesting a secure connection to the website.
o Upon request, the website shares its security certificate (issued by a Certificate
Authority (CA)) with the browser, which the browser checks for validity.
o Upon confirmation of the security certificate, the browser generates the session
key for encryption and shares it with the website; after this, the encrypted
communication session starts. A minimal sketch of inspecting such a session
follows.
o Websites implementing SSL use HTTPS (https://...) in the URL instead of
HTTP (http://...) and show a padlock sign before the URL.
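The following Java sketch opens an HTTPS connection and prints the negotiated cipher
suite and the server certificates, making the handshake result described above visible.
The target URL is a placeholder; the HttpsURLConnection calls are standard JDK APIs.

    import java.net.URL;
    import java.security.cert.Certificate;
    import javax.net.ssl.HttpsURLConnection;

    public class SslCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://example.com/"); // placeholder site
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.connect(); // performs the SSL/TLS handshake

            // The negotiated cipher suite reflects the agreed session encryption.
            System.out.println("Cipher suite: " + conn.getCipherSuite());
            // The certificate chain the server presented (issued by a CA).
            for (Certificate cert : conn.getServerCertificates()) {
                System.out.println("Certificate type: " + cert.getType());
            }
            conn.disconnect();
        }
    }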
Wireless network security, also known simply as wireless security, is applied to wireless
networks.
It is used to secure wireless communication from unauthorized access.
There are many threats to wireless networks, such as:
o Packets can be easily eavesdropped upon and recorded.
o Traffic can be modified and retransmitted more easily than on wired networks.
o Wireless networks are prone to DoS attacks at access points (APs).
Some prominent security protocols for wireless security are:
o Wired Equivalent Privacy (WEP): Designed to provide the same level of
security as wired networks.
The first standard of 802.11.
Uses the RC4 cipher to generate encryption keys of length 40-128 bits.
Has a lot of security flaws, is difficult to configure and can easily be
cracked.
o Wi-Fi Protected Access (WPA): Introduced as an alternative to WEP while a
long-term replacement for WEP was being developed.
Uses enhanced RC4 through the Temporal Key Integrity Protocol (TKIP),
which improves wireless security.
Backward compatible with WEP.
o Wi-Fi Protected Access 2 (WPA2): Standardized by IEEE as 802.11i; the
successor to WPA.
Considered the most secure widely available wireless security standard.
Replaces RC4-TKIP with stronger encryption and authentication
methods:
Advanced Encryption Standard (AES)
Counter Mode with Cipher Block Chaining Message Authentication Code
Protocol (CCMP)
Allows seamless roaming from one access point to another without
re-authentication.
Lesson No. 36
CLOUD SECURITY MECHANISMS
Module No – 183: Encryption:
Module No – 184:
It is a mechanism comprising policies and procedures to track and manage user identities
and access privileges for IT resources.
It consists of four main components:
o Authentication: Usernames + passwords, biometrics, and remote authentication
through registered IP or MAC addresses.
o Authorization: Access control and IT resource availability.
o User management: Creating new user identities, password updates and managing
privileges.
o Credential management: Establishes identities and access control rules for
defined user accounts.
As compared to PKI, IAM uses access control policies and assigns user privileges.
Single Sign-On: Saves Cloud consumers from signing in to each subsequent service
when the consumer is executing an activity that requires several Cloud services.
o A security broker authorizes the consumer and creates a security context that
persists across multiple services.
Module No – 187:
Cloud-based Security Groups: Cloud IT resources are segmented for easy management
and provisioning to separate users and groups.
o The segmentation process creates Cloud-based security groups with separate
security policies.
o These are logical groups which act as network perimeters.
o Each Cloud-based IT resource is assigned to at least one logical cloud-based
security group.
o Multiple VMs hosted on the same physical server can be allocated to different
cloud-based security groups.
o Security groups safeguard against DoS attacks, insufficient authorization and
overlapping trust boundary threats.
o Closely related to the logical network perimeter mechanism.
Hardened Virtual Server Images: Hardening is the process of removing unnecessary
software components from VM templates.
o It also includes closing unnecessary ports, removing root access and guest login,
and disabling unnecessary services.
o It makes the template more secure than non-hardened server image templates.
Lesson No. 37
PRIVACY ISSUES OF CLOUD COMPUTING
Module No – 188: Lack of user control:
Data privacy issues such as unauthorized access, secondary usage of data without
permission, retention of data and data deletion assurance occur in Cloud computing.
With the data of a SaaS user placed in the Cloud, there is a lack of user control over that
data. A few reasons are as follows:
o Ownership and control of infrastructure: The user has neither ownership nor
control of the underlying infrastructure of the Cloud. There is a threat of theft,
misuse and unauthorized sale of the user's data.
o Access and transparency: In many cases, it is not clear whether a Cloud service
provider can/will access the users' data. It is also not clear whether unauthorized
access can be detected by the Cloud user/provider.
o Control over the data lifecycle: The Cloud user cannot confirm that data deleted
by the user has actually been deleted. There is no assurance of data deletion for
terminated accounts either, and no regulation imposing a must-erase liability on
the Cloud provider.
o Changing provider: It is not clear how to completely retrieve the data from the
previous provider and how to make sure that the data is completely deleted by
that provider.
o Notification and redress: It is not clear how to determine the responsibility (of
the user or provider) for an unauthorized access.
The deployment and running of a Cloud service may require the recruitment of highly
skilled personnel.
For example, STEM skills (Science, Technology, Engineering and Mathematics) should
be present in the recruited people.
A lack of STEM-skilled and/or trained persons can be a Cloud security issue, and such
people may also lack an understanding of the privacy impact of their decisions.
Due to the rapid spread of computing devices among employees, more employees may,
on average, introduce privacy threats.
For example, multiple employees may leave their laptops unattended, with the further
possibility of unencrypted sensitive data on them.
Employees can access different public Cloud services through self-service portals.
Care and control must therefore be exercised regarding public Cloud access to overcome
privacy issues.
There is a high tendency for data stored or processed over the Cloud to be put to
unauthorized use.
A legal secondary usage of Cloud consumers' data is to sell statistics for targeted
advertising.
An illegal secondary usage, however, would be selling a consumer's sales data to the
consumer's competitors.
It may therefore be necessary to legally address the usage of the consumer's data by the
Cloud provider.
So far there are no measures or means to verify illegal secondary usage of consumers'
data by Cloud provider/s.
In the future, a technological solution may be implemented for checking and preventing
unauthorized secondary usage of consumers' data.
Module No – 191: Complexity of Regulatory Compliance:
The global nature of Cloud computing makes it complex to abide by all the rules and
regulations of different regions of the world.
Legal bindings regarding data location are complex to implement because the data may
be replicated at multiple locations at the same time.
It is also possible that each replicated copy of the data is managed by a different entity,
for example when backup services are obtained from two different providers.
The backup provided by a single provider may be spread across different data centers,
which may or may not be within the legal location boundary.
The rapid provisioning architecture of the Cloud makes it impossible to predict the
location of a to-be-provisioned Cloud resource such as storage or VMs.
The cross-border movement of data in transit is very difficult to control, especially when
data processing is outsourced to another Cloud provider; the location assurance of such a
Cloud provider is a complex task at runtime.
The privacy and data protection regulations of many countries restrict the trans-border
flow of personal information of their citizens.
These countries include the EU and European Economic Area (EEA) countries,
Australia, Canada etc.
From EU/EEA countries, personal information can flow to countries which have
adequate protection. These include the EU/EEA countries and Canada etc.
The flow of personal information to other countries is restricted, unless certain
rules/agreements are followed by those countries.
For example, information can be transferred from the EU to the USA if the receiving
entity has joined the US Safe Harbor agreement.
If the receiving country has signed a model contract with the EU country/ies, then
personal information can flow to the receiving country.
So far, Cloud computing does not comply with the trans-border regulations, and there is
more to be done to implement these data flow restrictions.
A Cloud Service Provider (CSP) may be forced to hand over consumers' data under a
court writ.
For example, in a case of the state vs. a defendant handled by a US court of law, the US
government was allowed access to the Hotmail service (of Microsoft) through court
orders.
The government wants to check the relevance of evidence to the case, and for that the
court can allow access to consumers' data.
For private entities, however, this situation can be avoided through clauses of the legal
agreement that bind the CSP to disallow any access (by a non-government entity) to the
data, or that govern the response of the CSP to any writ from such entities.
Module No – 194: Legal Uncertainty:
Since Cloud computing moves ahead of the law, there are legal uncertainties about
privacy rights in the Cloud.
Also, it is hard to predict the outcome of applying the current legal rules on the
trans-border flow of data to Cloud computing.
One area of uncertainty is whether the procedure of anonymizing or encrypting personal
data requires legal consent from the owner, and whether processing related to the
enhancement of data privacy is exempt from privacy protection requirements.
It is also not clear whether anonymized data (which may or may not contain personal
data) is governed by the trans-border data flow legislation.
In short, legal uncertainty exists regarding the application of legal frameworks for
privacy protection to Cloud computing.
Lesson No. 38
SECURITY ISSUES OF CLOUD COMPUTING
Module No – 196: Gap in Security:
Although the security controls for the Cloud are the same as those of other IT
environments, the lack of user control in Cloud computing introduces security risks.
These security risks stem from a possible lack of effort in addressing security issues by
the Cloud service provider.
SLAs usually do not include any provision for the security procedures made necessary
by the consumer or by any standard.
The gap in security also depends upon the type of service (IaaS, PaaS & SaaS).
The more privileges given to the consumer (for example in IaaS), the more responsibility
for security procedures lies with the consumer.
The consumer may need to gain knowledge of the security procedures of the provider.
The provider gives some security recommendations to IaaS and PaaS consumers.
For SaaS, the consumer needs to implement its own identity management system for
access security.
Generally, it is very difficult to implement protection throughout the Cloud. In a few
cases the Cloud providers are bound by law to protect the personal data of citizens.
It is difficult to ensure standardized security when a Cloud provider is outsourcing
resources from other providers.
Currently, providers take no responsibility/liability for the deletion, loss or alteration of
data, and the terms of service are usually in favor of the provider.
Cloud consumers may experience unwanted access to their data by governments.
There are many laws in the world (for example the US Patriot Act) which allow a
government privileged access to Cloud consumers' data.
Another type of unwanted access stems from a lack of adequate security when the Cloud
provider is in a supply chain link with other providers.
A malicious employee may have privileged access to data (by virtue of being an
employee).
Data thieves, and even other consumers of the same service, may break into consumers'
data if the data of each consumer is not adequately separated.
The damage can be far greater than in non-Cloud environments due to the presence of
various roles with administrative-level access in Cloud architectures.
In general, Cloud storage is more prone to risks from malicious behavior than Cloud
processing, because data may remain in Cloud storage for longer periods of time and is
hence exposed to more risk.
___________________________________________________________________________________
©Copyright Virtual University of Pakistan 157
Course Title (Course Code) VU
So far there is no surety or confirmation functionality for the deleted data being really
deleted and non-recoverable by the service provider.
This is due to lack of consumer control over life cycle of the data (as discussed before).
This problem is increased with the presence of duplicate copies of the data.
It might not be possible to delete a virtual disk completely because several consumers
might be sharing it or the data of multiple consumers resides over same disk.
For IaaS and PaaS, the reallocation of VMs to subsequent consumers may introduce the problem of data persistence across multiple reallocations.
This problem exists until the VM is completely deleted.
For SaaS, each consumer is one of the users of a multitenant application. The customer's data is available each time the customer logs in.
The data is deleted when the SaaS consumer's subscription ends.
There is a correspondingly higher risk to customers' data when Cloud IT resources (such as VMs and storage) are reused or reallocated to a subsequent consumer.
As discussed previously, the management interfaces are available through remote access via the Internet.
This poses an increased risk compared to traditional hosting providers.
There can be vulnerabilities associated with browsers and remote access.
These vulnerabilities can result in the grant of malicious access to a large set of resources.
This increased risk persists even if the access is controlled by a password.
In order to provide a high level of reliability and performance, a Cloud provider makes multiple copies of the data and stores them at different locations.
This introduces many vulnerabilities.
There is a possibility of data loss from Storage as a Service.
A simple solution is to keep the data at the consumer's premises and use the Cloud to store an (ideally encrypted) backup of the data.
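As an illustration, the following minimal Python sketch (assuming the third-party cryptography package is installed; file names and the upload step are hypothetical) encrypts a local file before it is uploaded as a Cloud backup, so the provider never sees the plaintext:

    # Minimal sketch: encrypt a local file before uploading it as a Cloud backup.
    # Assumes: pip install cryptography; "records.db" and the upload step are hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this key on-premises, never in the Cloud
    cipher = Fernet(key)

    with open("records.db", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("records.db.backup", "wb") as f:
        f.write(ciphertext)              # this encrypted blob is what gets uploaded

    # Restore: plaintext = cipher.decrypt(open("records.db.backup", "rb").read())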
A loss of data may still occur before a backup is taken.
A subset of the data may get separated and unlinked from the rest and thus become unrecoverable.
The failure or loss of data keys may destroy the data context.
Sometimes the consumers of traditional (non-Cloud) backup services suffer a complete loss of their data on non-payment of the periodic fee.
In general, Cloud services show more resiliency than these traditional (non-Cloud) services.
As discussed before, Cloud providers accept lesser liability in case of data loss.
Therefore, the consumers should obtain some assurance from the Cloud provider regarding the safety of their data.
Consumers may also demand warnings regarding any attack, unauthorized access or loss of data.
A few frameworks exist for security assurance in the Cloud, and Cloud providers offer assurance on the basis of these frameworks.
However, these assurances may not apply in case of frequent data accesses and/or in certain instances such as isolation failure (discussed previously).
Still, there is no compensation offered by the Cloud providers for incidents of data loss.
The best assurance for data security in Cloud computing is achievable by keeping the data in a private Cloud.
Although automated data-security assurance evaluation frameworks exist, they still need to evolve in order to cover all the security issues discussed in this course.
A Cloud consumer should be able to audit the data processing over the Cloud to ensure that the Cloud procedures comply with the security policy of the consumer.
Similarly, the Cloud consumers may want to monitor SLA compliance by the provider, but the complexity of Cloud infrastructure makes it very difficult to extract the appropriate information or to perform a correct analysis.
Cloud providers could implement internal compliance-monitoring controls in addition to the external audit process.
A 'right to audit' may even be granted to those particular consumers who have regulatory compliance responsibilities.
Although the existing procedures for audit can be applied to Cloud computing, the provision of a full audit trail within the public Cloud models is still an unsolved issue.
Lesson No. 39
TRUST ISSUES OF CLOUD COMPUTING
Module No – 206: Trust in the Clouds:
Cloud consumers have to trust the Cloud mechanisms for storing and processing the
sensitive data.
In the past, various surveys in Europe have revealed a lack of consumer trust in the protection of their data kept online.
Up to 70% of Europeans were concerned about unauthorized secondary usage of their data.
A survey about trust in Cloud providers showed the following statistics:
o Reputation: 29%
o Recommendation from trusted party: 27%
o Trial experience: 20%
o Contractual: 20%
o Others: 4%
The consumer's trust depends upon how well the level of data protection provided by the Cloud provider matches the consumer's expectations.
A few such expectations include the regulatory compliance of data-handling procedures and control over the data lifecycle even in supply-chain Cloud provisioning.
According to a study, 70% of business users (in selected regions of the world) are already using private Clouds.
However, different surveys showed that enterprises are concerned about:
o Data security: 70%
o SLA compliance: 75%
o Vendor lock-in: 79%
o Interoperability: 63%
A Cloud provider may be using a supply-chain mechanism built on the IT resources of subcontractors.
This may jeopardize the security and privacy of the consumers' data (as discussed before) and thus weakens the trust relationships.
Even if the trust relationships in the service delivery chain are weak, at least some trust exists so that the rapid provisioning of Cloud services can be performed.
Significant business risks may arise when critical data is placed on the Cloud and the consumer lacks control over the passing of this data to a subcontractor.
So the trust along the service delivery chain from the consumer to the Cloud provider is non-transitive.
There is a lack of transparency for the consumer in the process of data flow. The consumer may not even know the identity of the subcontractor/s.
In fact, the 'on-demand' and 'pay-as-you-go' models may be based upon weak trust relationships.
This is because new providers have to be added on the go to provide extra capacity on short notice.
There is no consensus yet about the use of trust-management approaches for Cloud computing.
Trust measurement is a major challenge due to the difficulty of contextual representation of trust.
Standardized trust models need to be created for the evaluation and assurance of accountability.
Almost all of the existing models for trust evaluation are inadequate for Cloud computing.
The existing models of trust evaluation in Cloud computing cover the trust categories only partially.
Trust models lack suitable metrics for accountability.
There is no consensus on the type of evidence required for the verification of the effectiveness of trust mechanisms.
Trust is widely considered a key concern for consumers, enterprises and regulators.
Lack of trust is the key factor which inhibits the wide adoption of Cloud services by end-users.
People are worried about what will happen to their data once it is placed on the Cloud.
Monitoring and evaluating trust requires systematic trust management: there should be a system which can measure the "trustfulness" of Cloud services.
The following attributes can be considered:
o Data integrity: consisting of security, privacy and accuracy.
o Security of consumers' personal data.
o Credibility: measured through QoS.
o Turnaround efficiency: the actual vs. promised turnaround time, i.e., the time from the placement of a consumer's task to the finishing of that task.
o Availability of the Cloud service provider's resources and services.
o Reliability or success rate of performing the agreed-upon functions within the agreed-upon deadline.
o Adaptability, with reference to the avoidance of single points of failure through redundant processing and data storage.
o Customer support provided by the Cloud provider.
o The consumer feedback on the service being offered.
These attributes can be graded and trust computation can be performed. The computed
value can be saved for future comparison.
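As an illustration, here is a minimal Python sketch of such a computation, assuming hypothetical attributes graded on a 0–1 scale and illustrative weights:

    # Minimal sketch: weighted trust score from graded attributes (all values hypothetical).
    weights = {"data_integrity": 0.25, "credibility": 0.15, "turnaround": 0.15,
               "availability": 0.15, "reliability": 0.15, "adaptability": 0.05,
               "support": 0.05, "feedback": 0.05}

    grades = {"data_integrity": 0.9, "credibility": 0.8, "turnaround": 0.7,
              "availability": 0.95, "reliability": 0.85, "adaptability": 0.6,
              "support": 0.7, "feedback": 0.8}

    trust = sum(weights[a] * grades[a] for a in weights)   # 0 (no trust) .. 1 (full trust)
    print(round(trust, 3))   # store this value for future comparison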
In this module we shall briefly discuss the possible approaches to solving the privacy, security and trust issues in the Cloud.
There are three main dimensions in this regard:
Innovative regulatory frameworks to facilitate Cloud operations as well as to solve the possible issues regarding privacy, security and trust.
Responsible company governance should be exhibited by the provider to show the intention of safeguarding the consumer's data, along with the willingness to prove this intention through audit.
Lesson No. 40
OPEN ISSUES IN CLOUD
Module No – 212: Overview:
Real-time applications require high performance and a high degree of predictability.
Cloud computing shows some performance issues which are similar to those of other forms of distributed computing.
o Latency: measured through round-trip time (the time from sending a message to receiving a response), which is not predictable for Internet-based communications.
o Offline Data Synchronization: when data is updated offline, synchronization with all the copies of the data on the Cloud is a problem. The solution to this problem requires mechanisms for version control, group collaboration and other synchronization capabilities.
o Scalable Programming: legacy applications have to be updated to fully benefit from the scalable computing capacity of Cloud computing.
o Data Storage Management: the consumers require control over the data life cycle and information regarding any intrusion or unauthorized access to the data.
Reliability: it is the probability that a system will offer failure-free service for a specified period of time in a specified environment.
It depends upon the Cloud infrastructure of the provider and the connectivity to the subscribed services.
Measuring the reliability of a specific Cloud is difficult due to the complexity of Cloud procedures.
Several factors affect Cloud reliability:
o Network Dependence: the unreliability of the Internet and the associated attacks affect Cloud reliability.
o Safety-Critical Processing: critical applications and hardware such as the controls of avionics, nuclear material and medical devices may harm human life and/or cause the loss of property if they fail. These are not suitable to be hosted over the Cloud.
Module No – 215:
Economic Goals: The Cloud provides economic benefits such as savings on upfront costs, elimination of maintenance costs, and economies of scale for consumers.
However, there are a number of economic risks associated with Cloud computing.
SLA Evaluation: The lack of automated mechanisms for checking SLA compliance by the provider calls for the development of a common template that could cover the majority of SLA clauses and give an overview of SLA compliance.
This would be useful when deciding whether to invest time and money in a manual audit.
Portability of Workloads: An initial barrier to Cloud adoption is the need for a reliable and secure mechanism to transfer data to the Cloud as well as to port workloads to other providers; both remain open issues.
Interoperability between Cloud Providers: The consumers face, or fear, vendor lock-in due to the lack of interoperability among different providers.
Disaster Recovery: Physical and/or electronic disaster recovery requires the implementation of recovery plans for hardware-based as well as software-based disasters, so that both the provider and the consumers are protected from economic and performance losses.
Although the consumer is responsible for compliance, the implementation is actually performed by the provider.
The consumer lacks visibility regarding the actual security procedures adopted and/or applied by the provider. However, the consumer may request the deployment of monitoring procedures.
The consumers (having their data processed on the provider's premises) need to acquire assurance from the provider regarding compliance with various laws; for example, in the US: the health information protection act, the payment security standard, the information protection accountability act, etc.
Forensics support regarding any incident should be provided. This will evaluate the type of attack and the extent of the associated damage, and enable the collection of information for possible legal action in the future.
The forensic analysis for SaaS is the responsibility of the provider, while the forensic analysis for IaaS is the responsibility of the consumer.
Lesson No. 41
DISASTER RECOVERY IN CLOUD COMPUTING
Module No – 219: Understanding the threats:
Disk Failure: disk drives are electro-mechanical devices which wear out and eventually fail.
Failure can also be due to disasters such as fire and floods, or due to theft.
Power Failure or Disruption: computers can be damaged by a power surge caused by a storm or some fault in the power supply system.
A power surge may permanently damage the disk storage.
The user loses all unsaved data when a power blackout happens.
A few disaster recovery plans are as follows:
o Traditionally, surge protector devices are used. But these devices are not helpful in saving the (unsaved) data in case of a blackout.
o In-house data centers can use huge and expensive uninterruptible power supply (UPS) devices and/or generators.
o Another solution is to shift the data to another site. But this is expensive and time consuming.
o The best option is to move the data center to the Cloud. The Cloud providers have better (and expensive) power backups whose cost is divided among the consumers. Also, the Cloud mechanism may automatically shift the data to a remote site on another power grid (in case of power failures of longer duration).
Computer Viruses: while surfing the web, users may download and install software, and/or they may share removable drives such as thumb drives across their computing devices.
These devices are at risk of attacks through computer viruses and spyware.
Traditionally, the following techniques have been used to safeguard against virus attacks:
o Making sure each computer has anti-virus software installed and set to auto-update so as to get the most recent virus and spyware signatures.
o Restricting the user privilege to install software.
o Using a firewall on the router, on the computer or around the LAN.
Cloud computing makes it difficult for non-Cloud based viruses to penetrate, because of the complexities of the virtualization technologies. Also, the Cloud providers ensure reasonable security measures for the consumers' data and software.
Fire, Flood & Disgruntled Employees: fire, as well as fire-extinguishing practices, can destroy the computing resources, data and backups.
Similarly, heavy and/or unexpected rainfall may cause an entire block or a whole city, including the computing equipment, to be affected by a flood.
Similarly, an angry employee can cause harm by launching a computer virus, deleting files or leaking passwords.
Traditionally, office equipment is insured to lower the monetary damage. Backups are used for data protection. Data centers use special mechanisms for fire extinguishing without water sprinklers.
By moving the data center to the Cloud, the consumer is freed from making efforts and expenditures for fire-prevention systems as well as for data recovery. The Cloud provider manages all these procedures and includes the cost as a minimal part of the rent.
Unlike fire, floods cannot be avoided or put off.
The only way to avoid flood damage is to avoid setting up the data center in a flood zone.
Similarly, choose a Cloud provider whose facilities are outside any flood zone.
Companies apply access control and backups to limit both access to data and damage to data caused by disgruntled employees.
In the Cloud, Identity as a Service (IDaaS) based single sign-on revokes the access privileges of terminated employees as quickly as possible to prevent any damage.
Lost Equipment & Desktop Failure: the loss of equipment such as a laptop may immediately lead to the loss of data and a possible loss of identity.
If the data stored on the lost device is confidential, this may lead to even more damage.
Traditionally, the risk of damage due to lost or stolen devices is reduced by keeping backups, and the sensitive data is safeguarded with logins and strong passwords on the devices.
Even strong passwords are not difficult for experienced hackers to break, yet they still prevent most criminals from accessing the data.
With Cloud computing, the data can be synchronized across multiple devices using the Cloud service. The user can therefore get the data from an online interface or from other synced devices.
In case of a desktop failure, the user (such as an employee of a company) remains offline until the worn-out desktop is replaced.
If there was no backup, the data stored on the failed desktop may become unrecoverable.
Traditionally, data backups are kept for the desktops in an enterprise, stored on a separate computer. In case of a desktop failure, the maintenance staff tries to provide an alternative desktop and restore the data as soon as possible.
In the Cloud, by contrast, the employees work on instances of IaaS or Desktop as a Service through their local desktops.
In case of a desktop failure, the employee can just walk to another computer and log in to the Cloud service to resume work.
Server Failure & Network Failure: just like desktops, servers can also fail.
The replacement of a blade server is a relatively simple process, and blade servers are mostly preferred by users.
Of course, there has to be a replacement server in stock to swap in for the failed server.
Traditionally, the enterprises keep redundant servers to quickly replace a failed server.
With Cloud computing, the providers of IaaS and PaaS manage to provide 99.9% uptime (roughly 8.8 hours of allowed downtime per year) through server redundancy and failover systems. Therefore, the Cloud consumers do not have to worry about server failure.
A network failure can occur due to a faulty device and will cause downtime.
Traditionally, users keep 3G and 4G wireless hotspot devices as a backup, while enterprises obtain redundant Internet connections from different providers.
Since the Cloud consumers access the Cloud IT resources through the Internet, the consumers have to have redundant connections and/or backup devices for connectivity.
The same is true for the Cloud service provider: the 99.9% uptime is assured through backup/redundant network connections.
Database System Failure & Phone System Failure: most companies rely upon database systems to store a wide range of data.
In a corporate environment, many applications depend upon databases, such as customer record keeping, sale-purchase and HR systems.
The failure of a database will obviously make the dependent applications unavailable.
Traditionally, the companies use either backups or replication of database instances.
The former results in downtime of the database system, while the latter results in minimal or no downtime but is more complicated to implement.
The Cloud based storage and database systems use replication to minimize the downtime with the help of failover systems.
Many companies maintain phone systems for conference calling, voice mail and call forwarding.
Although the employees can switch to mobile phones in case the phone system fails, the customers are left unaware of which phone number to use to reach the company until the phone system recovers.
Traditionally, various solutions are applied to reduce the impact of a phone-system failure.
Cloud based phone systems, on the other hand, provide reliable and failure-safe telephone service; internally, redundancy is used in the implementation.
The process of reducing risks will often have some cost, for example for resource redundancy and backups.
This implies that the investment in risk-reduction mechanisms will be limited.
The IT staff should therefore evaluate and classify each risk according to its impact upon the routine operations of the company.
A tabular representation of the risks, their probability of occurrence and their business-continuity impact can be drawn up, as in the sketch below.
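A minimal Python sketch of such a ranking, with entirely hypothetical risks, probabilities and impact grades, might look like this:

    # Minimal sketch: rank risks by expected business-continuity impact (all figures hypothetical).
    risks = [
        # (risk, probability of occurrence per year, impact on a 1..10 scale)
        ("Disk failure",   0.30, 7),
        ("Power blackout", 0.20, 5),
        ("Flood",          0.01, 10),
        ("Network outage", 0.25, 6),
    ]

    # Expected impact = probability x impact; mitigate the highest-ranked risks first.
    for risk, p, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{risk:15s}  expected impact = {p * impact:.2f}")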
The next step is to formally document the disaster recovery plan (DRP).
A DRP template can contain the plan overview, goals and objectives, types of events covered, risk analysis, and the mitigation techniques for each type of risk identified in the earlier step.
Data Access Standards: before developing Cloud based applications, the consumers should make sure that the application interfaces provided in the Cloud are generic, and/or that data adaptors can be developed, so that portability and interoperability of the Cloud applications are possible when required.
Data Separation: the consumer should make sure that proactive measures are implemented at the provider's end for the separation of sensitive and non-sensitive data.
Data Integrity: consumers should use checksums and replication to detect any violations of data integrity (see the checksum sketch after this list).
Data Regulations: the consumer is responsible for ensuring that the provider complies with all the regulations regarding data storage and processing which are applicable to the consumer.
Data Disposition: the consumer should make sure that the provider offers mechanisms which delete the consumer's data whenever the consumer requests it, and that evidence or proof of data deletion is generated.
Data Recovery: the consumer should examine the data backup, archiving and recovery procedures of the provider and make sure they are satisfactory.
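For the data-integrity item above, a minimal Python checksum sketch (file names are hypothetical) could be:

    # Minimal sketch: detect integrity violations by comparing checksums of replicas.
    # File names are hypothetical.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256_of("records.db")          # checksum kept by the consumer
    replica = sha256_of("records_cloud.db")  # checksum of the copy fetched from the Cloud
    if local != replica:
        print("Integrity violation detected!")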
Module No – 230: VMs, Software & Applications:
Lesson No. 42
MIGRATING TO THE CLOUD
Module No – 231: Define System Goals and Requirements:
The migration to Cloud should be well planned. The first step should be to define the
system goals and requirements. The following considerations are important:
o Data security and privacy requirements
o Site capacity plan: The Cloud IT resources needed initially for application to
operate.
o Scalability requirements at runtime
o System uptime requirements
o Business continuity and disaster recovery requirements
o Budget requirements
o Operating system and programming language requirements
o Type of Cloud: public, private or hybrid
o Single tenant or multitenant solution requirements
o Data backup requirements
o Client device support requirements such as for desktop, tablet or smartphone
o Training requirements
o Programming API requirements
o Data export requirements
o Reporting requirements
[Jamsa, K. (2012). Cloud computing. Jones & Bartlett Publishers]
Module No – 232: Protect existing data and know your application characteristics:
It is highly recommended that, before migrating to the Cloud, the consumer should back up the data. This will help in restoring the data to a known point in time.
The consumer should discuss with the provider and agree upon a periodic backup plan.
The data life cycle and the disposal terms and conditions should be finalized at the start.
If the consumer is required to fulfill any regulatory requirements regarding data privacy, storage and access, then this should be discussed with the provider and included in the legal document of the Cloud agreement.
The consumer should know the IT resource requirements of the application being
deployed over the Cloud.
The following important features should be known:
o High and low demand periods in terms of time
o Average simultaneous users
o Disk storage requirements
o Database and replication requirements
o RAM usage
o Bandwidth consumption by the application
o Any requirement related to data caching
Module No – 233: Establish a Realistic Deployment Schedule, Review Budget and Identify IT Governance Issues:
Many companies use a planned schedule for Cloud migration to provide enough time for
training and testing the application after deployment.
Some companies use a beta-release to allow employees to interact with the Cloud based
version to provide feedback and to perform testing.
Many companies use key budget factors such as the running cost of the in-house datacenter, payrolls of the IT staff, software licensing costs and hardware maintenance costs.
This helps in calculating the total cost of ownership (TCO) of the Cloud based solution in comparison.
Many Cloud providers offer solutions at a lower price than in-house deployments.
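A minimal Python sketch of such a comparison, using purely hypothetical annual figures, might be:

    # Minimal sketch: compare annual TCO of in-house vs. Cloud deployment (hypothetical figures).
    in_house = {
        "datacenter_running":   120_000,   # power, cooling, floor space
        "it_payroll":           200_000,
        "software_licenses":     50_000,
        "hardware_maintenance":  30_000,
    }
    cloud = {
        "subscription":         180_000,   # pay-as-you-go charges
        "migration_training":    20_000,
    }

    tco_in_house = sum(in_house.values())
    tco_cloud = sum(cloud.values())
    print(f"In-house: {tco_in_house}, Cloud: {tco_cloud}, "
          f"saving: {tco_in_house - tco_cloud}")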
Existing & Future Capacity: if the application is being migrated to the Cloud, then the current requirement of IT resources should be evaluated and used for the initial deployment as well as for the horizontal or vertical scaling configuration.
Configuration management: since the Cloud based solutions are accessed through any OS, browser and device, the interfaces of the application should be able to render the contents with respect to the OS, browser and user device.
Deployment: deployment issues related to OS, browser and devices should be addressed for the initial deployment as well as for future updates.
Environment (Green computing): the design of the Cloud based solution should contain considerations for power-efficient design in order to reduce the carbon footprint of the solution.
Disaster recovery: the Cloud solution design should consider disaster recovery mechanisms. The potential risks for business continuity should be identified and cost-effective mitigation techniques should be configured for these risks.
Interoperability: the design should provide for interoperability between different Cloud solutions in terms of data exchange.
Maintainability: the Cloud solution should be designed to increase the reusability of code through loose coupling of the modules. This will lower the maintenance cost.
Module No – 238: Cloud Solution Design Metrics:
Reliability: the design should account for hardware failure events. A redundant configuration might be applied according to the mean time between failures (MTBF) of each hardware device, or a reasonable downtime target may be established (a minimal availability sketch follows).
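For instance, steady-state availability is commonly estimated as MTBF / (MTBF + MTTR); a minimal Python sketch with hypothetical figures:

    # Minimal sketch: expected availability from MTBF and mean time to repair (MTTR).
    # Figures are hypothetical.
    mtbf_hours = 10_000   # mean time between failures for, say, a disk
    mttr_hours = 8        # mean time to repair/replace it

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"Expected availability: {availability:.4%}")   # ~99.92%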
Response time: the response time should be as low as possible, especially for online form submissions and reports.
Robustness: it refers to the capability of the solution to keep working despite errors or system failure. This can be complemented with Cloud resource-usage monitoring that raises timely alarms for critical events.
Security: The developer should consider the Cloud based security and privacy issues
while designing.
Testability: Test cases should be developed to test the fulfilment of functional and non-
functional requirements of the solution.
Usability: the design can be improved by implementing a prototype and getting users' reviews to enhance the ease of use of the Cloud solution.
Lesson No. 43
CLOUD APPLICATION SCALABILITY AND RESOURCE SCHEDULING
Module No – 239: Cloud Application Scalability:
Review Load Balancing Process: Cloud based solutions should be able to scale up or down according to demand.
o Remember, scaling out and scaling up mean acquiring new resources and upgrading existing resources respectively; scaling in and scaling down are exactly the reverse.
o There should be a load balancer module, especially in case of horizontal scaling.
o Load balancing or load allocating (in this regard) is performed by distributing the workload (which can be in the form of clients' requests) across the Cloud IT resources acquired by the Cloud solution.
o The allocation pattern can be round robin, random, or a more complex algorithm with multiple parameters; a minimal round-robin sketch follows this list. (More on this in a later module.)
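A minimal Python sketch of round-robin load allocation (server names are hypothetical):

    # Minimal round-robin load balancer sketch; server names are hypothetical.
    import itertools

    servers = ["vm-1", "vm-2", "vm-3"]           # IT resources acquired by the solution
    rr = itertools.cycle(servers)                # round-robin iterator

    def dispatch(request_id: int) -> str:
        target = next(rr)                        # next server in rotation
        print(f"request {request_id} -> {target}")
        return target

    for i in range(6):                           # six client requests, two per server
        dispatch(i)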
Application Design: Cloud based solutions should have neither no scaling at all nor unlimited scaling.
o There should be a balanced design of the Cloud application regarding scaling, with reasonable expectations.
o Both horizontal and vertical scaling options should be explored, either individually or in combination.
Module No – 240: Cloud Application Scalability:
Minimize objects on key pages: identify the key pages, such as the home page, forms and frequently visited pages, of the Cloud based solution.
o Reduce the number of objects such as graphics, animation, audio etc. on these pages so that they load quickly.
Selecting measurement points: remember the rule that 20% of the code usually performs 80% of the processing.
o Identify such code and apply scaling to it.
o Otherwise, applying scaling may not yield the desired performance improvements.
Analyze database operations: the read/write operations should be analyzed for improving performance.
o Read operations are non-conflicting and hence can be performed on replicated databases (horizontal scaling).
o But a write operation on one replica requires the synchronization of all database instances, and hence horizontal scaling becomes time consuming.
o The statistics of database operations should be used when deciding about horizontal scaling.
Evaluate the system's data-logging requirements: the monitoring system for performance and event logging may be consuming disk space and CPU.
o Evaluate the necessity of logging operations before applying them (and periodically afterwards), and tune them to reduce disk-storage and CPU wastage.
o Execution time
o Energy consumption
Fulfills QoS requirements:
o Reliability
o Security
o Availability
o Scalability
Remember that the provider wants to earn more profit and maximize resource usage, while the Cloud consumer wants to minimize the cost and execution time of the workload.
The Cloud resource scheduling can be performed on various grounds, as discussed in the coming modules.
o [Singh, S., & Chana, I. (2016). A survey on resource scheduling in cloud
computing: Issues and challenges. Journal of grid computing, 14(2), 217-264.]
Cost & Time-Based Resource Scheduling: time-based scheduling may miss some tasks' deadlines, or may prove to be expensive if over-provisioning of IT resources is used to meet deadlines.
o Cost-based scheduling may miss some deadlines and/or cause starvation of some tasks.
o It is better to use a hybrid approach for resource scheduling to reduce cost as well as to minimize task-deadline violations.
Profit-Based Resource Scheduling: this type of scheduling aims at increasing the profit of the Cloud provider.
o This can be done either by reducing costs or by increasing the number of simultaneous users.
o SLA violations are to be considered while making profit-based scheduling decisions.
o The penalties of SLA violations may nullify the profit gained.
SLA & QoS Based Resource Scheduling: in this scheduling, SLA violations are avoided and QoS is maintained.
o The more load put on the IT resources, the more tasks may be completed per unit time.
o Yet this may cause SLA violations when the IT resources are overloaded.
o Hence, QoS considerations are applied to ensure the SLA is not violated.
o This is suitable for homogeneous tasks, for which estimates can be made of the expected workload and the expected time of completion.
Energy-Based Resource Scheduling: the objective is to save energy at the data-center level to decrease running costs and to contribute towards the environment.
o An energy-consumption estimate is required for each scheduling decision. There can be a number of possible task distributions across servers and VMs.
o The distribution which shows the least energy consumption for the batch of tasks at hand is preferred (see the sketch below).
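A minimal brute-force Python sketch of this idea, with hypothetical per-task energy costs, could be:

    # Minimal sketch: pick the task-to-server distribution with the least total energy.
    # Energy-per-task figures are hypothetical.
    from itertools import product

    tasks = ["t1", "t2", "t3"]
    energy_per_task = {"server-A": 3.0, "server-B": 2.5, "server-C": 4.0}  # watt-hours

    best = min(
        product(energy_per_task, repeat=len(tasks)),          # every possible assignment
        key=lambda assign: sum(energy_per_task[s] for s in assign),
    )
    print(dict(zip(tasks, best)))   # with no capacity limits, all tasks land on the cheapest server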
Module No – 250: Cloud Resource Scheduling Overview:
Priority-Based Resource Scheduling: in this type of scheduling, task starvation can be avoided, especially in situations of resource contention.
o Tasks may be classified, e.g., on the basis of type, user, or resource requirement, so that a higher-priority task can cause the resource scheduler to preempt lower-priority tasks.
o This may cause the low-priority tasks to suffer starvation.
o An aging factor can be applied to increase the priority of low-priority tasks so as to avoid or reduce starvation (a minimal sketch follows).
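A minimal Python sketch of priority scheduling with an aging factor (all parameters hypothetical):

    # Minimal sketch: priority scheduling with aging to avoid starvation.
    # Task names and parameters are hypothetical.
    AGING_STEP = 1   # priority boost per scheduling round spent waiting

    tasks = [{"name": "backup", "priority": 1}, {"name": "query", "priority": 5},
             {"name": "report", "priority": 2}]

    for round_no in range(3):
        tasks.sort(key=lambda t: t["priority"], reverse=True)
        running = tasks.pop(0)                   # highest priority runs (may preempt others)
        print(f"round {round_no}: run {running['name']}")
        for t in tasks:                          # everyone still waiting ages upward
            t["priority"] += AGING_STEP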
VM-Based Resource Scheduling: since VMs can host Cloud based applications and VMs can be migrated, resource scheduling can be performed at the VM level.
o The overall demand of all applications hosted on a VM is considered for scheduling. If a VM is facing resource starvation, it can be migrated to another server with available IT resources.
o The disadvantage is that there is no guarantee that the destination host will not also run out of IT resources due to already-deployed VMs.
Lesson No. 44
MOBILE CLOUD COMPUTING
Module No – 254:
Introduction: mobile devices are used frequently throughout the world.
o Over time, users have come to rely more and more upon mobile devices since these impose no constraints of time and location.
o The applications installed on mobiles are of various types and of various computational requirements.
Overview: mobile devices are inherently constrained by resource shortages in processing, memory, storage, bandwidth, battery etc.
o There may be a number of situations when mobile devices become incapable of processing or running applications due to resource shortage.
o On the other hand, Cloud computing offers practically unlimited IT resources over the Internet, on the go.
Definition: "Mobile cloud computing at its simplest refers to an infrastructure where both the data storage and data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones and into the cloud, bringing applications and mobile computing to not just smartphone users but a much broader range of mobile subscribers."
o [Dinh, H. T., Lee, C., Niyato, D., & Wang, P. (2013). A survey of mobile cloud
computing: architecture, applications, and approaches. Wireless communications
and mobile computing, 13(18), 1587-1611.]
There are various scenarios which indicate the need of a Mobile Cloud computing
environment.
This module presents a few examples in this regard.
Optical character recognition (OCR) can be used to identify text so that it can be translated from one language to another. An OCR application could be installed on a mobile device for tourists.
But due to the resource shortage on mobile devices, a better solution is to develop a Mobile Cloud application.
Data sharing, such as of images from a site of disaster, can be performed through a Mobile Cloud application to help develop an overall view of the site.
The readings from the sensors of multiple mobile devices spread across a vast region cannot otherwise be collected and processed except through a Mobile Cloud application.
Some mobile devices can provide resources to other mobile devices in a Cloud setup using a mobile peer-to-peer network.
The mobile devices can be connected to a cloudlet, which is a set of multi-core computers connected to remotely placed Cloud servers. These cloudlets are usually in close vicinity of the mobile devices to save network latency.
[Fernando, N., Loke, S. W., & Rahayu, W. (2013). Mobile cloud computing: A survey.
Future generation computer systems, 29(1), 84-106.]
A cost-benefit analysis proves useful for deciding about offloading the workload to the Cloud.
This analysis may consider the total investment (initial and running costs) and compare it with the benefits of Mobile Cloud computing.
The goals of performance, energy conservation and quality can be considered to decide which server should receive the offloaded workload from the mobile devices. The cost-benefit analysis in this case is from the point of view of the Cloud infrastructure. Prediction can be used to estimate the performance, energy consumption and quality.
The data related to the devices' energy consumption, network throughput and application characteristics can be used to decide whether to offload a task (of the profiled application) to the Cloud or execute it locally, in order to (for example) conserve battery.
Security and privacy requirements may also be the basis of task migration to the Cloud.
There are a number of data security and privacy issues in Mobile Cloud computing.
These are in addition to the security and privacy issues of Cloud computing.
There are some communication issues in Mobile Cloud computing, and researchers have proposed different solutions in this regard.
Low Bandwidth: this is one of the biggest issues for Mobile Cloud computing because the radio resource of wireless networks is much scarcer compared with traditional wired networks.
Availability: the availability of the service becomes an important issue when the mobile device loses contact with the Mobile Cloud application due to network failure, congestion or loss of signal.
Heterogeneity: the mobile devices accessing a Mobile Cloud computing application are of numerous types and use various wireless technologies such as 2G, 3G and WLAN.
An important issue is how to maintain wireless connectivity while satisfying the requirements of Mobile Cloud computing such as high availability, scalability and energy efficiency.
The mobile device has to make the decision about offloading the computational workload to the Cloud.
If the offloading is not performed efficiently, the desired performance may not be achieved; the battery may even get depleted faster than by executing the workload locally.
There are two main types of computational offloading:
o Static: the offloading decisions (consisting of workload partitioning) are made at the start of execution of a task or a batch of tasks.
o Dynamic: the offloading decisions depend upon the run-time conditions of dynamic parameters such as network bandwidth, congestion and battery life.
Static offloading decisions may not turn out to be fruitful if the dynamic parameters change unexpectedly.
It is better not to offload if the time/battery consumption (cost) of offloading is higher than the cost of locally processing the task; a minimal decision sketch follows.
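A minimal Python sketch of such a decision rule, with hypothetical energy figures, is given below:

    # Minimal sketch: offload only when the remote cost beats the local cost.
    # All energy figures and the slow-link penalty are hypothetical.
    def should_offload(code_kb: float, bandwidth_kbps: float,
                       local_j_per_kb: float = 0.8, tx_j_per_kb: float = 0.3) -> bool:
        local_cost = code_kb * local_j_per_kb            # energy to execute locally (J)
        offload_cost = code_kb * tx_j_per_kb             # energy to transmit the code (J)
        offload_cost += 3.0 if bandwidth_kbps < 100 else 0.5   # penalty on a slow link
        return offload_cost < local_cost

    print(should_offload(code_kb=500, bandwidth_kbps=500))   # True: big code, fast link
    print(should_offload(code_kb=5, bandwidth_kbps=50))      # False: tiny code, slow link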
Incentives: In the model of Mobile Cloud computing where a mobile device shares
resources with other devices, there should be an incentive to share those resources.
Data access through mobile Cloud applications may be challenging in case of low bandwidth, signal loss and/or limited battery life.
Accessing files through mobile devices may turn out to be expensive in terms of data transmission cost, network delays and energy consumption.
Data access approaches need to be developed or refined to maintain a performance level and to save energy.
Some approaches optimize the data access patterns.
Another approach is to use mobile cloudlets, which are intermediate devices acting as a file cache.
Interoperability of data is also a challenge when provisioning data across heterogeneous devices and platforms; a generic representation of data should be preferred.
Resource Management: a Mobile Cloud application can acquire all its IT resources from the Cloud. Another method is to use cloudlets, which are individual computers or even clusters in the vicinity of the mobile device running the Mobile Cloud application.
o In the worst case, the mobile device's own resources are utilized. All these situations require separate resource-management techniques.
Processing Power: the processing power of a single mobile device is not at all comparable to the Cloud.
The issue is how to efficiently utilize the huge processing power of the Cloud to execute the tasks of Mobile Cloud applications.
Battery Consumption: computational offloading becomes more energy-efficient as the code size grows, and vice versa.
For example, if offloading 500 KB of code takes 5% of the battery compared to 10% battery usage for processing the same code locally, then 50% of the battery cost is saved by offloading.
But a 250 KB code may be only 30% battery-efficient if offloaded.
Mobility must be supported while assuring connectivity to the Cloud. Network connectivity becomes very important in this case. Cloudlets can support the connectivity, but they exist only at certain locations such as cafés and malls.
The ad-hoc creation of a mobile Cloud over a set of mobile devices around a location depends upon the availability of capable devices and a cost-benefit analysis.
Assurance of security is an ongoing challenge: to ensure privacy and security, and to establish trust between the mobile device users and the service/resource provider.
Conducting and managing the incentives among the resource lenders (in case of a mobile ad-hoc Cloud) requires the establishment of trust, payment methods, and methods to prevent free riders.
Typically, both Cloud computing and Mobile Cloud computing depend upon the remote usage of IT resources offered by the Cloud.
Cloud computing traditionally works to provide various Cloud services such as IaaS, PaaS and SaaS to the consumers.
Mobile Cloud computing, however, is oriented more towards providing Cloud based applications on mobile devices and dealing with the associated connectivity, security and performance issues.
Cloud computing deals with user requirements from a single user up to the enterprise level, whereas Mobile Cloud applications are accessed more by individual users for personal computing purposes.
There are multiple deployment models of Cloud computing: private, public, community and hybrid.
A Mobile Cloud can be set up over the Cloud, over cloudlets, or on an ad-hoc basis by using capable and resource-rich mobile devices sharing a common location.
Lesson No. 45
SPECIAL TOPICS IN CLOUD COMPUTING AND CONCLUSION OF COURSE
Module No – 270: Big Data Processing in Clouds: Overview of Big Data:
The term “Big Data” refers to such an enormous volume of data that it cannot be processed through traditional database technologies.
The following are the three generic characteristics of big data:
o The data volume is huge.
o The data may not fit into regular relational databases.
o The data is generated, captured and processed at high speed.
Another definition of Big Data can be:
Big data is a set of techniques and technologies that require new forms of integration to
uncover large hidden values from large datasets that are diverse, complex, and of a
massive scale.
[Anuar, N.B., Gani, A., Hashem, I.A., Khan, S.U., Mokhtar, S., & Yaqoob, I. (2015). The
rise of "big data" on cloud computing: Review and open research issues. Inf. Syst., 47,
98-115]
Cloud computing infrastructure can fulfill the storage and processing requirements needed to store and analyze Big Data.
The data can be stored in large fault-tolerant databases, and processing can be performed through parallel and distributed algorithms.
Cloud storage can be used to host Big Data while the processing is done locally on commodity computers.
Big Data Cloud applications can also be built to both host and process the Big Data on the Cloud.
There are three popular models for big data:
o Distributed MapReduce model popularized by Hadoop
o NoSQL model used for non-relational, non-tabular storage
o SQL RDBMS model for relational tabular storage of structured data.
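As a flavour of the first model above, here is a minimal map-reduce style word count in plain Python (not an actual Hadoop job):

    # Minimal map-reduce flavoured word count in plain Python (not an actual Hadoop job).
    from collections import defaultdict

    docs = ["big data on cloud", "cloud hosts big data"]          # hypothetical input split

    mapped = [(word, 1) for doc in docs for word in doc.split()]  # map phase

    reduced = defaultdict(int)                                    # shuffle + reduce phase
    for word, count in mapped:
        reduced[word] += count

    print(dict(reduced))   # {'big': 2, 'data': 2, 'on': 1, 'cloud': 2, 'hosts': 1}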
Traditional tools for big data processing can, for example, be deployed over the Cloud.
Top-rated Hadoop options include Apache Hadoop, SAP's HANA/Hadoop combination, Hortonworks, Hadapt and VMware's Cloud Foundry, as well as services provided by IBM, Microsoft and Oracle.
For NoSQL, consider Cassandra, HBase or MongoDB. IBM also offers NoSQL for the cloud, and there are plenty of other NoSQL providers.
[http://searchcloudapplications.techtarget.com/tip/How-to-choose-the-best-cloud-big-
data-platform]
In this module we shall cover, as case studies, a few examples of the usage of Cloud computing for Big Data hosting and processing.
SwiftKey: a smart text-prediction technology for the virtual keyboards of mobile devices.
o Terabytes of data are collected and analyzed for active users around the globe for prediction and correction of text through an artificial intelligence engine.
o It uses Amazon Simple Storage Service and Amazon Elastic Compute Cloud to host Hadoop.
Halo Game: more than 50 million copies have been sold worldwide.
o It collects game-usage data of players globally for player ranking in online gaming tournaments.
o The Windows Azure HDInsight service (based on Hadoop) is used for this purpose.
Nokia: the well-known mobile manufacturer collects terabytes of data for analysis.
o It uses Teradata Enterprise Data Warehouse, Oracle and MySQL data marts, visualization technologies, and Hadoop.
In this module we shall briefly discuss a few challenges and issues related to Big Data processing on the Cloud.
Assuring scalability of storage for the rising volume of Big Data is a challenge.
Assuring the availability of any item out of the Big Data stored on Cloud storage is also a challenge.
Data quality refers to the possibility of data-source verification; this is a challenging task for Big Data collected, for example, from mobile phones.
Simultaneously handling heterogeneous data is challenging.
Privacy issues arise when the processing of Big Data (through data-mining techniques) may reveal sensitive and personal information. Another issue is the lack of established laws and regulations in this regard.
Architecture & Processing: in this module we shall consider the example of the multimedia edge Cloud (MEC), consisting of cloudlets.
The multimedia Cloud providers can use the IT resources of cloudlets, which are physically placed at the edge (i.e., very close to the multimedia service consumers), to reduce network latencies.
There can be multiple MECs which are geographically distributed.
The MECs are connected to central servers through a content delivery network (CDN).
The MECs provide the multimedia services and maintain the QoS.
Network operators often have to configure devices (switches & routers) separately and by using vendor-specific commands.
Thus, implementing high-level network policies is hard and complex in traditional IP networks.
Dynamic response and reconfiguration are almost non-existent in current IP networks; enforcing network policies dynamically is therefore challenging.
Further, the control plane (the decision making and forwarding rules) and the data plane (which performs traffic forwarding according to the decisions made by the control plane) are bundled inside the networking device.
All this reduces the flexibility, innovation and evolution of the networking infrastructure.
Software Defined Networking (SDN) is a new paradigm of networking that separates the control plane from the data plane.
It removes many of the limitations of traditional networks.
The switches become forwarding-only devices, while the control plane is handled by a software controller.
The controller and switch have a software interface between them.
The controller exercises direct control over the data plane devices through a well-defined application programming interface (API) such as OpenFlow.
o [Jain, R., & Paul, S. (2013). Network virtualization and software defined
networking for cloud computing: a survey. IEEE Communications Magazine,
51(11), 24-31.]
o [Kreutz, D., Ramos, F. M., Verissimo, P. E., Rothenberg, C. E., Azodolmolky, S.,
& Uhlig, S. (2015). Software-defined networking: A comprehensive survey.
Proceedings of the IEEE, 103(1), 14-76.]
o [Nunes, B. A. A., Mendonca, M., Nguyen, X. N., Obraczka, K., & Turletti, T.
(2014). A survey of software-defined networking: Past, present, and future of
programmable networks. IEEE Communications Surveys & Tutorials, 16(3),
1617-1634.]
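To give a flavour of the match-action idea behind such a design, here is a purely conceptual Python sketch (not the real OpenFlow protocol or any controller's actual API) of a controller installing forwarding rules into a forwarding-only switch:

    # Purely conceptual sketch of SDN match-action forwarding (not the real OpenFlow API).
    flow_table = {}   # the data plane's state: match field -> output port

    def install_rule(match_dst: str, out_port: int) -> None:
        """Control plane (controller) pushes a rule down to the switch."""
        flow_table[match_dst] = out_port

    def forward(packet: dict) -> None:
        """Data plane: the forwarding-only device applies the installed rules."""
        port = flow_table.get(packet["dst"])
        if port is None:
            print(f"miss: ask controller about {packet['dst']}")   # table-miss -> controller
        else:
            print(f"packet to {packet['dst']} -> port {port}")

    install_rule("10.0.0.2", out_port=3)      # decision made centrally, in software
    forward({"dst": "10.0.0.2"})              # packet to 10.0.0.2 -> port 3
    forward({"dst": "10.0.0.9"})              # miss: ask controller about 10.0.0.9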
Module No – 280: History of SDN:
SDN has its roots in history as far back as the 1980s and 1990s with the development of the Network Control Point (NCP) technology.
NCP was introduced by AT&T as probably the first established technique to separate the data plane and the control plane.
Active Networks were another attempt to introduce computational and packet-modification capabilities to the network nodes.
Network virtualization is a recent development which provides a hypervisor-like environment for the network infrastructure.
OpenFlow based network operating systems such as ONOS have emerged to make network administration easier and to ease the development and deployment of new protocols and management applications.
Each computer system needs at least one L2 NIC (Ethernet card) for communication.
A physical system must have at least one physical NIC (pNIC).
Each VM has at least one virtual NIC (vNIC).
All the vNICs on a physical host (server) are interconnected through a virtual switch (vSwitch).
The vSwitch is connected to the pNIC.
Multiple pNICs are connected to a physical switch (pSwitch).
There are a number of standards available for NIC virtualization.
A physical Ethernet switch can be virtualized by implementing the IEEE Bridge Port Extension standard 802.1BR.
VLANs can span multiple data centers, and there are several approaches to managing these VLANs.
A VM can be migrated across different data centers by following any of multiple techniques proposed by researchers.
Modern processors allow the implementation of software based network devices such as L2 switches, L3 routers etc.
o The controller software actually controls a subset of the whole network, small enough to be controlled by a single controller instance.
o Programmable control plane: the controller software is controllable through API calls.
o This helps in the rapid implementation of network policies because the control plane is centralized, not distributed.
o Standardized APIs: there is a southbound API for communication with the forwarding devices and a northbound API for communication with the network applications.
o The main southbound API is OpenFlow, standardized by the Open Networking Foundation. The northbound APIs have not been standardized yet.
Fog will support Internet of Everything (IoE) applications such as industrial automation, transportation, and networks of sensors and actuators.
These applications demand real-time/predictable latency and mobility.
Fog can therefore be considered a candidate technology for beyond-5G networks.
Fog will result in the diffusion of the Cloud among the client devices.
Fog computing is a scenario where a huge number of heterogeneous, ubiquitous and decentralized devices communicate and potentially cooperate among themselves and with the network to perform storage and processing tasks without the intervention of third parties.
Network virtualization and SDN are going to be essential parts of Fog computing.
o [Kitanov, S., Monteiro, E., & Janevski, T. (2016, April). 5G and the Fog—Survey
of related technologies and research directions. In Electrotechnical Conference
(MELECON), 2016 18th Mediterranean (pp. 1-6). IEEE]
Cloud computing provides computing/IT resources to users over the Internet in a pay-as-you-go type of business model.
This course has covered almost all the aspects of Cloud computing as well as the advanced topics related to the Cloud.
We are hopeful that you have found this course interesting, informative and comprehensive.
We hope that the students of the Cloud Computing subject will come to appreciate the importance and ubiquity of Cloud computing, and that this subject will become the foundation for advanced courses and an initial source of knowledge in this regard.