CC Handbook 2022
Year: 2023-2024
CE Department
Unit 1: Introduction to Cloud Computing
What we can do
● Developing new applications and services
● Storage, back up, and recovery of data
● Hosting blogs and websites
● Delivery of software on demand
● Analysis of data
● Streaming video and audio
● Back-up and restore data : Once the data is stored in the cloud, it is easier to back up and restore that data using the cloud.
● Improved collaboration : Cloud applications improve collaboration by allowing groups of people
to quickly and easily share information in the cloud via shared storage.
● Excellent accessibility : Cloud allows us to quickly and easily access stored information anywhere in the world, at any time, using an internet connection. An internet cloud infrastructure increases organizational productivity and efficiency by ensuring that our data is always accessible.
● Low maintenance cost : Cloud computing reduces both hardware and software maintenance costs
for organizations.
● Mobility : Cloud computing allows us to easily access all cloud data via mobile devices.
● Pay-per-use model : Cloud computing offers Application Programming Interfaces (APIs) through which users access services on the cloud and pay charges according to their usage of the service (a small cost sketch follows this list).
● Unlimited storage capacity : Cloud offers us a huge amount of storage capacity for storing our
important data such as documents, images, audio, video, etc. in one place.
● Data security : Data security is one of the biggest advantages of cloud computing. Cloud offers
many advanced features related to security and ensures that data is securely stored and handled.
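The pay-per-use model above can be illustrated with a small, hypothetical billing calculation; the service names and rates below are made-up example values, not real provider prices.

# Hypothetical illustration of the pay-per-use model: the consumer is billed
# only for the resources actually consumed. Rates and usage are example values.
RATES = {
    "compute_hours": 0.05,       # $ per VM-hour (example rate)
    "storage_gb_month": 0.02,    # $ per GB-month (example rate)
    "data_transfer_gb": 0.09,    # $ per GB transferred out (example rate)
}

def monthly_bill(usage: dict) -> float:
    """Sum the charge for each metered service the consumer used."""
    return sum(RATES[item] * amount for item, amount in usage.items())

usage = {"compute_hours": 720, "storage_gb_month": 50, "data_transfer_gb": 10}
print(f"Monthly charge: ${monthly_bill(usage):.2f}")  # 720*0.05 + 50*0.02 + 10*0.09 = 37.90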
● Clients/Consumers : These are the devices that end users work on, typically the desktops sitting on their desks, but also laptops, mobiles, and tablets for greater mobility. Clients are responsible for the interaction that drives the management of data on the cloud servers.
● Datacentre: An array of servers that houses the subscribed application. Progress in the IT industry has brought the concept of virtualizing servers, where the software is installed across multiple instances of virtual servers. This approach streamlines the management of dozens of virtual servers running on multiple physical servers.
● Distributed Servers: Servers that are housed in other locations, so the physical servers need not all be in the same place. Even though the distributed servers and the physical servers appear to be in different locations, they perform as if they were right next to each other.
● Cloud provider: A person or organization that makes a service available to interested parties based on market demand (e.g., AWS, Microsoft Azure, Google Cloud).
● Subscription: Defines a consumer's interest in consuming a service.
● Cloud Broker: An organization that creates and maintains relationships with multiple cloud service providers, selecting the best provider for each customer and monitoring the services.
● SLA (Service Level Agreement): A contract between provider and consumer that specifies the consumer's requirements and the provider's commitment to fulfilling them, covering privacy, security, and backup and recovery procedures (a small uptime-check sketch follows this list).
● Resource Pooling : The cloud provider pools computing resources to provide services to multiple customers with the help of a multi-tenant model.
● On-Demand Self-Service : The user can continuously monitor server uptime, capabilities, and allotted network storage. With this feature, the user can also monitor the computing capabilities.
● Easy Maintenance: The servers are easily maintained, downtime is very low, and in some cases there is no downtime at all. Cloud computing regularly comes up with updates, and the updates are increasingly compatible with the devices.
● Large Network Access: The user can access the data of the cloud or upload the data to the cloud
from anywhere just with the help of a device and an internet connection.
● Availability: The capabilities of the cloud can be modified as per use and extended considerably; the user can buy extra cloud storage when needed for a very small amount.
● Automatic System: Cloud computing automatically analyzes the data needed and supports a
metering capability at some level of services. It will provide transparency for the host as well as
the customer.
● Economical : It is a one-time investment: the company buys the storage, and small parts of it can be provided to many companies, which saves the host from monthly or yearly costs.
● Security: It creates a snapshot of the data stored so that the data may not get lost even if one of the
servers gets damaged.
● Pay as you go : the user has to pay only for the service or the space they have utilized. There is no
hidden or extra charge which is to be paid. The service is economical and most of the time some
space is allotted for free.
● Measured Service : supporting charge-per-use capabilities.
● Latest Version Available: Provides the latest version of the software for as long as you are connected.
● High Availability and Data Recovery: The high availability (HA) feature of VI managers aims at minimizing application downtime and preventing business disruption. A few VI managers accomplish this by providing a failover mechanism, which detects failure of both physical and virtual servers and restarts VMs on healthy physical servers. This style of HA protects from host failures, but not from VM failures.
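As a rough illustration of how an SLA availability commitment might be checked, here is a minimal sketch; the 99.9% target and the downtime figure are example values, not taken from any real contract.

# Hypothetical sketch: checking measured uptime against an SLA availability target.
SLA_AVAILABILITY = 99.9          # provider's committed availability (percent), example value
HOURS_IN_MONTH = 30 * 24         # 720 hours

def availability(downtime_hours: float) -> float:
    """Percentage of the month the service was actually up."""
    return 100.0 * (HOURS_IN_MONTH - downtime_hours) / HOURS_IN_MONTH

measured = availability(downtime_hours=0.5)   # 30 minutes of downtime (example)
print(f"Measured availability: {measured:.3f}%")
print("SLA met" if measured >= SLA_AVAILABILITY else "SLA violated")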
1.8 Challenges Of Cloud
● Security and Privacy: Security and privacy of information are the biggest challenges to cloud computing. Security and privacy issues can be overcome by employing encryption, security hardware, and security applications.
● Portability: Applications should be easy to migrate from one cloud provider to another; there must be no vendor lock-in. However, this is not yet possible because each cloud provider uses different standard languages for its platform.
● Interoperability: It means the application on one platform should be able to incorporate services from other platforms. This is made possible via web services, but developing such web services is very complex.
● Reliability and Availability: It is necessary for cloud systems to be reliable and robust because most businesses are now becoming dependent on services provided by third parties.
● Data Lock-In and Standardization:A major concern of cloud computing users is about having
their data locked-in by a certain provider. Users may want to move data and applications out from
a provider that does not meet their requirements.
● Isolation Failure:This risk involves the failure of an isolation mechanism that separates storage,
memory, and routing between the different tenants.
● Insecure or Incomplete Data Deletion: It is possible that data requested for deletion may not actually get deleted. This happens for either of the following reasons: extra copies of the data are stored but are not available at the time of deletion, or the disk that stores data of multiple tenants is destroyed.
● Internet Connection: Cloud services require a constant internet connection and do not work well on a slow or unreliable connection.
1.9 Cloud Computing Architecture
Cloud computing technology is used by both small and large organizations to store the information in the
cloud and access it from anywhere at any time using the internet connection.
Cloud computing architecture is divided into the following two parts -
▪ Front End : The front end is used by the client. It contains client-side interfaces and applications that are required to access the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.).
▪ Back End : The back end is used by the service provider. It manages all the resources that are required
to provide cloud computing services. It includes a huge amount of data storage, security mechanisms,
virtual machines, deploying models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
1. Client Infrastructure:Client Infrastructure is a Front end component. It provides GUI (Graphical User
Interface) to interact with the cloud.
2. Application:The application may be any software or platform that a client wants to access.
3. Service: Cloud services manage which type of service you access according to the client’s requirement. The three main service models are:
Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS applications run directly through the web browser, meaning we do not need to download and install these applications. Examples: Google Apps, Salesforce, Dropbox.
Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas using SaaS we can access software over the internet without the need for any platform. Examples: Windows Azure, Force.com.
Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It provides virtualized infrastructure (servers, storage, and networking), while the consumer remains responsible for managing applications, data, middleware, and runtime environments. Example: Amazon Web Services (AWS) EC2.
4. Runtime Cloud:Runtime Cloud provides the execution and runtime environment to the virtual
machines.
5. Storage:Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.
6. Infrastructure:It provides services on the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud computing model.
7. Management: Management is used to manage components such as application, service, runtime cloud,
storage, infrastructure, and other security issues in the backend and establish coordination between them.
8. Security:Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet: The Internet is the medium through which the front end and back end interact and communicate with each other (a minimal request/response sketch follows this list).
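A minimal sketch of the front end (client) talking to the back end over the Internet is shown below; the endpoint URL, token, and JSON response fields are hypothetical placeholders, not a real provider API.

# Minimal sketch: a front-end client requesting data from a cloud back end.
import json
import urllib.request

API_URL = "https://api.example-cloud.com/v1/storage/files"  # hypothetical back-end endpoint

def list_files(api_token: str):
    """Front-end request: ask the back end for the files stored for this account."""
    req = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {api_token}"})
    with urllib.request.urlopen(req) as resp:           # travels over the Internet
        return json.loads(resp.read().decode("utf-8"))  # back end answers with JSON

# files = list_files(api_token="...your token...")   # usage, once a real endpoint and token exist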
● Types of PaaS
● Stand-alone development environments :The stand-alone PaaS works as an independent entity
for a specific function. It does not include licensing or technical dependencies on specific SaaS
applications
● Application delivery-only environments :The application delivery PaaS includes on-demand
scaling and application security.
● Open platform as a service : Offers open-source software that allows a PaaS provider to run applications.
● Add-on development facilities:The add-on PaaS allows customization of the existing SaaS
platform.
Public Cloud: Available to the general public and owned by an organization selling cloud services. The cloud provider is responsible for the creation and ongoing maintenance of the public cloud and its IT resources.
Private Cloud:Operated solely for a single organization. Private cloud enables an organization to use
cloud computing technology as a means of centralizing access to IT resources by different parts, locations,
or departments of the organization.
Community Cloud: Shared by several entities that have a common purpose. its access is limited to a
specific community of cloud consumers. The community cloud may be jointly owned by the community
members or by a third-party cloud provider that provides a public cloud with limited access.
Hybrid Cloud: A combination of two or more private, community, or public clouds. A hybrid cloud is a cloud environment comprising two or more different cloud deployment models; for example, a cloud consumer may choose to deploy cloud services processing sensitive data to a private cloud and other, less sensitive cloud services to a public cloud.
1.12 Virtualization
● Virtualization is a technique, which allows the sharing of a single physical instance of an
application or resource among multiple organizations or tenants (customers). It does this by
assigning a logical name to a physical resource and providing a pointer to that physical resource
when demanded.
● The Multitenant architecture offers virtual isolation among the tenants. Hence, the organizations
can use and customize their application as though they each have their instances running.
● Virtualization refers to the abstraction of resources across many aspects of computing: one physical machine supports many virtual machines that run in parallel.
● It is the abstraction layer that decouples the physical hardware from the operating system to deliver
greater IT resource utilization and flexibility
● It allows multiple virtual machines with heterogeneous OS to run in isolation side by side
Benefits of Virtualization
Cost Savings : The ability to run multiple virtual machines in one piece of physical infrastructure
drastically reduces the footprint and the associated cost. Moreover, as this consolidation is done at the
core, we don’t need to maintain as many servers. We also have a reduction in electricity consumption and
the overall maintenance cost.
Agility and Speed : Spinning up a virtual machine is a straightforward and quick approach. It’s a lot
simpler than provisioning entirely new infrastructure. For instance, if we need a development/test region
for a team, it’s much faster to provision a new VM for the system administrators. Besides, with an
automated process in place, this task is swift and similar to other routine tasks.
1. More flexible and efficient allocation of resources.
2. Enhance development productivity hence improved performance
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay per use of the IT infrastructure on demand.
7. Enables running multiple operating systems.
8. Data center and energy efficiency savings: As companies reduce the size of their hardware and server footprints, they lower their energy consumption.
9. Operational expenditure savings: Once the servers are virtualized, your IT staff can greatly reduce the ongoing administration and manual management work.
10. A virtual machine is completely isolated from the host machine and from other virtual machines.
Cons of Virtualization
1. Not all hardware or software can be virtualized
Types Of Virtualization
1. Hardware
2. Network
3. Storage
4. Desktop
5. Data
6. Memory
7. Application
Unit 2: Software As A Service
Cloud computing made the process simple: there is no longer any need even to install the software on the computer.
Today, the exponential growth of SaaS and continued improvements to functionality make it a valid option even for enterprise-level businesses. It is also much cheaper and easier to use; SaaS customers frequently cite cost savings as one of its primary benefits. You can find SaaS products for almost any business application you can think of.
The SaaS paradigm is on the fast track due to its innate powers and potential. Executives, entrepreneurs, and end-users are ecstatic about the tactical as well as strategic success of the emerging and evolving SaaS paradigm. A number of positive and progressive developments have started to grip this model. Newer resources and activities are consistently being revised to be delivered as services; IT as a Service (ITaaS) is the most recent and efficient delivery method in the decisive IT landscape. With the meteoric and mesmerizing rise of the service-orientation principles, every single IT resource, activity, and piece of infrastructure is being viewed and visualized as a service, setting the tone for the grand unfolding of the dreamt-of service era. This is accentuated by the pervasive Internet.
Integration as a service (IaaS) is the budding and distinctive capability of clouds for fulfilling business integration requirements. Increasingly, business applications are deployed in clouds to reap the business and technical benefits. On the other hand, there are still innumerable applications and data sources stationed and sustained locally, primarily for security reasons. The question here is how to create seamless connectivity between those hosted and on-premise applications to empower them to work together.
Integration as a service overcomes these challenges by smartly utilizing the time-tested business-to-business (B2B) integration technology as the value-added bridge between SaaS solutions and in-house business applications.
Integration with other cloud and on-premise applications is a time-consuming and tedious task while onboarding SaaS software. Some of the challenges of SaaS integration include cloud integration, IT infrastructure, security, and many more. Therefore, the crucial question to answer is how to reduce the costs and effort of integration while onboarding new SaaS software.
1. Hybrid IT Infrastructure: More and more companies are aiming for a hybrid IT infrastructure that
combines on-premise software with SaaS applications. However, integrating SaaS with your existing IT
infrastructure can become the biggest hurdle. Though public cloud services bring a lot of benefits, failure
to integrate SaaS tools with existing IT tools and software can negate its benefits. In order to facilitate this
cloud integration, SaaS providers and your IT staff need to work closely together.
2. Access Control: Another challenge that businesses face when transitioning into the cloud is access control. The access control and monitoring settings that apply in traditional software are not automatically carried forward to SaaS applications. Admins should have complete control over which user can access what, especially during the transition phase.
3. Cost of Integration: Another major factor for SaaS integration is cost. The integration of existing
software with SaaS requires a high level of expertise. Businesses may need to hire highly skilled
technicians and cloud consulting companies for complicated endeavors. Getting it right may seem
expensive, but getting it wrong can cause real headaches. The best strategy is to count the cost and use of
methods & tools that are reliable and vetted. Integration-as-a-service (IaaS) is one such model that has
received wider adoption and popularity in recent years due to its low-cost approach in solving the
integration conundrum.
4. Time Constraints: Most companies opting for SaaS are generally in a hurry to get the application up and running. Moving from on-premises to the cloud is time-consuming and can lead to real productivity issues if not appropriately managed. Integrating SaaS with your traditional applications can take longer than expected, as a result of which your work may be delayed. This is another challenge that lies ahead in SaaS integration. Businesses need to plan carefully for any SaaS integration and factor in any contingencies and other delays.
5. Inadequate Integration: If the integration is not up to the mark, many problems can arise, wreaking havoc on an organization: users upload files and make changes in different systems, invoices are sent to the wrong customers, data is leaked, and automatic information gathering turns out not to be so automatic. Lower productivity, lost revenue, and low employee morale can be negative consequences of a poorly executed integration. The best practice for a successful integration strategy is to carefully examine your SaaS vendors and not rely on just one approach or methodology, but remain flexible enough to adopt the right solution.
6. Integration Conundrum: Organizations without a method of synchronizing data between multiple lines of business are at a serious disadvantage in terms of maintaining accurate data, forecasting, and automating key business processes. Real-time data and functionality sharing is an essential part of cloud integration.
7. APIs are Insufficient: Many SaaS providers have responded to the integration challenge by developing application programming interfaces (APIs). Unfortunately, accessing and managing data via an API requires a significant amount of coding as well as maintenance due to frequent API modifications and updates.
8. Data transmission security:For any relocated application to provide the promised value for businesses
and users, the minimum requirement is the interoperability between SaaS applications and on-premise
enterprise packages. As SaaS applications were not initially designed keeping the interoperability
requirement in mind, the integration process has become a little tougher assignment. There are other
obstructions and barriers that come in the way of routing messages between on-demand applications and
on-premise resources
9. The Impacts of Cloud:On the infrastructural front, in the recent past,the clouds have arrived onto the
scene powerfully and have extended the horizon and the boundary of business applications, events and
data.That is, business applications, development platforms etc. are getting moved to elastic, online and
on-demand cloud infrastructures.Precisely speaking, increasingly for business, technical, financial and
green reasons, applications and services are being readied and relocated to highly scalable and available
clouds.
SaaS integration, or SaaS application integration, involves connecting a SaaS application with another cloud-based app or on-premise software via application programming interfaces (APIs). Once connected, the app can request and share data freely with the other app or on-premise system.
The integration layer performs data-model transformation, handles connectivity, performs message routing, converts communication protocols, and potentially manages the composition of multiple requests (a small API-based sketch follows).
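A hedged sketch of such API-based SaaS integration is shown below: records are pulled from a hypothetical SaaS endpoint, the data model is transformed, and the result is pushed to a hypothetical on-premise system. Both URLs and all field names are placeholders.

# Sketch: pull records from a SaaS API, transform the data model, push on-premise.
import json
import urllib.request

SAAS_API = "https://crm.example-saas.com/api/customers"        # hypothetical SaaS endpoint
ON_PREM_API = "http://erp.internal.local:8080/api/accounts"    # hypothetical on-premise endpoint

def fetch_saas_customers():
    with urllib.request.urlopen(SAAS_API) as resp:
        return json.loads(resp.read().decode("utf-8"))

def transform(record: dict) -> dict:
    """Map the SaaS data model onto the on-premise data model (example fields)."""
    return {"account_name": record["name"], "account_email": record["email"]}

def push_on_prem(record: dict):
    data = json.dumps(record).encode("utf-8")
    req = urllib.request.Request(ON_PREM_API, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# for customer in fetch_saas_customers():      # usage, once real endpoints exist
#     push_on_prem(transform(customer))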
● A massive amount of information needs to move between SaaS and on-premise systems daily, and data quality and integrity must be maintained.
● Limited access: Access to cloud resources is more limited than access to local applications. Accessing a local application is simpler and faster, and embedding integration points in local as well as custom applications is easier.
● Once applications move to the cloud, custom applications must be designed to support integration, because that low level of access is no longer available.
1) Jitterbit
● Jitterbit cloud integration enables organizations to replicate, cleanse, and synchronize their cloud-based data seamlessly and securely with their on-premise enterprise applications and systems.
● Besides user-friendly interfaces and wizard tools, Jitterbit supports not only XML but also focuses on Web services. Jitterbit focuses on data integration in the context of point-to-point application integration, ETL, and SOA.
● Jitterbit supports SOA, event-driven architectures, and additional data integration methods,
and can easily scale to fit any cloud integration initiative.
● It is a fully graphical integration solution that provides users a versatile platform and a suite
of productivity tools to reduce the integration efforts sharply.
● It can be used as standalone or with existing infrastructure that enables users to create
projects or consume and modify existing ones offered by the open source community or
service provider .
● Jitterbit consists of two parts:
Integration environment : a point-and-click graphical user interface that enables users to quickly configure, test, deploy, and manage integration projects on the Jitterbit server.
Integration server : a powerful and scalable runtime engine that processes all the integration operations, fully configurable and manageable from the Jitterbit application.
2) Boomi Software
● It is an integration service that is completely on demand and connects any combination of SaaS, PaaS, cloud, and on-premise applications without the burden of installing and maintaining software packages or applications.
● Boomi offers a pure SaaS integration solution that enables users to quickly develop and deploy connections.
3) Bungee Connect
● Bungee Connect is a web application development and hosting platform. Developers use it to build desktop-like web applications that leverage multiple web services and databases.
● It provides development, testing, deployment, and hosting in a single, on-demand platform.
● Bungee Connect reduces the effort needed to integrate multiple web services into a single application. Applications built with Bungee Connect run at native speeds on each platform; an application built in Java with Bungee Connect will run natively on all targeted platforms.
● Bungee Connect includes the following features:
● Interaction delivered entirely via browser with no download or plug-in for developers or
end users
● Delivery of highly interactive user experience without compromising accessibility and
security
● Automated integration of web services (SOAP/REST) and databases (MySQL/
PostgreSQL)
● Built-in team collaboration, testing, scalability, reliability, and security
● Deep instrumentation of end-user application utilization for analytics
● Utility pricing model based on end-user application
4) OpSource Connect
● It unifies different SaaS applications as well as legacy applications running behind a corporate firewall.
● Features:
● Service bus
● Service connector
● Connect certified integrator program
● Connect service exchange
● Web service enablement program
5) SnapLogic
● SnapLogic is a platform to integrate applications and data, allowing you to quickly connect
apps and data sources . The company is also branching out into connecting and integrating
data from IoT devices .
● SnapLogic offers a solution that provides flexibility for today's data integration challenges
1. Changing data sources: SaaS and on-premise applications, Web APIs, and RSS feeds
2. Changing deployment options: On-premise, hosted, private, and public cloud platforms
● Advantages: Includes many built-in integrations and easy tracking of feeds into a system.
● Disadvantages: Can take time to understand how the platform works; error handling is not built in.
Virtual machine provisioning enables cloud providers to make efficient use of available resources and make a good profit out of it. A cloud provider provisions its resources either statically or dynamically. In static virtual machine provisioning, the current demand of the user is not considered.
• Historically, when there is a need to install a new server for a certain workload to provide a particular
service for a client, lots of effort was exerted by the IT administrator, and much time was spent to install
and provision a new server.
● Now, with the emergence of virtualization technology and the cloud computing IaaS model:
● It is just a matter of minutes to achieve the same task. All you need is to provision a virtual server
through a self-service interface with small steps to get what you desire with the required specifications.
1) provisioning this machine in a public cloud like Amazon Elastic Compute Cloud (EC2), for example through the provider's self-service API (see the sketch after this list), or
2) using a virtualization management software package or a private cloud management solution installed
at your data center in order to provision the virtual machine inside the organization and within the private
cloud setup.
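As a sketch of option 1, self-service provisioning of a virtual server in a public cloud such as EC2 might look like the following, assuming the AWS SDK for Python (boto3); the AMI ID, instance type, and region are placeholder values that depend on your account.

# Sketch: provisioning a virtual server in EC2 through the self-service API (boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder OS template (AMI)
    InstanceType="t2.micro",          # requested specification
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned virtual server: {instance_id}")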
Analogy for Migration Services:
• Previously, whenever there was a need for performing a server‘s upgrade or performing maintenance
tasks, you would exert a lot of time and effort, because it is an expensive operation to maintain or upgrade
a main server that has lots of applications and users.
• Now, with the advance of revolutionary virtualization technology and the migration services associated with hypervisors’ capabilities, these tasks (maintenance, upgrades, patches, etc.) are very easy and take almost no time to accomplish.
• Provisioning a new virtual machine is a matter of minutes, saving lots of time and effort; migration of a virtual machine is a matter of milliseconds.
Virtual Machine Provisioning and Manageability
• Virtual Machine Lifecycle Management (VMLM) is a set of processes designed to help administrators oversee the implementation, delivery, operation, and maintenance of virtual machines (VMs) over the course of their existence.
1) IT service request
2) VM provision processing
● Select a server from a pool of available servers along with the appropriate OS template you need to provision the virtual machine.
● Load the appropriate software: the operating system you selected in the previous step, device drivers, middleware, and the needed applications for the service required.
● Customize and configure the machine (e.g., IP address, gateway) to configure the associated network and storage resources.
● Finally, the virtual server is ready to start with its newly loaded software.
2. Cloning of existing VM
3. VM template
A problem with virtual machine provisioning is that VMs can be provisioned so rapidly that documenting and managing the VM lifecycle becomes difficult.
Migration is the process of moving virtual machines from one host server or storage location to another. In the process, all key machine components and resources are completely virtualized.
Migration Time: Migration time refers to the total amount of time required to transfer a virtual machine from the source to the destination node without affecting its availability.
Migration is used for load balancing and physical machine fault tolerance. It can also be used to reduce power consumption in cloud data centers.
Virtual machine migration Techniques
1) Hot (live) Migration - The virtual machine keeps running during migration and does not lose its status.
● Also called hot or real-time migration
● Movement is done while the power is on
● Goes unnoticed by the user
● Facilitates proactive maintenance upon failure
● VM storage should be shared between the source and destination hosts
● A CPU compatibility check is required
● Used for load balancing
● Ex: Xen hypervisor
Stage 2: Iterative Pre-Copy. During the first iteration, all pages are transferred from A to B. Subsequent
iterations copy only those pages dirtied during the previous transfer phase.
Stage 3: Stop-and-Copy Running OS instance at A is suspended, and its network traffic is redirected to
B. As described in reference 21, CPU state and any remaining inconsistent memory pages are then
transferred. At the end of this stage, there is a consistent suspended copy of the VM at both A and B. The
copy at A is considered primary and is resumed in case of failure.
Stage 4: Commitment. Host B indicates to A that it has successfully received a consistent OS image. Host A acknowledges this message as a commitment of the migration transaction. Host A may now discard the original VM, and host B becomes the primary host.
Stage 5: Activation. The migrated VM on B is now activated. Post-migration code runs to reattach the
device’s drivers to the new machine and advertise moved IP addresses.
Assumption: This approach to failure management ensures that at least one host has a consistent VM image at all times during migration. It depends on the assumption that the original host remains stable until the migration commits, and that the VM may be suspended and resumed on that host with no risk of failure.
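A simplified sketch of the iterative pre-copy scheme described in Stages 2-5 is given below; real hypervisors track dirty pages in hardware, so this toy version only simulates the idea with plain Python structures and randomly chosen dirty pages.

# Toy sketch of iterative pre-copy live migration (Stages 2-5, simplified).
import random

def live_migrate(memory: dict, max_rounds: int = 5, stop_threshold: int = 8):
    dest = {}

    # Stage 2: iterative pre-copy. The first round copies every page; later rounds
    # copy only the pages dirtied while the previous round was in flight.
    dirty = set(memory)
    for _ in range(max_rounds):
        if len(dirty) <= stop_threshold:
            break
        for page in dirty:
            dest[page] = memory[page]
        # Pages the still-running guest happened to dirty during this round (simulated).
        dirty = set(random.sample(sorted(memory), k=max(1, len(memory) // 4)))

    # Stage 3: stop-and-copy. Suspend the VM and transfer the remaining dirty pages.
    for page in dirty:
        dest[page] = memory[page]

    # Stages 4-5: commitment on the source and activation on the destination host.
    return dest

source_memory = {page: f"contents-{page}" for page in range(64)}
assert live_migrate(source_memory) == source_memory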
2) Cold (non-live) migration: The VM loses its status (state), and the user can notice the service interruption.
Step 1 : The configuration files, log files, as well as the disks of the virtual machine, are moved from the
source host to the destination host’s associated storage area.
Step 2: The virtual machine is registered with the new host.
Step 3: After the migration is completed, the old version of the virtual machine is deleted from the source
host
Unit 3: Abstraction And Virtualisation
● It is the abstraction layer that decouples the physical hardware from the operating system to deliver
greater IT resource utilization and flexibility
● It allows multiple virtual machines with heterogeneous OS to run in isolation side by side
● Virtualization is an abstraction technique where the finer details of the hardware layout are hidden from
the upper layers of computing such as an operating system or application
● It provides a sense of the existence of computing resources in a way that may not be real.
● Virtualization also makes it possible to migrate VMs between hosts.
4) Guest Operating System : Whereas the host operating system is software installed on a computer
to interact with the hardware, the guest operating system is software installed onto and running on
the virtual machine. The guest OS can be different from the host OS in virtualization and is either
part of a partitioned system or part of a virtual machine. It mainly provides another OS for
applications. While the guest OS shares resources with the host OS, the two operate independently
of one another. These various operating systems can run at the same time, but the host operating
system must be started initially.
5) Applications: All the applications that one can run on the guest operating systems, such as Excel, Word, etc.
● Could be a single point of failure: if the host goes down for any reason, one is likely to lose access to the VMs hosted on it.
● Not everything can be virtualized: some hardware-dependent applications require specific hardware to be present in order to run, such as USB flashing software or a Bluetooth dongle.
● Skilled staff are required for managing the virtualized environment: installation of guest OSes, provisioning, upgrading, and security control.
3.4) Implementation Level Of Virtualization
● It is not sufficient today to use just a single piece of software in computing. Today, professionals look to test their software and programs across various platforms. However, there are challenges here because of various constraints. This gives rise to the concept of virtualization. Virtualization lets users create several platform instances, which could be various applications and operating systems.
● It is not simple to set up virtualization. Your computer runs on an operating system that gets configured
on some particular hardware. It is not feasible or easy to run a different operating system using the
same hardware.
● To do this, you will need a hypervisor.It is a bridge between the hardware and the virtual operating
system, which allows smooth functioning.
● Talking of the Implementation levels of virtualization in cloud computing, there are a total of five levels
that are commonly used. Let us now look closely at each of these levels of virtualization
implementation in cloud computing.
3.4.1) Instruction Set Architecture Level
For basic emulation at this level, an interpreter is needed, which interprets the source code and then converts it into a hardware format that can be read. This then allows processing.
3.4.2) Hardware Abstraction Level
True to its name HAL lets the virtualization perform at the level of the hardware. This makes use
of a hypervisor which is used for functioning. At this level, the virtual machine is formed, and this
manages the hardware using the process of virtualization. It allows the virtualization of each of the
hardware components, which could be the input-output device, the memory, the processor, etc.
3.4.3) Operating System Level
At the level of the operating system, the virtualization model is capable of creating a layer that is
abstract between the operating system and the application. This is an isolated container that is on
the operating system and the physical server, which makes use of the software and hardware.
Each of these then functions in the form of a server.
When there are several users, and no one wants to share the hardware, then this is where the
virtualization level is used. Every user will get his virtual environment using a virtual hardware
resource that is dedicated. In this way, there is no question of any conflict.
This is generally used when you run virtual machines that use high-level languages. The
application will sit above the virtualization layer, which in turn sits on the application program.
It lets the high-level language programs compiled to be used in the application level of the virtual
machine run seamlessly.
1. Hardware
● The operating system that is running on a physical server gets converted into a well-defined OS that runs on the virtual machine. The hypervisor controls the processor, memory, and other components by allowing different OSes to run on the same machine without the need for source code modification.
1. Full Virtualization – In it, the complete simulation of the actual hardware takes place to
allow software to run an unmodified guest OS.
3. Partial Virtualization – In this type of hardware virtualization, the software may need
modification to run.
2. Network
● Network virtualization is specifically useful for networks experiencing a huge, rapid, and unpredictable increase in usage, and it improves network productivity and efficiency.
● It is the ability to run multiple virtual networks, each with a separate control and data plane. They co-exist on top of one physical network and can be managed by individual parties that are potentially confidential to each other.
● Two categories: external network virtualization (combining many physical networks, or parts of networks, into a single virtual unit) and internal network virtualization (providing network-like functionality to the software containers on a single system).
3. Storage
● In this type of virtualization, multiple network storage resources are presented as a single storage device for easier and more efficient management of these resources.
1. Block- It works before the file system exists. It replaces controllers and takes over
at the disk level.
2. File- The server that uses the storage must have software installed on it in order to
enable file-level usage.
4. Desktop
● It provides work convenience and security. As one can access the desktop remotely, you are able to work from any location and on any PC. It provides a lot of flexibility for employees to work from home or on the go. It also protects confidential data from being lost or stolen by keeping it safe on central servers.
● Users who want specific operating systems other than Windows Server will need to have a
virtual desktop.
● Main benefits of desktop virtualization are user mobility, portability, easy management of
software installation, updates and patches.
5. Data
● Without needing the technical details, you can easily manipulate data and know how it is formatted or where it is physically located. Data virtualization decreases data errors and workload.
● This is the kind of virtualization in which data is collected from various sources and managed in a single place without knowing the technical details of how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services.
6. Memory
● It introduces a way to decouple memory from the server to provide a shared, distributed or
networked function. It enhances performance by providing greater memory capacity
without any addition to the main memory. That’s why a portion of the disk drive serves as
an extension of the main memory.
● Load balancing is the process of distributing workloads across multiple servers. It prevents any single server from getting overloaded and possibly breaking down. It improves service availability and helps prevent downtime. A load balancer routes incoming traffic to multiple servers, which in turn share the workload (a minimal round-robin sketch follows the approaches below).
● Without load balancers, newly spun virtual servers wouldn’t be able to receive the incoming traffic
in a coordinated fashion or if at all. Some virtual servers might even be left handling zero traffic
while others become overloaded.
● Load balancing is divided into three approaches:
1. Centralized approach: a single node is responsible for managing the distribution within
the whole system.
2. Distributed approach: each node independently builds its own load vector by collecting
the load information of other nodes. Decisions are made locally using local load vectors.
This approach is more suitable for widely distributed systems such as cloud computing.
3. Mixed approach: A combination between the two approaches to take advantage of each
approach
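A minimal sketch of the load-balancing idea, using a simple round-robin dispatcher over a pool of hypothetical servers, is given below; real load balancers also track health and current load.

# Round-robin load balancing sketch: spread requests evenly across a server pool.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # rotate through the pool endlessly

    def route(self, request):
        server = next(self._servers)     # pick the next server in turn
        return server, request

balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])   # hypothetical server names
for i in range(6):
    server, _ = balancer.route(f"request-{i}")
    print(server)   # vm-1, vm-2, vm-3, vm-1, vm-2, vm-3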
● Scalability is the ability of an algorithm to perform load balancing for a system with any finite
number of nodes. This metric should be improved.
● Resource Utilization is used to check the utilization of resources. It should be optimized for efficient load balancing.
● Performance is used to check the efficiency of the system. This has to be improved at a
reasonable cost, e.g., reduce task response time while keeping acceptable delays.
● Response Time is the amount of time taken to respond by a particular load balancing algorithm in
a distributed system. This parameter should be minimized.
● Overhead Associated determines the amount of overhead involved while implementing a load-balancing algorithm. It is composed of overhead due to movement of tasks, inter-processor communication, and inter-process communication. This should be minimized so that a load balancing technique can work efficiently.
● Throughput is used to calculate the number of tasks whose execution has been completed. It should be high.
● Fault tolerance is the ability of an algorithm to perform uniform load balancing in case of node failure.
● Migration time is the time to migrate a job or resource from one node to another. It should be minimized.
3.7) Hypervisor
● A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the
resources on various pieces of hardware and provides partitioning, isolation or abstraction
● This technique allows multiple guest operating systems (OS) to run on a single host system at the
same time , sometimes also called a virtual machine manager (VMM)
● A hypervisor allows a single host computer to support multiple virtual machines (VMs) by sharing
resources including memory and processing.
● Hypervisors provide greater IT versatility because the guest VMs are independent of the host hardware, which is one of their major benefits. This implies that VMs can be quickly moved between servers. Hypervisors also help reduce the space, energy use, and maintenance requirements of servers.
Benefits of hypervisors
● Speed: The hypervisors allow virtual machines to be built instantly unlike bare-metal servers. This
makes provisioning resources for complex workloads much simpler.
● Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical
machine often allow for more effective use of a single physical server.
● Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software no longer relies on particular hardware devices or drivers; bare-metal hypervisors enable operating systems and their related applications to operate on a variety of hardware types.
● Portability: Multiple operating systems can run on the same physical server thanks to hypervisors
(host machine). The hypervisor's virtual machines are portable because they are separate from the
physical computer.
● DISPATCHER:The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.
● ALLOCATOR: The allocator is responsible for deciding the system resources to be provided to the
virtual machine instance.It means whenever a virtual machine tries to execute an instruction that
results in changing the machine resources associated with the virtual machine, the allocator is
invoked by the dispatcher.
● INTERPRETER: The interpreter module consists of interpreter routines.These are executed,
whenever a virtual machine executes a privileged instruction.
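A toy sketch of how the dispatcher, allocator, and interpreter modules might cooperate is given below; the instruction names and the rule for deciding which module handles an instruction are illustrative assumptions, not any real VMM's design.

# Toy model of the three VMM modules: dispatcher, allocator, interpreter.
class Allocator:
    def handle(self, vm, instruction):
        print(f"[allocator] adjusting resources of {vm} for '{instruction}'")

class Interpreter:
    def handle(self, vm, instruction):
        print(f"[interpreter] emulating privileged instruction '{instruction}' for {vm}")

class Dispatcher:
    """Entry point of the monitor: reroutes each trapped instruction."""
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, vm, instruction):
        if instruction.startswith("alloc"):   # instruction changes machine resources
            self.allocator.handle(vm, instruction)
        else:                                 # other privileged instructions are emulated
            self.interpreter.handle(vm, instruction)

vmm = Dispatcher()
vmm.dispatch("vm-1", "alloc_memory 512MB")        # hypothetical instruction names
vmm.dispatch("vm-2", "write_control_register")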
3.7.2)Types Of Hypervisor
● A type 1 hypervisor functions as a lightweight operating system that runs directly on the host's hardware. Because such hypervisors are isolated from the attack-prone operating system, they are extremely stable.
● They are usually faster and more powerful than hosted hypervisors. The majority of enterprise
businesses opt for bare-metal hypervisors for their data center computing requirements.
● Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
● Cons: One problem with Type-1 hypervisors is that they usually need a dedicated machine to perform their operation, to manage the different VMs, and to control the host hardware resources.
Xen Hypervisor
● It is an open-source type 1 hypervisor that allows multiple virtual machines to run on a single host machine.
● Characteristics and features of Xen
1. Wide adoption and distribution
2. Open source and flexible
3. Support multiple guest operating systems
4. High scalability and performances
5. Small size
6. Provide security
Xen Architecture:
Physical hardware: the bottom-most layer, consisting of the actual hardware devices such as the CPU, RAM, and storage of the bare-metal server.
Xen hypervisor: runs directly on the hardware and is responsible for managing the CPU, memory, and other hardware components.
Domain 0-The guest OS, which has control ability, is called Domain 0, and the others are called Domain
U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any file system
drivers being available. Domain 0 is designed to access hardware directly and manage devices.
Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest
domains (the Domain U domains).
Guest OS: each created virtual machine runs its own OS and applications.
● The type 2 hypervisor is a software layer or framework that runs on a traditional operating
system. It operates by separating the guest and host operating systems. The host operating system
schedules VM services, which are then executed on the hardware.
● Individual users who wish to operate multiple operating systems on a personal computer should use a type 2 hypervisor. This type of hypervisor also includes virtual machines with it.
● Such hypervisors don't run directly on the underlying hardware; rather, they run as an application in a host system (physical machine), i.e., as software installed on an operating system. The hypervisor asks the operating system to make hardware calls.
● The type-2 hypervisor is very useful for engineers, security analysts(for checking malware, or
malicious source code and newly developed applications).
● Pros: Such kind of hypervisors allow quick and easy access to a guest Operating System alongside
the host machine running. These hypervisors usually come with additional useful features for guest
machines. Such tools enhance the coordination between the host machine and guest machine.
● Cons: Here there is no direct access to the physical hardware resources, so the efficiency of these hypervisors lags behind type-1 hypervisors in performance. Potential security risks also exist: an attacker who compromises a security weakness in the host operating system can also access the guest operating systems.
● KVM is a unique and popular open-source hypervisor built into Linux distributions that allows
creation of VMs on the Linux OS. It has characteristics of both Type 1 and Type 2 hypervisors.
● Since KVM is a kernel module of Linux, it converts the host OS itself into a bare metal Type 1
hypervisor. it is part of the code that interacts with other applications, which can compete with
other kernel modules for resources, giving the installation some characteristics of Type 2.
● KVM offers all the hardware, compute, and storage support of Type 1 hypervisors, including live
migration of VMs, scalability, scheduling, and low latency. VMs created with KVM are
empirically known to be secure.
1) Full Virtualization
● The underlying hardware is entirely simulated. The guest software does not need to be modified to run its applications. The hardware architecture presented to the guest program is a complete simulation, providing an environment very similar to a dedicated server operating system.
2) Para-Virtualization
● In para-virtualization, the hardware is not simulated and the guest runs as an independent domain. Instead of simulating the hardware, an API that modifies the guest operating system is used. Specific commands, called hypercalls, are sent from the operating system to the hypervisor; these hypercalls are used, for example, to manage memory.
3. Emulation Virtualization
● Emulation Virtualization. In this type of Virtualization, the virtual machine simulates the
Hardware, thus becoming independent of it. In this Virtualization, the guest operating system is
not required to perform any modifications.
● The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of its VM, but the guest OS cannot directly access the actual machine memory. Each VM maintains its own page tables that provide the mapping from virtual page numbers to physical page numbers as assigned by the hypervisor.
● Guest virtual address (virtual address used by guest OS/ virtual machine)-GVA
● Guest physical address (physical address used by the Guest OS) GPA
● Host physical address(actual physical address of memory which are not virtual)-HPA
● Address translation is the process of mapping a GVA to a GPA and then to an HPA to fetch the actual data.
1) Software - SHADOW PAGE TABLES
● Shadow page tables are used by the hypervisor to keep track of the state in which the guest "thinks" its page tables should be. The guest can't be allowed access to the hardware page tables, because then it would essentially have control of the machine. So, the hypervisor keeps the "real" mappings (guest virtual -> host physical) in the hardware when the relevant guest is executing, and keeps a representation of the page tables that the guest thinks it is using.
● Since each page table of the guest OSes has a separate page table in the VMM corresponding to it, the VMM page table is called the shadow page table. The physical memory addresses are then translated to machine addresses using another set of page tables defined by the hypervisor.
2) Hardware - NESTED PAGE TABLES (used by AMD) and EXTENDED PAGE TABLES (used by Intel)
● Nested page tables add another layer of indirection to virtual memory. It provides
hardware assistance to the two-stage address translation in a virtual execution environment
by using a technology called nested paging
● Nested paging implements some memory management in hardware, which can greatly
accelerate hardware virtualization
● Nested paging eliminates the overhead caused by VM exits and page table accesses. In
essence, with nested page tables the guest can handle paging without intervention from the
hypervisor. Nested paging thus significantly improves virtualization performance.
● When the guest OS changes the virtual memory to a physical memory mapping, the VMM
updates the shadow page tables to enable a direct lookup. Nested paging eliminates the
overhead caused by VM exits and page table accesses.
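The two-stage translation (GVA -> GPA -> HPA) and the way a shadow or nested page table collapses it into a single lookup can be sketched as follows; the page numbers are arbitrary example values.

# Toy sketch of two-stage address translation and its collapsed (shadow) form.
guest_page_table = {0x1: 0x10, 0x2: 0x11}    # GVA page -> GPA page (maintained by the guest OS)
host_page_table  = {0x10: 0xA0, 0x11: 0xA7}  # GPA page -> HPA page (maintained by the hypervisor)

# A shadow / nested page table collapses both stages into one lookup: GVA -> HPA.
shadow_page_table = {gva: host_page_table[gpa] for gva, gpa in guest_page_table.items()}

def translate(gva_page: int) -> int:
    """Walk both tables, as the two-stage (nested) translation would do."""
    return host_page_table[guest_page_table[gva_page]]

assert translate(0x1) == shadow_page_table[0x1] == 0xA0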
1. Higher memory utilization by sharing contents and consolidating more virtual machines
on a physical host.
2. Ensuring some memory space exists before halting services until memory frees up.
● How it works
I/O MMU : virtualizes I/O the same way an MMU virtualizes memory. It maps device memory
addresses to real physical addresses. It can keep different guests' DMA out of each other's way.
Device pass through allows both the device and the guest OS to be unaware that any address translation
may be going on
Device isolation lets a device assigned to a VM directly access its memory without interfering with other
guests.
Interrupt remapping is necessary so that the right interrupt goes to the right VM.
3.9.2.1) BENEFITS
1. Abstracting resources provides more flexibility through faster provisioning and increased
utilization of the underlying physical infrastructure.
3. Independently adding or removing servers from the cluster, and running multiple operating systems (OSes) on a host machine.
3.9.2.2) Methods of I/O Virtualization
1. Full device emulation: Device emulation for I/O virtualization is implemented inside the middle layer, which maps real I/O devices into virtual devices for the guest device driver to use.
● This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices.
2. Para-virtualization: the guest OS is not completely isolated; it is partially isolated by the virtual machine from the virtualization layer and hardware. VMware and Xen are some examples of para-virtualization.
● It is also known as the split driver model consisting of a frontend driver and a backend
driver. They interact with each other via a block of shared memory.
● The frontend driver manages the I/O requests of the guest OSes and the backend driver is
responsible for managing the real I/O devices and multiplexing the I/O data of different
VMs.
3. Direct I/O: Direct I/O virtualization lets the VM access devices directly; VMs are allowed to access the physical I/O devices directly. It is generally used for networking in VMs.
3.9.2.3) Difference between Full Virtualization and Para Virtualization
CPU Virtualization
1. Software-Based CPU Virtualization: Application code is executed directly on the processor, while privileged code is translated first and the translated code is then executed on the processor. Guest programs that are based on privileged code run very smoothly and fast.
2. Hardware-Assisted CPU Virtualization: Here, the guest uses a different version of code and a mode of execution known as guest mode. The guest code mainly runs in guest mode. The best part of hardware-assisted CPU virtualization is that no translation is required while using hardware assistance.
3. Virtualization and Processor-Specific Behavior Despite having specific software behavior of the
CPU model, the virtual machine still helps in detecting the processor model on which the system runs.
The processor model is different based on the CPU and the wide variety of features it offers
4. Performance Implications of CPU Virtualization CPU Virtualization adds the amount of overhead
based on the workloads and virtualization used. Any application depends mainly on the CPU power
waiting for the instructions to get executed first. Such applications require the use of CPU Virtualization
that gets the command or executions that are needed to be executed first.
Benefits
Using CPU virtualization, overall performance and efficiency are improved to a great extent because virtual machines share a single CPU, with resource sharing acting like multiple processors working at the same time. This saves cost and money.
As CPU Virtualization uses virtual machines to work on separate operating systems on a single sharing
system, security is also maintained by it. The machines are also kept separate from each other.
The hardware requirement is lower and dedicated physical machines are not needed, so the cost is very low and time is saved. CPU virtualization also offers fast deployment options so that the service reaches the client without any hassle.
● One can club virtual resources in their respective clusters and effectively manage them.Virtual
clusters allow aggregating virtual resources for effective operations and management .
● Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks.
PROPERTIES OF VIRTUAL CLUSTER
● The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with
different OSes can be deployed on the same physical node.
● A VM runs with a guest OS, which is often different from the host OS, that manages the resources
in the physical machine, where the VM is implemented.
● The purpose of using VMs is to consolidate multiple functionalities on the same server. This will
greatly enhance server utilization and application flexibility
● The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way
an overlay network varies in size in a peer-to-peer (P2P) network.
● VMs can be colonized (replicated) in multiple servers for the purpose of promoting distributed
parallelism, fault tolerance, and disaster recovery.
● The failure of any physical nodes may disable some VMs installed on the failing nodes. But the
failure of VMs will not pull down the host system.
● The various vendors provide several features that work only in virtual clusters and not on
independent hypervisors or virtual machines .Cluster high availability is a feature that works only
on clusters.
● DATA CENTER AUTOMATION: means that huge volumes of hardware, software, and
database resources in these data centers can be allocated dynamically to millions of Internet users
simultaneously, with guaranteed QoS and cost-effectiveness.
● Google, Yahoo!, Amazon, Microsoft, HP, Apple, and IBM are all in the game. All these companies have invested billions of dollars in data-center construction and automation.
● Server Consolidation in Data Centers: Server consolidation is the process of migrating network services and applications from multiple computers to a single computer; this can include consolidating multiple physical computers into multiple virtual computers on one host computer.
● Organizations consolidate computers for several reasons, such as minimizing power consumption, simplifying administration duties, or reducing overall cost.
● In data centers, a large number of heterogeneous workloads can run on servers at various times. These are of two types:
○ Chatty workloads may burst at some point and return to a silent state at some other point; for example, a web video service.
○ Noninteractive workloads do not require people’s efforts to make progress after they are submitted; for example, high-performance computing.
● At various stages, the resource requirements of these workloads differ dramatically. To guarantee that a workload will always be able to cope with all demand levels, it is statically allocated enough resources so that peak demand is satisfied.
● It is common that most servers in data centers are underutilized. A large amount of hardware,
space, power, and management cost of these servers is wasted.
● Server consolidation is an approach to improve the low utility ratio of hardware resources by
reducing the number of physical servers.
● Consolidation enhances hardware utilization. Many underutilized servers are consolidated into
fewer servers to enhance resource utilization. Consolidation also facilitates backup services and
disaster recovery.
● Enables more agile provisioning and deployment of resources. In a virtual environment, the images of the guest OSes and their applications are readily cloned and reused.
● The total cost of ownership is reduced. Server virtualization causes deferred purchases of new
servers, a smaller data-center footprint, lower maintenance costs, and lower power, cooling, and
cabling requirements.
● Improves availability and business continuity. The crash of a guest OS has no effect on the host OS or any other guest OS. It becomes easier to transfer a VM from one server to another, because virtual servers are unaware of the underlying hardware.
Unit 4: Cloud Infrastructure And Cloud Resource Management
● Basic performance metrics are system throughput and efficiency, multitasking scalability, system availability, security index, and cost effectiveness.
● The services of public, private, and hybrid clouds are conveyed to users through networking support over the Internet. The infrastructure layer is deployed first to support IaaS-type services.
● This infrastructure layer serves as the foundation for building the platform layer of the cloud, which supports PaaS by providing virtualized compute, storage, and network resources.
● The platform layer supports general-purpose and repeated use of a collection of software resources. The application layer is formed from the collection of all software modules needed for SaaS applications.
● Cloud data portability : It is the capability of moving information from one cloud service to
another and so on without expecting to re-enter the data.
● Cloud application portability: It is the capability of moving an application from one cloud
service to another or between a client’s environment and a cloud service. The application may
require recompiling or relinking for the target cloud service, but it should not be necessary to make
significant changes to the application code.
4.4.1) Scenario where Portability needed
● If we move an application to another cloud, then naturally its data is also moved, and for some businesses data is very crucial. Unfortunately, most cloud service providers charge a fee to get the data into the cloud.
● The degree of mobility of data can also act as an obstacle. When moving data from one cloud to another, the capability of moving workloads from one host to another should also be assessed.
● As data is highly important in business, the safety of customers’ data must be ensured. Varying software stacks and multiple APIs differing across several dimensions make portability more challenging.
● An Inter-Cloud allows for the dynamic coordination and distribution of load among a set of cloud
data centers
4.5.1) Types of Inter-Cloud
1) Federation – a group of clouds that voluntarily interconnect and share each other’s infrastructure. Architectures in this group fall into two categories:
a) Centralized – in every instance of this group of architectures, there is a central entity that
either performs or facilitates resource allocation. Usually, this central entity acts as a
repository where available cloud resources are registered but may also have other
responsibilities like acting as a marketplace for resources.
b) Peer-to-Peer – in the architectures from this group, clouds communicate and negotiate
directly with each other without mediators.
2) The term Multi-Cloud denotes the usage of multiple, independent clouds by a client or a service. Unlike a Federation, a Multi-Cloud environment does not imply voluntary interconnection and sharing of providers’ infrastructures.
a) Services – application provisioning is carried out by a service that can be hosted either
externally or in-house by the cloud clients. Most such services include broker components
in themselves. Typically, application developers specify an SLA or a set of provisioning
rules, and the service performs the deployment and execution in the background, in a way
respecting these predefined attributes
b) Libraries – often, custom application brokers that directly take care of provisioning and
scheduling application components across clouds are needed. Typically, such approaches
make use of Inter-Cloud libraries that facilitate the usage of multiple clouds in a uniform
way.
● The Inter-Cloud simply makes sure that a cloud can use resources beyond its own reach, taking advantage of pre-existing contracts with other cloud providers.
● A single cloud cannot always fulfill all requests or provide all required services. In such cases, two or more clouds have to communicate with each other, or an intermediary comes into play and federates the resources of two or more clouds.
● Providers provision compute resources and deliver cloud services by signing SLAs with end users. The SLA must commit sufficient resources, such as CPU, memory, and bandwidth, that the user can use for a preset period.
1) Demand-Driven Method: In contrast to static provisioning, which has been used in grid computing for many years, this method adds or removes computing instances based on the current utilization level of the allocated resources. In general, when a resource has surpassed a threshold for a certain amount of time, the scheme increases that resource based on demand. This method is easy to implement, but it does not work out right if the workload changes abruptly. (A minimal sketch of this threshold scheme appears after this list.)
2) Event-Driven Method: This scheme adds or removes machine instances based on a specific time event. The scheme works better for seasonal or predicted events. During such events, the number of users grows before the event period and then decreases during the event period.
3) Popularity Driven Method:In this method, the Internet searches for popularity of certain
applications and creates the instances by popularity demand. The scheme anticipates
increased traffic with popularity.
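The demand-driven scheme above is essentially a threshold check on resource utilization. The following is a minimal, illustrative Python sketch of that check; the thresholds, data structures, and function names are hypothetical and not taken from any particular cloud API.

```python
# Toy sketch of one iteration of the demand-driven provisioning scheme.
# All thresholds, data structures, and function names are hypothetical.

SCALE_UP_THRESHOLD = 0.80    # grow the pool above 80% average CPU utilization
SCALE_DOWN_THRESHOLD = 0.30  # shrink it below 30%


def average_cpu_utilization(instances):
    """Return the mean CPU utilization (0.0-1.0) across the instance pool."""
    return sum(i["cpu"] for i in instances) / len(instances)


def autoscale(instances, add_instance, remove_instance):
    """Apply the threshold check described in the demand-driven method."""
    load = average_cpu_utilization(instances)
    if load > SCALE_UP_THRESHOLD:
        add_instance()                # demand exceeded the threshold: grow
    elif load < SCALE_DOWN_THRESHOLD and len(instances) > 1:
        remove_instance()             # sustained low demand: shrink


if __name__ == "__main__":
    pool = [{"cpu": 0.90}, {"cpu": 0.85}]
    autoscale(pool,
              add_instance=lambda: print("scale up: add one instance"),
              remove_instance=lambda: print("scale down: remove one instance"))
```

In a real deployment the utilization samples would come from the provider’s monitoring service and the check would run on a timer, which is why the scheme reacts poorly to abrupt workload changes.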
● The market directory allows participants to locate providers or consumers with the right offers.
● Auctioneers periodically clear bids and asks received from market participants.
● The banking system ensures that financial transactions pertaining to agreements between participants are carried out.
● Brokers perform the same function in such a market as they do in real-world markets: they mediate
between consumers and providers by buying capacity from the provider and sub-leasing these to
the consumers. A broker can accept requests from many users who have a choice of submitting
their requirements to different brokers.
● Consumers, brokers, and providers are bound to their requirements and related compensations through SLAs. An SLA specifies the details of the service to be provided in terms of metrics agreed upon by all parties, and the penalties for violating those expectations.
● Pricing can be either fixed or variable depending on the market conditions.
● An admission-control mechanism at a provider’s end selects the auctions to participate in or the
brokers to negotiate with, based on an initial estimate of the utility.
● The negotiation process proceeds until an SLA is formed or the participants decide to break off.
● These mechanisms interface with the resource management systems of the provider in order to
guarantee the allocation being offered or negotiated can be reclaimed, so that SLA violations do
not occur.
● The resource management system also provides functionalities such as advance reservations that
enable guaranteed provisioning of resource capacity.
● Brokers gain their utility through the difference between the price paid by the consumers for
gaining resource shares and that paid to the providers for leasing their resources. Therefore, a
broker has to choose those users whose applications can provide it maximum utility. A broker
interacts with resource providers and other brokers to gain or to trade resource shares. A broker is
equipped with a negotiation module that is informed by the current conditions of the resources and
the current demand to make its decisions.
● Consumers have their own utility functions that cover factors such as deadlines, fidelity of results,
and turnaround time of applications. They are also constrained by the amount of resources that
they can request at any time, usually by a limited budget. Consumers also have their own limited
IT infrastructure that is generally not completely exposed to the Internet. Therefore, a consumer
participates in the utility market through a resource management proxy that selects a set of brokers
based on their offerings. He then forms SLAs with the brokers that bind the latter to provide the
guaranteed resources.
● The enterprise consumer then deploys his own environment on the leased resources or uses the
provider’s interfaces in order to scale his applications.
● Application Security: application security processes, secure coding guidelines, training, and testing scripts and tools are typically a collaborative effort between the security team and the product development team.
● Deployment Security: refers to the act of creating different instances on hardware and deploying a guest operating system in each of them.
● Risk Management: identification of data and its links to business processes, applications, and data stores, and assignment of ownership.
● Risk Assessment: critical to helping the information security organization make informed decisions when balancing the dueling priorities of business utility and protection of assets.
● Security Portfolio Management: a lack of portfolio and project management can lead to projects never being completed, and to unsustainable and unrealistic workloads and expectations, because projects are not prioritized according to strategy, goals, and resources.
● Security Awareness: not providing proper awareness and training to the people who need them can expose the company to a variety of security risks.
● Third-Party Risk Management: neglecting it may result in damage to the provider’s reputation, revenue loss, and legal action should the provider be found not to have exercised due diligence over its third parties.
● Forensics: used to retrieve and analyze data; forensic analysts examine data to reconstruct events.
● Logical Design – team members create and develop the blueprint for security, and examine as
well as implement key policies that influence later decisions.
● Physical Design – team members evaluate the technology needed to support the security
blueprint, generate alternative solutions, and agree upon a design.
● Implementation – the security solutions are acquired, tested, implemented, and tested again. Personnel issues are evaluated and specific training and education programs are conducted.
● Maintenance – after implementation, the security program must be operated, properly managed, and kept up to date by means of established procedures.
A cloud security architecture is defined by the security layers, design, and structure of the platform, tools,
software, infrastructure, and best practices that exist within a cloud security solution. A cloud security
architecture provides the written and visual model to define how to configure and secure activities and
operations within the cloud, including such things as identity and access management; methods and
controls to protect applications and data; overall security; processes for instilling security principles into
cloud services development and operations
● Cloud Consumer: A person or organization that maintains a business relationship with, and uses
service from, cloud providers.
● Cloud Provider: A person, organization, or entity responsible for making a service available to
interested parties.
● Cloud Auditor: A party that can conduct independent assessment of cloud services, information
system operations, performance and security of the cloud implementation.
● Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from
cloud providers to cloud consumers.
● Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and
negotiates relationships between cloud providers and cloud consumers.
5.5 Security levels
5.5.1) Application Security
● Application security is the process of developing, adding, and testing security features within
applications to prevent security vulnerabilities against threats such as unauthorized access and
modification.
○ Authorization: the user may be authorized to access and use the application. The system can validate a user by comparing the user’s identity with a list of authorized users. Authentication must happen before authorization.
○ Encryption: protects sensitive data from being seen or even used by a cybercriminal. Where traffic containing sensitive data travels between the end user and the cloud, that traffic can be encrypted to keep the data safe (see the sketch after this list).
○ Logging: If there is a security breach in an application, logging can help identify who got
access to the data and how. Application log files provide a time-stamped record of which
aspects of the application were accessed and by whom.
○ Application security testing: A necessary process to ensure that all of these security
controls work properly.
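As a concrete illustration of the encryption control above, the following sketch uses the third-party Python package cryptography (its Fernet recipe) to encrypt and decrypt a piece of sensitive data. The key handling is simplified for illustration; in a real deployment the key would come from a key-management service.

```python
# Illustrative symmetric encryption of sensitive application data using
# the "cryptography" package (pip install cryptography). Simplified sketch:
# in practice the key would be issued and stored by a key-management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # 32-byte key, base64-encoded
cipher = Fernet(key)

plaintext = b"card=4111111111111111;cvv=123"
token = cipher.encrypt(plaintext)  # ciphertext, safe to store or transmit

print(cipher.decrypt(token))       # only a holder of the key can recover it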
● Benefits
○ Flexibility: It provides protection across multiple data centers and in multi-cloud and
hybrid cloud environments, allowing an organization to take advantage of the full benefits
of virtualization while also keeping data secure.
IAM Role: an IAM identity that you can create in your account that has specific permissions.
An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that
determine what the identity can and cannot do in AWS.
IAM Policy: a document that defines the effect, actions, resources, and optional conditions under
which an operation may be performed.
○ Identity-based policies – Attach managed and inline policies to IAM identities (users, groups to which users belong, or roles). Identity-based policies grant permissions to an identity.
○ Permissions boundaries – Use a managed policy as the permissions boundary for an IAM
entity . That policy defines the maximum permissions that the identity-based policies can
grant to an entity.
○ Organizations SCPs – Service control policies (SCPs) limit the permissions that identity-based policies or resource-based policies grant to entities within the account, but do not themselves grant permissions.
○ Access control lists (ACLs) – Use ACLs to control which principals in other accounts can
access the resource to which the ACL is attached. ACLs are similar to resource-based
policies. ACLs are cross-account permissions policies that grant permissions to the
specified principal.
○ Session policies – Session policies limit the permissions that the role or user's
identity-based policies grant to the session. Session policies limit permissions for a created
session, but do not grant permissions.
A – Actions (create, read, update, delete)
R – Resources (OS, network, files, etc.)
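To make the Effect/Action/Resource/Condition structure concrete, here is a hedged sketch that creates a customer-managed identity-based policy with boto3, the AWS SDK for Python; the bucket name, policy name, and IP range are hypothetical examples.

```python
# Hedged sketch: creating a customer-managed, identity-based IAM policy with
# boto3 (the AWS SDK for Python). Bucket name, policy name, and IP range are
# hypothetical; real values depend on your account.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                                    # Effect
        "Action": ["s3:GetObject"],                           # Action (read)
        "Resource": "arn:aws:s3:::example-bucket/*",          # Resource
        "Condition": {                                        # optional Condition
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
        },
    }]
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="ReadExampleBucket",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])  # ARN of the newly created managed policy
```

Once created, such a policy would be attached to a user, group, or role to grant the permissions it describes.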
This concerns providing a secure and timely management of on-boarding (provisioning) and
off-boarding (de-provisioning) of users in the cloud.
When a user has successfully authenticated to the cloud, a portion of the system resources in
terms of CPU cycles, memory, storage and network bandwidth is allocated. Depending on the
capacity identified for the system, these resources are made available on the system even if no
users have been logged on.
Depending on the number of users, the system resources are allocated as and when required, and
scaled down regularly, based on projected capacity requirements. Simultaneously, adequate
measures need to be in place to ensure that as usage of the cloud drops, system resources are
made available for other objectives; else they will remain unused and constitute a dead
investment.
It is tough for the organizations to keep track of the various logins and ID that the employees
maintain throughout their tenure. The centralized federated identity management (FIdM) is the
answer for this issue. Here users of cloud services are authenticated using a company chosen
identity provider (IdP).
By enabling a single sign-on facility, the organization can extend IAM processes and practices to
the cloud and implement a standardized federation model to support single sign-on to cloud
services.
When it comes to cloud services, it is important to know who has access to applications and data, where they are accessing them from, and what they are doing with them. Your IAM should be able to provide centralized compliance reporting across access rights, provisioning/deprovisioning, and end-user and administrator activity. There should be central visibility and control across all your systems for auditing purposes.
A lot of services and applications used in the cloud come from third-party or vendor networks. You may have secured your own network, but you cannot guarantee that their security is adequate. If you are facing any of these challenges, then Sysfore can help you establish secure and integrated IAM practices, processes, and procedures in a scalable, effective, and efficient manner for your organization.
5.6.2) Identity Management Life Cycle
3. User authentication/federation
6. Log management
● Eliminating weak passwords: IAM systems enforce best practices in credential management, and
can practically eliminate the risk that users will use weak or default passwords. They also ensure
users frequently change passwords.
● Mitigating insider threats: IAM can limit the damage caused by malicious insiders by ensuring users only have access to the systems they work with and cannot escalate privileges without supervision.
● Common platform for access and identity management information: you can apply the same security policies across all the operating platforms and devices used by the organization.
● Ease of use: IAM simplifies signup, sign-in and user management processes for application
owners, end-users and system administrators.
● Productivity gains: IAM centralizes and automates the identity and access management lifecycle. This can improve processing time for access and identity changes and reduce errors.
● Reduced IT costs: IAM can lower operating costs; federated identity services mean you no longer need local identities for external users, and cloud-based IAM services can reduce the need to buy and maintain on-premises infrastructure.
● A system image is a copy or clone of an entire computer system stored in a single file. The image is made using a system imaging program and can later be used to restore the system.
● A machine image is a Compute Engine resource that stores all the configuration, metadata,
permissions, and data from one or more disks required to create a virtual machine (VM) instance.
You can use a machine image in many system maintenance scenarios, such as instance creation,
backup and recovery, and instance cloning
● Machine images can be used to create instances. You can use machine image to make copies of an
instance that contains most of the VM configurations of the source instance. These copies can
then be used for troubleshooting, scaling VM instances, debugging, or system maintenance.
● Machine imaging is mostly used on virtualization platforms; because of this, machine images are also called virtual appliances, and running virtual machines are called instances.
● For example, an Amazon Machine Image (AMI) is a system image used in cloud computing. Amazon Web Services uses AMIs to store copies of a virtual machine. An AMI is a file system image that contains an operating system, all device drivers, and any applications and state information that the working virtual machine would have. The AMI files are encrypted and compressed for security purposes and stored in Amazon S3 (Simple Storage Service).
● Because many users share the cloud, the cloud helps you track information about images, such as ownership, history, and so on. You can choose whether an image is private, exclusively for your own use, or shared with other users in your organization.
● If you are an independent software vendor, you can also add your image to the public catalog.
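As an illustration of working with machine images programmatically, the following hedged boto3 sketch creates an AMI from a running EC2 instance; the instance ID and image name are placeholders.

```python
# Hedged sketch: creating an Amazon Machine Image (AMI) from a running EC2
# instance with boto3. The instance ID and image name are placeholders.
import boto3

ec2 = boto3.client("ec2")
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-server-backup-2024-01-01",
    Description="Snapshot of the web tier for backup and cloning",
    NoReboot=True,               # capture the image without stopping the instance
)
print(response["ImageId"])       # the new AMI, usable to launch identical instances
```

The resulting image ID can then be used to launch clones of the instance for scaling, troubleshooting, or disaster recovery, as described above.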
5.8) Autonomic Security
● Autonomic computing refers to a self-managing computing model in which computer systems
reconfigure themselves in response to changing conditions and are self-healing.
● Autonomic Systems : Autonomic systems are based on the human autonomic nervous system,
which is self-managing, monitors changes that affect the body, and maintains internal balances.
● Such a system requires sensory inputs, decision-making capability, and the ability to implement
remedial activities to maintain an equilibrium state of normal operation
1.Self-awareness : system “knows itself” and is aware of its state and its behaviors.
2.Self-configuring : system should be able to configure and reconfigure itself under varying and
unpredictable conditions.
3.Self-optimizing : a system should be able to detect sub-optimal behaviors and optimize itself to
improve its execution.
4.Self-healing :system should be able to detect and recover from potential problems and continue
to function smoothly.
5.Self-protecting: system should be capable of detecting and protecting its resources from both
internal and external attack and maintaining overall system security and integrity.
● Autonomic self-protection involves detecting a harmful situation and taking actions that will
mitigate the situation.
● These systems will also be designed to predict problems from analysis of sensory inputs and
initiate corrective measures.
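The autonomic properties above all follow the same sense-decide-act loop. Purely as an illustration (none of these names come from a real framework), here is a toy Python sketch of a self-healing monitor.

```python
# Toy illustration of the autonomic monitor-decide-act loop: sense the state,
# detect a deviation from normal operation, and apply a remedial action.
# All names here are hypothetical.

def sense(service):
    """Sensory input: probe a service and report whether it is healthy."""
    return service.get("healthy", False)


def heal(service):
    """Remedial action (self-healing): restart the failed component."""
    service["healthy"] = True
    print(f"restarted {service['name']}")


def autonomic_loop(services):
    """Decision-making: restore equilibrium for any service that has failed."""
    for svc in services:
        if not sense(svc):
            heal(svc)


autonomic_loop([{"name": "auth-api", "healthy": False},
                {"name": "billing", "healthy": True}])
```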
● Storage traffic over Fiber Channel avoids the TCP/IP packetization and latency issues, as well as
any local area network congestion, ensuring the highest access speed available for media and
mission critical stored data
● SAN Benefits:
○ Often the only solution for demanding applications requiring concurrent shared access.
○ Security is also a main advantage of SAN. If users want to secure their data, then SAN is a
good option to use. Users can easily implement various security measures on SAN.
○ Storage devices can be easily added or removed from the network. If users need more
storage, then they simply add the devices.
○ Another big advantage of using the SAN (Storage Area Network) is better disk utilization.
● Limitations of SAN
○ its cost and administration requirements—having to dedicate and maintain both a separate
Ethernet network for metadata file requests and implement a Fibre Channel network can
be a considerable investment.
● In NAS, data is identified by file name as well as byte offset. The file system is managed by the head unit (CPU and memory). For backup and recovery, files are used instead of a block-by-block copying technique.
● A hard drive array is contained in and managed by this dedicated device, which connects through a network and facilitates access to data using file-centric data access protocols such as NFS (Network File System) and SMB (Server Message Block).
● It allows more hard disk storage to be added to a network that already utilizes servers, without shutting them down for maintenance and upgrades.
● Components of Network Attached Storage (NAS): head unit (CPU and memory), Network Interface Card (NIC), and an optimized operating system.
○ Relatively inexpensive.
○ Ease of administration.
○ Every user or client in the network can easily access the Network Attached Storage.
○ A main advantage of NAS is that it is more reliable than simple hard disks.
○ Another big advantage of NAS is that it offers consolidated storage space within an organization’s own network.
○ The devices of NAS are scalable and can be easily accessed remotely.
○ NAS is managed easily. It takes less time for storing and recovering the data from any
computer over the LAN.
○ It offers an affordable option for both small businesses and homes for private cloud
storage.
● Limitations of NAS
○ Scale and performance: as more users need access, the server might not be able to keep up and will need to be replaced with a more powerful system. Latency (slow or retried connections) is usually not noticed by users for small files, but can be a major problem in demanding environments such as video production and editing.
● Cloud disaster recovery has changed everything by eliminating the need for traditional
infrastructure and significantly reducing downtime.
● It takes a very different approach than traditional DR. Instead of loading servers with the OS and applications and patching them to the last configuration used in production, cloud disaster recovery encapsulates the entire server, including the operating system, applications, patches, and data, into a single software bundle or virtual server. The virtual server is then backed up to an offsite data center on a virtual host. Because it is not dependent on hardware, the OS, applications, and data can be migrated from one data center to another much faster.
● Types of disaster
○ Natural disasters: such as floods or earthquakes, which are rare but not impossible.
○ Human disasters: include misconfiguration or even malicious third-party access to the cloud service.
○ Minimal service interruption means a reduced loss of revenue which, in turn, means user
dissatisfaction is also minimized.
○ Having disaster plans in place also means your company can define its Recovery Time Objective (RTO) and its Recovery Point Objective (RPO). The RTO is the maximum acceptable delay between the interruption and the restoration of service, and the RPO is the maximum acceptable amount of time since the last data recovery point, i.e., how much data loss is tolerable.
6.1 OPENSTACK
● OpenStack is IaaS software for building and managing cloud computing platforms for public and private clouds.
● It is supported by some of the largest and best-known companies in software hosting and development, and a non-profit organization that looks after community building and project development manages OpenStack.
● It is an open-source cloud platform that controls pools of compute, storage, and networking resources throughout a data center.
● Cloud computing makes horizontal scaling easy, which means functions that benefit from running in parallel can serve more users by spinning up more instances.
● Because it is open-source software, any user who wants to access the source code can make changes to it quickly and freely.
● Components of OpenStack include Nova (compute), Neutron (networking), Glance (images), Cinder (block storage), Swift (object storage), Keystone (identity), and Horizon (dashboard).
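To show how these components are driven in practice, here is a hedged sketch using the openstacksdk Python client to boot a compute instance; the cloud name, image, flavor, and network names are placeholders that would come from your own deployment.

```python
# Hedged sketch using the openstacksdk Python client to boot a compute
# instance. The cloud name, image, flavor, and network are placeholders
# that would come from your own OpenStack deployment (clouds.yaml).
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until the VM is ACTIVE
print(server.status)
```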
6.2 Microsoft Azure
● Microsoft Azure is Microsoft’s cloud platform, providing cloud computing services such as compute, storage, security, and many other domains.
● It provides services in the form of Infrastructure as a Service, Platform as a Service, and Software as a Service. It even provides serverless computing, meaning you just supply your code and all backend activities are managed by Microsoft Azure.
● Azure Queue Storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via HTTP or HTTPS (a short sketch appears at the end of this section).
● Azure has low operational cost because it runs on Microsoft’s own servers, whose only job is to keep the cloud functional and bug-free; it is usually far more reliable than your own on-premises server.
○ Web roles – used for web application programming; supported by IIS 7.
○ Worker roles – used for background processing of web roles.
○ VM roles – used for migrating applications to Windows Azure easily.
○ Service Bus – provides secure connectivity between distributed and disconnected applications in the cloud.
○ Access Control – grants access to applications and services based on the user’s identity, so authorization decisions are pulled out of the application.
○ Caching – provides caching for high-speed access, scaling, and high availability of data to applications.
○ Integration – provides integration between Windows Azure applications and other SaaS.
○ Composite App – provides a hosting environment for web services and workflows.
● Use for
○ Build a web application that runs and stores data
○ Create virtual machines to develop, test, or run applications
○ Develop massively scalable applications with many users
○ Azure keep backups of all your valuable data. In disaster situations, you can recover all
your data in a single click without your business getting affected.
● ADVANTAGES
○ Microsoft Azure offers high availability, a strong security profile, and good scalability options, and it is a cost-effective solution for an IT budget.
○ Azure allows you to use any framework, language, or tool and also allows businesses to
build a hybrid infrastructure
Architecture
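Referring back to the Azure Queue Storage service mentioned above, here is a hedged sketch using the azure-storage-queue Python SDK; the connection string and queue name are placeholders for your own storage account.

```python
# Hedged sketch of Azure Queue Storage with the azure-storage-queue Python SDK
# (pip install azure-storage-queue). The connection string and queue name are
# placeholders for your own storage account.
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    queue_name="orders",
)
queue.create_queue()                      # create the queue (raises if it already exists)

queue.send_message("process order 42")    # producer: enqueue a message over HTTPS

for msg in queue.receive_messages():      # consumer: dequeue and handle messages
    print(msg.content)
    queue.delete_message(msg)             # remove the message once it is handled
```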
6.3 CloudSim
● CloudSim is an open-source framework, which is used to simulate cloud computing infrastructure
and services. It is developed by the CLOUDS Lab organization and is written entirely in Java. It is
used for modelling and simulating a cloud computing environment prior to software development
in order to reproduce tests and results.
● CloudSim Core Simulation Engine provides interfaces for the management of resources such as
VM, memory and bandwidth of virtualized Datacenters.
● The CloudSim layer manages the creation and execution of core entities such as VMs, Cloudlets, and Hosts. It also handles network-related execution along with the provisioning of resources and their
execution and management.
● User Code is the layer controlled by the user. The developer can write the requirements of the
hardware specifications in this layer according to the scenario. Some of the most common classes
used during simulation are:
● Datacenter: used for modelling the foundational hardware equipment. This class provides methods
to specify the functional requirements of the Datacenter as well as methods to set the allocation
policies of the VMs etc.
● Host: this class executes actions related to management of virtual machines. It also defines policies
for provisioning memory and bandwidth to the virtual machines, as well as allocating CPU cores
to the virtual machines.
● VM: this class represents a virtual machine by providing data members defining a VM’s bandwidth, RAM, MIPS (processing speed), and image size.
● Cloudlet: a cloudlet class represents any task that is run on a VM, such as a processing task, a memory access, or a file-updating task. It stores parameters defining the characteristics of a task, such as
its length, size and provides methods similarly to VM class while also providing methods that
define a task’s execution time, status, cost and history.
● CloudSim: this is the class responsible for initializing and starting the simulation environment after
all the necessary cloud entities have been defined and later stopping after all the entities have been
destroyed.
● Features: CloudSim supports modelling and simulation of large-scale cloud data centers, virtualized server hosts with customizable resource-provisioning policies, energy-aware computational resources, federated clouds, and user-defined policies for allocating hosts to VMs.
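CloudSim itself is written in Java; purely to illustrate how the entities above relate (a Datacenter contains Hosts, Hosts run VMs, and Cloudlets execute on VMs), here is a toy Python sketch. The class and method names are hypothetical and are not the CloudSim API.

```python
# Toy Python sketch of the CloudSim entity model (NOT the real Java API):
# a Datacenter contains Hosts, Hosts run VMs, and Cloudlets (tasks) run on VMs.

class Cloudlet:
    def __init__(self, length_mi):
        self.length_mi = length_mi          # task length in million instructions


class Vm:
    def __init__(self, mips):
        self.mips = mips                    # processing speed of the VM

    def execution_time(self, cloudlet):
        return cloudlet.length_mi / self.mips   # simple time estimate


class Host:
    def __init__(self, mips):
        self.mips = mips
        self.vms = []

    def allocate(self, vm):
        # toy provisioning policy: place the VM only if MIPS capacity remains
        if sum(v.mips for v in self.vms) + vm.mips <= self.mips:
            self.vms.append(vm)
            return True
        return False


class Datacenter:
    def __init__(self, hosts):
        self.hosts = hosts

    def place_vm(self, vm):
        return any(host.allocate(vm) for host in self.hosts)


dc = Datacenter([Host(mips=2000)])
vm = Vm(mips=1000)
dc.place_vm(vm)
print(vm.execution_time(Cloudlet(length_mi=4000)))   # -> 4.0 time units
```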
6.4 EyeOS
● It is free cloud computing operating system software that lets you access all your necessary files, folders, office documents, calendar, contacts, and much more.
● Its desktop looks like an ordinary desktop but can be customized with themes, and it supports 30 languages.
● FEATURES:
○ Desktop: similar to regular operating systems.
○ Office-related tasks: supports MS Office documents, spreadsheets, and presentations.
○ System and file management: uploading/downloading multiple files to the cloud, compressing them in ZIP format, and a dedicated picture viewer for slide shows.
● The goals for eyeOS include:
○ Being able to work from everywhere, regardless of whether or not you are using a
full-featured, modern computer, a mobile gadget, or a completely obsolete PC.
○ Sharing resources easily between different work centers at company, or working from
different places and countries on the same projects.
○ Always enjoying the same applications with the same open formats, and forgetting the
usual compatibility problems between office suites and traditional operating systems.
○ Being able to continue working if you have to leave your local computer or if it just crashes, without losing data or time: just log in to your eyeOS from another place and continue working.
6.5 Aneka
● A distinctive feature of Aneka is its support for provisioning resources on different public cloud providers such as Amazon EC2, Windows Azure, and GoGrid.
● It manages distributed applications with the help of the .NET framework and provides developers with a rich set of APIs for transparently exploiting such resources and expressing the business logic of applications.
● Aneka is a market-oriented cloud development and management platform with rapid application development and workload distribution capabilities.
● It also provides a tool for managing the cloud, allowing administrators to easily start, stop, and deploy instances of the Aneka container on new resources and then reconfigure them dynamically to alter the behavior of the cloud.
○ Execution Services. They are responsible for scheduling and executing applications. Each
of the programming models supported by Aneka defines specialized implementations of
these services for managing the execution of a unit of work defined in the model.
○ Foundation Services. These are the core management services of the Aneka container.
They are in charge of metering applications, allocating resources for execution, managing
the collection of available nodes, and keeping the services registry updated.
○ Fabric Services:They constitute the lowest level of the services stack of Aneka and
provide access to the resources managed by the cloud. An important service in this layer is
the Resource Provisioning Service, which enables horizontal scaling in the cloud. Resource
provisioning makes Aneka elastic and allows it to grow or to shrink dynamically to meet
the QoS requirements of applications.
● Google App Engine is a Platform as a Service (PaaS) product that provides Web app developers
and enterprises with access to Google's scalable hosting and tier 1 Internet service.
● The App Engine requires that apps be written in Java or Python, store data in Google BigTable and
use the Google query language. Non-compliant applications require modification to use App
Engine.
● Google App Engine provides more infrastructure than other scalable hosting services such as
Amazon Elastic Compute Cloud (EC2). The App Engine also eliminates some system
administration and developmental tasks to make it easier to write scalable applications.
● Google App Engine is free up to a certain amount of resource usage. Users exceeding the per-day
or per-minute usage rates for CPU resources, storage, number of API calls or requests and
concurrent requests can pay for more of these resources.
● FEATURE :
○ Languages and Runtimes: GAE allows you to use languages such as PHP, Python, or Go for writing App Engine applications. It also allows you to test and deploy an application locally with the SDK tools.
○ Standard Features: data search, retrieval, and storage functions such as Cloud SQL, Search, Blobstore, Logs, and Datastore, plus communication functions such as URL Fetch and Mail.
○ Preview Features: functions will be made generally available for users in future releases.
Such features comprise MapReduce, Cloud Storage Library, and Sockets.
○ Secure Framework : Google offers one of the most secure frameworks worldwide and it
rarely allows any unauthorized access to its servers. Google assures your app’s availability
to the globe as it packs impeccable privacy and security policies.
○ Simple Start: The app engine can easily start as there is no need for additional hardware or
product to be purchased.
○ Simple to Use:GAE integrates every tool you require for developing, testing, launching,
and updating the apps
○ Reliability and Performance: Google has been a household name for years now, so there is no denying its performance and reliability.
○ Cost Minimization:There is no need to hire additional engineers for managing your servers.
The saved funds can be used for other business activities.
○ Platform Independence:Migrating your data to other platforms does not require hefty tasks
and there is also no dependency on GAE.
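As a concrete (hedged) illustration of how small a GAE standard-environment app can be, here is a minimal Python/Flask example; the runtime version in the app.yaml comment is one supported option and would be chosen per project.

```python
# main.py -- a minimal WSGI app for the App Engine standard environment;
# Flask is one commonly used option for the Python runtime.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello from App Engine!"

# app.yaml (deployment descriptor), shown here as a comment:
#   runtime: python39
#
# Deploy from the project directory with:  gcloud app deploy
```

App Engine then handles scaling, load balancing, and serving the app, which is the point of the "simple start" and "cost minimization" features listed above.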
● New developments are flourishing in emerging markets, making them attractive to both global and local cloud providers in search of new revenue opportunities.
● Accessibility of the cloud may become a chief factor in the ability of these markets to expand their global and local trade capabilities with other emerging markets.
● This will have an impact by driving job creation and increasing access to new products and business configurations.
● For developing governments, the cloud can support efforts to provide services to citizens in an economical and effective manner in areas such as healthcare, education, and telecommunications.
● Cloud in education: adoption is a bit slower in education, with only a small number of organizations saying the cloud currently has a pervasive presence in either type of economy.
● Cloud in retail: three-quarters of the retail sector is already in the cloud, and the cloud has a strong presence in developing economies.
● Manufacturing: organizations in both developed and developing economies say the cloud has a significant presence now. Cloud computing is being used to reduce supply chain costs, connect suppliers, and support partnerships between customers and suppliers. Ensuring common standards across machines and communication protocols, along with a host of other cyber-physical challenges, remains to be addressed.
7. Migrate in phases
8. Think ahead
● Multi-tenancy
● Network Access
● On demand
● Elastic
● Metering /Chargeback
7.4 Eucalyptus
● Eucalyptus stands for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems.
● It is used to build private, public, and hybrid clouds. It can also turn your own data center into a private cloud and allow you to extend its functionality to many other organizations.
● It implements the infrastructure-as-a-service (IaaS) methodology for solutions in private and hybrid clouds.
● It provides a single interface through which users can see the resources available in private clouds and the resources available externally in public cloud services. It is designed with an extensible and modular architecture for web services. It also implements the industry-standard AWS API, which helps it expose a large number of APIs to users.
● Challenges :
○ Networking: a virtual private network per cloud, which must function as an overlay
○ Accounting reports
○ The option to configure policies and service level agreements based on users and the
environment
● Architecture
● Node controller (NC) controls the execution, inspection , and termination of VM instances on the
host where it runs.
● Cluster controller (CC) gathers information about and schedules VM execution on specific node
controllers, as well as manages virtual instance network.
● storage controller (SC) is a put/get storage service that implements Amazon’s S3 interface and
provides a way for storing and accessing VM images and user data.
● Cloud controller (CLC) is the entry point into the cloud for users and administrators. It queries
node managers for information about resources, makes high-level scheduling decisions, and
implements them by making requests to cluster controllers.
● Walrus (W) is the controller component that manages the storage services. Requests are communicated using SOAP/REST interfaces.
● Client interface: the CLC essentially acts as a translator between the internal Eucalyptus system interfaces and a defined external client interface.
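Because Eucalyptus implements the AWS-compatible API, standard AWS tooling can be pointed at a private Eucalyptus cloud. The following boto3 sketch is purely illustrative; the endpoint URL, credentials, and region name are placeholders for your own deployment.

```python
# Because Eucalyptus implements the AWS-compatible (EC2) API, ordinary AWS
# tooling such as boto3 can be pointed at a private Eucalyptus cloud. All
# values below (endpoint URL, keys, region) are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.example-eucalyptus.internal:8773/",
    aws_access_key_id="<eucalyptus-access-key>",
    aws_secret_access_key="<eucalyptus-secret-key>",
    region_name="eucalyptus",
)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```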
7.5 AppScale
● AppScale is an open-source distributed software system that implements a cloud platform as a service. The goal is to provide developers with a rapid, API-driven platform that can run applications on any cloud infrastructure.
● It makes applications easy to deploy and scale over cloud fabrics, and makes them portable across services.
● It is compatible with Google App Engine and executes GAE applications on premises or over other cloud infrastructures without modification.
● It executes GAE applications over Amazon EC2 and Eucalyptus, as well as Xen and KVM, and supports Python and Java.
● It abstracts and multiplexes cloud and system services across multiple applications, enabling write-once, run-anywhere program development for the cloud.
● It implements a multi-tier distributed web service stack with automatic deployment, load balancing, and scaling, along with API adaptors that provide alternatives for each service API.
● FEATURES:
○ It offers the ease of use and high availability that users have come to expect from public cloud platforms and infrastructures.
○ These include elasticity, fault detection and recovery, authentication and user control, monitoring and logging, cross-cloud data and application migration, hybrid cloud multitasking, offline analytics, and disaster recovery.