
Distributed & Cloud Computing CSC 557 Sphoorthy Asuri Maringanti

GREEN CLOUD COMPUTING


Cloud computing is a model for delivering services in which resources are retrieved from the internet through web-based tools and applications, rather than through a direct connection to a server. Data is stored on servers, and the cloud structure allows access to information from any electronic device with a web connection, which lets employees work remotely. It enables the hosting of applications from the consumer, scientific, and business domains. However, the data centers that host cloud computing applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. With energy shortages and global climate change among today's leading concerns, the power consumption of data centers has become a key issue. Green cloud computing solutions therefore both save energy and reduce operational costs. A vision for the energy-efficient management of cloud computing environments is presented here.
Green cloud computing aims to achieve efficient processing and utilization of computing infrastructure while minimizing energy consumption. It is needed to ensure that the future growth of cloud computing is sustainable; otherwise, cloud computing, with an increasing number of front-end client devices interacting with back-end data centers, will cause an enormous escalation in energy usage.

INTRODUCTION
Cloud computing is used by IT services companies to deliver computing requirements as a service to a heterogeneous community of end recipients. The vision of computing utilities based on a service-provisioning model anticipated the massive transformation of the entire computing industry in the 21st century, whereby computing services are readily available on demand, like other utility services in today's society. Users pay providers only when they access the computing services, and consumers no longer need to invest heavily in, or encounter the difficulties of, building and maintaining complex IT infrastructure. In such a model, users access services based on their requirements without regard to where the services are hosted. This model has been referred to as utility computing or cloud computing; the latter term depicts the infrastructure as a "Cloud" from which businesses and users can access applications as services from anywhere in the world on demand. Hence, cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services, supported by state-of-the-art data centers that usually employ virtual machine (VM) technologies for consolidation and environment isolation.
Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available to consumers on a subscription basis under the pay-as-you-go model. In industry these services are referred to as:
 Infrastructure as a Service (IaaS)
 Platform as a Service (PaaS)
 Software as a Service (SaaS)
Clouds aim to drive the design of next-generation data centers by architecting them as networks of virtual services (hardware, database, user interface, application logic), so that users can access and deploy applications from anywhere in the world, on demand, at competitive costs depending on their QoS (Quality of Service) requirements.
Clouds are virtualized data centers and applications offered as services on a subscription basis, and their operation requires high energy usage. Today, a typical data center with 1,000 racks needs 10 megawatts of power to operate, which results in high operational cost; energy is thus a significant component of a data center's operating and up-front costs. According to a report published by the European Union, a 15%–30% decrease in emission volume is required before 2020 to keep the global temperature increase below 2 °C. Energy consumption and carbon emission by cloud infrastructures have therefore become a key environmental concern.
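The scale of these figures can be checked with a back-of-the-envelope calculation; the electricity price used below is an assumed illustrative value, not taken from any report:

```python
# Rough annual energy and cost of a datacenter drawing constant power.
# The $0.10/kWh electricity price is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_kwh(power_mw: float) -> float:
    """Energy drawn over one year of continuous operation, in kWh."""
    return power_mw * 1000 * HOURS_PER_YEAR  # MW -> kW, times hours

def annual_energy_cost(power_mw: float, price_per_kwh: float) -> float:
    """Yearly electricity bill at a flat price per kWh."""
    return annual_energy_kwh(power_mw) * price_per_kwh

# The 10 MW datacenter above, running all year:
energy = annual_energy_kwh(10)        # 87,600,000 kWh
cost = annual_energy_cost(10, 0.10)   # $8,760,000 per year
```

Even before cooling overheads, the electricity bill alone runs into millions of dollars per year, which is why energy cost dominates a data center's operating expenses.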

Green Computing
Green computing is the eco-friendly use of computers and related resources. Such practices
include the implementation of energy-efficient central processing units, servers, peripherals as
well as reduced resource consumption and proper disposal of electronic waste. Green computing
is a study and practice of designing , manufacturing, using, and disposing of computers, servers,
and associated subsystems—such as monitors, printers, storage devices, and networking and
communications systems—efficiently and effectively with minimal or no impact on the
environment." The goals of green computing are similar to green chemistry; reduce the use of
hazardous materials, maximize energy efficiency during the product's lifetime, and promote the
recyclability or biodegradability of defunct products and factory waste. Research continues into
key areas such as making the use of computers as energy-efficient as possible, and designing
algorithms and systems for efficiency-related computer technologies.
There are several approaches to green computing, namely:
• Algorithmic efficiency
• Resource allocation
• Virtualization
• Power management

The need for green computing in clouds


Modern data centers, operating under the cloud computing model, host a variety of applications, ranging from those that run for a few seconds to those that run for long periods on shared hardware platforms. The need to manage multiple applications in a data center creates the challenge of on-demand resource provisioning and allocation in response to time-varying workloads. Normally, data center resources are statically allocated to applications based on peak-load characteristics, in order to maintain isolation and provide performance guarantees. Until recently, high performance was the sole concern in data center deployments, and this demand was fulfilled without paying much attention to energy consumption. The average data center consumes as much energy as 25,000 households. As energy costs are increasing
while availability dwindles, there is a need to shift the focus from optimizing data center resource management for pure performance to optimizing for energy efficiency while maintaining high service-level performance. According to some reports, the total estimated energy bill for data centers in 2010 was $11.5 billion, and energy costs in a typical data center double every five years. Data centers are not only expensive to maintain but also unfriendly to the environment: they now produce more carbon emissions than Argentina and the Netherlands combined. The high energy costs and huge carbon footprints are incurred because massive amounts of electricity are needed to power and cool the numerous servers hosted in these data centers. Cloud service providers need to adopt measures to ensure that their profit margins are not dramatically reduced by high energy costs. For instance, Google, Microsoft, and Yahoo are building large data centers on barren desert land surrounding the Columbia River, USA, to exploit cheap and reliable hydroelectric power.
Lowering the energy usage of data centers is a challenging and complex issue because
computing applications and data are growing so quickly that increasingly larger servers and disks
are needed to process them within the required time. Green cloud computing is envisioned to achieve not only efficient processing and utilization of computing infrastructure, but also minimal energy consumption. This is essential for ensuring that the future growth of cloud computing is sustainable; otherwise, cloud computing with increasingly pervasive front-end client devices interacting with back-end data centers will cause an enormous escalation of energy usage.
To address this problem, data center resources need to be managed in an energy-efficient manner
to drive Green Cloud computing. In particular, Cloud resources need to be allocated not only to
satisfy QoS requirements specified by users via Service Level Agreements (SLA), but also to
reduce energy usage.

Energy savings in the cloud


Here are some of the ways that the cloud can help a company cut its carbon footprint down to
size:
 Fewer machines – With the cloud, server utilization rates are typically 60–70%, while in many small-business and corporate environments they hover around 5–10%. As a result, shared data centers can deploy fewer machines to provide the same capacity.
 Equipment efficiency – Larger data centers often have the resources to upgrade to energy-saving equipment and building systems. This is usually not an option for smaller organizations, where such efficiency is not the focus.
 Consolidated climate control costs – For a server to run at peak performance, its temperature and humidity must be carefully controlled, and cloud providers can use high-density, efficient layouts that are hard for in-house centers to replicate.
 Dynamically allocated resources – In-house data centers need extra servers to handle peak loads, whereas cloud providers can dynamically allocate resources where needed so that fewer machines sit idle.
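The "fewer machines" point can be made concrete with a small utilization calculation. The workload size and per-server capacity below are illustrative assumptions; only the utilization rates come from the text:

```python
import math

def servers_needed(total_load: float, per_server_capacity: float,
                   target_utilization: float) -> int:
    """Servers required to carry total_load when each server is run
    at target_utilization of its capacity."""
    return math.ceil(total_load / (per_server_capacity * target_utilization))

# Same aggregate workload (100 units) on identical servers (10 units each):
in_house = servers_needed(100, 10, 0.05)  # ~5% utilization  -> 200 servers
cloud = servers_needed(100, 10, 0.65)     # ~65% utilization -> 16 servers
```

Running the same workload at cloud-typical utilization requires roughly an order of magnitude fewer machines.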
Cloud computing has enormous potential to transform the world of IT: reducing costs, improving
efficiency and business agility, and contributing to a more sustainable world.

Architecture of a green cloud computing platform

Fig 2: Architecture of a green cloud computing environment


Figure 2 shows the high-level architecture for supporting energy-efficient service allocation in a Green Cloud computing infrastructure. There are basically four main entities involved:
a) Consumers: Cloud consumers submit service requests from anywhere in the world to the Cloud. It is important to note that there can be a difference between Cloud consumers and the users of deployed services; for instance, a consumer can be a company deploying a Web application, which presents a varying workload according to the number of users accessing it.
b) Green Resource Allocator: Acts as the interface between the Cloud infrastructure and
consumers. It requires the interaction of the following components to support energy-efficient
resource management:
• Green Negotiator: Negotiates with consumers/brokers to finalize the SLA, with specified prices and penalties for SLA violations, between the Cloud provider and the consumer, depending on the consumer's QoS requirements and energy-saving schemes. For Web applications, for instance, the QoS metric can be that 95% of requests are served in less than 3 seconds.
• Service Analyzer: Interprets and analyses the service requirements of a submitted request
before deciding whether to accept or reject it. Hence, it needs the latest load and energy
information from VM Manager and Energy Monitor respectively.
• Consumer Profiler: Gathers specific characteristics of consumers so that important consumers can be granted special privileges and prioritized over other consumers.
• Pricing: Decides how service requests are charged, in order to manage the supply and demand of computing resources and to help prioritize service allocations effectively.
• Energy Monitor: Observes energy consumption and determines which physical machines to power on or off.
• Service Scheduler: Assigns requests to VMs and determines resource entitlements for allocated
VMs. It also decides when VMs are to be added or removed to meet demand.
• VM Manager: Keeps track of the availability of VMs and their resource entitlements. It is also
in charge of migrating VMs across physical machines.
• Accounting: Maintains the actual usage of resources by requests to compute usage costs.
Historical usage information can also be used to improve service allocation decisions.
c) VMs: Multiple VMs can be dynamically started and stopped on a single physical machine to meet accepted requests, providing maximum flexibility to configure different partitions of resources on the same physical machine for the specific requirements of different service requests. Multiple VMs can also concurrently run applications based on different operating system environments on a single physical machine. In addition, by dynamically migrating VMs across physical machines, workloads can be consolidated and unused resources can be put into a low-power state, turned off, or configured to operate at low performance levels (e.g., using DVFS) in order to save energy.
d) Physical Machines: The underlying physical computing servers provide hardware
infrastructure for creating virtualized resources to meet service demands.

Making cloud computing greener


Three approaches have been tried to make cloud computing environments more environmentally friendly; all three have been evaluated in data centers under experimental conditions. The methods are:
• Dynamic Voltage and Frequency Scaling (DVFS): Every electronic circuit has an operating clock associated with it, and the supply voltage is regulated by adjusting the clock's operating frequency. This method therefore depends heavily on the hardware and cannot be controlled to match varying needs. Its power savings are low compared with the other approaches, as is its ratio of power savings to cost incurred.
• Resource allocation or virtual machine migration: In a cloud computing environment, every physical machine hosts a number of virtual machines on which the applications run. These VMs can be transferred across hosts according to varying needs and available resources. The VM migration method focuses on transferring VMs so that the resulting increase in power is smallest: the most power-efficient nodes are selected and VMs are migrated to them. This method is dealt with in detail later.
• Algorithmic approaches: It has been experimentally determined that an idle server consumes about 70% of the power used by a fully utilized server.
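The power savings available from DVFS follow from the standard CMOS dynamic-power relation P = C·V²·f; this is textbook circuit behavior rather than anything specific to the experiments summarized here, and the sketch assumes the supply voltage scales linearly with clock frequency:

```python
def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Dynamic CMOS power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

def scaled_power(p_max: float, freq_fraction: float) -> float:
    """If V scales linearly with f, dynamic power scales with f cubed."""
    return p_max * freq_fraction ** 3

# Halving the clock cuts dynamic power to one eighth:
print(scaled_power(200.0, 0.5))  # 25.0 (watts, from a 200 W maximum)
```

The cubic scaling explains why frequency reductions save power, and the hardware dependence explains why the technique offers little run-time flexibility.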
Fig 3: Power consumption under different workloads.

Using a neural network predictor, the green scheduling algorithm first estimates the dynamic workload expected on the servers. Unnecessary servers are then turned off to minimize the number of running servers, minimizing energy use at the points of consumption and providing benefits at all other levels. A few additional servers are kept on to help assure the service-level agreement. The bottom line is to protect the environment and reduce the total cost of ownership while ensuring quality of service.
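The scheduling decision described above can be sketched as follows. This is a simplified illustration, not the published algorithm: the neural-network workload prediction is replaced by a plain input value, and the capacity units and one-server headroom are assumptions:

```python
import math

def plan_active_servers(predicted_load: float, server_capacity: float,
                        total_servers: int, headroom: int = 1) -> int:
    """Keep just enough servers on for the predicted workload, plus a few
    spare servers to help assure the service-level agreement; the remaining
    servers are candidates for shutdown."""
    needed = math.ceil(predicted_load / server_capacity) + headroom
    return min(needed, total_servers)

# Predicted load of 42 units, servers of capacity 10, pool of 20 servers:
active = plan_active_servers(42, 10, 20)  # 5 needed + 1 spare = 6 stay on
idle_off = 20 - active                    # 14 servers can be turned off
```

The quality of the workload prediction is what determines how aggressively servers can be switched off without risking SLA violations.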

VM Migration
The problem of VM allocation can be divided into two parts: the first is the admission of new requests for VM provisioning and the placement of those VMs on hosts, while the second is the optimization of the current allocation of VMs.
Optimization of the current allocation is carried out in two steps: first, the VMs that need to be migrated are selected; second, the chosen VMs are placed on hosts using the MBFD (Modified Best Fit Decreasing) algorithm. Four heuristics are proposed for choosing the VMs to migrate. The first, Single Threshold (ST), is based on the idea of setting an upper utilization threshold for hosts and placing VMs while keeping the total CPU utilization below this threshold. The aim is to preserve free resources and so prevent SLA violations due to consolidation when the utilization of the VMs increases. At each time frame all VMs are reallocated using the MBFD algorithm, with the additional condition that the upper utilization threshold is not violated. The new placement is achieved by live migration of VMs.
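A minimal sketch of MBFD-style placement follows, assuming homogeneous hosts and the linear 175–250 W power model from the simulation setup; with identical hosts every host shows the same power increase, so the heuristic reduces to first-fit decreasing, whereas with heterogeneous hosts it would favor the most power-efficient one. The 0.8 utilization threshold is illustrative:

```python
def host_power(utilization: float) -> float:
    """Linear power model: 175 W idle, 250 W at full CPU load."""
    return 175 + (250 - 175) * utilization

def mbfd(vm_demands, n_hosts, upper_threshold=0.8):
    """Modified Best Fit Decreasing sketch: sort VMs by decreasing CPU
    demand (as a fraction of one host), then place each on the host whose
    power draw increases least without crossing the utilization threshold.
    Returns a {vm_index: host_index} placement."""
    loads = [0.0] * n_hosts
    placement = {}
    for vm in sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i]):
        best, best_increase = None, float("inf")
        for h in range(n_hosts):
            new_load = loads[h] + vm_demands[vm]
            if new_load > upper_threshold:
                continue  # would violate the upper utilization threshold
            increase = host_power(new_load) - host_power(loads[h])
            if increase < best_increase:
                best, best_increase = h, increase
        if best is not None:
            placement[vm] = best
            loads[best] += vm_demands[vm]
    return placement

# Four VMs on two hosts; demands are fractions of one host's CPU:
mbfd([0.5, 0.3, 0.2, 0.1], n_hosts=2)  # {0: 0, 1: 0, 2: 1, 3: 1}
```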
The other three heuristics are based on setting both upper and lower utilization thresholds for hosts and keeping the total CPU utilization of all VMs between these thresholds. If a host's CPU utilization falls below the lower threshold, all VMs are migrated from that host and the host is switched off in order to eliminate idle power consumption. If the utilization exceeds the upper threshold, some VMs are migrated from the host to reduce utilization and prevent potential SLA violations. Three policies are proposed for choosing which VMs to migrate from the host:
Distributed & Cloud Computing CSC 557 Sphoorthy Asuri Maringanti

• Minimization of Migrations (MM) – migrate the fewest VMs needed, to minimize migration overhead.
• Highest Potential Growth (HPG) – migrate the VMs with the lowest CPU usage relative to the amount requested, to minimize the total potential growth in utilization and the risk of SLA violation.
• Random Choice (RC) – choose the necessary number of VMs by picking them according to a uniformly distributed random variable.
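The MM policy can be sketched as follows; VM utilizations are expressed as fractions of the host's CPU, and the 0.8 upper threshold is an illustrative value:

```python
def minimization_of_migrations(vm_utils, host_util, upper=0.8):
    """MM sketch: when a host exceeds the upper threshold, migrate away as
    few VMs as possible. Removing the largest VMs first brings utilization
    under the threshold with the fewest migrations.
    vm_utils maps VM id -> CPU utilization as a fraction of the host."""
    to_migrate = []
    for vm, util in sorted(vm_utils.items(), key=lambda kv: -kv[1]):
        if host_util <= upper:
            break  # host is back under the threshold
        to_migrate.append(vm)
        host_util -= util
    return to_migrate

# A host at 95% utilization: migrating the single largest VM suffices.
minimization_of_migrations({"a": 0.40, "b": 0.25, "c": 0.20, "d": 0.10}, 0.95)
# -> ["a"]  (0.95 - 0.40 = 0.55, below the 0.8 threshold)
```

HPG and RC would differ only in the sort key: lowest usage relative to the requested amount, or a random order, respectively.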

Experimental Setup
A generic cloud computing environment is needed in order to evaluate the heuristics on a large-scale virtualized data center infrastructure. It is difficult to conduct large-scale experiments on a real infrastructure, especially when an experiment must be repeated under identical conditions, so simulation has been chosen as the evaluation method. The CloudSim toolkit was selected as the simulation platform because it is a modern simulation framework aimed at cloud computing environments. In contrast to alternative simulation toolkits (e.g., SimGrid, GangSim), it supports the modeling of on-demand, virtualization-enabled resource and application management. It has been extended to enable power-aware simulations, as the core framework does not provide this capability. Apart from power-consumption modeling and accounting, the ability to simulate service applications whose workload varies over time has been incorporated.
A few assumptions have been made to simplify the system model and enable simulation-driven evaluation. The first is that the overhead of VM migration is negligible. Modeling the cost of VM migration is a separate research problem currently under investigation; however, it has been shown that live migration of VMs can be applied with reasonable performance overhead, and as virtualization technologies advance, the efficiency of VM migration will improve further. The second assumption is that, because the types of applications running on the VMs are unknown, it is not possible to build an exact model of such a mixed workload. Therefore, rather than simulating particular applications, the CPU utilization of each VM is generated as a uniformly distributed random variable. In the simulations, an SLA violation occurs whenever a VM cannot get the amount of MIPS it requested, which can happen when VMs sharing the same host require more CPU performance than can be provided due to consolidation. To compare the efficiency of the algorithms, a metric called the SLA violation percentage (or simply SLA violation) is used, defined as the percentage of SLA violation events relative to the total number of measurements.
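Under this definition, the metric reduces to a count over measurement points, sketched below with hypothetical per-measurement MIPS values:

```python
def sla_violation_percentage(requested_mips, allocated_mips):
    """Percentage of measurement points at which a VM was allocated
    fewer MIPS than it requested."""
    violations = sum(1 for req, alloc in zip(requested_mips, allocated_mips)
                     if alloc < req)
    return 100.0 * violations / len(requested_mips)

# Five measurements of a VM requesting 500 MIPS throughout:
sla_violation_percentage([500] * 5, [500, 480, 500, 450, 500])  # 40.0
```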
A simulation of a data center comprising 100 heterogeneous physical nodes was carried out. Each node is modeled to have one CPU core with performance equivalent to 1000, 2000, or 3000 Million Instructions Per Second (MIPS), 8 GB of RAM, and 1 TB of storage. According to this model, a host consumes from 175 W at 0% CPU utilization up to 250 W at 100% CPU utilization. Each VM requires one CPU core with 250, 500, 750, or 1000 MIPS, 128 MB of RAM, and 1 GB of storage. The users submit requests for the provisioning of 290 heterogeneous VMs, which fills the full capacity of the simulated data center. Each VM runs a web application, or any application with variable workload, modeled so that its CPU utilization follows a uniformly distributed random variable. Each application executes 150,000 million instructions, which equals 10 minutes of execution on a 250 MIPS CPU at 100% utilization. Initially, VMs are allocated according to their requested characteristics, assuming 100% utilization. Each experiment was run 10 times, and the presented results are built upon the mean values.
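The host power model used in these simulations (175 W at idle, 250 W at full load, linear in between, so an idle host draws exactly 70% of peak power) can be written as:

```python
def server_power(p_max: float, utilization: float,
                 idle_fraction: float = 0.7) -> float:
    """Linear power model: an idle server draws idle_fraction of its peak
    power, and consumption grows linearly with CPU utilization."""
    return p_max * (idle_fraction + (1 - idle_fraction) * utilization)

server_power(250, 0.0)  # 175.0 W, an idle host
server_power(250, 0.5)  # 212.5 W at 50% CPU utilization
server_power(250, 1.0)  # 250.0 W, fully utilized
```

Because so much power is drawn even at idle, consolidating VMs and switching empty hosts off saves far more energy than merely lowering utilization.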

Results

For the benchmark experimental results, a Non-Power-Aware (NPA) policy has been used. It applies no power-aware optimizations, meaning that all hosts run at 100% CPU utilization and draw maximum power. The second benchmark policy applies DVFS but performs no run-time adaptation of the VM allocation. The NPA policy leads to a total energy consumption of 9.15 kWh, whereas DVFS decreases this value to 4.4 kWh for the simulation setup.

Energy Consumption and SLA violation of ST policy

Fig 4: Simulation results of the ST policy


The simulation results are presented in Figure 4. Energy consumption can be significantly reduced relative to the NPA and DVFS policies, by 77% and 53% respectively, with 5.4% SLA violations, as shown in the figure. As the utilization threshold grows, energy consumption decreases while SLA violations increase. This is because a higher utilization threshold allows more aggressive consolidation of VMs, at the cost of an increased risk of SLA violations.

Energy consumption and SLA violations of other policies


The MM policy is compared with the HPG and RC policies while varying the exact values of the thresholds. These policies achieve nearly the same energy consumption and SLA violations, but the number of VM migrations produced by the MM policy is lower than that of the HPG policy by a maximum of 57% (40% on average), and lower than that of the RC policy by a maximum of 49% (27% on average).
Fig 5: energy consumption of different policies

Fig 6: SLA violations of different policies under different thresholds



Conclusion
Cloud computing is emerging as a significant shift for today's organizations, which face extreme data overload and skyrocketing energy costs. The Green Cloud architecture helps consolidate workloads and achieve significant energy savings in a cloud computing environment while guaranteeing real-time performance for many performance-sensitive applications.
In the future, a number of research activities are still planned that could improve the performance of Green Cloud and bring solid value to users in achieving their business goals and their social responsibility in green IT. Applying green technologies is essential for the sustainable development of cloud computing. Of the various green methodologies examined, DVFS is a highly hardware-oriented approach and hence less flexible. Green scheduling algorithms based on neural predictors can lead to power savings of up to 70%. These policies also cut data center energy costs, leading to a strong, competitive cloud computing industry, and end users benefit from decreased energy bills. In conclusion, Green Cloud effectively saves energy by dynamically adapting to the workload through live VM migration while meeting system SLAs.

References

1. Rajkumar Buyya, Anton Beloglazov, and Jemal Abawajy, "Energy Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements and Open Challenges," Proc. of the 9th IEEE International Symposium on Cluster Computing and the Grid (CCGrid), Rio de Janeiro, Brazil, May 2009.
2. Truong Vinh Truong Duy, Yukinori Sato, and Yasushi Inoguchi, "Performance Evaluation of a Green Scheduling Algorithm for Energy Savings in Cloud Computing," IEEE Xplore, March 2010.
3. www.wikipedia.com/greencomputing
4. www.ibm.com/developerworks/websphere/zones/hipods
