Green Cloud Computing
6 - Part A
Aim: Case Study on Green Cloud Computing.
Requirements: Basic Concepts of Cloud Computing, Internet Connection.
Outcome:
Gain knowledge to make informed decisions about migrating to cloud infrastructure and
choosing the best deployment model for their organization.
Case Study:
Instructions:
1. Prepare a report covering all the points.
2. Additional Points can be included if required.
Case study:
Green Cloud Computing
Green Computing is the practice of implementing policies and procedures that improve the efficiency of computing resources so as to reduce the energy consumption and environmental impact of their use. Green Cloud architecture arises from these trends, leading not only to energy efficiency but also to a carbon-emission-aware way of operating. In order to reduce energy consumption, a greener computing environment needs to be built.
Among all industries, the information and communication technology (ICT) industry is arguably responsible for a large portion of the worldwide growth in energy consumption. The goal of green cloud computing is to promote the recyclability or biodegradability of defunct products and factory waste by reducing the use of hazardous materials and maximizing energy efficiency during a product's lifetime.
Green cloud computing allows users to enjoy the benefits of cloud storage while reducing its adverse effects on the environment and, ultimately, on human well-being.
Green cloud
Green cloud is a buzzword that refers to the potential environmental benefits that information
technology (IT) services delivered over the Internet can offer society. The term combines the
words green — meaning environmentally friendly — and cloud, the traditional symbol for the
Internet and the shortened name for a type of service delivery model known as cloud computing.
Cloud computing has become an essential infrastructural demand for modern organizations for several reasons, including its scalability, security and cost-effectiveness. However, this ever-growing demand has also resulted in elevated energy consumption, which has ultimately enlarged the environment's carbon footprint. As more and more data centers are added to organizations' estates, they require thousands of servers and other materials to enable full-fledged operation.
Green Computing
Green Computing is the term used to denote the efficient use of resources in computing. It is also known as Green IT. Green Computing is "where organizations adopt a policy of ensuring that the setup and operation of Information Technology produces the minimal carbon footprint". Green Cloud is "the study and practice of designing, manufacturing, using and disposing of computers, servers and associated subsystems".
Key issues are energy efficiency in computing and promoting environmentally friendly computer
technologies.
Solar Power
In this approach, sunlight is used to produce solar power for personal and commercial use. Canada, Spain and California are among the leaders in implementing this technology, which is a great achievement for green technology. When we talk about green computing, photovoltaic solar panels are a striking example because they convert sunlight directly into electrical energy.
Wind Power
Another important type is the wind turbine system, with which anyone can generate electric power. Once installed, a wind turbine has no harmful effect on the environment and reduces carbon dioxide emissions. However, setting up a wind turbine requires a large amount of money, so it is not feasible for everyone.
Geothermal Power
This is another notable type of green technology. A geothermal plant can generate electricity, and people can use this power for daily needs such as heating and cooling a house.
Most cloud data centres rely on generators for backup power much of the time, which adds CO2 and other greenhouse gas (GHG) emissions to those of the data centre itself. Instead, renewable energy sources such as hydro, wind and solar energy should be used to generate electricity to meet the power and cooling requirements of data centres, saving energy and protecting the environment from pollution. Google's data centres alone continuously draw almost 260 million watts, which is about a quarter of the output of a nuclear power plant.
In response to increased concern over energy consumption in typical modern data centers, a new distributed computing platform called Nano Data Centres (NaDa) has been proposed. NaDa provides computing and storage services and adopts a managed peer-to-peer model to form a distributed data centre infrastructure. Instead of a few large data centres, it consists of a large number of geographically distributed nano data centres that are smaller in size, interconnected and spread along the network edge. In a video-on-demand (VoD) access model, data access from NaDa saves at least 20% to 30% of the energy consumed compared to traditional data centres.
The storage used in the cloud should also be replaced with energy-efficient storage components. As the lifetime of a data centre is limited to about 9 to 10 years, developers renovating an existing data centre should use energy-efficient storage, e.g. solid-state storage and other modern, efficient storage devices. Solid-state storage has no moving mechanical components, unlike a hard disk drive, so it requires less cooling than a hard disk drive and hence less energy for cooling.
A processor consumes electrical energy, in the form of charge, for its operation, for its switching devices and for the cooling of its transistors and numerous chips, and it dissipates this energy into the surroundings as heat. By adopting free cooling, the energy needed to remove this heat can be reduced. Clock gating uses a hardware switch to activate and deactivate the clock pulse: the clock of a logic block is activated only when the logic block is doing useful work and is turned off when the block is idle. This popular technique has been used in many synchronous circuits, but it can also be used in globally asynchronous, locally synchronous circuits to reduce dynamic power dissipation.
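To make the saving from clock gating concrete, the short Python sketch below estimates dynamic power with the standard CMOS approximation P ~ a * C * V^2 * f (activity factor a, switched capacitance C, supply voltage V, clock frequency f). All numeric values (capacitance, voltage, frequency, idle fraction) are illustrative assumptions, not measurements of any real processor.

# Illustrative estimate of dynamic power saved by clock gating.
# P_dynamic ~ a * C * V^2 * f; every constant below is an assumed example value.

def dynamic_power(activity, capacitance_f, voltage_v, frequency_hz):
    """Classic CMOS dynamic power approximation (watts)."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-9      # 1 nF of switched capacitance (assumed)
V = 1.0       # 1.0 V supply (assumed)
F = 2e9       # 2 GHz clock (assumed)

always_clocked = dynamic_power(activity=0.25, capacitance_f=C, voltage_v=V, frequency_hz=F)
# Clock gating stops the clock to idle logic blocks, so their effective
# activity drops to roughly zero; assume the block is idle 60% of the time.
gated = 0.4 * always_clocked

print(f"ungated: {always_clocked:.2f} W, gated: {gated:.2f} W")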
6. By Reducing Cooling Requirements
Previously, cooling was done by a mechanical refrigeration unit using a compressor inside the data centre, or by externally chilled water supplied to the main air handler to cool the IT equipment. A free-cooling system has now been developed to reduce the power required for cooling compared with mechanical cooling. Under this technique, if the outside air temperature is at or below the critical set point, the outside air itself can provide direct or indirect cooling. Free cooling does not reduce the fan energy required to move air; it only eliminates the need for mechanical cooling energy.
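A minimal sketch of this free-cooling decision is shown below, assuming a hypothetical controller with a single outdoor temperature reading and an assumed set point; it is not a real building-management API.

# Hypothetical free-cooling controller: use outside air when it is cold
# enough, otherwise fall back to mechanical (compressor/chiller) cooling.
# The set point is assumed for illustration only.

FREE_COOLING_SETPOINT_C = 18.0   # assumed critical outdoor temperature

def select_cooling_mode(outdoor_temp_c):
    if outdoor_temp_c <= FREE_COOLING_SETPOINT_C:
        # Outside air can provide direct or indirect cooling; the fans still
        # run, but the compressor energy is avoided.
        return "free-cooling"
    return "mechanical-cooling"

for temp in (9.5, 17.9, 24.0):
    print(temp, "->", select_cooling_mode(temp))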
The hardware temperature-control energy-saving strategy includes two aspects: one concerns the computer hardware itself (servers, storage media, etc.) and its cooling measures; the other concerns temperature-control mechanisms for processors and storage hardware and their exposure to thermal shock. Scholars have carried out extensive research in this area, covering blade server systems, Solid State Disk (SSD) storage, IBM's "electron spin" storage technology, APC's "thermal channel sealing system" and so on. When the load and operating characteristics of upper-layer applications change, the temperature of the processor and other hardware may rise, and energy consumption rises with it. It therefore becomes necessary to place thermal sensors around the hardware for real-time monitoring and to apply dynamic, intelligent cooling based on the monitoring results. On the other hand, large ICT companies such as Google have moved their server farms to the banks of the Columbia River to take advantage of the energy offered by nearby hydroelectric power plants. The river water flow can additionally be used within the cooling systems, as Google has experimented, which also exploits the cold environment of the river bank for cooling. In its Tent and Marlow projects, Microsoft has investigated an alternative cooling approach that leaves servers in the open air so that heat dissipates more easily.
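The sensor-driven dynamic cooling mentioned above can be sketched as follows; the sensor readings, temperature band and fan-speed mapping are assumptions for illustration, not a description of any vendor's controller.

# Sketch of dynamic, sensor-driven cooling: each hardware component reports a
# temperature, and fan speed is raised only as far as the hottest reading
# requires, instead of running the cooling at a fixed maximum.

SAFE_C, CRITICAL_C = 45.0, 85.0   # assumed temperature band

def fan_speed_percent(readings_c):
    hottest = max(readings_c.values())
    if hottest <= SAFE_C:
        return 20                                   # minimum airflow
    fraction = (hottest - SAFE_C) / (CRITICAL_C - SAFE_C)
    return min(100, int(20 + 80 * fraction))

sample = {"cpu0": 62.0, "cpu1": 58.5, "ssd": 41.0}  # assumed sensor readings
print("fan speed:", fan_speed_percent(sample), "%")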
8. Server Consolidation
Much of the time, a physical server uses less than 10% of its total CPU capacity. It is therefore better to consolidate several roles onto a single physical server, i.e. to migrate server roles from different underutilized physical servers onto virtual machines. This reduces the total amount of hardware and the overall energy consumption.
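A minimal sketch of this consolidation idea follows, assuming each server role is summarized by a single CPU-utilization fraction and packed onto as few hosts as possible with a first-fit heuristic. The role names, loads and the 80% per-host cap are illustrative assumptions.

# First-fit consolidation sketch: pack under-utilized server roles (given as
# CPU fractions) onto as few physical hosts as possible, leaving headroom.
# The 0.80 cap per host is an assumed safety margin, not a standard.

def consolidate(role_loads, host_capacity=0.80):
    hosts = []                      # each host is a list of (role, load)
    for role, load in sorted(role_loads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(l for _, l in host) + load <= host_capacity:
                host.append((role, load))
                break
        else:
            hosts.append([(role, load)])
    return hosts

roles = {"web": 0.08, "mail": 0.05, "dns": 0.02, "db": 0.35, "backup": 0.10}
for i, host in enumerate(consolidate(roles)):
    print(f"host {i}: {host}")

With these assumed loads, five separate servers fit comfortably on a single host, which is exactly the hardware and energy reduction the paragraph describes.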
Optimization of compiler technology also plays a major role in energy saving. Better compilation technology optimizes the application and helps analyse the behaviour of application programs so as to minimize system or processor operating power consumption. Zhao Rongcai proposed a model that reduces execution frequency in a multithreaded system structure in order to reduce power consumption.
11. Energy Saving Strategy of Application Software Power
By applying certain methods at the level of the source program structure, it is possible to optimize software power consumption. For example, loop-structure optimization techniques have been realized in compilers and give good power-optimization results. The application-software power-consumption strategy focuses on reducing the space and time complexity of algorithms so as to reduce system power. This can be achieved by implementing effective algorithms, e.g. compressing stored data, which reduce repeated calculation and redundancy in the algorithm.
This strategy includes energy-efficient scheduling between cores, equipment resource management and dynamic energy-consumption management in the operating system. The first refers to energy-efficient scheduling methods for multi-core systems; Wang Jing analysed thread-scheduling strategies and resource-classification mechanisms for reducing resource contention and discussed possible directions for future research. Equipment resource management maintains an optimal environment for all the individual computing elements. Dynamic energy-consumption management means that the operating system dynamically manipulates system units to obtain minimum power consumption without degrading the tasks it has been assigned to perform.
This strategy covers aspects such as energy management in interface support, in frameworks and in desktop-class virtual machines, among others. Stoess proposed a framework for energy management in virtualized servers. Major areas of research in managing energy consumption in cloud computing include energy-management models and task-scheduling strategies spanning the operating system, the virtual machine manager and the upper-layer applications running on the hardware.
Live migration means moving a running VM from one host to another. It provides various benefits to data centers, such as load balancing, power management and transparent IT maintenance. The cost of this kind of live migration is affected by several parameters. Parameters related to the physical machines are the resource utilization of the source and target systems; the average network bandwidth available between the source and target hosts is another parameter in the migration of a VM. In addition to live migration of VMs, a green cloud architecture switches off underutilized servers to achieve minimum power consumption without losing quality of service.
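The migration-cost parameters mentioned above (the VM's memory footprint and the available bandwidth between source and target) can be combined into a rough estimate like the sketch below; the single-pass copy model, the numbers and the power-off rule are simplifying assumptions, not a model of any particular hypervisor.

# Rough live-migration cost estimate: time to copy a VM's memory over the
# network, plus a check on whether the now-empty source host can be switched
# off. Dirty-page re-copying and downtime are ignored (assumption).

def migration_time_s(vm_memory_gb, bandwidth_gbps):
    return (vm_memory_gb * 8) / bandwidth_gbps       # GB -> gigabits

def can_power_off(source_host_vms, migrating_vm):
    remaining = [vm for vm in source_host_vms if vm != migrating_vm]
    return len(remaining) == 0

print(migration_time_s(vm_memory_gb=16, bandwidth_gbps=10))    # ~12.8 s
print(can_power_off(["vm-a"], migrating_vm="vm-a"))            # True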
Cloud computing is a networked platform, so applying energy-management strategies to the network can yield good results. Energy management in network protocols and algorithms is the focal point of this strategy. A large number of network protocols and algorithms do not meet energy-saving requirements, for example TCP, CSMA/CD and CSMA/CA in wireless networks. The literature also includes work on adaptive link rate and sleep modes. Adaptive link rate methods dynamically adjust the data rate of links according to the traffic requirements, while the sleep-mode technique achieves energy savings by turning off, or switching to sleep mode, a subset of idle components. At the same time, the remaining active elements must still satisfy the fluctuating requirements; the QoS requirements have to be met and fault tolerance must not be compromised.
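A toy sketch of the adaptive link rate idea follows: the link is stepped down to the lowest rate that still carries the offered traffic with some headroom, and a fully idle link is put to sleep. The supported rate set and the 30% headroom rule are assumptions for illustration.

# Adaptive link rate sketch: pick the lowest link rate (Mbit/s) that still
# covers current traffic with headroom; an idle link is put to sleep.

RATES_MBPS = [100, 1000, 10000]   # assumed supported link rates

def choose_link_state(traffic_mbps):
    if traffic_mbps == 0:
        return "sleep"
    for rate in RATES_MBPS:
        if traffic_mbps <= 0.7 * rate:     # keep 30% headroom
            return f"{rate} Mbit/s"
    return f"{RATES_MBPS[-1]} Mbit/s"      # saturated: run at full rate

for load in (0, 40, 600, 9000):
    print(load, "->", choose_link_state(load))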
16. Task Consolidation for Efficient Energy Consumption
Task consolidation means assigning a set of n tasks to a set r of cloud resources, without violating time constraints, so as to minimize energy consumption. It has been shown that the energy consumed by a task can be calculated from its processing time and processor utilization. For a given resource r_i at any given time, the utilization U_i is defined as U_i = sum over j = 1..n of u_ij, where n is the number of tasks running on r_i at that time and u_ij is the resource usage of task j on resource r_i. The assignment of a task to a resource must be handled very carefully: a task that requires 70% resource utilization cannot be assigned to a resource that has only 50% of its capacity available at the time the task arrives. Energy savings can also be achieved using slack reclamation with the support of dynamic voltage/frequency scaling (DVFS), which reduces energy consumption by lowering the voltage at the cost of system speed. Energy-efficient task consolidation algorithms include Energy-Conscious Task Consolidation (ECTC) and Maximum Utilization (MaxUtil).
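The utilization sum and the capacity rule above can be sketched as follows. The linear energy model (idle power plus a utilization-proportional part) and the MaxUtil-flavoured selection are simplified assumptions in the spirit of the consolidation work cited, not a faithful reimplementation of ECTC or MaxUtil; all constants are invented for illustration.

# Task consolidation sketch. Utilization of resource i is the sum of the
# usage u_ij of its running tasks; a task only fits if capacity remains.
# The heuristic places each task on the feasible resource whose resulting
# utilization is highest, packing work tightly so other hosts stay idle.

P_IDLE, P_FULL = 100.0, 200.0        # watts at 0% / 100% utilization (assumed)

def utilization(resource):
    return sum(resource["tasks"].values())

def power_w(resource):
    return P_IDLE + (P_FULL - P_IDLE) * utilization(resource)

def assign(task_id, demand, resources):
    feasible = [r for r in resources if utilization(r) + demand <= 1.0]
    if not feasible:
        return None
    target = max(feasible, key=lambda r: utilization(r) + demand)
    target["tasks"][task_id] = demand
    return target["name"]

resources = [{"name": "r1", "tasks": {}}, {"name": "r2", "tasks": {}}]
for tid, demand in [("t1", 0.5), ("t2", 0.7), ("t3", 0.4)]:
    print(tid, "->", assign(tid, demand, resources))
print([(r["name"], round(power_w(r), 1)) for r in resources])

Note how task t2, which needs 70% of a resource, is refused by r1 once r1 has only 50% capacity left, matching the constraint described in the paragraph.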
A unified solution to enable Green Cloud computing has been proposed: a Green Cloud framework that takes into account the goals of the provider while curbing the energy consumption of Clouds. The goal of this architecture is to make the Cloud green from both the user's and the provider's perspective.
In the Green Cloud architecture, users submit their Cloud service requests through a new middleware, the Green Broker, which manages the selection of the greenest Cloud provider to serve each user's request. A user service request can be of three types, i.e. software, platform or infrastructure. Cloud providers can register their services in the form of green offers in a public directory that is accessed by the Green Broker. Green offers consist of green services, pricing and the times at which they should be accessed for least carbon emission. The Green Broker obtains the current status of the energy parameters of the various Cloud services from the Carbon Emission Directory.
The Carbon Emission Directory maintains all the data related to the energy efficiency of a Cloud service. This data may include the PUE and cooling efficiency of the Cloud datacenter providing the service, the network cost, and the carbon emission rate of the electricity. The Green Broker calculates the carbon emission of all the Cloud providers offering the requested Cloud service, then selects the set of services that will result in the least carbon emission and buys these services on behalf of users.
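A minimal sketch of the Green Broker's selection step is given below, assuming each provider's entry in a hypothetical Carbon Emission Directory exposes its PUE, grid carbon intensity and per-request energy; the field names and numbers are invented for illustration and do not describe a real directory API.

# Green Broker sketch: given candidate providers for a requested service,
# pick the one whose directory entry implies the least carbon emission.

directory = [
    {"provider": "cloud-a", "service": "iaas", "pue": 1.6,
     "grid_gco2_per_kwh": 500, "kwh_per_request": 0.010},
    {"provider": "cloud-b", "service": "iaas", "pue": 1.2,
     "grid_gco2_per_kwh": 300, "kwh_per_request": 0.012},
]

def emission_g(entry):
    # facility energy = IT energy * PUE; emission = energy * grid intensity
    return entry["kwh_per_request"] * entry["pue"] * entry["grid_gco2_per_kwh"]

def greenest(service_type):
    candidates = [e for e in directory if e["service"] == service_type]
    return min(candidates, key=emission_g)

best = greenest("iaas")
print(best["provider"], round(emission_g(best), 2), "gCO2 per request")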
The Green Cloud framework is designed to keep track of the overall energy used to serve a user request. It relies on two main components, the Carbon Emission Directory and the green Cloud offers, which keep track of the energy efficiency of each Cloud provider and also give Cloud providers an incentive to make their services "green". On the user side, the Green Broker plays a crucial role in monitoring and selecting Cloud services based on the user's QoS requirements and in ensuring minimum carbon emission for serving a user. In general, a user can use the Cloud to access any of the three types of services (SaaS, PaaS and IaaS), and therefore the process of serving them should also be energy efficient. In other words, on the Cloud provider side, each Cloud layer needs to be "green" conscious.
1) SaaS level: Since SaaS providers mainly offer software installed on their own datacenters or on resources from IaaS providers, they need to model and measure the energy efficiency of their software design, implementation and deployment. To serve users, the SaaS provider should choose datacenters that are not only energy efficient but also close to the users. The minimum number of replicas of users' confidential data should be maintained, using energy-efficient storage.
2) PaaS level: PaaS providers generally offer platform services for application development. The platform can facilitate the development of applications in a way that ensures system-wide energy efficiency. This can be done by including energy-profiling tools such as JouleSort, a software energy-efficiency benchmark that measures the energy required to perform an external sort. In addition, the platform itself can be designed with various code-level optimizations that cooperate with the underlying compiler for energy-efficient execution of applications. Besides application development, Cloud platforms also allow the deployment of user applications on a Hybrid Cloud. In this case, to achieve maximum energy efficiency, the platform profiles the application and decides which portion of the application or data should be processed in-house and which in the Cloud.
3) IaaS level: Providers in this layer play the most crucial role in the success of the whole green architecture, since the IaaS level not only offers independent infrastructure services but also supports the other services offered by Clouds. By using virtualization and consolidation, energy consumption is further reduced by switching off unutilized servers. Various energy meters and sensors are installed to calculate the current energy efficiency of each IaaS provider and its sites, and this information is advertised regularly by Cloud providers in the Carbon Emission Directory. Various green scheduling and resource-provisioning policies ensure minimum energy usage. In addition, the Cloud provider designs green offers and pricing schemes that give users an incentive to use its services during off-peak or maximum energy-efficiency hours.
Traditional data center server utilization rates are typically very low. Virtualization can allow a
single system to support multiple applications or images, improving use of IT equipment
capabilities and executing more workloads in less space with less energy.
Cloud is a dynamic way to quickly provision resources. With the capability of automatically
turning on and off computers according to demands, cloud helps use hardware resources
efficiently and intelligently.
Cloud providers have invested in advanced hardware technology and replaced equipment to offer improved performance with reduced power consumption. For example, IBM POWER8 can provide up to twice the per-core performance of the previous generation while consuming a similar amount of energy.
Another example is cloud hierarchical storage management (storage tiering), which places data on the right hardware. This means data that is used less often is stored on higher-latency equipment. IBM FlashSystem and solid-state drives (SSDs) are good options for storage systems, as they use no spinning disks.
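A simple sketch of the tiering decision described above follows, assuming access frequency is the only placement criterion; the tier names and thresholds are illustrative assumptions rather than any product's policy.

# Hierarchical storage management sketch: place each dataset on the cheapest
# tier whose latency its access frequency justifies. Thresholds are assumed.

def choose_tier(accesses_per_day):
    if accesses_per_day >= 1000:
        return "flash/SSD"          # hot data, lowest latency
    if accesses_per_day >= 10:
        return "HDD"                # warm data
    return "tape/archive"           # cold data, highest latency, lowest energy

for name, freq in [("orders-db", 50000), ("monthly-report", 30), ("logs-2019", 1)]:
    print(name, "->", choose_tier(freq))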
Cloud is great to consider when an organization has budget restrictions for refreshing older
hardware.
3. Reducing costs
Cloud helps keep costs manageable with a pay-as-you-go structure, rather than a company
deploying an entire additional system involving cabling peripherals, racks, increased energy
consumption and more.
The Uptime Institute states that “decommissioning a single 1U rack server can result in $500 per
year in energy savings, an additional $500 in operating system licenses and $1,500 in hardware
maintenance costs.”
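Using the Uptime Institute figures quoted above, the savings from decommissioning several idle 1U servers can be estimated directly; the count of 20 servers in the sketch below is an assumed example.

# Annual savings from decommissioning 1U rack servers, using the per-server
# figures quoted above (energy, OS licence, hardware maintenance).

ENERGY, OS_LICENSE, HW_MAINTENANCE = 500, 500, 1500   # USD per server per year

def annual_savings(servers_decommissioned):
    return servers_decommissioned * (ENERGY + OS_LICENSE + HW_MAINTENANCE)

print(annual_savings(20))   # 50000 USD per year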
As technology has improved, so has data center design. In many cases, data centers designed
several years ago can waste a lot of energy, have less efficient cooling systems and fail to adhere
to more recent air quality regulations. Instead of a complete redesign or adding more space to
address new demands, you could consider moving to cloud.
Cloud can help simplify IT by providing scalability, reducing provisioning time for a new
system or application and enabling standardization.