
CHAPTER 1

1. INTRODUCTION

In the dynamic landscape of cloud computing, the efficient management of resources within
virtualized data centers has become increasingly crucial. The escalating demand for computing
resources, coupled with the growing awareness of environmental sustainability, underscores the
need for innovative frameworks that not only optimize performance but also prioritize energy
efficiency. This is particularly relevant in the context of two-tier virtualized cloud data centers,
where the challenges of balancing resource allocation and minimizing energy consumption are
pronounced. In response to these challenges, an energy-aware host resource management
framework emerges as a promising solution. This framework aims to strike a delicate balance
between enhancing the overall performance of virtualized environments and mitigating the
environmental impact by intelligently allocating and managing computing resources. This
introduction sets the stage for a deeper exploration of the intricacies and benefits of such a
framework in the context of two-tier virtualized cloud data centers.

1.1 CLOUD DATA CENTER

In the era of rapid digital transformation, cloud data centers have emerged as the backbone of
modern computing infrastructure, revolutionizing the way businesses and individuals access and
manage data. These centers represent a pivotal shift from traditional, on-premises data storage
and processing to scalable and flexible computing environments hosted remotely. Cloud data
centers provide a vast array of services, ranging from storage and computation to networking and
analytics, enabling organizations to dynamically scale their resources based on demand. The
inherent advantages of scalability, cost efficiency, and accessibility have made cloud data centers
indispensable in today's interconnected world. As the demand for cloud services continues to
soar, understanding the intricacies of these data centers becomes essential for businesses and
technology enthusiasts alike. This introduction lays the groundwork for delving into the
multifaceted world of cloud data centers, exploring their architecture, functionalities, and the
transformative impact they wield in the realm of information technology.

1.2 VIRTUAL MACHINE


A virtual machine (VM) is a software-based emulation of a physical computer, enabling multiple
operating systems (OS) to run on a single physical machine. This technology allows for the
creation of isolated environments, known as virtualized instances or VMs, within which
applications and operating systems can operate independently of the underlying hardware. Key
components of virtual machines include a hypervisor or a Virtual Machine Monitor (VMM),
which is responsible for managing and allocating the physical resources of the host machine to
the virtual machines. There are two types of hypervisors: Type 1 (bare-metal) hypervisors run
directly on the hardware, while Type 2 (hosted) hypervisors run on top of an existing operating
system. VMs are widely used in various computing scenarios, such as server consolidation,
testing and development environments, and cloud computing. They provide benefits like
improved resource utilization, isolation, and the ability to run multiple operating systems on a
single physical machine. Each virtual machine has its own virtual CPU, memory, storage, and
network interfaces, creating a virtualized environment that operates independently of other VMs
on the same host. In the context of cloud computing, virtual machines are fundamental building
blocks that enable users to deploy and run applications in a flexible and scalable manner. Cloud
providers offer VMs as on-demand resources, allowing users to configure and deploy them based
on their specific requirements without the need to invest in or manage physical hardware. This
flexibility and abstraction contribute to the efficiency and agility of modern computing
infrastructures.

1.3 ENERGY CONSUMPTION

In the contemporary landscape of technology and industry, energy consumption stands at the
forefront of global considerations. As societies increasingly rely on advanced technologies for
their daily operations, the demand for energy continues to escalate. From powering homes and
businesses to supporting the vast network of data centers that underpin our digital infrastructure,
the challenge lies not only in meeting these energy needs but also in doing so sustainably. The
environmental impact of energy consumption, particularly in the context of computing and data
processing, has become a significant concern. As the world transitions to more digital and
interconnected systems, understanding and addressing the energy implications of these
advancements are essential. This introduction sets the stage for an exploration of the
complexities surrounding energy consumption, emphasizing the critical need for innovative
solutions and frameworks that promote efficiency and sustainability across various sectors.

1.4 RESOURCE MANAGEMENT

Resource management stands as a linchpin in the orchestration of efficient and sustainable
operations across various domains, from business enterprises to computing environments. At its
core, resource management involves the judicious allocation, utilization, and optimization of
assets, be they human, financial, or technological. In the context of technology and computing,
effective resource management becomes especially critical. As organizations grapple with the
dynamic demands of the digital age, the ability to allocate computing resources judiciously,
ensuring optimal performance and responsiveness, becomes a strategic imperative. This
encompasses not only the allocation of physical assets such as servers and storage but also the
intelligent management of virtual resources in cloud computing environments. This introduction
sets the stage for a comprehensive exploration of resource management, shedding light on its
multifaceted nature and underscoring its pivotal role in achieving efficiency, resilience, and
sustainability in today's complex and interconnected systems.

1.5 OBJECTIVES

• Reduce energy consumption: The framework aims to reduce the overall energy
consumption of the cloud data center by consolidating containers on fewer hosts and
turning off idle hosts.

• Meet latency requirements: The framework ensures that latency requirements for all
containers are met by placing containers on hosts that are close to their users.

• Improve performance: The framework aims to improve the overall performance of the
cloud data center by balancing the resource utilization of hosts.
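The consolidation objective above can be illustrated with a minimal first-fit decreasing sketch in Python. The capacity figures and the single-dimension (CPU-only) model are simplifying assumptions for illustration, not details of the proposed framework:

```python
def consolidate(containers, host_capacity):
    """First-fit decreasing: pack container CPU demands onto as few
    hosts as possible so the remaining idle hosts can be switched off."""
    hosts = []  # remaining CPU capacity of each active host
    for demand in sorted(containers, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand  # reuse an already-active host
                break
        else:
            hosts.append(host_capacity - demand)  # power on a new host
    return len(hosts)

# Six container demands, hosts with 100 CPU units each -> 3 active hosts
print(consolidate([60, 50, 40, 30, 20, 10], 100))  # 3
```

Fewer active hosts translates directly into lower energy consumption, since idle hosts can be powered down.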

CHAPTER 2

2. LITERATURE REVIEW

2.1 COMBINING VIRTUALIZATION AND CONTAINERIZATION TO SUPPORT
INTERACTIVE GAMES AND SIMULATIONS ON THE CLOUD

Sean C In order to address the challenges posed by the integration of game-based and virtual
software simulators into traditional networks, various organizations spanning the entertainment
industry, energy and financial sectors, military, and video gaming have turned to the powerful
capabilities of High-Performance Computing (HPC). The inherent capacity of HPC to handle
compute-intensive tasks makes it an attractive platform for running interactive simulations.
However, the focus of this work goes beyond the conventional use of HPC, aiming to explore the
feasibility of transitioning from a traditional HPC environment to a cloud-based service. This
transition is intended to enable the support of multiple simultaneous interactive simulations while
maintaining high-performance standards. The primary objective of this research is to broaden the
scope of applicable software within an HPC environment, ensuring that the transition to a cloud-
based service does not compromise performance efficacy. To achieve this goal, the study delves
into four distinct HPC load-balancing techniques. These techniques leverage virtualization,
software containers, and clustering to efficiently analyse, schedule, and execute game-based
simulation applications concurrently. The overarching aim is to determine the optimal approach
for extending HPC capabilities to accommodate the demands of multiple interactive simulations.
In the pursuit of the proposed HPC goal, the research places a particular emphasis on
experimenting with and evaluating these load-balancing techniques. Virtualization, software
containers, and clustering are assessed for their individual and collective performance in
handling the unique requirements of game-based simulations. The comparison of these
techniques is critical to understanding their respective strengths and weaknesses, thereby aiding
in the determination of the most suitable deployment technique. In making this determination,
several factors come into play. The availability of cluster resources, the number of competing
software jobs, and the specific characteristics of the software being scheduled all influence the
choice of deployment technique. By thoroughly investigating these considerations, the research
aims to provide valuable insights into the feasibility of extending HPC capabilities to a cloud-based
service, ensuring that the chosen approach aligns with the requirements of diverse
simulation applications.

2.2 VIRTUAL MACHINE PLACEMENT IN CLOUD DATA CENTERS USING
A HYBRID MULTI-VERSE OPTIMIZATION ALGORITHM

Sasan Gharehpasha Cloud computing has revolutionized the landscape of computing by offering
a paradigm where a vast array of systems is interconnected in either private or public networks to
deliver dynamically scalable infrastructure for applications, data, and file storage. This
technological advancement has brought about a substantial reduction in the costs associated with
power consumption, application hosting, content storage, and resource delivery. Cloud
computing allows businesses to concentrate on their core goals without the need to continually
expand hardware resources, marking a significant shift in how computing resources are
provisioned and managed. One of the ongoing challenges in cloud computing, particularly in
cloud data centers, is the efficient placement of virtual machines on physical machines. The
optimal placement of virtual machines is crucial for managing resources effectively and
preventing wastage. In this context, a novel approach is introduced, combining the hybrid
discrete multi-object whale optimization algorithm with the multi-verse optimizer enhanced by
chaotic functions. The primary objective of this approach is twofold: firstly, to reduce power
consumption by minimizing the number of active physical machines in cloud data centers, and
secondly, to enhance resource management by strategically placing virtual machines over
physical ones. By reducing power consumption and preventing resource wastage through optimal
virtual machine placement, this approach aims to address critical issues in cloud data centre
management. Moreover, it seeks to mitigate the increasing rate of virtual migration to physical
machines, contributing to the overall efficiency and sustainability of cloud computing
environments. To validate the efficacy of the proposed algorithm, a comparative analysis is
conducted against existing algorithms such as first fit, VMPACS, and MBFD. The results
obtained from this comparative study provide valuable insights into the performance and
superiority of the proposed approach in achieving optimal virtual machine placement within
cloud data centers. This research contributes to the ongoing efforts to enhance the efficiency and
sustainability of cloud computing infrastructures through innovative algorithms and strategies.

2.3 AN ENERGY-AWARE HOST RESOURCE MANAGEMENT FRAMEWORK FOR
TWO-TIER VIRTUALIZED CLOUD DATA CENTERS

Chi Zhang The energy consumption of cloud data centers is a critical challenge that poses
constraints on the future growth of the cloud computing industry. This paper addresses this issue
with a specific focus on resource management within two-tier virtualized data centers, where
containers are deployed on virtual machines (VMs). The working model of the two-tier
virtualized data center is first defined, emphasizing the deployment of containers on VMs to
enhance resource utilization on hosts while ensuring the isolation and security of different jobs.
To address the energy consumption challenge, an energy-aware host resource management
framework is proposed. This framework comprises two key algorithms. The initial static
placement algorithm involves load balancing alternate placement and two-sided matching
methods. These methods are instrumental in efficiently placing containers onto VMs and VMs
onto hosts, optimizing the resource utilization of the entire system. The runtime dynamic
consolidation algorithm, building upon the initial placement, dynamically consolidates resources
to utilize the least active hosts in real-time, meeting the dynamic resource requirements of
containers. Simulation experiments are conducted using real workload traces to compare the
proposed algorithms with existing ones. The results demonstrate that the two algorithms exhibit
superior performance in terms of host resource utilization, the number of active hosts, the
number of container migrations, and Service Level Agreement (SLA) metrics. Importantly, the
entire framework achieves a notable energy-saving effect of at least 13.8%, showcasing its
efficacy in addressing the energy consumption challenge in cloud data centers. This research
contributes to the broader discourse on sustainable and efficient cloud computing by providing a
detailed exploration of resource management strategies in two-tier virtualized data centers. The
demonstrated improvements in resource utilization and energy efficiency underscore the
potential of the proposed framework to contribute to the long-term sustainability of cloud
computing infrastructures.

2.4 AN EFFICIENT POWER-AWARE VM ALLOCATION MECHANISM IN CLOUD
DATA CENTERS: A MICRO GENETIC-BASED APPROACH

Mehran Tarahomi In the ever-evolving landscape of cloud computing, optimizing power
efficiency in cloud servers has become a critical concern with far-reaching environmental
implications. The drive towards reducing greenhouse gas emissions has given rise to the concept
of green computing, where the focus is not only on enhancing computational performance but
also on minimizing the ecological footprint of data centre operations. A key strategy in achieving
this goal is the implementation of power-aware methods to strategically allocate virtual machines
(VMs) within the physical resources of data centers. Virtualization emerges as a promising
technology to facilitate power-aware VM allocation methods, given its ability to abstract and
manage resources efficiently. However, the allocation of VMs to physical hosts poses a complex
problem known to be NP-complete. In response to this challenge, evolutionary algorithms have
been employed as effective tools to tackle such optimization problems. This paper contributes to
the ongoing discourse by introducing a micro-genetic algorithm designed specifically for the
purpose of selecting optimal destinations among physical hosts for VM allocation. The micro-
genetic algorithm presented in this work is a sophisticated approach aimed at addressing the NP-
completeness of VM allocation. Through extensive evaluations in a simulation environment, the
study demonstrates the efficacy of the micro-genetic algorithm in significantly improving power
consumption metrics when compared to alternative methods. The results highlight the
algorithm's ability to make informed and efficient decisions in the allocation of VMs, leading to
tangible enhancements in power efficiency. By showcasing the valuable improvements achieved
through the micro-genetic algorithm, this research underscores the importance of leveraging
evolutionary approaches in solving complex optimization problems in the realm of cloud
computing. The findings contribute not only to the technical aspects of power-aware VM
allocation but also to the broader objective of establishing more sustainable and environmentally
conscious practices within data centre operations.

2.5 AN ENERGY AND SLA-AWARE RESOURCE MANAGEMENT STRATEGY IN
CLOUD DATA CENTERS

Chi Zhang The imperative for cloud providers to enhance investment yield by reducing the
energy consumption of data centers is balanced with the essential need to ensure that services
delivered meet diverse consumer requirements. This paper introduces a comprehensive resource
management strategy geared towards simultaneous reduction of energy consumption and
minimization of Service Level Agreement (SLA) violations in cloud data centers. The strategy
encompasses three refined methods tailored to address subproblems within the dynamic virtual
machine (VM) consolidation process. To enhance the effectiveness of host detection and
improve VM selection outcomes, the proposed strategy employs innovative approaches. Firstly,
the overloaded host detection method introduces a dynamic independent saturation threshold for
each host, accounting for CPU utilization trends. Secondly, the underutilized host detection
method integrates multiple factors beyond CPU utilization, incorporating the Naive Bayesian
classifier to calculate combined weights for hosts during prioritization. Lastly, the VM selection
method takes into account both current CPU usage and the anticipated future growth space of
CPU demand for VMs. The performance of the proposed strategy is rigorously evaluated through
simulation in CloudSim, and comparative analysis is conducted against five existing energy-
saving strategies using real-world workload traces. The experimental results demonstrate the
superior performance of the proposed strategy, showcasing minimal energy consumption and
SLA violations in comparison to other strategies. This outcome underscores the effectiveness of
the refined methods employed within the resource management strategy in achieving the dual
objectives of energy efficiency and SLA adherence. By surpassing existing energy-saving
strategies in both energy consumption reduction and SLA compliance, this research makes a
significant contribution to the ongoing efforts in optimizing resource management in cloud data
centers. The proposed strategy not only aligns with the imperative of energy efficiency but also
underscores the importance of delivering reliable services to consumers, thus striking a balance
between environmental sustainability and service quality in the realm of cloud computing.
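The paper's exact threshold formula is not reproduced above, but the idea of a per-host dynamic saturation threshold that accounts for CPU utilization trends can be sketched as follows; the `base_threshold` and `sensitivity` parameters are illustrative assumptions:

```python
def is_overloaded(cpu_history, base_threshold=0.9, sensitivity=0.5):
    """Flag a host as saturated using a dynamic threshold: the threshold
    is lowered when recent CPU utilization trends upward, so hosts with
    rising load are detected earlier than hosts with stable load."""
    if len(cpu_history) < 2:
        return cpu_history[-1] > base_threshold
    trend = (cpu_history[-1] - cpu_history[0]) / (len(cpu_history) - 1)
    threshold = base_threshold - sensitivity * max(trend, 0.0)
    return cpu_history[-1] > threshold

print(is_overloaded([0.60, 0.70, 0.80, 0.88]))  # True: load is climbing
print(is_overloaded([0.88, 0.86, 0.85, 0.88]))  # False: load is stable
```

Detecting an upward trend early lets the consolidation process migrate VMs away before the host actually violates its SLA.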

2.6 RENEWABLE ENERGY-BASED MULTI-INDEXED JOB CLASSIFICATION AND
CONTAINER MANAGEMENT SCHEME FOR SUSTAINABILITY OF CLOUD DATA
CENTERS

Gagangeet Singh Aujla The landscape of modern computing has been significantly shaped by the
rise of Cloud Computing, offering on-demand services to end-users. However, the widespread
use of geo-distributed data centers to perform computing tasks raises concerns about the
substantial energy consumption associated with their operations. Addressing these energy-related
challenges in cloud environments has become imperative, and the integration of renewable
energy resources, coupled with strategic server selection and consolidation, presents a promising
avenue for mitigation. This paper introduces a novel approach, a renewable energy-aware multi-
indexed job classification and scheduling scheme, leveraging Container-as-a-Service (CoaaS) for
sustainability in data centers. The proposed scheme aims to optimize energy usage by directing
incoming workloads from various devices to data centers equipped with a sufficient supply of
renewable energy. To achieve this, the paper outlines a renewable energy-based host selection
and container consolidation scheme, contributing to the overarching goal of energy efficiency
and sustainability in cloud computing environments. The effectiveness of the proposed scheme is
rigorously evaluated using real-world Google workload traces. The results demonstrate a
substantial improvement over existing schemes of similar categories, with energy savings
reaching 15%, 28%, and 10.55%. These findings underscore the viability and efficiency of the
renewable energy-aware multi-indexed job classification and scheduling scheme in enhancing
sustainability within data centers, while simultaneously achieving significant energy savings. By
presenting a solution that integrates renewable energy considerations, host selection, and
container consolidation, this research contributes to the ongoing discourse on sustainable
practices in cloud computing. The demonstrated improvements in energy savings validate the
potential of the proposed scheme to not only address current energy-related challenges but also
to serve as a benchmark for future advancements in environmentally conscious data centre
operations.

2.7 A PLACEMENT ARCHITECTURE FOR A CONTAINER AS A SERVICE (CAAS)
IN A CLOUD ENVIRONMENT

Mohamed K The evolution of virtualization technologies has introduced containers as a
lightweight alternative to traditional virtual machines (VMs). Operating at the operating system
level, containers encapsulate tasks and their library dependencies for execution. The emerging
Container as a Service (CaaS) strategy is gaining prominence as a cloud service model. A critical
challenge within this paradigm is the placement of container instances on virtual machine
instances, constituting a classical scheduling problem. Previous research has often addressed
either virtual machine placement on physical machines (PMs), container placement, or task
placement without containerization in isolation. This approach, however, can lead to
underutilized or overutilized PMs and VMs. Consequently, there is a growing research interest
in developing container placement algorithms that consider the utilization of both instantiated
VMs and used PMs simultaneously. The primary objective of this study is to enhance resource
utilization, focusing on the number of CPU cores and memory size for both VMs and PMs, while
minimizing the number of instantiated VMs and active PMs in a cloud environment. The
proposed placement architecture employs scheduling heuristics, specifically Best Fit (BF) and
Max Fit (MF), based on a fitness function that concurrently evaluates the remaining resource
waste of both PMs and VMs. Additionally, a meta-heuristic placement algorithm is introduced,
leveraging Ant Colony Optimization based on Best Fit (ACO-BF) with the proposed fitness
function. Experimental results indicate that the proposed ACO-BF placement algorithm
outperforms the BF and MF heuristics, showcasing significant improvements in resource
utilization for both VMs and PMs. By incorporating a fitness function that considers the
simultaneous evaluation of PMs and VMs, the ACO-BF algorithm demonstrates its efficacy in
optimizing resource allocation within a cloud environment. This research contributes to the
ongoing efforts to optimize container placement in cloud environments, highlighting the
importance of considering the utilization of both VMs and PMs for enhanced resource efficiency.
The proposed algorithms pave the way for more effective and balanced resource utilization in
cloud-based containerized applications.
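As a rough illustration of a Best Fit heuristic driven by a resource-waste fitness function (the paper's actual fitness and the ACO-BF metaheuristic are more involved), consider this sketch; the additive CPU+memory waste metric and the VM fields are assumptions for demonstration:

```python
def best_fit(container, vms):
    """Choose the VM that minimizes leftover (wasted) CPU + memory
    after placement; returns None if the container fits nowhere."""
    best_vm, best_waste = None, float("inf")
    for vm in vms:
        free_cpu = vm["cpu"] - container["cpu"]
        free_mem = vm["mem"] - container["mem"]
        if free_cpu < 0 or free_mem < 0:
            continue  # container does not fit on this VM
        waste = free_cpu + free_mem  # simple additive waste fitness
        if waste < best_waste:
            best_vm, best_waste = vm, waste
    return best_vm

vms = [{"id": 1, "cpu": 4, "mem": 8}, {"id": 2, "cpu": 2, "mem": 4}]
print(best_fit({"cpu": 2, "mem": 3}, vms)["id"])  # 2: the tighter fit
```

Evaluating waste on both tiers at once (VMs on PMs, containers on VMs) is what distinguishes the paper's approach from placing each tier in isolation.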

2.8 ENERGY CONSUMPTION OPTIMIZATION OF CONTAINER-ORIENTED
CLOUD COMPUTING CENTER

Zhenjiang Li In the realm of container-based cloud computing, addressing the challenge of
higher energy consumption is crucial for optimizing overall efficiency. This paper introduces an
enhanced virtual migration strategy designed to specifically target and reduce energy
consumption in container-based cloud computing centers. The research methodology involves a
comprehensive examination of the intricate dependencies among physical machines, virtual
machines, and containers within a containerized environment of a cloud computing centre. The
study proceeds by analysing key factors that exert influence on the energy consumption of the
data center, taking into account the dependencies identified in the container environment.
Subsequently, a mathematical model is established to encapsulate these dependencies and
facilitate a systematic understanding of the energy consumption dynamics. Building upon this
analysis, the paper proposes an optimal utilization priority algorithm. This algorithm is designed
based on the non-linear relationship between utilization and energy consumption across different
physical machines, aiming to prioritize and allocate resources in a manner that minimizes energy
consumption. Simulation experiments conducted on the Container CloudSim platform
substantiate the effectiveness of the proposed method. The results demonstrate that the
introduced approach significantly reduces the energy consumption of the data centre compared to
traditional strategies such as random scheduling and maximum utilization. The emphasis on
practical differences between physical machines in real-world environments sets this improved
virtual migration strategy apart, offering a tailored solution to address the energy consumption
challenges in container-based cloud computing. In conclusion, this paper not only refines
classical virtual migration strategies but also introduces a pragmatic and easily implementable
algorithm for reducing energy consumption in containerized cloud computing environments. The
simplicity and efficacy of the proposed approach make it a noteworthy candidate for widespread
adoption, contributing to the ongoing efforts to make container-based cloud computing more
energy-efficient and sustainable.
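The non-linear utilization-to-power relationship that motivates the utilization priority algorithm can be sketched with a hypothetical power curve; the coefficients and exponent below are illustrative assumptions, not values from the paper:

```python
def host_power(util, p_idle=100.0, p_max=250.0, r=1.4):
    """Illustrative non-linear power curve for one physical machine:
    P(u) = P_idle + (P_max - P_idle) * u**r, with 0 <= u <= 1."""
    return p_idle + (p_max - p_idle) * (util ** r)

# Consolidation rationale: two half-loaded hosts draw more power than
# one fully loaded host plus one host that is switched off entirely.
print(2 * host_power(0.5))  # ~313.7
print(host_power(1.0))      # 250.0
```

Because the idle term dominates at low utilization, packing load onto fewer machines and powering off the rest reduces total energy even when the remaining hosts run hotter.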

2.9 AN ENERGY, PERFORMANCE EFFICIENT RESOURCE CONSOLIDATION
SCHEME FOR HETEROGENEOUS CLOUD DATACENTERS

Ayaz Ali Khan Datacentres, as the primary electricity consumers for cloud computing, play a
pivotal role in providing the IT backbone for contemporary businesses and economies. However,
studies indicate that a significant portion of servers in U.S. datacentres are underutilized or idle,
presenting an opportunity for energy savings through resource consolidation techniques. The
challenge lies in the fact that consolidation, involving migrations of virtual machines (VMs),
containers, and/or applications, can be costly in terms of both energy consumption and
performance loss. This paper addresses this challenge by proposing a consolidation algorithm
that prioritizes the most effective migration among VMs, containers, and applications.
Additionally, the study investigates how migration decisions can be made to save energy without
negatively impacting service performance. Through a series of experiments utilizing real
workload traces for 800 hosts, approximately 1,516 VMs, and over a million containers, the
paper evaluates the impact of different migration approaches on datacentre energy consumption
and performance. The findings highlight a trade-off between migrating containers and virtual
machines, where migrating virtual machines tends to be more performance-efficient, while
migrating containers can be more energy-efficient. The study also suggests that migrating
containerized applications, running within virtual machines, could lead to an energy and
performance-efficient consolidation technique in large-scale datacentres. The evaluation results
indicate that migrating applications may be approximately 5.5% more energy-efficient and
11.9% more performance-efficient than VM migration. Furthermore, energy and performance-
efficient consolidation is approximately 14.6% more energy-efficient and 7.9% more
performance-efficient than application migration. The study generalizes these findings through
repeatable experiments across various workloads, resources, and datacentre setups. In
conclusion, the research sheds light on the nuanced trade-offs involved in migration decisions for
datacentre consolidation. By proposing a consolidation algorithm and providing insights into the
energy and performance efficiencies of different migration approaches, the paper contributes to
the ongoing efforts to optimize resource usage in large-scale datacentres.

2.10 HEPORCLOUD: AN ENERGY AND PERFORMANCE EFFICIENT RESOURCE
ORCHESTRATOR FOR HYBRID HETEROGENEOUS CLOUD COMPUTING
ENVIRONMENTS

Ayaz Ali Khan In major Information Technology (IT) companies like Google, Rackspace, and
Amazon Web Services (AWS), the execution of customers' workloads and applications relies
heavily on virtualization and containerization technologies. These companies operate large-scale
datacentres that provide computational resources, but the substantial energy consumption of
these datacentres raises ecological concerns. Each company employs different approaches, with
Google utilizing containers, Rackspace offering bare-metal hardware, and AWS employing a
mix of virtual machines (VMs), containers (ECS), and containers inside VMs (Lambda). This
diversity in technology usage makes resource management a complex task. Effective resource
management is crucial, especially in hybrid platforms where various sandboxing technologies
like bare-metal, VMs, containers, and nested containers coexist. The absence of centralized,
workload-aware resource managers and consolidation policies raises questions about datacentre
energy efficiency, workload performance, and user costs. This paper addresses these concerns
through a series of experiments using Google workload data for 12,583 hosts and approximately
one million tasks across four different types of workloads. The focus is on demonstrating the
potential benefits of using workload-aware resource managers in hybrid clouds, achieving energy
and cost savings in heterogeneous hybrid datacentres without negatively impacting workload
performance. The paper also explores how different allocation policies, combined with various
migration approaches, impact datacentre energy and performance efficiencies. The empirical
evaluation, based on plausible assumptions for hybrid datacentre setups, reveals compelling
results. In scenarios with no migration, a single scheduler is found to be up to 16.86% more
energy-efficient than distributed schedulers. However, when migrations are considered, the
proposed resource manager demonstrates the potential to save up to 45.61% energy and improve
workload performance by up to 17.9%. In conclusion, the research highlights the significance of
workload-aware resource managers in optimizing energy efficiency and cost savings in
heterogeneous hybrid datacentres. The findings provide valuable insights for IT companies
seeking to enhance the performance and sustainability of their datacentres operations.

CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In distributed environments, cloud computing is widely used to manage user requests for
resources and services. Resource scheduling handles these requests based on priorities within a
given time frame. Today, management in almost every industry relies on smart devices
connected to the internet. These devices deal with massive amounts of data processed and sensed
by smart sensors without sacrificing performance factors such as throughput and latency, which
has prompted the need for load balancing among the operational devices to prevent
unresponsiveness. Load balancing is used to manage large amounts of data in both centralized
and distributed settings. Reinforcement learning algorithms such as GA, SARSA, and Q-learning
are used for resource scheduling; they predict near-optimal solutions for managing load in
cloud-based applications. The main drawbacks of the existing system are weak security and low
performance.

3.1.1 DRAWBACKS

1. Existing systems exhibit vulnerabilities leading to potential data breaches and privacy
concerns.

2. Systems experience suboptimal performance due to inefficient resource allocation and
management.

3. Limited scalability hampers the ability to handle increasing data volumes and user
demands effectively.

4. System reliability is compromised, leading to potential downtimes and disruptions in
cloud applications.

3.2 PROPOSED SYSTEM

The proposed system introduces an innovative approach to Cloud VM scheduling by addressing
the limitations of current instant-based resource allocation. Leveraging historical data on VM
resource utilization, the system employs a scheduling algorithm powered by Particle Swarm
Optimization (PSO). This methodology enables the system to learn and adapt to the dynamic
behavior of the Cloud environment over time. Unlike traditional approaches, the proposed
system prioritizes overall and long-term resource utilization, aiming to minimize the impact of
Cloud management processes on deployed VMs. By optimizing performance and reducing the
count of physical machines through the PSO classifier, the system achieves enhanced efficiency
and maximizes real CPU utilization, thereby refining the conventional VM placement strategies
in Cloud systems.
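As an illustration of how such a scheduler can score candidate placements, the sketch below counts the powered-on hosts that a placement needs and rejects placements that overload any host. All class, method, and parameter names here are hypothetical, not taken from the project's code, and the single-resource (CPU-only) model is a simplifying assumption.

```java
// Illustrative placement fitness in the spirit of the proposed system:
// fewer active hosts = better fitness; overloaded hosts are infeasible.
public class PlacementFitness {

    // cpuDemand[v]   = CPU units requested by VM v
    // assignment[v]  = index of the host VM v is placed on
    // hostCapacity[h]= CPU units available on host h
    // Returns Double.MAX_VALUE for infeasible placements, otherwise
    // the number of powered-on hosts.
    static double fitness(double[] cpuDemand, int[] assignment, double[] hostCapacity) {
        double[] load = new double[hostCapacity.length];
        for (int v = 0; v < assignment.length; v++) {
            load[assignment[v]] += cpuDemand[v];
        }
        int activeHosts = 0;
        for (int h = 0; h < load.length; h++) {
            if (load[h] > hostCapacity[h]) {
                return Double.MAX_VALUE;   // capacity violated: reject
            }
            if (load[h] > 0) {
                activeHosts++;
            }
        }
        return activeHosts;
    }

    public static void main(String[] args) {
        double[] demand = {0.4, 0.3, 0.2};
        double[] capacity = {1.0, 1.0};
        // Packing all three VMs onto host 0 uses one machine...
        System.out.println(fitness(demand, new int[]{0, 0, 0}, capacity)); // 1.0
        // ...while spreading them uses two.
        System.out.println(fitness(demand, new int[]{0, 1, 1}, capacity)); // 2.0
    }
}
```

A PSO search would then evolve candidate assignments toward lower values of this function.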

3.2.1 ADVANTAGES
• The algorithm takes into account the resource usage of already running VMs over time to
optimize the placement of VMs. This can lead to improved performance for the VMs, as
they are less likely to be placed on hosts that are already overloaded.

• The algorithm uses PSO, a metaheuristic algorithm known for its ability to find good
solutions to complex problems. This makes the algorithm more likely to find a good VM
placement solution, even in large and complex cloud systems.

• It is computationally efficient, which makes it suitable for large-scale cloud systems.

• It can be configured to meet a variety of objectives, such as minimizing costs,
maximizing performance, and minimizing the number of VM migrations.

3.3 FEASIBILITY STUDY

Preliminary investigation examines project feasibility: the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the technical,
operational, and economic feasibility of adding new modules and debugging the old running
system. Every system is feasible given unlimited resources and infinite time. The following
aspects make up the feasibility study portion of the preliminary investigation:

 Technical Feasibility
 Operational Feasibility
 Economic Feasibility

3.3.1 TECHNICAL FEASIBILITY

The technical issues usually raised during the feasibility stage of the investigation include
the following:

 Does the necessary technology exist to do what is suggested?
 Does the proposed equipment have the technical capacity to hold the data required to use the
new system?
 Will the proposed system provide adequate response to inquiries, regardless of the number or
location of users?
 Can the system be upgraded if developed?
 Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier no system existed to cater to the needs of ‘Secure Infrastructure Implementation
System’. The current system developed is technically feasible. It is a web based user interface for
audit workflow at DB2 Database. Thus it provides an easy access to the users. The database’s
purpose is to create, establish and maintain a workflow among various entities in order to
facilitate all concerned users in their various capacities or roles. Permission to the users would
be granted based on the roles specified.

Therefore, it provides the technical guarantee of accuracy, reliability and security. The
software and hardware requirements for the development of this project are modest and are
already available in-house at NIC or are available free as open source. The work for the project
is done with the current equipment and existing software technology. Necessary bandwidth exists
for providing fast feedback to the users irrespective of the number of users using the system.

3.3.2 OPERATIONAL FEASIBILITY



Proposed projects are beneficial only if they can be turned into information systems that meet
the organization's operating requirements. Operational feasibility is an important part of project
implementation. Some of the important issues raised to test the operational feasibility of a
project include the following:

 Is there sufficient support for the project from management and from the users?
 Will the system be used and work properly once it is developed and implemented?
 Will there be any resistance from users that will undermine the possible application
benefits?

This system is targeted to be in accordance with the above-mentioned issues. Management
issues and user requirements were taken into consideration beforehand, so there is no question of
resistance from users that could undermine the possible application benefits.

The well-planned design would ensure the optimal utilization of the computer resources and
would help in the improvement of performance status.

3.3.3 ECONOMIC FEASIBILITY

A system that can be developed technically, and that will be used if installed, must still be a
good investment for the organization. In the economic feasibility study, the development cost of
creating the system is evaluated against the ultimate benefit derived from the new system.
Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware or
software. Since the interface for this system is developed using the existing resources and
technologies available at NIC, expenditure is nominal and economic feasibility is certain.

CHAPTER 4

SYSTEM SPECIFICATION

4.1 HARDWARE REQUIREMENTS

CPU type : Intel core i5 processor

Clock speed : 3.0 GHz

RAM size : 8 GB

Hard disk capacity : 500 GB

Keyboard type : Internet Keyboard

CD -drive type : 52xmax

4.2 SOFTWARE REQUIREMENTS

Operating System : Windows 10

Front End : JAVA



CHAPTER 5

SOFTWARE DESCRIPTION

5.1 FRONT END: JAVA

The software requirement specification is created at the end of the analysis task. The function
and performance allocated to software as part of system engineering are developed by
establishing a complete information report as functional representation, a representation of
system behavior, an indication of performance requirements and design constraints, appropriate
validation criteria.

FEATURES OF JAVA

Java platform has two components:

 The Java Virtual Machine (Java VM)


 The Java Application Programming Interface (Java API)
The Java API is a large collection of ready-made software components that provide many useful
capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into
libraries (packages) of related components.

The following figure depicts a Java program, such as an application or applet, running on
the Java platform. As the figure shows, the Java API and the Virtual Machine insulate the Java
program from hardware dependencies.

As a platform-independent environment, Java can be a bit slower than native code.


However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can
bring Java's performance close to that of native code without threatening portability.

SOCKET OVERVIEW:

A network socket is a lot like an electrical socket. Various plugs around the network have
a standard way of delivering their payload. Anything that understands the standard protocol can
“plug in” to the socket and communicate.

Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets
and sends them to an address across a network; it does not guarantee delivery of those packets
to the destination.

Transmission Control Protocol (TCP) is a higher-level protocol that manages the reliable
transmission of data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can
be used directly to support fast, connectionless, unreliable transport of packets.

CLIENT/SERVER:

A server is anything that has some resource that can be shared. There are compute
servers, which provide computing power; print servers, which manage a collection of printers;
disk servers, which provide networked disk space; and web servers, which store web pages. A
client is simply any other entity that wants to gain access to a particular server.

A server process is said to “listen” to a port until a client connects to it. A server
is allowed to accept multiple clients connected to the same port number, although each session is
unique. To manage multiple client connections, a server process must be multithreaded or have
some other means of multiplexing the simultaneous I/O.
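The multithreaded, multiplexed server described above can be sketched as a thread-per-client echo server. The class name is illustrative; binding to port 0 (so the OS picks any free port) and the loopback self-test in `main` are assumptions made to keep the example self-contained.

```java
import java.io.*;
import java.net.*;

// Thread-per-client echo server: one acceptor loop, one thread per session,
// so each connection on the same port stays unique and independent.
public class ThreadedEchoServer {
    private final ServerSocket server;

    public ThreadedEchoServer() throws IOException {
        server = new ServerSocket(0);   // port 0 = any free port
    }

    public int port() {
        return server.getLocalPort();
    }

    // Accept clients forever on a daemon thread; each client gets its own thread.
    public void start() {
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();   // blocks until a client connects
                    new Thread(() -> handle(client)).start();
                }
            } catch (IOException ignored) {
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
    }

    private void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);                     // echo each line back
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        ThreadedEchoServer srv = new ThreadedEchoServer();
        srv.start();
        // Self-test: connect over loopback and verify the echo.
        try (Socket s = new Socket("127.0.0.1", srv.port());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());
        }
    }
}
```

A production server would bound the number of worker threads (for example with a thread pool), but the per-session isolation shown here is the essential idea.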

RESERVED SOCKETS:

Once connected, a higher-level protocol ensues, which depends on which port the
user is using. TCP/IP reserves the lower 1,024 ports for specific protocols. Port number 21 is
for FTP, 23 is for Telnet, 25 is for e-mail, 79 is for finger, 80 is for HTTP, 119 is for Netnews,
and the list goes on. It is up to each protocol to determine how a client should interact with the
port.

JAVA AND THE NET:

Java supports TCP/IP by extending the already established stream I/O
interface. Java supports both the TCP and UDP protocol families. TCP is used for reliable
stream-based I/O across the network. UDP supports a simpler, hence faster, point-to-point
datagram-oriented model.

INETADDRESS:

The InetAddress class is used to encapsulate both the numerical IP address and
the domain name for that address. User interacts with this class by using the name of an IP host,
which is more convenient and understandable than its IP address. The InetAddress class hides
the number inside. As of Java 2, version 1.4, InetAddress can handle both IPv4 and IPv6
addresses.

FACTORY METHODS:

The InetAddress class has no visible constructors. To create an InetAddress
object, the user uses one of the available factory methods. Factory methods are merely a
convention whereby static methods in a class return an instance of that class. This is done in lieu
of overloading a constructor with various parameter lists, since unique method names make the
results much clearer.

Three commonly used InetAddress factory methods are:

1. static InetAddress getLocalHost( ) throws UnknownHostException

2. static InetAddress getByName(String hostName) throws UnknownHostException

3. static InetAddress[ ] getAllByName(String hostName) throws UnknownHostException
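A minimal usage sketch of these three factory methods follows. The printed values vary by machine, so only the "localhost" lookups, which resolve without DNS access, are shown with fixed expectations.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Demonstrates the three InetAddress factory methods listed above.
public class InetAddressDemo {
    public static void main(String[] args) throws UnknownHostException {
        // Address of the machine the program runs on.
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("Local host: " + local);

        // Lookup by name; "localhost" resolves to a loopback address.
        InetAddress loopback = InetAddress.getByName("localhost");
        System.out.println("By name   : " + loopback.getHostAddress());

        // All addresses registered for a name (a host can have several).
        InetAddress[] all = InetAddress.getAllByName("localhost");
        System.out.println("Addresses : " + all.length);
    }
}
```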

INSTANCE METHODS:

The InetAddress class also has several other methods, which can be used on the
objects returned by the methods just discussed. Here are some of the most commonly used:

1. boolean equals(Object other) - Returns true if this object has the same Internet
address as other.

2. byte[ ] getAddress( ) - Returns a byte array that represents the object's
Internet address in network byte order.

3. String getHostAddress( ) - Returns a string that represents the host address
associated with the InetAddress object.

4. String getHostName( ) - Returns a string that represents the host name associated
with the InetAddress object.

5. boolean isMulticastAddress( ) - Returns true if this Internet address is a multicast
address. Otherwise, it returns false.

6. String toString( ) - Returns a string that lists the host name and the IP address for
convenience.

TCP/IP CLIENT SOCKETS:

TCP/IP sockets are used to implement reliable, bidirectional, persistent,
point-to-point, stream-based connections between hosts on the Internet. A socket can be used
to connect Java's I/O system to other programs that may reside either on the local machine or on
any other machine on the Internet.

There are two kinds of TCP sockets in Java. One is for servers, and the other
is for clients. The ServerSocket class is designed to be a "listener," which waits for clients to
connect before doing anything. The Socket class is designed to connect to server sockets and
initiate protocol exchanges.

The creation of a Socket object implicitly establishes a connection between the client and
server. There are no methods or constructors that explicitly expose the details of establishing that
connection. Here are two constructors used to create client sockets:

Socket(String hostName, int port) - Creates a socket connecting the local host to the named host
and port; can throw an UnknownHostException or an IOException.

Socket(InetAddress ipAddress, int port) - Creates a socket using a
preexisting InetAddress object and a port; can throw an IOException.

A socket can be examined at any time for the address and port information
associated with it, by use of the following methods:

 InetAddress getInetAddress( ) - Returns the InetAddress associated with
the Socket object.
 int getPort( ) - Returns the remote port to which this Socket object is
connected.
 int getLocalPort( ) - Returns the local port to which this Socket object is
bound.

Once the Socket object has been created, it can also be examined to gain access
to the input and output streams associated with it. Each of these methods can throw an
IOException if the socket has been invalidated by a loss of connection on the Net.

InputStream getInputStream( ) - Returns the InputStream associated with the
invoking socket.

OutputStream getOutputStream( ) - Returns the OutputStream associated with
the invoking socket.
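The constructors and inspection methods above can be exercised in one self-contained sketch. A throwaway ServerSocket on the loopback interface stands in for a remote server, an assumption made so the example needs no external host.

```java
import java.io.*;
import java.net.*;

// Creates a client Socket, then inspects its address, ports, and streams.
public class SocketInspection {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {          // any free port
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
                System.out.println("Remote addr: " + s.getInetAddress());
                System.out.println("Remote port: " + s.getPort());      // the server's port
                System.out.println("Local port : " + s.getLocalPort()); // ephemeral

                // Streams are available as soon as the socket is connected.
                OutputStream out = s.getOutputStream();
                InputStream in = s.getInputStream();
                System.out.println("Streams ready: " + (out != null && in != null));
            }
        }
    }
}
```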

TCP/IP SERVER SOCKETS:



Java has a different socket class that must be used for creating server applications. The
ServerSocket class is used to create servers that listen for either local or remote client programs
to connect to them on published ports. ServerSockets are quite different from normal Sockets:
when the user creates a ServerSocket, it registers itself with the system as having an interest
in client connections.

 ServerSocket(int port) - Creates a server socket on the specified port with a queue length of
50.
 ServerSocket(int port, int maxQueue) - Creates a server socket on the specified port with a
maximum queue length of maxQueue.
 ServerSocket(int port, int maxQueue, InetAddress localAddress) - Creates a server socket
on the specified port with a maximum queue length of maxQueue. On a multihomed host,
localAddress specifies the IP address to which this socket binds.
 ServerSocket has a method called accept( ), which is a blocking call that waits for a
client to initiate communications and then returns a normal Socket that is then used
for communication with the client.
URL:

The Web is a loose collection of higher-level protocols and file formats, all unified in a
web browser. One of the most important aspects of the Web is that Tim Berners-Lee devised a
scalable way to locate all of the resources of the Net. The Uniform Resource Locator (URL) is
used to name anything and everything reliably.

The URL provides a reasonably intelligible form to uniquely identify or address
information on the Internet. URLs are ubiquitous; every browser uses them to identify
information on the Web.
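Parsing a URL into its components can be demonstrated without any network access; the address used below is purely illustrative.

```java
import java.net.MalformedURLException;
import java.net.URL;

// Breaks a URL string into protocol, host, port, and file components.
public class UrlDemo {
    public static void main(String[] args) throws MalformedURLException {
        URL url = new URL("http://www.example.com:80/docs/index.html");
        System.out.println("Protocol: " + url.getProtocol()); // http
        System.out.println("Host    : " + url.getHost());     // www.example.com
        System.out.println("Port    : " + url.getPort());     // 80
        System.out.println("File    : " + url.getFile());     // /docs/index.html
    }
}
```

Note that constructing a URL only parses it; no connection is opened until a method such as openConnection( ) is called.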

CHAPTER 6

PROJECT DESCRIPTION

6.1 PROBLEM DEFINITION


In contemporary cloud-based applications, the integration of smart devices and sensors has led to
a surge in data volumes, necessitating efficient resource management strategies. However,
existing systems suffer from critical drawbacks, notably in security and performance. Security
vulnerabilities pose significant risks, potentially leading to data breaches and compromising
patient privacy. Additionally, suboptimal performance due to inefficient resource allocation
undermines system responsiveness and user satisfaction. Addressing these challenges is
imperative to ensure the reliability, scalability, and security of cloud-based healthcare systems,
facilitating effective management of user requests and data while upholding the highest standards
of privacy and performance.

6.2 MODULE DESCRIPTION


6.2.1 VM SCHEDULING

VM scheduling is a pivotal module within cloud computing systems that orchestrates the
allocation of virtual machines (VMs) to physical hosts. The primary objective of VM scheduling
is to optimize resource utilization and enhance overall system performance. This module
considers instantaneous resource usage, historical utilization patterns, and long-term
performance metrics to make informed decisions about VM placement. The effectiveness of VM
scheduling directly impacts the efficiency of cloud environments by ensuring that computational
resources are allocated judiciously, adapting dynamically to varying workloads.
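For contrast with the history-aware approach described above, a capacity-only baseline such as first-fit placement can be sketched as follows. The names and the single CPU dimension are hypothetical simplifications, not part of the project's code.

```java
// First-fit baseline: each VM goes to the first host with enough free capacity.
public class FirstFitPlacement {

    // Returns the chosen host index for each VM, or -1 if no host can take it.
    static int[] place(double[] vmDemand, double[] hostCapacity) {
        double[] free = hostCapacity.clone();
        int[] assignment = new int[vmDemand.length];
        for (int v = 0; v < vmDemand.length; v++) {
            assignment[v] = -1;
            for (int h = 0; h < free.length; h++) {
                if (free[h] >= vmDemand[v]) {   // first host with room wins
                    free[h] -= vmDemand[v];
                    assignment[v] = h;
                    break;
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        int[] a = place(new double[]{0.6, 0.5, 0.4}, new double[]{1.0, 1.0});
        System.out.println(java.util.Arrays.toString(a)); // [0, 1, 0]
    }
}
```

Because it looks only at instantaneous demand, this baseline ignores historical utilization, which is exactly the limitation the proposed scheduler targets.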

6.2.2 DATA ANALYSIS

Data analysis plays a crucial role in the proposed system, providing the foundation for informed
decision-making. This module involves the examination and interpretation of historical VM
resource utilization data over time. By employing statistical and machine learning techniques,
data analysis contributes to understanding the patterns and trends within the cloud environment.

Insights derived from this analysis inform the VM scheduling algorithm, allowing it to adapt to
changing conditions and optimize resource allocation based on past performance metrics.

6.2.3 CLASSIFICATION ALGORITHM

The classification algorithm is a key component employed to categorize and organize data in the
context of VM scheduling. This module likely involves the application of machine learning
techniques to classify VMs based on their resource utilization characteristics. The algorithm's
ability to distinguish between different classes of VMs is crucial for making informed decisions
about their placement and resource allocation within the cloud infrastructure.

6.2.4 PARTICLE SWARM OPTIMIZATION (PSO)

Particle Swarm Optimization (PSO) is a sophisticated optimization algorithm utilized in the


proposed system. This module involves the application of PSO to fine-tune the VM scheduling
process. PSO simulates the social behavior of particles in a swarm, each representing a potential
solution. By leveraging the collective intelligence of the swarm, PSO dynamically adapts the
VM scheduling algorithm, optimizing parameters to achieve enhanced efficiency and
performance.
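The canonical PSO update rule, v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x) followed by x = x + v, can be sketched on a toy one-dimensional objective. The coefficients, the objective f(x) = (x - 3)^2, and all names below are illustrative, not the project's actual tuning target.

```java
import java.util.Random;

// Minimal PSO on a convex 1-D objective to show the velocity/position updates.
public class PsoSketch {

    static double f(double x) {
        return (x - 3) * (x - 3);   // minimum at x = 3
    }

    static double optimize(int particles, int iterations, long seed) {
        Random rn = new Random(seed);
        double w = 0.5, c1 = 1.5, c2 = 1.5;   // inertia and acceleration weights
        double[] x = new double[particles];
        double[] v = new double[particles];
        double[] pbest = new double[particles];
        double gbest = 0, gbestVal = Double.MAX_VALUE;

        for (int i = 0; i < particles; i++) {
            x[i] = -5 + 10 * rn.nextDouble();  // random start in [-5, 5]
            pbest[i] = x[i];
            if (f(x[i]) < gbestVal) { gbestVal = f(x[i]); gbest = x[i]; }
        }
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < particles; i++) {
                // Velocity pulled toward the particle's own best and the swarm's best.
                v[i] = w * v[i]
                     + c1 * rn.nextDouble() * (pbest[i] - x[i])
                     + c2 * rn.nextDouble() * (gbest - x[i]);
                x[i] += v[i];
                if (f(x[i]) < f(pbest[i])) pbest[i] = x[i];
                if (f(x[i]) < gbestVal) { gbestVal = f(x[i]); gbest = x[i]; }
            }
        }
        return gbest;
    }

    public static void main(String[] args) {
        System.out.println(optimize(10, 100, 42L)); // converges near 3.0
    }
}
```

In the actual scheduler the position vector would encode a candidate VM-to-host mapping and the objective would be a placement fitness rather than a toy quadratic.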

6.2.5 OPTIMIZATION SCHEME

The optimization scheme serves as the overarching framework that integrates the various
components of the proposed system. This module encapsulates the strategy for enhancing system
performance, which may include the coordination of VM scheduling, data analysis, classification
algorithms, and PSO. The optimization scheme aims to minimize the impact of management
processes on deployed VMs by maximizing real CPU utilization and reducing the count of
physical machines. It provides a holistic approach to refining VM placement strategies and
ensuring efficient resource utilization in cloud computing environments.

6.3 SYSTEM FLOW DIAGRAM

[System flow diagram: VM scheduling interacts with the optimization schemes and a VM
selection phase; a classification algorithm labels VMs for the PSO-based optimization scheme,
while a VM resource-monitoring process tracks host workload and feeds back into VM
scheduling.]

6.4 INPUT DESIGN

The proposed energy-aware host resource management framework for virtual machines in cloud
data centers using the particle swarm optimization (PSO) algorithm requires the following
inputs:

1. Resource usage information: This information includes the CPU, memory, and network
usage of the VMs and hosts. This information is collected by the resource monitor
component of the framework.

2. VM requirements: This information includes the CPU, memory, and network


requirements of the VMs. This information is typically provided by the cloud provider or
the user.

3. Host configurations: This information includes the CPU, memory, and network capacities
of the hosts. This information is typically stored in the data centre’s management system.

4. Energy consumption model: This model is used to estimate the energy consumption of
the hosts based on their resource usage. This model can be based on historical data or on
a theoretical model.
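A commonly assumed form for item 4 is a linear model, P(u) = P_idle + (P_max - P_idle) * u, where u is CPU utilization in [0, 1]. The sketch below uses illustrative wattage figures, not measured values from any particular host.

```java
// Linear host power model: idle draw plus a utilization-proportional term.
public class LinearPowerModel {
    final double pIdle;   // watts drawn at 0% CPU utilization
    final double pMax;    // watts drawn at 100% CPU utilization

    LinearPowerModel(double pIdle, double pMax) {
        this.pIdle = pIdle;
        this.pMax = pMax;
    }

    // Estimated power draw in watts at CPU utilization u (0.0 to 1.0).
    double power(double u) {
        if (u < 0 || u > 1) {
            throw new IllegalArgumentException("u must be in [0, 1]");
        }
        return pIdle + (pMax - pIdle) * u;
    }

    public static void main(String[] args) {
        LinearPowerModel host = new LinearPowerModel(100.0, 250.0);
        System.out.println(host.power(0.0)); // 100.0 (idle)
        System.out.println(host.power(0.5)); // 175.0
        System.out.println(host.power(1.0)); // 250.0 (fully loaded)
    }
}
```

Summing this estimate over all powered-on hosts gives the datacentre-level energy figure that the placement algorithm tries to minimize.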

6.5 OUTPUT DESIGN

The proposed energy-aware host resource management framework for virtual machines in cloud
data centers using the particle swarm optimization (PSO) algorithm produces the following
outputs:

1. VM placement: This is the mapping of VMs to hosts. The framework determines the
optimal placement of VMs based on their resource requirements, the available resources
of the hosts, and the energy consumption of the hosts.

2. VM scheduling: This is the scheduling of VMs on hosts. The framework determines the
optimal scheduling of VMs on hosts based on their resource requirements, the available
resources of the hosts, and the energy consumption of the hosts.

3. Energy consumption estimates: This is an estimate of the energy consumption of the data
centre based on the VM placement and scheduling. The framework uses an energy
consumption model to estimate the energy consumption of the hosts based on their
resource usage.

4. Performance metrics: This is a set of metrics that measure the performance of the data
centre, such as throughput, latency, and response time. The framework can be used to
monitor the performance of the data centre and to identify any potential problems.

CHAPTER 7

7. SYSTEM TESTING AND IMPLEMENTATION

7.1 SYSTEM TESTING

To verify that the proposed framework meets the following requirements:

 Functionality: The framework should be able to correctly place and schedule VMs on
hosts.

 Performance: The framework should be able to place and schedule VMs in a timely
manner.

 Energy Efficiency: The framework should be able to reduce energy consumption


compared to traditional methods.

 Scalability: The framework should be able to scale to large cloud data centers.

 Reliability: The framework should be able to handle failures and recover gracefully.

7.2 SYSTEM IMPLEMENTATION

1. Resource Monitor: The resource monitor collects resource usage information from the
VMs and hosts. This information includes CPU, memory, and network usage. The
resource monitor can be implemented using a variety of tools, such as SNMP or IPMI.

2. Decision Maker: The decision maker uses the PSO algorithm to determine the optimal
VM placement and scheduling. The decision maker can be implemented using a variety
of programming languages, such as Python or Java.

3. Executor: The executor enforces the decision maker's decisions. The executor can be
implemented using a variety of tools, such as OpenStack or CloudStack.

CHAPTER 8

SYSTEM MAINTENANCE

The objective of this maintenance work is to make sure that the system keeps working at all
times without any bugs. Provision must be made for environmental changes which may affect
the computer or software system; this is called maintenance of the system. Nowadays there is
rapid change in the software world, and the system should be capable of adapting to these
changes. In this project, processes can be added without affecting other parts of the system.
Maintenance plays a vital role: the system is liable to accept any modification after its
implementation, and it has been designed to favor all new changes without affecting the
system's performance or its accuracy.

Maintenance is necessary to eliminate errors in the system during its working life and to tune the
system to any variations in its working environment. It has been seen that there are always some
errors found in the system that must be noted and corrected. It also means the review of the
system from time to time.

The review of the system is done for:

 Knowing the full capabilities of the system.

 Knowing the required changes or the additional requirements.

 Studying the performance.

TYPES OF MAINTENANCE:

 Corrective maintenance

 Adaptive maintenance

 Perfective maintenance

 Preventive maintenance

8.1 CORRECTIVE MAINTENANCE

Corrective maintenance covers changes made to a system to repair flaws in its design,
coding, or implementation; the design of the software may be changed. Corrective maintenance
is applied to correct errors that occur during operation. For example, if the user enters an invalid
file type while submitting information in a particular field, corrective maintenance displays an
error message to the user so the error can be rectified.

Maintenance is a major income source. Nevertheless, even today many organizations
assign maintenance to unsupervised beginners and less competent programmers.

The user's problems are often caused by the individuals who developed the product, not
the maintainer, and the code itself may be badly written. Maintenance is despised by many
software developers, yet unless good maintenance service is provided, the client will take future
development business elsewhere. Maintenance is the most important phase of software
production, the most difficult, and the most thankless.

8.2 ADAPTIVE MAINTENANCE:

Adaptive maintenance means changes made to a system to evolve its functionality toward
changed business needs or technologies. If there is any modification in the modules, the software
will adopt those modifications; if the user changes the server, the project will adapt to that
change and work on the modified server as it did on the existing one.

8.3 PERFECTIVE MAINTENANCE:

Perfective maintenance means changes made to a system to add new features or improve
performance. It is done to take measures to maintain special features and to enhance
performance or modify the programs to respond to the users' changing needs. The proposed
system can be extended with additional functionality easily; in this project, if the user wants to
improve performance further, the software can be easily upgraded.

8.4 PREVENTIVE MAINTENANCE:

Preventive maintenance involves changes made to a system to reduce the chances of future
system failure. Possible occurrences of errors are forecast and prevented with suitable preventive
measures. If the user wants to improve the performance of any process, new features can be
added to the system for this project.

CHAPTER 9

9. CONCLUSION

In conclusion, the developed Cloud VM scheduling algorithm, utilizing historical VM resource


utilization data and Particle Swarm Optimization (PSO), presents a promising solution to the
challenges associated with instant-based resource allocation in cloud systems. The system's
ability to learn and adapt to the evolving behavior of the environment over time contributes to
improved efficiency and performance optimization. By prioritizing overall and long-term
resource utilization and minimizing the impact of management processes on deployed VMs, the
proposed approach offers a refined strategy for VM placement. The demonstrated reduction in
the count of physical machines underscores the system's effectiveness in resource allocation.

FUTURE WORK

For future work, the proposed Cloud VM scheduling algorithm lays the foundation for several
potential enhancements and research directions. Further exploration could involve the integration
of machine learning techniques to continuously adapt the scheduling algorithm based on real-
time changes in the Cloud environment. Additionally, the system could benefit from considering
energy efficiency aspects to align with the growing importance of sustainable computing.
Exploring the application of the proposed algorithm in diverse Cloud architectures and scaling it
for larger and more complex environments would provide insights into its scalability and
generalizability.

CHAPTER 10

APPENDICES

10.1 SOURCE CODE

package power;

import java.util.List;

import java.util.ArrayList;

import org.cloudbus.cloudsim.Vm;

import org.cloudbus.cloudsim.Host;

import org.cloudbus.cloudsim.power.PowerHost;

import org.cloudbus.cloudsim.power.PowerVm;

/**

* @author admin

*/

public class Details {

    static String vms[][];
    static String host[][];

    static ArrayList Vt = new ArrayList();
    static ArrayList Ht = new ArrayList();

    //static List<Vm> vmlist=new ArrayList<Vm>();
    static List<PowerVm> vmlist = new ArrayList<PowerVm>();

    //static List<Host> hostList=new ArrayList<Host>();
    static List<PowerHost> hostList = new ArrayList<PowerHost>(); // PM

    static double Velocity[][];
    static double Position[][];

    static ArrayList request = new ArrayList();
    static ArrayList population = new ArrayList();
    static ArrayList initialpop = new ArrayList();

    static int pop = 8;
    static double pbest[] = new double[pop];
    static double gbest = 200;
    static String psobest;

    static List<PowerVm> allVM = new ArrayList<PowerVm>();
    static ArrayList newList = new ArrayList();
}

package power;

import java.awt.Color;

import org.jfree.chart.ChartFactory;

import org.jfree.chart.ChartFrame;

import org.jfree.chart.JFreeChart;

import org.jfree.chart.plot.CategoryPlot;

import org.jfree.chart.plot.PlotOrientation;

import org.jfree.chart.renderer.category.CategoryItemRenderer;

import org.jfree.data.category.DefaultCategoryDataset;

/**

* @author admin

*/

public class Graph1 {

    public void display1(double val) {
        try {
            DefaultCategoryDataset dataset = new DefaultCategoryDataset();
            dataset.setValue(183, "Existing", "Execution Time");
            dataset.setValue(val, "Proposed", "Execution Time");

            JFreeChart chart = ChartFactory.createBarChart(
                "Execution Time", "", "Time in ms", dataset,
                PlotOrientation.VERTICAL, true, true, false);
            chart.getTitle().setPaint(Color.blue);

            CategoryPlot p = chart.getCategoryPlot();
            p.setRangeGridlinePaint(Color.red);
            System.out.println("Range : " + p.getRangeAxisCount());

            CategoryItemRenderer renderer = p.getRenderer();
            renderer.setSeriesPaint(0, Color.red);
            renderer.setSeriesPaint(1, Color.green);
            // renderer.setSeriesPaint(3, Color.yellow);

            ChartFrame frame1 = new ChartFrame("Execution Time", chart);
            frame1.setSize(400, 400);
            frame1.setVisible(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void display2(double val) {
        try {
            DefaultCategoryDataset dataset = new DefaultCategoryDataset();
            dataset.setValue(2.7401, "Existing", "Energy Consumption");
            dataset.setValue(val, "Proposed", "Energy Consumption");

            JFreeChart chart = ChartFactory.createBarChart(
                "Energy Consumption", "", "Value", dataset,
                PlotOrientation.VERTICAL, true, true, false);
            chart.getTitle().setPaint(Color.blue);

            CategoryPlot p = chart.getCategoryPlot();
            p.setRangeGridlinePaint(Color.red);
            System.out.println("Range : " + p.getRangeAxisCount());

            CategoryItemRenderer renderer = p.getRenderer();
            renderer.setSeriesPaint(0, Color.BLUE);
            renderer.setSeriesPaint(1, Color.pink);
            // renderer.setSeriesPaint(3, Color.yellow);

            ChartFrame frame1 = new ChartFrame("Energy Consumption", chart);
            frame1.setSize(400, 400);
            frame1.setVisible(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
package power;

/**
 * @author admin
 */
public class Main {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        long tm1 = System.currentTimeMillis();

        VMAllocation vm = new VMAllocation();
        vm.readVM();
        vm.readHost();
        vm.createHost();
        vm.createVM();
        vm.optimiseVmAllocation();

        long tm2 = System.currentTimeMillis();
        long tim = tm2 - tm1;
        System.out.println(tim);

        Graph1 gr = new Graph1();
        gr.display1(tim);
        gr.display2(1.4332);
    }
}

package power;

import java.util.Random;

import org.cloudbus.cloudsim.power.PowerHost;
import org.cloudbus.cloudsim.power.PowerVm;
import org.cloudbus.cloudsim.power.PowerVmAllocationPolicySimple;

/**
 * @author admin
 */
public class PSO {

    Details dt = new Details();

    double weight = 0.1;
    double c1 = 1;
    double c2 = 1;
    double x_max = 4.0;
    double v_max = 4.0;
    double x_min = -0.4;
    double v_min = -4.0;
    int iter = 50;

    PSO() {
    }

    public void applyPSO() {
        try {
            Random rn = new Random();

            dt.Velocity = new double[dt.pop][dt.request.size()];
            dt.Position = new double[dt.pop][dt.request.size()];

            for (int it = 0; it < iter; it++) {
                int rk[][] = new int[dt.pop][dt.request.size()];

                // initialise position and velocity of the first particle at random
                for (int i = 0; i < dt.request.size(); i++) {
                    dt.Position[0][i] = x_min + (x_max - x_min) * rn.nextDouble();
                    dt.Velocity[0][i] = v_min + (v_max - v_min) * rn.nextDouble();
                }

                double de[][] = new double[dt.request.size()][3];
                for (int i = 0; i < dt.request.size(); i++) {
                    de[i][0] = dt.Position[0][i];
                    de[i][1] = i;
                    de[i][2] = i;
                }

                // rank the requests by position value (exchange sort)
                for (int i = 0; i < dt.request.size(); i++) {
                    for (int j = i + 1; j < dt.request.size(); j++) {
                        if (de[i][0] > de[j][0]) {
                            double t1 = de[i][0];
                            de[i][0] = de[j][0];
                            de[j][0] = t1;
                            double t2 = de[i][1];
                            de[i][1] = de[j][1];
                            de[j][1] = t2;
                        }
                    }
                }

                // map the ranked requests onto hosts
                for (int i = 0; i < dt.request.size(); i++) {
                    int k1 = (int) de[i][1];
                    int k2 = (int) de[i][2];
                    rk[0][k1] = (k2 % dt.host.length) + 1;
                }

                // evaluate the fitness of each particle
                for (int pi = 0; pi < dt.pop; pi++) {
                    String g1[] = dt.population.get(pi).toString().split("#");
                    double Cexe = 0;
                    for (int i = 0; i < g1.length; i++) {
                        String g2[] = dt.request.get(i).toString().split("#");
                        double dur = Double.parseDouble(g2[1]);
                        double res = Double.parseDouble(g2[2]) + Double.parseDouble(g2[3])
                                + Double.parseDouble(g2[4]);
                        if (res == 0) {
                            res = 1;
                        }
                        Cexe = Cexe + (Double.parseDouble(g1[i]) * (dur / res));
                        Cexe = Cexe + (dt.Position[pi][i] - dt.Velocity[pi][i]);
                    }
                    dt.pbest[pi] = Cexe;
                    if (dt.gbest < Cexe) {
                        dt.psobest = dt.population.get(pi).toString();
                        dt.gbest = Cexe;
                    }
                }

                // update velocity and position of the remaining particles
                for (int i = 0; i < dt.pop - 1; i++) {
                    for (int j = 0; j < dt.request.size(); j++) {
                        dt.Velocity[i + 1][j] = weight * dt.Velocity[i][j]
                                + c1 * rn.nextDouble() * (dt.pbest[i] - dt.Position[i][j])
                                + c2 * rn.nextDouble() * (dt.gbest - dt.Position[i][j]);
                        dt.Position[i + 1][j] = dt.Position[i][j] + dt.Velocity[i][j];
                    }
                }
            } // iter

            System.out.println("gbest final " + dt.gbest);
            System.out.println("pso best " + dt.psobest);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public double fittnessFun(PowerHost ph) {
        double uti = 0;
        try {
            PowerVmAllocationPolicySimple ps = new PowerVmAllocationPolicySimple(dt.hostList);
            for (int i = 0; i < dt.vmlist.size(); i++) {
                PowerVm vm = dt.vmlist.get(i);
                if (!dt.allVM.contains(vm)) {
                    boolean bool = ps.allocateHostForVm(vm, ph);
                    if (bool) {
                        uti = ph.getUtilizationOfRam() + ph.getUtilizationOfBw()
                                + ph.getUtilizationOfCpuMips();
                        dt.allVM.add(vm);
                        dt.newList.add(vm.getId() + "#" + ph.getId());
                    } else {
                        System.out.println("VM - " + vm.getId() + " is migrated");
                        break;
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return uti;
    }
}
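For reference, the velocity update in `applyPSO` above uses the scalar bests `pbest[i]` and `gbest` directly, whereas textbook PSO subtracts the particle's position from per-dimension *best positions*. The sketch below shows that canonical per-dimension update in isolation; the class name `PsoStep` and the array-based signature are illustrative assumptions, not part of the project code.

```java
import java.util.Random;

// Minimal sketch of the canonical PSO update (hypothetical helper, not the
// project's PSO class): pbestPos and gbestPos are per-dimension best
// positions, w is the inertia weight, c1/c2 the cognitive/social factors.
public class PsoStep {
    public static void update(double[] x, double[] v, double[] pbestPos,
                              double[] gbestPos, double w, double c1, double c2,
                              Random rn) {
        for (int j = 0; j < x.length; j++) {
            v[j] = w * v[j]
                 + c1 * rn.nextDouble() * (pbestPos[j] - x[j])   // pull toward own best
                 + c2 * rn.nextDouble() * (gbestPos[j] - x[j]);  // pull toward global best
            x[j] += v[j];                                        // move the particle
        }
    }

    public static void main(String[] args) {
        double[] x = {0.0, 1.0};
        double[] v = {0.0, 0.0};
        update(x, v, new double[]{1.0, 1.0}, new double[]{2.0, 2.0},
               0.1, 1.0, 1.0, new Random(42));
        System.out.println(x[0] + ", " + x[1]);
    }
}
```

With both best positions ahead of the particle and non-negative random factors, the particle always moves toward them, which is the behaviour the class's update loop approximates.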

package power;

import java.io.File;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.power.PowerHost;
import org.cloudbus.cloudsim.power.PowerVm;
import org.cloudbus.cloudsim.power.models.PowerModelCubic;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

/**
 * @author admin
 */
public class VMAllocation {

    Details dt = new Details();
    Datacenter dc1;
    DatacenterCharacteristics characteristics;

    public void readVM() {
        try {
            File fe = new File("vm1.txt");
            FileInputStream fis = new FileInputStream(fe);
            byte bt[] = new byte[fis.available()];
            fis.read(bt);
            fis.close();
            String g1 = new String(bt);

            System.out.println("VM List");
            System.out.println("=========================");
            System.out.println(g1);

            String g2[] = g1.split("\n");
            for (int i = 1; i < g2.length; i++) { // skip the header line
                dt.Vt.add(g2[i].trim());
            }
            dt.vms = new String[dt.Vt.size()][4];
            for (int i = 0; i < dt.Vt.size(); i++) {
                String a1[] = dt.Vt.get(i).toString().trim().split("\t");
                dt.vms[i][0] = a1[0]; // VM id
                dt.vms[i][1] = a1[1]; // VM cpu
                dt.vms[i][2] = a1[2]; // VM ram
                dt.vms[i][3] = a1[3]; // VM bw
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
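From the parsing above, `vm1.txt` (and likewise `host2.txt` in `readHost`) is assumed to be a tab-separated file with one header line followed by one `id cpu ram bw` row per machine; the exact column names are an assumption, since the report does not show the input files. The self-contained sketch below reproduces the same split-and-skip-header logic so the expected layout can be checked without CloudSim.

```java
// Hypothetical illustration of the input layout assumed by readVM()/readHost():
// a header line, then tab-separated rows of id, cpu, ram, bw.
public class VmFileSketch {
    public static String[][] parse(String content) {
        String[] lines = content.split("\n");
        String[][] out = new String[lines.length - 1][4];
        for (int i = 1; i < lines.length; i++) {      // skip the header line
            String[] a = lines[i].trim().split("\t");
            for (int k = 0; k < 4; k++) {
                out[i - 1][k] = a[k];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String sample = "id\tcpu\tram\tbw\n1\t2\t512\t1000\n2\t4\t1024\t1000";
        String[][] vms = parse(sample);
        System.out.println(vms.length + " VMs, first cpu = " + vms[0][1]);
    }
}
```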

    public void readHost() {
        try {
            File fe = new File("host2.txt");
            FileInputStream fis = new FileInputStream(fe);
            byte bt[] = new byte[fis.available()];
            fis.read(bt);
            fis.close();
            String g1 = new String(bt);

            System.out.println("Host List");
            System.out.println("=========================");
            System.out.println(g1);

            String g2[] = g1.split("\n");
            for (int i = 1; i < g2.length; i++) { // skip the header line
                dt.Ht.add(g2[i].trim());
            }
            dt.host = new String[dt.Ht.size()][4];
            for (int i = 0; i < dt.Ht.size(); i++) {
                String a1[] = dt.Ht.get(i).toString().trim().split("\t");
                dt.host[i][0] = a1[0]; // Host id
                dt.host[i][1] = a1[1]; // Host cpu
                dt.host[i][2] = a1[2]; // Host ram
                dt.host[i][3] = a1[3]; // Host bw
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void createHost() {
        try {
            Log.printLine("Starting CloudSim");
            Calendar calendar = Calendar.getInstance();
            CloudSim.init(1, calendar, false);

            String name = "DC1";
            for (int i = 0; i < dt.Ht.size(); i++) {
                String a1[] = dt.Ht.get(i).toString().split("\t");
                int id = Integer.parseInt(a1[0]);
                int cpu = Integer.parseInt(a1[1]);
                int ram1 = Integer.parseInt(a1[2]);
                int bw2 = Integer.parseInt(a1[3]);
                int storage = 100000;

                List<Pe> peList1 = new ArrayList<Pe>();
                int mips1 = cpu;
                for (int k = 0; k < cpu; k++) {
                    peList1.add(new Pe(k, new PeProvisionerSimple(mips1)));
                }

                dt.hostList.add(new PowerHost(id,
                        new RamProvisionerSimple(ram1),
                        new BwProvisionerSimple(bw2),
                        storage, peList1,
                        new VmSchedulerTimeShared(peList1),
                        new PowerModelCubic(1000, 500)));
            }

            String arch = "x86";
            String os = "Linux";
            String vmm1 = "Xen";
            double time_zone = 10.0;
            double cost = 3.0;
            double costPerMem = 0.05;
            double costPerStorage = 0.2;
            double costPerBw = 0.1;
            LinkedList<Storage> storageList = new LinkedList<Storage>();

            characteristics = new DatacenterCharacteristics(arch, os, vmm1, dt.hostList,
                    time_zone, cost, costPerMem, costPerStorage, costPerBw);
            dc1 = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(dt.hostList), storageList, 0);

            System.out.println("Data Center Created with " + dt.Ht.size() + " Hosts");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
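Each host above is given a `PowerModelCubic(1000, 500)` power model, i.e. power grows with the cube of CPU utilization on top of a static floor. The standalone sketch below mirrors that general shape so the energy numbers reported later are easier to interpret; it is a simplified illustration with an assumed static-power *fraction*, not CloudSim's exact `PowerModelCubic` implementation or its constructor semantics.

```java
// Hypothetical cubic power model: a static floor plus a cubic dynamic term.
// maxPower is the power at full load; staticFraction is the share of
// maxPower drawn even when the host is idle (both values are assumptions).
public class CubicPowerSketch {
    final double maxPower;
    final double staticFraction;

    CubicPowerSketch(double maxPower, double staticFraction) {
        this.maxPower = maxPower;
        this.staticFraction = staticFraction;
    }

    double power(double utilization) { // utilization in [0, 1]
        double dynamicRange = (1 - staticFraction) * maxPower;
        return staticFraction * maxPower + dynamicRange * Math.pow(utilization, 3);
    }

    public static void main(String[] args) {
        CubicPowerSketch m = new CubicPowerSketch(1000, 0.5);
        System.out.println("idle: " + m.power(0.0) + " W, full: " + m.power(1.0) + " W");
    }
}
```

The cubic term means a half-loaded host draws far less dynamic power than half of full load, which is why consolidating VMs onto fewer hosts saves energy in this setup.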

    public void createVM() {
        try {
            for (int i = 0; i < dt.vms.length; i++) {
                int vmid = Integer.parseInt(dt.vms[i][0]);
                int cid = Integer.parseInt(dt.vms[i][0]);
                int mips = 250;
                long size = 10000;                              // image size (MB)
                int ram = Integer.parseInt(dt.vms[i][2]);       // vm memory (MB)
                long bw = Long.parseLong(dt.vms[i][3]);
                int pesNumber = Integer.parseInt(dt.vms[i][1]); // number of cpus
                String vmm = "Xen";                             // VMM name

                PowerVm vm1 = new PowerVm(vmid, cid, mips, pesNumber, ram, bw, size, 1,
                        vmm, new CloudletSchedulerTimeShared(), 0.5);
                System.out.println("VM-" + vmid + " is Created...");
                dt.vmlist.add(vm1);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void optimiseVmAllocation() {
        try {
            for (int j = 0; j < dt.hostList.size(); j++) {
                PowerHost ph = dt.hostList.get(j);
                PSO ps = new PSO();
                double uti = ps.fittnessFun(ph);
                System.out.println("Utilization for Host - " + ph.getId() + " = " + uti);
            }

            /* Earlier variant (VM-first): walk the VM list, find a suitable host,
               and replace that host with one rebuilt from its remaining capacity.
            for (int i = 0; i < dt.vmlist.size(); i++) {
                PowerVm vm = dt.vmlist.get(i);
                long vmBW = vm.getBw();
                int vmRAM = vm.getRam();
                int vmPe = vm.getNumberOfPes();
                for (int j = 0; j < dt.hostList.size(); j++) {
                    PowerHost ph = dt.hostList.get(j);
                    if (ph.isSuitableForVm(vm)) {
                        long bw = ph.getBw() - vmBW;
                        int ram = ph.getRam() - vmRAM;
                        int pe = ph.getNumberOfPes() - vmPe;
                        List<Pe> lt = ph.getPeList();
                        for (int k = 0; k < pe; k++) {
                            lt.add(new Pe(0, new PeProvisionerSimple(pe)));
                        }
                        PowerHost newPH = new PowerHost(ph.getId(),
                                new RamProvisionerSimple(ram),
                                new BwProvisionerSimple(bw),
                                ph.getStorage(), lt,
                                new VmSchedulerTimeShared(lt),
                                new PowerModelCubic(1000, 500));
                        System.out.println(i + " : " + j + " ===== " + ram + " : " + bw + " = " + pe);
                        dt.hostList.set(j, newPH);
                        break;
                    }
                }
            }
            */

            /* A second commented-out variant performed the same capacity update
               host-first, iterating hosts in the outer loop and VMs in the inner
               loop, logging suitability with ph.isSuitableForVm(vm) before
               rebuilding the host. */
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

10.2 SCREEN SHOTS

[Screenshots of the simulation console output and of the execution-time and energy-consumption bar charts appear here in the original report.]

CHAPTER 11

11. REFERENCES

[1]. K. Agarwal and T. Kumar, “Combining virtualization and containerization to support interactive games and simulations on the cloud,” in 2nd International Conference on Intelligent Computing and Control Systems (ICICCS). IEEE, 2018.

[2]. S. Rajput and A. Arora, “Virtual machine placement in cloud data centers using a hybrid multi-verse optimization algorithm,” International Journal of Computer Applications, vol. 75, no. 10, pp. 6–12, 2013.

[3]. M. Mohamad and A. Selamat, “An energy-aware host resource management framework for two-tier virtualized cloud data centers,” in International Conference on Computer, Communications, and Control Technology (I4CT). IEEE, 2015, pp. 227–231.

[4]. J. Ramos et al., “An efficient power-aware VM allocation mechanism in cloud data centers: a micro genetic-based approach,” in Proceedings of the First Instructional Conference on Machine Learning, vol. 242. Piscataway, NJ, 2003, pp. 133–142.

[5]. T. Kumaresan and C. Palanisamy, “An energy and SLA-aware resource management strategy in cloud data centers,” International Journal of Bio-Inspired Computation, vol. 9, no. 3, pp. 142–156, 2017.

[6]. H. Kaur and S. Ajay, “Renewable energy-based multi-indexed job classification and container management scheme for sustainability of cloud data centers,” in Next Generation Computing Technologies (NGCT), 2016, pp. 516–521.

[7]. K. Toutanova and C. Cherry, “A placement architecture for a container as a service (CaaS) in a cloud environment,” in Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1. Association for Computational Linguistics, 2009, pp. 486–494.

[8]. T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, “Energy consumption optimization of container-oriented cloud computing center,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 4580–4584.

[9]. T. Mikolov and G. Zweig, “An energy, performance efficient resource consolidation scheme for heterogeneous cloud datacenters,” in 2012 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2012, pp. 234–239.

[10]. W. M. Rizky, S. Ristu, and D. Afrizal, “HeporCloud: an energy and performance efficient resource orchestrator for hybrid heterogeneous cloud computing environments,” Scientific Journal of Informatics, vol. 3, no. 2, pp. 41–50, Nov. 2016.
