CHAPTER 1
1. INTRODUCTION
In the dynamic landscape of cloud computing, the efficient management of resources within
virtualized data centers has become increasingly crucial. The escalating demand for computing
resources, coupled with the growing awareness of environmental sustainability, underscores the
need for innovative frameworks that not only optimize performance but also prioritize energy
efficiency. This is particularly relevant in the context of two-tier virtualized cloud data centers,
where the challenges of balancing resource allocation and minimizing energy consumption are
pronounced. In response to these challenges, an energy-aware host resource management
framework emerges as a promising solution. This framework aims to strike a delicate balance
between enhancing the overall performance of virtualized environments and mitigating the
environmental impact by intelligently allocating and managing computing resources. This
introduction sets the stage for a deeper exploration of the intricacies and benefits of such a
framework in the context of two-tier virtualized cloud data centers.
In the era of rapid digital transformation, cloud data centers have emerged as the backbone of
modern computing infrastructure, revolutionizing the way businesses and individuals access and
manage data. These centers represent a pivotal shift from traditional, on-premises data storage
and processing to scalable and flexible computing environments hosted remotely. Cloud data
centers provide a vast array of services, ranging from storage and computation to networking and
analytics, enabling organizations to dynamically scale their resources based on demand. The
inherent advantages of scalability, cost efficiency, and accessibility have made cloud data centers
indispensable in today's interconnected world. As the demand for cloud services continues to
soar, understanding the intricacies of these data centers becomes essential for businesses and
technology enthusiasts alike. This introduction lays the groundwork for delving into the
multifaceted world of cloud data centers, exploring their architecture, functionalities, and the
transformative impact they wield in the realm of information technology.
In the contemporary landscape of technology and industry, energy consumption stands at the
forefront of global considerations. As societies increasingly rely on advanced technologies for
their daily operations, the demand for energy continues to escalate. From powering homes and
businesses to supporting the vast network of data centers that underpin our digital infrastructure,
the challenge lies not only in meeting these energy needs but also in doing so sustainably. The
environmental impact of energy consumption, particularly in the context of computing and data
processing, has become a significant concern. As the world transitions to more digital and
interconnected systems, understanding and addressing the energy implications of these
advancements are essential. This introduction sets the stage for an exploration of the
complexities surrounding energy consumption, emphasizing the critical need for innovative
solutions and frameworks that promote efficiency and sustainability across various sectors.
1.4 RESOURCE MANAGEMENT
1.5 OBJECTIVES
• Reduce energy consumption: The framework aims to reduce the overall energy
consumption of the cloud data center by consolidating containers on fewer hosts and
turning off idle hosts.
• Meet latency requirements: The framework ensures that latency requirements for all
containers are met by placing containers on hosts that are close to their users.
• Improve performance: The framework aims to improve the overall performance of the
cloud data center by balancing the resource utilization of hosts.
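The consolidation idea behind the first objective can be sketched as follows. This is an illustrative first-fit-decreasing packing with assumed host capacities and container CPU demands, not the framework's actual placement algorithm:

```java
import java.util.*;

// Illustrative sketch: pack container CPU demands onto the fewest hosts
// (first-fit decreasing) so that the remaining hosts can be powered off.
// Capacities and demands are assumed example values.
public class Consolidation {
    // Returns the per-host lists of placed demands; any host not present
    // in the result stays switched off.
    static List<List<Double>> pack(double hostCapacity, List<Double> demands) {
        List<Double> sorted = new ArrayList<>(demands);
        sorted.sort(Collections.reverseOrder());       // largest demand first
        List<Double> used = new ArrayList<>();         // used capacity per active host
        List<List<Double>> placement = new ArrayList<>();
        for (double d : sorted) {
            int target = -1;
            for (int h = 0; h < used.size(); h++) {
                if (used.get(h) + d <= hostCapacity) { target = h; break; }
            }
            if (target < 0) {                          // power on a new host
                used.add(0.0);
                placement.add(new ArrayList<>());
                target = used.size() - 1;
            }
            used.set(target, used.get(target) + d);
            placement.get(target).add(d);
        }
        return placement;
    }

    public static void main(String[] args) {
        // Six containers, host capacity 1.0 => two active hosts instead of six.
        List<List<Double>> p = pack(1.0, Arrays.asList(0.5, 0.4, 0.3, 0.3, 0.2, 0.2));
        System.out.println("Active hosts: " + p.size());
    }
}
```

In this toy run, the six containers fit on two active hosts, so four hosts can be turned off, which is the source of the energy saving.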
CHAPTER 2
2. LITERATURE REVIEW
Sean C: In order to address the challenges posed by the integration of game-based and virtual
software simulators into traditional networks, various organizations spanning the entertainment
industry, energy and financial sectors, military, and video gaming have turned to the powerful
capabilities of High-Performance Computing (HPC). The inherent capacity of HPC to handle
compute-intensive tasks makes it an attractive platform for running interactive simulations.
However, the focus of this work goes beyond the conventional use of HPC, aiming to explore the
feasibility of transitioning from a traditional HPC environment to a cloud-based service. This
transition is intended to enable the support of multiple simultaneous interactive simulations while
maintaining high-performance standards. The primary objective of this research is to broaden the
scope of applicable software within an HPC environment, ensuring that the transition to a cloud-
based service does not compromise performance efficacy. To achieve this goal, the study delves
into four distinct HPC load-balancing techniques. These techniques leverage virtualization,
software containers, and clustering to efficiently analyse, schedule, and execute game-based
simulation applications concurrently. The overarching aim is to determine the optimal approach
for extending HPC capabilities to accommodate the demands of multiple interactive simulations.
In the pursuit of the proposed HPC goal, the research places a particular emphasis on
experimenting with and evaluating these load-balancing techniques. Virtualization, software
containers, and clustering are assessed for their individual and collective performance in
handling the unique requirements of game-based simulations. The comparison of these
techniques is critical to understanding their respective strengths and weaknesses, thereby aiding
in the determination of the most suitable deployment technique. In making this determination,
several factors come into play. The availability of cluster resources, the number of competing
software jobs, and the specific characteristics of the software being scheduled all influence the
choice of deployment technique. By thoroughly investigating these considerations, the research
aims to provide valuable insights into the feasibility of extending HPC capabilities to a cloud-
based service, ensuring that the chosen approach aligns with the requirements of diverse
simulation applications.
Sasan Gharehpasha: Cloud computing has revolutionized the landscape of computing by offering
a paradigm where a vast array of systems is interconnected in either private or public networks to
deliver dynamically scalable infrastructure for applications, data, and file storage. This
technological advancement has brought about a substantial reduction in the costs associated with
power consumption, application hosting, content storage, and resource delivery. Cloud
computing allows businesses to concentrate on their core goals without the need to continually
expand hardware resources, marking a significant shift in how computing resources are
provisioned and managed. One of the ongoing challenges in cloud computing, particularly in
cloud data centers, is the efficient placement of virtual machines on physical machines. The
optimal placement of virtual machines is crucial for managing resources effectively and
preventing wastage. In this context, a novel approach is introduced, combining the hybrid
discrete multi-object whale optimization algorithm with the multi-verse optimizer enhanced by
chaotic functions. The primary objective of this approach is twofold: firstly, to reduce power
consumption by minimizing the number of active physical machines in cloud data centers, and
secondly, to enhance resource management by strategically placing virtual machines over
physical ones. By reducing power consumption and preventing resource wastage through optimal
virtual machine placement, this approach aims to address critical issues in cloud data centre
management. Moreover, it seeks to mitigate the increasing rate of virtual migration to physical
machines, contributing to the overall efficiency and sustainability of cloud computing
environments. To validate the efficacy of the proposed algorithm, a comparative analysis is
conducted against existing algorithms such as first fit, VMPACS, and MBFD. The results
obtained from this comparative study provide valuable insights into the performance and
superiority of the proposed approach in achieving optimal virtual machine placement within
cloud data centers. This research contributes to the ongoing efforts to enhance the efficiency and
sustainability of cloud computing infrastructures through innovative algorithms and strategies.
Chi Zhang: The energy consumption of cloud data centers is a critical challenge that poses
constraints on the future growth of the cloud computing industry. This paper addresses this issue
with a specific focus on resource management within two-tier virtualized data centers, where
containers are deployed on virtual machines (VMs). The working model of the two-tier
virtualized data center is first defined, emphasizing the deployment of containers on VMs to
enhance resource utilization on hosts while ensuring the isolation and security of different jobs.
To address the energy consumption challenge, an energy-aware host resource management
framework is proposed. This framework comprises two key algorithms. The initial static
placement algorithm involves load balancing alternate placement and two-sided matching
methods. These methods are instrumental in efficiently placing containers onto VMs and VMs
onto hosts, optimizing the resource utilization of the entire system. The runtime dynamic
consolidation algorithm, building upon the initial placement, dynamically consolidates resources
to utilize the least active hosts in real-time, meeting the dynamic resource requirements of
containers. Simulation experiments are conducted using real workload traces to compare the
proposed algorithms with existing ones. The results demonstrate that the two algorithms exhibit
superior performance in terms of host resource utilization, the number of active hosts, the
number of container migrations, and Service Level Agreement (SLA) metrics. Importantly, the
entire framework achieves a notable energy-saving effect of at least 13.8%, showcasing its
efficacy in addressing the energy consumption challenge in cloud data centers. This research
contributes to the broader discourse on sustainable and efficient cloud computing by providing a
detailed exploration of resource management strategies in two-tier virtualized data centers. The
demonstrated improvements in resource utilization and energy efficiency underscore the
potential of the proposed framework to contribute to the long-term sustainability of cloud
computing infrastructures.
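A load-balancing placement step of the general kind described above can be illustrated with a simple heuristic. This is a generic worst-fit selection with assumed capacities, not the paper's load balancing alternate placement or two-sided matching method:

```java
// Hedged sketch of a balance-oriented placement step (not the paper's exact
// algorithm): each container goes to the VM with the most spare capacity,
// which keeps VM utilizations even.
public class BalancedPlacement {
    // vmUsed and vmCapacity are parallel arrays of current usage and capacity.
    static int chooseVm(double[] vmUsed, double[] vmCapacity, double demand) {
        int best = -1;
        double bestSpare = -1;
        for (int v = 0; v < vmUsed.length; v++) {
            double spare = vmCapacity[v] - vmUsed[v];
            if (spare >= demand && spare > bestSpare) { bestSpare = spare; best = v; }
        }
        return best; // -1 means no VM can host the container
    }

    public static void main(String[] args) {
        double[] used = {0.6, 0.2, 0.4};
        double[] cap  = {1.0, 1.0, 1.0};
        System.out.println(chooseVm(used, cap, 0.3)); // VM 1 has the most spare capacity
    }
}
```

The runtime consolidation phase would then do the reverse: drain the least-loaded hosts by migrating their containers elsewhere so the hosts can be deactivated.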
The drive towards reducing greenhouse gas emissions has given rise to the concept
of green computing, where the focus is not only on enhancing computational performance but
also on minimizing the ecological footprint of data centre operations. A key strategy in achieving
this goal is the implementation of power-aware methods to strategically allocate virtual machines
(VMs) within the physical resources of data centers. Virtualization emerges as a promising
technology to facilitate power-aware VM allocation methods, given its ability to abstract and
manage resources efficiently. However, the allocation of VMs to physical hosts poses a complex
problem known to be NP-complete. In response to this challenge, evolutionary algorithms have
been employed as effective tools to tackle such optimization problems. This paper contributes to
the ongoing discourse by introducing a micro-genetic algorithm designed specifically for the
purpose of selecting optimal destinations among physical hosts for VM allocation. The micro-
genetic algorithm presented in this work is a sophisticated approach aimed at addressing the NP-
completeness of VM allocation. Through extensive evaluations in a simulation environment, the
study demonstrates the efficacy of the micro-genetic algorithm in significantly improving power
consumption metrics when compared to alternative methods. The results highlight the
algorithm's ability to make informed and efficient decisions in the allocation of VMs, leading to
tangible enhancements in power efficiency. By showcasing the valuable improvements achieved
through the micro-genetic algorithm, this research underscores the importance of leveraging
evolutionary approaches in solving complex optimization problems in the realm of cloud
computing. The findings contribute not only to the technical aspects of power-aware VM
allocation but also to the broader objective of establishing more sustainable and environmentally
conscious practices within data centre operations.
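A bare-bones micro-genetic algorithm of the general kind described above can be sketched as follows. The tiny population, operators, fitness function, and penalty weight are illustrative assumptions, not the paper's design:

```java
import java.util.*;

// Illustrative micro-genetic algorithm (tiny population, elitism, uniform
// crossover, mutation) for assigning VMs to hosts. Fitness counts active
// hosts plus a penalty for overloaded hosts; lower is better.
public class MicroGa {
    static final Random RNG = new Random(42); // fixed seed for reproducibility

    static double fitness(int[] assign, double[] vmLoad, double hostCap, int hosts) {
        double[] load = new double[hosts];
        for (int v = 0; v < assign.length; v++) load[assign[v]] += vmLoad[v];
        double score = 0;
        for (double l : load) {
            if (l > 0) score += 1.0;                        // active host cost
            if (l > hostCap) score += 10.0 * (l - hostCap); // overload penalty
        }
        return score;
    }

    static int[] evolve(double[] vmLoad, double hostCap, int hosts, int generations) {
        int pop = 5, n = vmLoad.length;                     // micro-GA: tiny population
        int[][] P = new int[pop][n];
        for (int[] ind : P) for (int v = 0; v < n; v++) ind[v] = RNG.nextInt(hosts);
        int[] best = P[0].clone();
        for (int g = 0; g < generations; g++) {
            Arrays.sort(P, Comparator.comparingDouble(
                    (int[] ind) -> fitness(ind, vmLoad, hostCap, hosts)));
            if (fitness(P[0], vmLoad, hostCap, hosts) < fitness(best, vmLoad, hostCap, hosts))
                best = P[0].clone();
            int[][] next = new int[pop][n];
            next[0] = P[0].clone();                         // elitism: keep the best
            for (int i = 1; i < pop; i++) {
                int[] a = P[RNG.nextInt(2)], b = P[RNG.nextInt(pop)];
                for (int v = 0; v < n; v++)                 // uniform crossover
                    next[i][v] = RNG.nextBoolean() ? a[v] : b[v];
                if (RNG.nextInt(4) == 0)                    // occasional mutation
                    next[i][RNG.nextInt(n)] = RNG.nextInt(hosts);
            }
            P = next;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] vms = {0.5, 0.5, 0.4, 0.3, 0.2};
        int[] best = evolve(vms, 1.0, 5, 200);
        System.out.println("best fitness: " + fitness(best, vms, 1.0, 5));
    }
}
```

Because VM allocation is NP-complete, such a heuristic search cannot guarantee the optimum, but it typically finds low-cost assignments quickly even for large instances.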
Chi Zhang: The imperative for cloud providers to enhance investment yield by reducing the
energy consumption of data centers is balanced with the essential need to ensure that services
delivered meet diverse consumer requirements. This paper introduces a comprehensive resource
management strategy geared towards simultaneous reduction of energy consumption and
minimization of Service Level Agreement (SLA) violations in cloud data centers. The strategy
encompasses three refined methods tailored to address sub problems within the dynamic virtual
machine (VM) consolidation process. To enhance the effectiveness of host detection and
improve VM selection outcomes, the proposed strategy employs innovative approaches. Firstly,
the overloaded host detection method introduces a dynamic independent saturation threshold for
each host, accounting for CPU utilization trends. Secondly, the underutilized host detection
method integrates multiple factors beyond CPU utilization, incorporating the Naive Bayesian
classifier to calculate combined weights for hosts during prioritization. Lastly, the VM selection
method takes into account both current CPU usage and the anticipated future growth space of
CPU demand for VMs. The performance of the proposed strategy is rigorously evaluated through
simulation in CloudSim, and comparative analysis is conducted against five existing energy-
saving strategies using real-world workload traces. The experimental results demonstrate the
superior performance of the proposed strategy, showcasing minimal energy consumption and
SLA violations in comparison to other strategies. This outcome underscores the effectiveness of
the refined methods employed within the resource management strategy in achieving the dual
objectives of energy efficiency and SLA adherence. By surpassing existing energy-saving
strategies in both energy consumption reduction and SLA compliance, this research makes a
significant contribution to the ongoing efforts in optimizing resource management in cloud data
centers. The proposed strategy not only aligns with the imperative of energy efficiency but also
underscores the importance of delivering reliable services to consumers, thus striking a balance
between environmental sustainability and service quality in the realm of cloud computing.
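The idea of a dynamic, trend-aware overload threshold can be illustrated with a toy check. The trend estimate and tightening rule here are assumptions for illustration, not the paper's formulas:

```java
// Hedged sketch of trend-aware overload detection: the threshold tightens
// when a host's recent CPU utilization is trending upward, so rising hosts
// are flagged earlier than flat ones at the same level.
public class OverloadDetector {
    // history: recent CPU utilizations in [0,1], oldest first.
    static boolean isOverloaded(double[] history, double baseThreshold) {
        double current = history[history.length - 1];
        double trend = current - history[0];                   // simple trend estimate
        double threshold = baseThreshold - Math.max(0, trend); // tighten if rising
        return current > threshold;
    }

    public static void main(String[] args) {
        // A rising host trips the detector earlier than a flat one at the same level.
        System.out.println(isOverloaded(new double[]{0.5, 0.6, 0.75}, 0.9));   // true
        System.out.println(isOverloaded(new double[]{0.75, 0.75, 0.75}, 0.9)); // false
    }
}
```

Flagging rising hosts early gives the consolidation logic time to migrate VMs before an SLA violation actually occurs.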
Gagangeet Singh Aujla: The landscape of modern computing has been significantly shaped by the
rise of Cloud Computing, offering on-demand services to end-users. However, the widespread
use of geo-distributed data centers to perform computing tasks raises concerns about the
substantial energy consumption associated with their operations. Addressing these energy-related
challenges in cloud environments has become imperative, and the integration of renewable
energy resources, coupled with strategic server selection and consolidation, presents a promising
avenue for mitigation. This paper introduces a novel approach, a renewable energy-aware multi-
indexed job classification and scheduling scheme, leveraging Container as-a-Service (CoaaS) for
sustainability in data centers. The proposed scheme aims to optimize energy usage by directing
incoming workloads from various devices to data centers equipped with a sufficient supply of
renewable energy. To achieve this, the paper outlines a renewable energy-based host selection
and container consolidation scheme, contributing to the overarching goal of energy efficiency
and sustainability in cloud computing environments. The effectiveness of the proposed scheme is
rigorously evaluated using real-world Google workload traces. The results demonstrate a
substantial improvement over existing schemes of similar categories, with energy savings
reaching 15%, 28%, and 10.55%. These findings underscore the viability and efficiency of the
renewable energy-aware multi-indexed job classification and scheduling scheme in enhancing
sustainability within data centers, while simultaneously achieving significant energy savings. By
presenting a solution that integrates renewable energy considerations, host selection, and
container consolidation, this research contributes to the ongoing discourse on sustainable
practices in cloud computing. The demonstrated improvements in energy savings validate the
potential of the proposed scheme to not only address current energy-related challenges but also
to serve as a benchmark for future advancements in environmentally conscious data centre
operations.
Another study targets minimizing the number of instantiated VMs and active PMs in a cloud environment. The
proposed placement architecture employs scheduling heuristics, specifically Best Fit (BF) and
Max Fit (MF), based on a fitness function that concurrently evaluates the remaining resource
waste of both PMs and VMs. Additionally, a meta-heuristic placement algorithm is introduced,
leveraging Ant Colony Optimization based on Best Fit (ACO-BF) with the proposed fitness
function. Experimental results indicate that the proposed ACO-BF placement algorithm
outperforms the BF and MF heuristics, showcasing significant improvements in resource
utilization for both VMs and PMs. By incorporating a fitness function that considers the
simultaneous evaluation of PMs and VMs, the ACO-BF algorithm demonstrates its efficacy in
optimizing resource allocation within a cloud environment. This research contributes to the
ongoing efforts to optimize container placement in cloud environments, highlighting the
importance of considering the utilization of both VMs and PMs for enhanced resource efficiency.
The proposed algorithms pave the way for more effective and balanced resource utilization in
cloud-based containerized applications.
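A fitness function that scores remaining resource waste might take the following shape. The imbalance-based formula is an assumed illustration in the spirit of the description above, not necessarily the paper's exact function:

```java
// Illustrative resource-waste score for a machine: the imbalance between
// leftover CPU and leftover memory, normalized by total use. A machine with
// evenly matched leftovers wastes less, because both resources can still be
// consumed together by future placements.
public class ResourceWaste {
    static double waste(double cpuUsed, double memUsed, double cpuCap, double memCap) {
        double leftCpu = (cpuCap - cpuUsed) / cpuCap;
        double leftMem = (memCap - memUsed) / memCap;
        double used = cpuUsed / cpuCap + memUsed / memCap;
        if (used == 0) return 0;            // an empty machine wastes nothing yet
        return Math.abs(leftCpu - leftMem) / used;
    }

    public static void main(String[] args) {
        // Balanced leftovers waste less than skewed ones.
        System.out.println(waste(0.8, 0.8, 1.0, 1.0)); // 0.0
        System.out.println(waste(0.9, 0.2, 1.0, 1.0)); // much larger
    }
}
```

A heuristic such as Best Fit, or an ACO construction step, would then prefer the candidate placement that minimizes this waste summed over PMs and VMs.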
Experiments substantiate the effectiveness of the proposed method. The results demonstrate that the
introduced approach significantly reduces the energy consumption of the data centre compared to
traditional strategies such as random scheduling and maximum utilization. The emphasis on
practical differences between physical machines in real-world environments sets this improved
virtual migration strategy apart, offering a tailored solution to address the energy consumption
challenges in container-based cloud computing. In conclusion, this paper not only refines
classical virtual migration strategies but also introduces a pragmatic and easily implementable
algorithm for reducing energy consumption in containerized cloud computing environments. The
simplicity and efficacy of the proposed approach make it a noteworthy candidate for widespread
adoption, contributing to the ongoing efforts to make container-based cloud computing more
energy-efficient and sustainable.
Ayaz Ali Khan: Datacentres, as the primary electricity consumers for cloud computing, play a
pivotal role in providing the IT backbone for contemporary businesses and economies. However,
studies indicate that a significant portion of servers in U.S. datacentres are underutilized or idle,
presenting an opportunity for energy savings through resource consolidation techniques. The
challenge lies in the fact that consolidation, involving migrations of virtual machines (VMs),
containers, and/or applications, can be costly in terms of both energy consumption and
performance loss. This paper addresses this challenge by proposing a consolidation algorithm
that prioritizes the most effective migration among VMs, containers, and applications.
Additionally, the study investigates how migration decisions can be made to save energy without
negatively impacting service performance. Through a series of experiments utilizing real
workload traces for 800 hosts, approximately 1,516 VMs, and over a million containers, the
paper evaluates the impact of different migration approaches on datacentre energy consumption
and performance. The findings highlight a trade-off between migrating containers and virtual
machines, where migrating virtual machines tends to be more performance-efficient, while
migrating containers can be more energy-efficient. The study also suggests that migrating
containerized applications, running within virtual machines, could lead to an energy and
performance-efficient consolidation technique in large-scale datacentres. The evaluation results
indicate that migrating applications may be approximately 5.5% more energy-efficient and
11.9% more performance-efficient than VM migration. Furthermore, energy and performance-
efficient consolidation is approximately 14.6% more energy-efficient and 7.9% more
performance-efficient than application migration. The study generalizes these findings through
repeatable experiments across various workloads, resources, and datacentre setups. In
conclusion, the research sheds light on the nuanced trade-offs involved in migration decisions for
datacentre consolidation. By proposing a consolidation algorithm and providing insights into the
energy and performance efficiencies of different migration approaches, the paper contributes to
the ongoing efforts to optimize resource usage in large-scale datacentres.
Ayaz Ali Khan: In major Information Technology (IT) companies like Google, Rackspace, and
Amazon Web Services (AWS), the execution of customers' workloads and applications relies
heavily on virtualization and containerization technologies. These companies operate large-scale
datacentres that provide computational resources, but the substantial energy consumption of
these datacentres raises ecological concerns. Each company employs different approaches, with
Google utilizing containers, Rackspace offering bare-metal hardware, and AWS employing a
mix of virtual machines (VMs), containers (ECS), and containers inside VMs (Lambda). This
diversity in technology usage makes resource management a complex task. Effective resource
management is crucial, especially in hybrid platforms where various sandboxing technologies
like bare-metal, VMs, containers, and nested containers coexist. The absence of centralized,
workload-aware resource managers and consolidation policies raises questions about datacentres
energy efficiency, workload performance, and user costs. This paper addresses these concerns
through a series of experiments using Google workload data for 12,583 hosts and approximately
one million tasks across four different types of workloads. The focus is on demonstrating the
potential benefits of using workload-aware resource managers in hybrid clouds, achieving energy
and cost savings in heterogeneous hybrid datacentres without negatively impacting workload
performance. The paper also explores how different allocation policies, combined with various
migration approaches, impact datacentres energy and performance efficiencies. The empirical
evaluation, based on plausible assumptions for hybrid datacentres setups, reveals compelling
results. In scenarios with no migration, a single scheduler is found to be up to 16.86% more
energy-efficient than distributed schedulers. However, when migrations are considered, the
proposed resource manager demonstrates the potential to save up to 45.61% energy and improve
workload performance by up to 17.9%. In conclusion, the research highlights the significance of
workload-aware resource managers in optimizing energy efficiency and cost savings in
heterogeneous hybrid datacentres. The findings provide valuable insights for IT companies
seeking to enhance the performance and sustainability of their datacentres operations.
CHAPTER 3
SYSTEM ANALYSIS
In distributed environments, cloud computing is widely used to manage user requests for
resources and services. Resource scheduling handles these requests based on priorities within a
given time frame. Today, management in every industry relies on smart devices connected to the
internet. These devices must handle the massive amounts of data sensed and processed by smart
medical sensors without sacrificing performance factors such as throughput and latency. This has
prompted the need for load balancing among the smart operational devices to prevent
unresponsiveness. Load balancing is used to manage large amounts of data in both centralized
and distributed settings. Learning-based algorithms such as GA, SARSA, and Q-learning are
applied to resource scheduling; these algorithms predict near-optimal solutions for managing
load in cloud-based applications. The main drawbacks of the existing system are weak security
and low performance.
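The Q-learning update behind such schedulers can be sketched in a few lines. The two-state, two-action load model and the reward signal below are toy assumptions chosen only to show the update rule Q(s,a) += alpha * (r + gamma * max Q(s',a') - Q(s,a)):

```java
// Minimal tabular Q-learning of the kind mentioned above, applied to a toy
// scheduling problem: 2 load states (low/high) x 2 actions (run locally/offload).
public class QLearning {
    static double[][] q; // Q-table: q[state][action]

    static void update(int s, int a, double reward, int sNext, double alpha, double gamma) {
        double maxNext = 0;
        for (double v : q[sNext]) maxNext = Math.max(maxNext, v);
        q[s][a] += alpha * (reward + gamma * maxNext - q[s][a]);
    }

    public static void main(String[] args) {
        q = new double[2][2];
        // Repeatedly reward "offload" (action 1) in the high-load state (state 1).
        for (int i = 0; i < 100; i++) update(1, 1, 1.0, 0, 0.5, 0.9);
        System.out.println(q[1][1] > q[1][0]); // true: offloading is learned under load
    }
}
```

In a real scheduler the state would encode host and queue utilization, the actions would be candidate placements, and the reward would combine throughput, latency, and energy terms.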
3.1.1 DRAWBACKS
1. Existing systems exhibit vulnerabilities leading to potential data breaches and privacy
concerns.
2. Limited scalability hampers the ability to handle increasing data volumes and user
demands effectively.
3.2.1 ADVANTAGES
• The algorithm takes into account the already running VM resource usage over time to
optimize the placement of VMs. This can lead to improved performance for the VMs, as
they are less likely to be placed on hosts that are already overloaded.
• The algorithm uses PSO, which is a metaheuristic algorithm that is known for its ability
to find good solutions to complex problems. This makes the algorithm more likely to find
a good VM placement solution, even in large and complex cloud systems.
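The PSO mechanics behind the second advantage can be sketched as follows. A simple sphere function stands in for the VM-placement fitness, and the swarm parameters are common textbook defaults rather than values from this system:

```java
import java.util.*;

// Bare-bones particle swarm optimization (PSO). The sphere function below is
// a stand-in objective; in the proposed system the fitness would instead score
// a candidate VM placement using the observed VM resource usage over time.
public class Pso {
    static double fitness(double[] x) {      // stand-in objective: minimize sum of squares
        double s = 0;
        for (double v : x) s += v * v;
        return s;
    }

    static double[] optimize(int dim, int particles, int iters, long seed) {
        Random rng = new Random(seed);
        double[][] pos = new double[particles][dim], vel = new double[particles][dim];
        double[][] pBest = new double[particles][dim];
        double[] gBest = null;
        for (int i = 0; i < particles; i++) {
            for (int d = 0; d < dim; d++) pos[i][d] = rng.nextDouble() * 10 - 5;
            pBest[i] = pos[i].clone();
            if (gBest == null || fitness(pos[i]) < fitness(gBest)) gBest = pos[i].clone();
        }
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < particles; i++) {
                for (int d = 0; d < dim; d++) {
                    vel[i][d] = 0.7 * vel[i][d]                                   // inertia
                              + 1.5 * rng.nextDouble() * (pBest[i][d] - pos[i][d]) // cognitive
                              + 1.5 * rng.nextDouble() * (gBest[d] - pos[i][d]);   // social
                    pos[i][d] += vel[i][d];
                }
                if (fitness(pos[i]) < fitness(pBest[i])) pBest[i] = pos[i].clone();
                if (fitness(pos[i]) < fitness(gBest)) gBest = pos[i].clone();
            }
        }
        return gBest;
    }

    public static void main(String[] args) {
        double[] best = optimize(2, 20, 200, 7);
        System.out.println("best fitness: " + fitness(best));
    }
}
```

Each particle is a candidate solution pulled toward its own best position and the swarm's best; swapping in a placement-scoring fitness turns the same loop into a VM placement search.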
Preliminary investigation examines project feasibility: the likelihood that the system will be useful to
the organization. The main objective of the feasibility study is to test the technical, operational,
and economical feasibility of adding new modules and debugging the old running system. Any
system is feasible given unlimited resources and infinite time. The following aspects form the
feasibility study portion of the preliminary investigation:
Technical Feasibility
Operational Feasibility
Economical Feasibility
The technical issues usually raised during the feasibility stage of the investigation include
the following:
The system provides the technical guarantee of accuracy, reliability, and security. The
software and hardware requirements for the development of this project are modest and are either
already available in-house at NIC or available free as open source. The work for the project can
be done with the current equipment and existing software technology. The necessary bandwidth
exists to provide fast feedback to users irrespective of the number of users on the system.
Proposed projects are beneficial only if they can be turned into information systems that
meet the organization's operating requirements. Operational feasibility is an important part of
project implementation. Some of the important issues raised to test the operational feasibility of
a project include the following:
A well-planned design would ensure the optimal utilization of computer resources and
help improve overall performance.
A system that can be developed technically, and that will be used if installed, must still be
a good investment for the organization. In the economic feasibility study, the development cost
of creating the system is evaluated against the ultimate benefit derived from the new system.
Financial benefits must equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or
software. Since the interface for this system is developed using the existing resources and
technologies available at NIC, expenditure is nominal and economic feasibility is certain.
CHAPTER 4
SYSTEM SPECIFICATION
RAM size : 8 GB
CHAPTER 5
SOFTWARE DESCRIPTION
The software requirement specification is created at the end of the analysis task. The function
and performance allocated to software as part of system engineering are refined by establishing
a complete information description, a functional representation, a representation of system
behavior, an indication of performance requirements and design constraints, and appropriate
validation criteria.
FEATURES OF JAVA
The following figure depicts a Java program, such as an application or applet, running on the
Java platform. As the figure shows, the Java API and the Java Virtual Machine insulate the Java
program from hardware dependencies.
SOCKET OVERVIEW:
A network socket is a lot like an electrical socket. Various plugs around the network have
a standard way of delivering their payload. Anything that understands the standard protocol can
“plug in” to the socket and communicate.
Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets
and sends them to an address across a network; it does not guarantee delivery of those packets
to the destination.
Transmission Control Protocol (TCP) is a higher-level protocol that manages the reliable
transmission of data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can
be used directly to support fast, connectionless, unreliable transport of packets.
CLIENT/SERVER:
A server is anything that has some resource that can be shared. There are compute
servers, which provide computing power; print servers, which manage a collection of printers;
disk servers, which provide networked disk space; and web servers, which store web pages. A
client is simply any other entity that wants to gain access to a particular server.
A server process is said to “listen” to a port until a client connects to it. A server
is allowed to accept multiple clients connected to the same port number, although each session is
unique. To manage multiple client connections, a server process must be multithreaded or have
some other means of multiplexing the simultaneous I/O.
RESERVED SOCKETS:
TCP/IP reserves the lower 1,024 ports for specific protocols: for example, port 21 is for FTP,
23 is for Telnet, 25 is for e-mail, 80 is for HTTP, and the list goes on. It is up to each protocol
to determine how a client should interact with the port.
Java supports TCP/IP both by extending the already established stream I/O
interface and by adding the features required to build I/O objects across the network. Java
supports both the TCP and UDP protocol families. TCP is used for reliable stream-based I/O
across the network. UDP supports a simpler, hence faster, point-to-point, datagram-oriented model.
INETADDRESS:
The InetAddress class is used to encapsulate both the numerical IP address and
the domain name for that address. User interacts with this class by using the name of an IP host,
which is more convenient and understandable than its IP address. The InetAddress class hides
the number inside. As of Java 2, version 1.4, InetAddress can handle both IPv4 and IPv6
addresses.
FACTORY METHODS:
The InetAddress class has no visible constructors; instances are obtained through factory
methods, each of which can throw an UnknownHostException:

static InetAddress getLocalHost( ) throws UnknownHostException
static InetAddress getByName(String hostName) throws UnknownHostException
static InetAddress[ ] getAllByName(String hostName) throws UnknownHostException
INSTANCE METHODS:
The InetAddress class also has several other methods, which can be used on the
objects returned by the methods just discussed. Here are some of the most commonly used.
boolean equals(Object other) - Returns true if this object has the same Internet address as
other.
byte[ ] getAddress( ) - Returns a byte array that represents the object's Internet address in
network byte order.
String getHostAddress( ) - Returns a string that represents the host address associated with
the InetAddress object.
String getHostName( ) - Returns a string that represents the host name associated with the
InetAddress object.
String toString( ) - Returns a string that lists the host name and the IP address for
convenience.
There are two kinds of TCP sockets in Java. One is for servers, and the other
is for clients. The ServerSocket class is designed to be a "listener," which waits for clients to
connect before doing anything. The Socket class is designed to connect to server sockets and
initiate protocol exchanges.
The creation of a Socket object implicitly establishes a connection between the client and server. There are no methods or constructors that explicitly expose the details of establishing that connection. Here are two constructors used to create client sockets:
Socket(String hostName, int port) - Creates a socket connecting the local host to the named host and port; can throw an UnknownHostException or an IOException.
Socket(InetAddress ipAddress, int port) - Creates a socket using a preexisting InetAddress object and a port; can throw an IOException.
A socket can be examined at any time for the address and port information associated with it, by use of the following methods:
InetAddress getInetAddress() - Returns the InetAddress associated with the Socket object.
int getPort() - Returns the remote port to which the invoking Socket object is connected.
int getLocalPort() - Returns the local port to which the invoking Socket object is bound.
InputStream getInputStream() - Returns the InputStream associated with the invoking socket.
Java has a different socket class that must be used for creating server applications. The ServerSocket class is used to create servers that listen for either local or remote client programs to connect to them on published ports. ServerSockets are quite different from normal Sockets. When you create a ServerSocket, it registers itself with the system as having an interest in client connections.
ServerSocket(int port) - Creates a server socket on the specified port with a queue length of 50.
ServerSocket(int port, int maxQueue) - Creates a server socket on the specified port with a maximum queue length of maxQueue.
ServerSocket(int port, int maxQueue, InetAddress localAddress)-Creates a server socket
on the specified port with a maximum queue length of maxQueue. On a multihomed host,
localAddress specifies the IP address to which this socket binds.
ServerSocket has a method called accept(), which is a blocking call that waits for a client to initiate communications and then returns with a normal Socket that is then used for communication with the client.
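The interplay of ServerSocket, accept(), and the client-side Socket can be sketched in a few lines. This toy example (not project code) uses port 0 so the system assigns a free port:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class AcceptDemo {
    public static String roundTrip() throws Exception {
        // Register interest in client connections; backlog (queue length) of 50
        ServerSocket server = new ServerSocket(0, 50);

        // Client connects to the listening port; the connection is queued
        Socket client = new Socket("127.0.0.1", server.getLocalPort());

        // accept() blocks until a connection is available, then returns a normal Socket
        Socket session = server.accept();
        PrintWriter out = new PrintWriter(session.getOutputStream(), true);
        out.println("hello from server");

        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        String reply = in.readLine();

        client.close();
        session.close();
        server.close();
        return reply;
    }
}
```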
URL:
The Web is a loose collection of higher-level protocols and file formats, all unified in a
web browser. One of the most important aspects of the Web is that Tim Berners-Lee devised a scalable way to locate all of the resources of the Net. The Uniform Resource Locator (URL) is used to name anything and everything reliably.
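The parts of a URL can be taken apart with the java.net.URL class; no network access is needed merely to parse one:

```java
import java.net.URL;

public class UrlDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com:8080/docs/index.html");
        System.out.println(url.getProtocol()); // http
        System.out.println(url.getHost());     // www.example.com
        System.out.println(url.getPort());     // 8080
        System.out.println(url.getFile());     // /docs/index.html
    }
}
```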
CHAPTER 6
PROJECT DESCRIPTION
VM scheduling is a pivotal module within cloud computing systems that orchestrates the
allocation of virtual machines (VMs) to physical hosts. The primary objective of VM scheduling
is to optimize resource utilization and enhance overall system performance. This module
considers instantaneous resource usage, historical utilization patterns, and long-term
performance metrics to make informed decisions about VM placement. The effectiveness of VM
scheduling directly impacts the efficiency of cloud environments by ensuring that computational
resources are allocated judiciously, adapting dynamically to varying workloads.
Data analysis plays a crucial role in the proposed system, providing the foundation for informed
decision-making. This module involves the examination and interpretation of historical VM
resource utilization data over time. By employing statistical and machine learning techniques,
data analysis contributes to understanding the patterns and trends within the cloud environment.
Insights derived from this analysis inform the VM scheduling algorithm, allowing it to adapt to
changing conditions and optimize resource allocation based on past performance metrics.
The classification algorithm is a key component employed to categorize and organize data in the context of VM scheduling. This module applies machine learning techniques to classify VMs based on their resource utilization characteristics. The algorithm's ability to distinguish between different classes of VMs is crucial for making informed decisions about their placement and resource allocation within the cloud infrastructure.
The optimization scheme serves as the overarching framework that integrates the various
components of the proposed system. This module encapsulates the strategy for enhancing system
performance, which may include the coordination of VM scheduling, data analysis, classification
algorithms, and PSO. The optimization scheme aims to minimize the impact of management
processes on deployed VMs by maximizing real CPU utilization and reducing the count of
physical machines. It provides a holistic approach to refining VM placement strategies and
ensuring efficient resource utilization in cloud computing environments.
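To make the PSO step concrete, here is a minimal, self-contained sketch of the standard velocity/position update, v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), of the kind the appendix code applies. It is run on a toy objective rather than the real placement objective, and the constants are illustrative:

```java
import java.util.Random;

public class PsoSketch {
    // Minimizes the toy objective f(x) = sum_i x_i^2 (global minimum 0 at the origin)
    // using the standard PSO update rule.
    public static double minimizeSphere(int particles, int dims, int iters, long seed) {
        Random rn = new Random(seed);
        double w = 0.7, c1 = 1.4, c2 = 1.4, vMax = 4.0; // illustrative constants
        double[][] x = new double[particles][dims];
        double[][] v = new double[particles][dims];
        double[][] pbestX = new double[particles][dims];
        double[] pbestVal = new double[particles];
        double[] gbestX = new double[dims];
        double gbestVal = Double.MAX_VALUE;

        // Random initial positions in [-5, 5]
        for (int i = 0; i < particles; i++) {
            for (int d = 0; d < dims; d++) x[i][d] = rn.nextDouble() * 10 - 5;
            pbestX[i] = x[i].clone();
            pbestVal[i] = sphere(x[i]);
            if (pbestVal[i] < gbestVal) {
                gbestVal = pbestVal[i];
                gbestX = x[i].clone();
            }
        }

        for (int it = 0; it < iters; it++) {
            for (int i = 0; i < particles; i++) {
                for (int d = 0; d < dims; d++) {
                    v[i][d] = w * v[i][d]
                            + c1 * rn.nextDouble() * (pbestX[i][d] - x[i][d])
                            + c2 * rn.nextDouble() * (gbestX[d] - x[i][d]);
                    v[i][d] = Math.max(-vMax, Math.min(vMax, v[i][d])); // velocity clamp
                    x[i][d] += v[i][d];
                }
                double val = sphere(x[i]);
                if (val < pbestVal[i]) { pbestVal[i] = val; pbestX[i] = x[i].clone(); }
                if (val < gbestVal)   { gbestVal = val; gbestX = x[i].clone(); }
            }
        }
        return gbestVal;
    }

    static double sphere(double[] x) {
        double s = 0;
        for (double xi : x) s += xi * xi;
        return s;
    }
}
```

In the real framework, each particle would encode a candidate VM-to-host mapping and the objective would combine utilization and energy terms rather than a simple sum of squares.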
[Figure: System architecture - the VM resource monitoring process feeds the classification algorithm, whose labels drive the PSO-based VM scheduling.]
The proposed energy-aware host resource management framework for virtual machines in cloud
data centers using the particle swarm optimization (PSO) algorithm requires the following
inputs:
1. Resource usage information: This information includes the CPU, memory, and network usage of the VMs and hosts. This information is collected by the resource monitor component of the framework.
2. Host configurations: This information includes the CPU, memory, and network capacities of the hosts. This information is typically stored in the data center's management system.
3. Energy consumption model: This model is used to estimate the energy consumption of the hosts based on their resource usage. This model can be based on historical data or on a theoretical model.
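As an illustration of the energy consumption model input, here is a simple linear power model, a common assumption in the literature; note the appendix code instead uses CloudSim's PowerModelCubic, and the wattage constants below are made up:

```java
// Hypothetical linear power model: power grows linearly with CPU utilization
// between an idle floor and a full-load ceiling.
public class LinearPowerModel {
    private final double idleWatts;
    private final double maxWatts;

    public LinearPowerModel(double idleWatts, double maxWatts) {
        this.idleWatts = idleWatts;
        this.maxWatts = maxWatts;
    }

    // utilization in [0, 1] -> estimated power draw in watts
    public double power(double utilization) {
        double u = Math.max(0.0, Math.min(1.0, utilization));
        return idleWatts + (maxWatts - idleWatts) * u;
    }

    // energy over an interval: watts * seconds = joules
    public double energy(double utilization, double seconds) {
        return power(utilization) * seconds;
    }
}
```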
The proposed energy-aware host resource management framework for virtual machines in cloud
data centers using the particle swarm optimization (PSO) algorithm produces the following
outputs:
1. VM placement: This is the mapping of VMs to hosts. The framework determines the
optimal placement of VMs based on their resource requirements, the available resources
of the hosts, and the energy consumption of the hosts.
2. VM scheduling: This is the scheduling of VMs on hosts. The framework determines the
optimal scheduling of VMs on hosts based on their resource requirements, the available
resources of the hosts, and the energy consumption of the hosts.
3. Energy consumption estimates: This is an estimate of the energy consumption of the data center based on the VM placement and scheduling. The framework uses an energy consumption model to estimate the energy consumption of the hosts based on their resource usage.
4. Performance metrics: This is a set of metrics that measure the performance of the data center, such as throughput, latency, and response time. The framework can be used to monitor the performance of the data center and to identify any potential problems.
CHAPTER 7
The framework must satisfy the following requirements:
Functionality: The framework should be able to correctly place and schedule VMs on hosts.
Performance: The framework should be able to place and schedule VMs in a timely manner.
Scalability: The framework should be able to scale to large cloud data centers.
Reliability: The framework should be able to handle failures and recover gracefully.
The framework consists of three main components:
1. Resource Monitor: The resource monitor collects resource usage information from the VMs and hosts. This information includes CPU, memory, and network usage. The resource monitor can be implemented using a variety of tools, such as SNMP or IPMI.
2. Decision Maker: The decision maker uses the PSO algorithm to determine the optimal
VM placement and scheduling. The decision maker can be implemented using a variety
of programming languages, such as Python or Java.
3. Executor: The executor enforces the decision maker's decisions. The executor can be implemented using a variety of tools, such as OpenStack or CloudStack.
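A hypothetical sketch of the decision maker component, with a greedy least-loaded placement standing in for the PSO step; all names and the per-VM load increment here are illustrative, not from the project code:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical decision maker: place each VM on the currently least-loaded
// host. A greedy stand-in for the PSO-based decision step.
public class LeastLoadedPlacer {
    public static Map<String, String> place(List<String> vmIds,
                                            Map<String, Double> hostLoad) {
        Map<String, String> placement = new LinkedHashMap<>(); // vmId -> hostId
        Map<String, Double> load = new HashMap<>(hostLoad);    // working copy
        for (String vm : vmIds) {
            String best = null;
            for (Map.Entry<String, Double> e : load.entrySet()) {
                if (best == null || e.getValue() < load.get(best)) best = e.getKey();
            }
            placement.put(vm, best);
            load.put(best, load.get(best) + 0.1); // assume each VM adds ~10% load
        }
        return placement;
    }
}
```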
CHAPTER 8
SYSTEM MAINTENANCE
The objective of this maintenance work is to ensure that the system remains operational at all times without defects. Provision must be made for environmental changes that may affect the computer or software system. This is called the maintenance of the system. The software world now changes rapidly, and the system should be capable of adapting to these changes. In this project, new processes can be added without affecting other parts of the system. Maintenance plays a vital role: the system is able to accept modifications after its implementation, and it has been designed to accommodate new changes without affecting its performance or accuracy.
Maintenance is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environment. Some errors are always found in a working system; these must be noted and corrected. Maintenance also means reviewing the system from time to time.
TYPES OF MAINTENANCE:
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Preventive maintenance
CORRECTIVE MAINTENANCE:
Corrective maintenance covers changes made to a system to repair flaws in its design, coding, or implementation; the design of the software may be changed. Corrective maintenance is applied to correct errors that occur during operation. For example, if the user enters an invalid file type while submitting information in a particular field, corrective maintenance displays an error message to the user so that the error can be rectified.
The user's problems are often caused by the individuals who developed the product, not the maintainer, and the code itself may be badly written. Maintenance is despised by many software developers, yet unless good maintenance service is provided, the client will take future development business elsewhere. Maintenance is the most important phase of software production, and also the most difficult and most thankless.
PREVENTIVE MAINTENANCE:
Preventive maintenance involves changes made to a system to reduce the chance of future system failure. Possible errors are forecast and prevented with suitable preventive measures. If the user wants to improve the performance of any process, new features can be added to the system for this project.
CHAPTER 9
9. CONCLUSION
FUTURE WORK
For future work, the proposed Cloud VM scheduling algorithm lays the foundation for several
potential enhancements and research directions. Further exploration could involve the integration
of machine learning techniques to continuously adapt the scheduling algorithm based on real-
time changes in the Cloud environment. Additionally, the system could benefit from considering
energy efficiency aspects to align with the growing importance of sustainable computing.
Exploring the application of the proposed algorithm in diverse Cloud architectures and scaling it
for larger and more complex environments would provide insights into its scalability and
generalizability.
CHAPTER 10
APPENDICES
package power;
import java.util.List;
import java.util.ArrayList;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.power.PowerHost;
import org.cloudbus.cloudsim.power.PowerVm;
/**
* @author admin
*/
}
package power;
import java.awt.Color;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartFrame;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.CategoryPlot;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.chart.renderer.category.CategoryItemRenderer;
import org.jfree.data.category.DefaultCategoryDataset;
/**
* @author admin
*/
try {
    chart.getTitle().setPaint(Color.blue);
    CategoryPlot p = chart.getCategoryPlot();
    p.setRangeGridlinePaint(Color.red);
    System.out.println("Range : " + p.getRangeAxisCount());
    renderer.setSeriesPaint(0, Color.red);
    renderer.setSeriesPaint(1, Color.green);
    // renderer.setSeriesPaint(3, Color.yellow);
    frame1.setSize(400, 400);
    frame1.setVisible(true);
} catch (Exception e) {
    e.printStackTrace();
}
try {
    chart.getTitle().setPaint(Color.blue);
    CategoryPlot p = chart.getCategoryPlot();
    p.setRangeGridlinePaint(Color.red);
    System.out.println("Range : " + p.getRangeAxisCount());
    renderer.setSeriesPaint(0, Color.BLUE);
    renderer.setSeriesPaint(1, Color.pink);
    // renderer.setSeriesPaint(3, Color.yellow);
    frame1.setSize(400, 400);
    frame1.setVisible(true);
} catch (Exception e) {
    e.printStackTrace();
}
package power;
/**
* @author admin
*/
long tm1=System.currentTimeMillis();
vm.readVM();
vm.readHost();
vm.createHost();
vm.createVM();
vm.optimiseVmAllocation();
long tm2=System.currentTimeMillis();
long tim=tm2-tm1;
System.out.println(tim);
gr.display1(tim);
gr.display2(1.4332);
package power;
import java.util.Random;
import org.cloudbus.cloudsim.power.PowerHost;
import org.cloudbus.cloudsim.power.PowerVm;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.power.PowerVmAllocationPolicySimple;
import java.util.List;
import java.util.Map;
/**
* @author admin
*/
double weight=0.1;
double c1=1;
double c2=1;
double v_max=4.0;
double v_min=-4.0;
int iter=50;
PSO()
try
dt.Velocity=new double[dt.pop][dt.request.size()];
dt.Position=new double[dt.pop][dt.request.size()];
for(int it=0;it<iter;it++)
for(int i=0;i<dt.request.size();i++)
for(int i=0;i<dt.request.size();i++)
de[i][0]=dt.Position[0][i];
de[i][1]=i;
de[i][2]=i;
}
for(int i=0;i<dt.request.size();i++)
for(int j=i+1;j<dt.request.size();j++)
if(de[i][0]>de[j][0])
double t1=de[i][0];
de[i][0]=de[j][0];
de[j][0]=t1;
double t2=de[i][1];
de[i][1]=de[j][1];
de[j][1]=t2;
for(int i=0;i<dt.request.size();i++)
int k1=(int)de[i][1];
int k2=(int)de[i][2];
rk[0][k1]=(k2%dt.host.length)+1;
for(int pi=0;pi<dt.pop;pi++)
String g1[]=dt.population.get(pi).toString().split("#");
double Cexe=0;
for(int i=0;i<g1.length;i++)
String g2[]=dt.request.get(i).toString().split("#");
double dur=Double.parseDouble(g2[1]);
double res=Double.parseDouble(g2[2])+Double.parseDouble(g2[3])+Double.parseDouble(g2[4]);
if(res==0)
res=1;
Cexe=Cexe+(Double.parseDouble(g1[i])*(dur/res));
Cexe=Cexe+(dt.Position[pi][i]-dt.Velocity[pi][i]);
// System.out.println("pp= "+Cexe);
dt.pbest[pi]=Cexe;
if(dt.gbest<Cexe)
dt.psobest=dt.population.get(pi).toString();
dt.gbest=Cexe;
for(int i=0;i<dt.pop-1;i++)
for(int j=0;j<dt.request.size();j++)
dt.Velocity[i+1][j]=weight*dt.Velocity[i][j]+c1*rn.nextDouble()*(dt.pbest[i]-dt.Position[i][j])+c2*rn.nextDouble()*(dt.gbest-dt.Position[i][j]);
dt.Position[i+1][j]=dt.Position[i][j]+dt.Velocity[i][j];
} // iter
catch(Exception e)
e.printStackTrace();
double uti=0;
try
{
PowerVmAllocationPolicySimple ps=new PowerVmAllocationPolicySimple(dt.hostList);
int h=0;
for(int i=0;i<dt.vmlist.size();i++)
PowerVm vm=dt.vmlist.get(i);
if(!dt.allVM.contains(vm))
if(bool)
uti=ph.getUtilizationOfRam()+ph.getUtilizationOfBw()+ph.getUtilizationOfCpuMips();
dt.allVM.add(vm);
dt.newList.add(vm.getId()+"#"+ph.getId());
h++;
else
break;
}
//System.out.println(ph.getId()+" : "+uti);
catch(Exception e)
e.printStackTrace();
return uti;
package power;
import java.io.File;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.CloudletScheduler;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.power.PowerHost;
import org.cloudbus.cloudsim.power.models.PowerModelCubic;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
import org.cloudbus.cloudsim.power.PowerVmSelectionPolicyMinimumMigrationTime;
import org.cloudbus.cloudsim.power.PowerVmSelectionPolicy;
import org.cloudbus.cloudsim.power.PowerVm;
/**
* @author admin
*/
Datacenter dc1;
DatacenterCharacteristics characteristics;
{
try
fis.read(bt);
fis.close();
System.out.println("VM List");
System.out.println("=========================");
System.out.println(g1);
String g2[]=g1.split("\n");
for(int i=1;i<g2.length;i++)
dt.Vt.add(g2[i].trim());
dt.vms=new String[dt.Vt.size()][4];
for(int i=0;i<dt.Vt.size();i++)
String a1[]=dt.Vt.get(i).toString().trim().split("\t");
dt.vms[i][0]=a1[0]; // VM Id
dt.vms[i][1]=a1[1]; // VM cpu
dt.vms[i][2]=a1[2]; // VM ram
dt.vms[i][3]=a1[3]; // VM bw
catch(Exception e)
e.printStackTrace();
try
fis.read(bt);
fis.close();
System.out.println("Host List");
System.out.println("=========================");
System.out.println(g1);
String g2[]=g1.split("\n");
for(int i=1;i<g2.length;i++)
dt.Ht.add(g2[i].trim());
dt.host=new String[dt.Ht.size()][4];
for(int i=0;i<dt.Ht.size();i++)
String a1[]=dt.Ht.get(i).toString().trim().split("\t");
dt.host[i][0]=a1[0]; // Host Id
dt.host[i][3]=a1[3]; // Host bw
catch(Exception e)
e.printStackTrace();
try
Log.printLine("Starting CloudSim");
Calendar calendar = Calendar.getInstance();
CloudSim.init(1, calendar, false);
String name = "DC1";
for(int i=0;i<dt.Ht.size();i++)
String a1[]=dt.Ht.get(i).toString().split("\t");
int id=Integer.parseInt(a1[0]);
int cpu=Integer.parseInt(a1[1]);
int ram1=Integer.parseInt(a1[2]);
int bw2=Integer.parseInt(a1[3]);
int storage=100000;
for(int k=0;k<cpu;k++)
PowerModelCubic(1000,500)));
String os = "Linux";
catch(Exception e)
e.printStackTrace();
try
for(int i=0;i<dt.vms.length;i++)
int cid=Integer.parseInt(dt.vms[i][0]);
long bw = Long.parseLong(dt.vms[i][3]);
//Vm vm1 = new Vm(vmid, cid, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
PowerVm vm1 = new PowerVm(vmid, cid, mips, pesNumber, ram, bw, size, 1, vmm, new CloudletSchedulerTimeShared(), 0.5);
System.out.println("VM-"+vmid+" is Created...");
dt.vmlist.add(vm1);
catch(Exception e)
e.printStackTrace();
try
for(int j=0;j<dt.hostList.size();j++)
PowerHost ph=dt.hostList.get(j);
double uti=ps.fittnessFun(ph);
/* for(int i=0;i<dt.vmlist.size();i++)
PowerVm vm=dt.vmlist.get(i);
long vmBW=vm.getBw();
int vmRAM=vm.getRam();
int vmPe=vm.getNumberOfPes();
for(int j=0;j<dt.hostList.size();j++)
PowerHost ph=dt.hostList.get(j);
boolean bool=ph.isSuitableForVm(vm);
if(bool)
int id=ph.getId();
long htBW=ph.getBw();
int htRAM=ph.getRam();
long storage=ph.getStorage();
List<Pe> lt=ph.getPeList();
int htPe=ph.getNumberOfPes();
long bw=htBW-vmBW;
int ram=htRAM-vmRAM;
int pe=htPe-vmPe;
for(int k=0;k<pe;k++)
PowerModelCubic(1000,500));
dt.hostList.set(j, newPH);
break;
*/
/* for(int i=0;i<dt.hostList.size();i++)
PowerHost ph=dt.hostList.get(i);
int id=ph.getId();
long htBW=ph.getBw();
int htRAM=ph.getRam();
long storage=ph.getStorage();
List<Pe> lt=ph.getPeList();
int htPe=ph.getNumberOfPes();
for(int j=0;j<dt.vmlist.size();j++)
PowerVm vm=dt.vmlist.get(j);
boolean bool=ph.isSuitableForVm(vm);
"+vm.getNumberOfPes());
if(bool)
long vmBW=vm.getBw();
int vmRAM=vm.getRam();
int vmPe=vm.getNumberOfPes();
long bw=htBW-vmBW;
int ram=htRAM-vmRAM;
int pe=htPe-vmPe;
for(int k=0;k<pe;k++)
PowerModelCubic(1000,500));
dt.hostList.set(i, newPH);
}*/
catch(Exception e)
e.printStackTrace();
}